<1. Background> The JSF is a joint, multinational acquisition program for the Air Force, Navy, and Marine Corps and eight international partners. The program began in November 1996 with a 5-year competition between Lockheed Martin and Boeing to determine the most capable and affordable preliminary aircraft design. Lockheed Martin won the competition, and the program entered system development and demonstration in October 2001. Program goals are to develop and field an affordable, highly common family of stealthy, next-generation strike fighter aircraft for the Navy, Air Force, Marine Corps, and U.S. allies. The JSF is a single-seat, single-engine aircraft designed to rapidly transition between air-to-ground and air-to-air missions while still airborne. To achieve its mission, the JSF will incorporate low-observable (stealth) technologies, defensive avionics, advanced onboard and offboard sensor fusion, internal and external weapons, and advanced prognostic maintenance capability. The JSF family consists of three variants. The conventional takeoff and landing (CTOL) variant will primarily be an air-to-ground replacement for the Air Force's F-16 Falcon and A-10 Warthog aircraft and will complement the F-22A Raptor. The short takeoff and vertical landing (STOVL) variant will be a multi-role strike fighter to replace the Marine Corps' F/A-18C/D Hornet and AV-8B Harrier aircraft. The carrier-suitable variant will provide the Navy a multi-role, stealthy strike aircraft to complement the F/A-18E/F Super Hornet. DOD is planning to buy a total of 2,456 JSFs, and allies are expected to procure a minimum of 730 CTOL and STOVL aircraft. Because of the program's sheer size and the numbers of aircraft it will replace, the JSF is the linchpin of DOD's long-term plan to modernize tactical air forces. It is DOD's largest acquisition program, with total cost currently estimated at $300 billion, and the longest in planned duration, with procurement projected through 2034. In addition, the JSF remains DOD's largest cooperative program. Our international partners are providing about $4.8 billion toward development, and foreign firms are part of the industrial base producing aircraft. DOD's funding requirements for the JSF assume economic benefits from these foreign purchases in reducing unit costs for U.S. aircraft. Table 1 shows the evolution of DOD's official estimated cost, quantity, and deliveries from the initiation of system development in October 2001 to the current official program of record, dated December 2007 and submitted to the Congress in April 2008. It depicts quantities reduced in the last major program restructure in 2004, the impacts of increased costs on unit prices, and the slip in delivering initial operating capability to the warfighter. In our March 2008 report, we stated that JSF costs would likely grow much higher than reported because the program of record at that time did not include all acquisition costs (including the alternate engine program directed by the Congress), made overoptimistic assumptions, and did not fully reflect the mounting cost and schedule pressures from manufacturing inefficiencies and compressed time frames for completing development. We questioned the Mid-Course Risk Reduction Plan adopted by DOD in September 2007, which cut two development test aircraft, reduced test flights, and accelerated the reduction in the prime contractor's development workforce in order to replenish management reserves depleted by design changes and manufacturing problems.
We recommended that DOD accomplish a full and comprehensive estimate of the total program, verified by an independent third party, and revisit the Mid-Course Risk Reduction Plan with an intensive analysis of the causes of management reserve depletion, progress against the baseline manufacturing schedule, and correction of deficiencies in the contractor's earned value management system. DOD agreed to make a comprehensive independent cost estimate but decided to go ahead as planned with the Mid-Course Risk Reduction Plan, stating that it would monitor and evaluate progress and revise the plan later if it failed to achieve expectations. <2. More Money and Time Will Be Needed to Complete JSF Development, While DOD Plans to Accelerate Procurement> Two recent estimates indicate that JSF development will cost more and take longer to complete than reported to the Congress in April 2008, primarily because of contract cost overruns and extended time to complete flight testing. DOD also plans to accelerate aircraft procurement over the next 6 years, buying more aircraft sooner than planned last year. This new plan will require significantly more procurement funding sooner, but officials did not assess its net effect on total program costs through completion of JSF acquisition. <2.1. New Estimates Project Rising Costs and Further Delays to Complete JSF Development> Development costs are projected to increase by $2.4 billion to $7.4 billion, and the schedule for completing system development to be extended by 1 to 3 years, according to two recent estimates: one by the JSF Program Office and one by a joint team of Office of the Secretary of Defense (OSD), Air Force, and Navy officials. Cost overruns on both the aircraft and engine contracts, delays in manufacturing test aircraft, and a need for a longer, more robust flight test program are the primary cost drivers. The joint team's estimate is higher than the program office's because it included costs for the alternate engine program directed by the Congress and used more conservative assumptions based on current and legacy aircraft experiences. Program officials contend that funding the program to the higher cost estimate is premature and believe processes are in place to substantially improve on the test experiences of past programs. Regardless, both estimates agree that cost and time to complete development have increased from the official program of record at the time of our review. (See table 2.) The program office's revised development cost estimate projects an additional $2.4 billion and a 1-year extension in the schedule compared to the official program of record reported to the Congress. This would increase the system development portion of the acquisition program to $46.8 billion and delay its completion to October 2014. The cost increases primarily resulted from the following factors. $1.2 billion for aircraft development. Program officials declared a cost overrun on the prime air system contract because of increased labor hours, higher prices, and supply shortages. Included in this figure is $200 million to be added to the contractor's management reserve. Last year, we reported that mounting cost and schedule pressures depleted reserves much faster than anticipated. By summer 2007, the program had spent about two-thirds of budgeted funds but had accomplished only half the work required. Since then, DOD's efforts to restore reserves and fix manufacturing inefficiencies have not fully achieved intended results, requiring another cash infusion.
$800 million for engine development. According to officials from the Defense Contract Management Agency, the engine contractor continued to face development problems, which resulted in a contract cost overrun. Higher costs for labor and materials, supplier problems, and the rework needed to correct deficiencies with the engine blade design discovered during ground testing were major contributing factors. $300 million for flight test extension. The program extended system development by 1 year to provide more time for development and operational flight testing. In April 2008, an operational test review team recommended extending the development contract by 1 year. The review team considered but dismissed several other options to address the schedule problem, including deferring requirements. On the other hand, the joint estimating team estimates that it will cost $14.8 billion to complete JSF development, $7.4 billion more than the official program of record at the time of our review. This would increase total development costs to $51.8 billion from the $44.4 billion reported to the Congress last year, a 17 percent increase. The joint team also projected a 3-year program extension beyond the program of record in order to complete system development, 2 years more than the new program office estimate. The joint team's estimate was $5 billion more than the new program office estimate. Several factors account for the difference between the two estimates. Alternate engine. The joint estimating team included $1.4 billion to complete development of an alternate engine for the JSF; the program office estimate did not include alternate engine costs. The Congress has directed DOD to develop a second source for the JSF engine to induce competition and to ensure that one engine's failures would not ground all JSFs, thereby reducing operational risks in the future. DOD has not wanted to pursue this second engine source and twice removed funding from the JSF program line. Engineering staffing. The joint estimating team projected a need for the contractor to retain considerably more engineering staff, and for longer periods of time, than the program office estimate assumes in order to complete development, evaluate test results, and correct deficiencies. Releasing engineering staff prematurely risks not discovering problems until late in development or during fielding, when they would be more expensive to address. Software development. The joint estimating team believes that the software productivity rate will be lower than the program's calculation and anticipates much more growth in software requirements. The JSF aircraft is expected to require 7.5 million lines of computer code, the most by far of any aircraft. By comparison, the F/A-18E/F has only 1.1 million and the F-22A 2.2 million. Experiences on past acquisitions have shown 30 to 100 percent growth in software requirements over time, while the JSF Program Office estimate assumed no growth. Flight testing. The joint estimating team projects that flight testing will require more time and effort than the program office has built into its current schedule. Continuing delays in delivering test aircraft are expected to hamper and further compress test schedules. In particular, the joint team projects that the two aircraft dedicated to carrier suitability tests will be late off the production line, thereby delaying test activities.
It also projected that the JSF will require about 2,700 hours of flight testing for mission systems, significantly more than the 1,700 hours that the program office currently estimates. Manufacturing production hours. The joint estimating team projects that production span times for the JSF will be longer than the program office estimates, based on the program's performance to date and the experience of recent programs, such as the F/A-18E/F and the F-22A. The span time is an indicator of how long the manufacturing effort takes and when flight testing can begin. The program office assumes that span times will decrease over the course of the development contract. We note that span times typically increase during development, as was the case for both the B-2 bomber and F-22A programs. Program officials believe that their estimate is more accurate and that providing extra funding to address future risks is premature and does not provide incentives for contractors to control costs. The program office attributes its lower cost estimate to several factors. First, the quantity, quality, and flexibility of the JSF laboratories should enable the program to reduce more risks in a laboratory environment, rather than through flight testing, which is considerably more expensive. In addition, the program's efforts to develop the final software system infrastructure early should reduce significant software problems later in the program, according to the program office. The program office also believes that costs will be lower because progress in several key development areas is either matching plans or is ahead of where legacy programs were at similar points in their development. For example, the program is currently reducing engineering staff as planned. The program is also producing software at a rate significantly higher than that of the F-22A program and is at least 18 months ahead of where the F-22A program was at a similar point in developing mission systems, according to program officials. Officials told us that they intend to fund the fiscal year 2010 development budget based on the joint team's higher estimate. However, it is not clear at this stage which estimate will serve as the basis for future budget submissions. <2.2. Much Higher Annual Procurement Funding Required to Accelerate JSF Procurement> The program office and joint estimating team also projected procurement funding requirements for the 6-year period covering fiscal years 2010-2015, based on DOD plans to accelerate procurement of operational aircraft. Through this effort, DOD wants to recapitalize tactical air forces sooner and mitigate projected fighter shortfalls in the future. Compared to last year, this accelerated plan would procure an additional 169 aircraft during these 6 years, moving aircraft that had been scheduled for procurement beyond 2015 to earlier years. According to the two estimates, this plan would require from $21.8 billion to $33.4 billion more funding than the official program of record, as shown in table 3. The joint team's estimate is higher than the program office's, primarily for these reasons: The joint team projected slower gains in production efficiency than the program office. Typically, production efficiency improves and unit costs are lowered over time as a workforce becomes more experienced building a new product and manufacturing processes are honed. The joint team also assumed fewer savings from commonality.
Commonality, a key selling point for the JSF program, refers to the use of the same or similar parts, structures, and subsystems shared by the three variants. Greater commonality can save money by decreasing development times and facilitating economic order quantities. The team projected higher labor and material costs and longer production span times, based on JSF performance to date in manufacturing development test aircraft. Table 4 shows the additional aircraft and funding requirements for DOD's accelerated plan compared to the official program of record. These quantities are for the United States only; during this same period, the international partners are expected to buy 273 aircraft. <2.3. Procurement of Operational Aircraft under Cost Reimbursement Contracts to Continue; Increases the Government's Exposure to Risks> The JSF program is procuring a substantial number of aircraft on cost reimbursement contracts. Cost contracts place most of the risk on the buyer (DOD in this case), which is liable to pay more than budgeted should labor, material, or other incurred costs exceed what was expected when the contract was signed. JSF officials plan to procure at least the first four low-rate production lots under cost reimbursement contracts and to transition to fixed-price instruments when appropriate, possibly between lots 5 and 7 (fiscal years 2011 to 2013). It is unclear exactly how and when this will happen, but the expectation is to transition to fixed pricing once the air vehicle has a mature design, has been demonstrated in flight tests, and is producible at established cost targets. To date, DOD has procured the first three lots, for a total of 30 aircraft and $7.4 billion, on cost reimbursement terms. Under the accelerated procurement plan, DOD could procure as many as 360 aircraft costing about $57 billion through fiscal year 2013 on cost reimbursement contracts, as illustrated in figure 1. Cost reimbursement contracts provide for payment of allowable incurred costs, to the extent prescribed in the contract. According to the Federal Acquisition Regulation, cost reimbursement contracts are suitable for use only when uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use any type of fixed-price contract. Cost reimbursement contracts for weapon production are considered appropriate when the program lacks sufficient knowledge about system design, manufacturing processes, and testing results to establish firm prices and delivery dates. In contrast, a fixed-price contract provides for a pre-established price, places more of the risk and responsibility for costs on the contractor, and provides more incentive for efficient and economical performance. Procuring up to 360 production aircraft on cost reimbursement contracts, nearly 15 percent of the total DOD program, seems to be a tacit acknowledgment by DOD and the contractor that knowledge of JSF design, production processes, and costs for labor and material is not yet sufficiently mature and that pricing information is not exact enough for the contractor to assume the risk under a fixed-price contract. It also seems to be a consequence of the substantial concurrency of development, test, and production built into the JSF schedule. Significant overlap of these activities means that DOD is procuring considerable quantities of operational aircraft while development test aircraft are still on the production line and much in advance of testing to prove aircraft performance and suitability.
Establishing a clear and accountable path to ensure that the contractor assumes more of the risk is prudent. We note that the significant ramp-up in JSF production under the accelerated profile starts with lot 5, the fiscal year 2011 procurement of 70 aircraft. <3. Continued Manufacturing Inefficiencies Will Make It Difficult for the Program to Meet Its Production Schedule> Manufacturing of development test aircraft is taking more time, money, and effort than planned, but officials believe that they can work through these problems and deliver the 11 remaining test aircraft by early 2010. However, by that time, DOD may have procured as many as 62 production aircraft, accumulating a backlog of aircraft to be produced. Manufacturing inefficiencies and parts shortages continue to delay the completion and delivery of development test aircraft needed for flight testing. The contractor has not yet demonstrated mature manufacturing processes or an ability to produce at currently planned annual rates. It has taken steps to improve manufacturing, the supplier base, and schedule management. However, given the manufacturing challenges, we believe that DOD's plan to accelerate production in the near term adds considerable risk and will be difficult to achieve in a cost-effective manner. <3.1. Time and Money Needed for Manufacturing Development Test Aircraft Continue to Increase> The prime contractor has restructured the JSF manufacturing schedule three times, each time lengthening the time to deliver aircraft to the test program. Delays and manufacturing inefficiencies are prime causes of contract cost overruns. The contractor has produced two development test aircraft: an original non-production-representative model and the first STOVL aircraft. It now projects delivering the remaining 11 aircraft in 2009 and early 2010. Problems and delays are largely the residual effects of difficulties early in development. The effects of the late release of engineering drawings, design changes, delays in establishing a supplier base, and parts shortages continue to cause delays and force inefficient production line work-arounds in which unfinished work is completed out of station. Data provided by the Defense Contract Management Agency and the JSF Program Office show continuing critical parts shortages, out-of-station work, and quality issues. The total projected labor hours to manufacture test aircraft increased by 40 percent just in the past year, as illustrated in figure 2. An efficient production line establishes an orderly flow of work as a product moves from workstation to workstation and on to final assembly. Out-of-station work, sometimes referred to as traveled work, refers to completing unfinished work on major components (for example, the wings) after they have left their workstation and moved down the production line to another station, such as mate and final assembly. The wing assembly and mate and final assembly stations show a slower than planned decline in labor hours, indicating lesser gains in worker efficiency. As of June 2008, the planned hours for these two major stations had increased by about 90 percent over the June 2007 schedule, which itself had shown an increase from the 2006 schedule. The overlap in the work schedule between manufacturing the wing and mating (connecting) it to the aircraft fuselage has been a major concern for several years because it causes inefficient out-of-station work. The contractor continues to address this concern, but the new schedule indicates that this problem will continue at least through 2009.
One indicator of its persistence is the projected hours for building the last test aircraft. As figure 4 shows, estimated labor hours increased more than 80 percent from the June 2007 to the June 2008 schedule, and the planned hours for wing assembly and for the mate and delivery phases more than tripled. <3.2. Prime Contractor Actions to Improve Schedule Management, Manufacturing Efficiency, and Supplier Base> Our evaluation determined that the prime contractor now has good tools and integrated processes in place that should improve its schedule management activities and is also implementing actions to improve manufacturing efficiency, the delivery of parts, and proactive oversight of subcontractors. The effects of these recent actions are not yet fully apparent, and the contractor has not yet accomplished its own schedule risk assessment, which could provide more insight into impacts from areas of risk and uncertainty. The coming year will be critical for implementing management improvements in order to accomplish a firm and effective transition from manufacturing a few test aircraft to producing operational aircraft at high annual rates. <3.2.1. Prime Contractor's Scheduling Management Processes Meet Many Best Practices, but Program Risks Are Not Entirely Visible> The prime contractor demonstrated to us that its schedule management processes largely meet established best practices criteria. With improvements implemented in 2008, the contractor's management systems meet or partially meet eight of the nine established criteria. For example, the master schedule can identify and track activities associated with over 600 projects. It also establishes the critical path between activities, allowing the program to examine the impacts of schedule delays and determine schedule flexibility. Appendix II discusses our examination of the prime contractor's schedule management process against best practices criteria in more detail. The one area not meeting best practices was performing a schedule risk analysis, and as a result, the contractor has limited insight into areas of risk and uncertainty in the schedule. The prime contractor has not conducted its own risk assessment that would (1) determine the level of confidence it has in meeting completion dates and (2) identify and apportion reserve funds for contingencies. A thorough risk analysis could improve management insight and subsequent corrective actions on two recurring problem areas in particular: schedule slippage and inadequate management reserve levels. Naval Air Systems Command officials did accomplish an independent schedule risk analysis, which indicated that the program could slip more than 2 years based on the productivity risks associated with software development and assembly of the various airframes as well as the time needed to complete all flight testing. Both the contractor and the JSF Program Office disputed the findings of the Naval Air Systems Command schedule risk analysis, primarily because the analysis was done without direct involvement of program officials. <3.2.2. Improvements in Manufacturing and Supplier Base Are Ongoing> The prime contractor is implementing changes designed to address the manufacturing inefficiencies and parts shortages discussed earlier. These include (1) increasing oversight of key subcontractors that are having problems, (2) securing long-term raw material purchase price agreements for both the prime contractor and key subcontractors, and (3) implementing better manufacturing line processes.
On this latter point, according to program officials, the prime contractor has taken specific steps to improve wing manufacturing performance, one of the most troublesome workstations. Defense Contract Management Agency officials noted that the contractor produced the second STOVL aircraft with less work performed out of station than for the first STOVL aircraft. Also, program office and contractor officials report some alleviation of parts shortages and improvements in quality, but they also believe that the effects of previous design delays, parts shortages, and labor inefficiencies will persist over the near term. The lag time between taking action and demonstrating improvement may partly explain why some manufacturing performance metrics are not showing a clear, sustained rate of quality improvement, as would be desirable and expected for a program ramping up annual production rates. This lag time may be evident in two important metrics (scrap, rework, and repair rates and manufacturing defect rates), both of which have increased somewhat since 2006. Program and contracting officials point out, however, that while this performance is not desirable, these and other metrics compare very favorably with those of prior acquisitions at similar stages of development, including the F-16 and F-22A. Supplier costs are expected to make up an even more substantial share of total expenses as the program moves further into production. According to contractor officials, efforts are focused on maturing the supply base and working more closely and directly with key suppliers to reduce costs, alleviate parts shortages, and support higher production rates. Some key suppliers have struggled to develop critical and complex parts, while others have had problems with limited production capacity. For example, the supplier responsible for the advanced electro-hydraulic actuation system had delivered parts with missing subcomponents and parts that were not built to specifications. The major teammate supplier responsible for fuselage and tail assembly has experienced delays caused by its limited machining capacity. Given these supplier issues, manufacturing inefficiencies, and the accumulating backlog in production, we believe that the program's plans to accelerate production in the near term add considerable risk and will be difficult to achieve in a cost-effective manner. <4. The JSF's Test Plan Is Improved but Faces Numerous Challenges to Complete Development on Time and on Budget> DOD will make significant investments in both dollars and the number of aircraft procured before completing JSF flight testing. DOD's proposal to accelerate procurement further increases financial risks in a very challenging test environment. DOD's new test plan adds an extra year to the schedule and better aligns resources, but it is still aggressive, with little room for error, presenting a formidable challenge to completing system development, supporting initial operational testing, and, eventually, reaching a full-rate production decision. DOD decisions to reduce development test aircraft and flight tests add to the risks, while any additional delays in manufacturing test aircraft will further compress the schedule. The JSF has just begun development flight testing with two test aircraft and has already experienced some setbacks, normal in any program but of special concern when assets are minimal.
Some in DOD forecast that another 2 or more years beyond the 1-year extension just approved will eventually be needed to successfully prove aircraft performance and complete system development. The department has stated that the contractor's state-of-the-art ground test labs and a flying test bed will mitigate risks in the flight regimen and that their use will effectively substitute for flight testing. This approach is promising, but not yet proven. <4.1. Significant Investments Made before Development Flight Tests Are Completed> DOD is investing heavily in procuring JSF aircraft before flight testing proves that they will perform as expected. Procuring aircraft before testing has successfully demonstrated that the design is mature and that the weapon system will work as intended increases the likelihood of design and requirements changes, resulting in subsequent cost growth, schedule delays, and performance limitations. Also, systems already built and fielded may later require substantial modifications, further adding to costs. The uncertain environment as testing progresses is one reason why the prime contractor and DOD are using cost-reimbursable contracts until rather late in procurement. Table 5 depicts planned investments in both dollars and aircraft prior to the completion of development flight testing. Under the accelerated production plan and using the lower procurement cost estimate prepared by the program office, DOD may procure 360 aircraft at a total estimated cost of $57 billion before development flight testing is completed. This overlap will be further exacerbated should the joint estimating team's predictions of higher cost and a lengthier schedule prove accurate. <4.2. Flight Testing Is Still in Its Infancy and Has Fallen Behind Schedule> The JSF program had completed about 2 percent of its development flight testing as of November 2008. Figure 5 shows the expected ramp-up in flight testing, with most effort occurring in fiscal years 2010 through 2012. Past programs have shown that many problems are not discovered until flight testing. As such, the program is likely to experience considerable cost growth in the future as it steps up its flight testing, discovers new problems, and makes the necessary technical and design corrections. While the program has been able to demonstrate basic aircraft flying capabilities, it has recently experienced testing delays and has fallen behind the flight test plan established in 2007. At the time of our review, the program had flown about half of its planned 155 flight tests for 2008. The test program currently has two development test aircraft and an integrated airborne test bed, with the following experiences to date: Sixty-five test flights on the original, non-production-representative prototype contributed to discoveries in, among other things, landing gear door fitting, aerial refueling operations, and weapons bay functions. The prototype experienced a 3-month delay because of engine bay nacelle vent fan malfunctions that were subsequently resolved. Initial testing of the first of 12 production-representative prototypes, a STOVL variant flown in conventional mode, began in June 2008. By the time of our review, it had accumulated 14 test flights, demonstrating important handling qualities and reducing risks associated with, among other things, the landing gear, fuel system performance, and STOVL door operation. Engine problems delayed full STOVL testing by 6 months.
Thirty-seven flights on the cooperative avionics test bed tested mission system software and demonstrated communication and navigation capabilities. Looking ahead, the program expects to take delivery of the remaining development test aircraft during 2009 and early in fiscal year 2010. In the same time frame, it also plans to begin flight testing 6 of its 12 production-representative prototype test aircraft (CTOL and STOVL aircraft), including the first 2 aircraft dedicated to mission system testing. The first carrier variant development test aircraft is expected to begin full flight testing, including ship suitability testing, in 2010. A fully integrated, mission-capable aircraft is not expected to enter flight testing until 2012, by which time DOD plans to have already purchased 241 aircraft for about $43 billion under cost reimbursement contracts. <4.3. Program's Test Plan Extends Development and Relies Heavily on Ground Testing and Simulations to Verify Aircraft Performance> The JSF Program Office developed a new and improved test plan in the spring of 2008 that extended the development period by 1 year, better aligned test resources and availability dates, and lessened the overlap between development and operational testing. The new plan is still aggressive, however, and has little room for error discovery, rework, and recovery from downtime should test assets be grounded or otherwise unavailable. The sheer complexity of the JSF program (7.5 million lines of software code, three variants, and multi-mission development) suggests that the aircraft will encounter many unforeseen problems during flight testing, requiring additional schedule time for rework. Our past work has shown that programs that do not allow sufficient time to address the inevitable problems discovered in testing run a greater risk of significant cost increases and schedule delays when problems do arise. The joint estimating team noted that the program's flight test plan assumed higher productivity than has been seen on recent flight test programs. As such, the joint team believes that flight testing will require an additional 2 years to complete beyond the 1 year already added to development and suggests that more flight test hours will be necessary to test mission systems and the carrier variant in particular. The Mid-Course Risk Reduction Plan, approved in late 2007, cut two development test aircraft, reduced test flights, and relies more heavily on ground laboratories and simulations to verify performance, adding substantial risk to the program. Our 2008 report discussed the objections from several prominent DOD offices to the Mid-Course Risk Reduction Plan. The Director of Operational Test and Evaluation, for example, identified risks in the revised verification strategy and cited inadequate capacity to handle the pace of mission testing and ship suitability and signature testing. This increases the likelihood of not finding and resolving critical design deficiencies until operational testing, when it is more costly and disruptive to do so. The test plan relies heavily on a series of advanced and robust simulation labs and a flying test bed to verify aircraft and subsystem performance. Figure 6 shows that 83 percent of the aircraft's capabilities are to be verified through labs, the flying test bed, and subject matter analysis, while only 17 percent of test points are to be verified through flight testing. The JSF program spent $5 billion on its system of simulation labs and models.
Program officials argue that their heavy investment in simulation labs will allow early risk reduction, thereby reducing the need for additional flight testing, controlling costs, and meeting the key milestones of the program's aggressive test plan. The JSF program's simulation labs appear more prolific, integrated, and capable than the labs used in past aircraft programs. The program utilizes 18 labs for development, whereas only 9 were used for the F-22A, 7 for the F-18, and 5 for the F-16. According to program officials, the greater number of labs allows engineers to work simultaneously on different development blocks, reducing bottlenecks that may occur in testing. In contrast, engineers on the F-18 and F-22A programs had to interrupt or shut down development on one development block while they were making corrections to another. Also in contrast to past programs, key JSF simulation labs are colocated at a Lockheed Martin plant in Fort Worth, Texas. The F-22A program utilized three locations and two different companies. According to the program office, the colocation of the key testing labs facilitates more seamless integration of key aircraft components. Program officials also noted that JSF labs use actual aircraft components to a greater extent than labs did in past programs and also have greater software processing capacity. This allows for more realistic data, which should reduce the need for additional flight testing. Further, the JSF utilizes the first fully integrated airborne test bed for mission system testing. According to program officials, the test bed's design is geospatially proportionate to an actual F-35 aircraft, enhancing its ability to accurately verify aircraft performance. While the labs appear more prolific, integrated, and complex than those used in legacy programs, concerns about their extensive use in verifying aircraft performance remain. The extent of the JSF program's planned lab use is unprecedented, but the ability to substitute for flight testing has not yet been demonstrated. In addition, the labs have yet to be fully accredited. Accreditation is required to ensure the adequate capability of labs and models to perform verification tasks. It is critical that the models behave like the aircraft to ensure that the system's performance requirements are being verified accurately. The program office said that it is on track to complete the accreditation of the labs in time to begin verifying system performance. However, the Director of Operational Test and Evaluation reports that progress on the accreditation support packages is behind schedule and suggests that more flight testing may be needed as the accreditation process reveals the limitations of the models. Some DOD officials are also concerned that the labs will be understaffed. The Director of Operational Test and Evaluation and DOD's joint estimating team both reported that the program's current resource plans reduce engineering staff too rapidly. Engineering and test personnel are critical to analyzing the data generated from the labs. Without adequate staff, there is a greater risk that the labs will not be sufficiently utilized, which could, in turn, result in schedule delays or cost increases. <4.4. Early Operational Assessment Identifies Several Challenges That May Have Operational Impact If Not Addressed> While the program is projecting that it will meet all key performance parameters, most will not be verified through ground and flight testing until fiscal years 2010 through 2013.
In addition, a 2008 operational assessment by the Air Force Operational Test and Evaluation Center pointed out several technical challenges that it believes are likely to have a severe operational impact if not adequately addressed. While some of the report's concerns are not specific requirements, some DOD officials believe that the shortfalls may adversely affect the JSF's ability to meet warfighter needs. For example: The current F-35 power system may cause excessive damage to runway surfaces, which could limit the aircraft's ability to operate in certain locations. The program is still evaluating the problem and plans to gather data and conduct further studies when full-scale models or actual aircraft are available. According to a program official, changes to the aircraft's design or to the current concept of operations may be needed. The program has alerted the services and believes it will have a better understanding of the problem sometime in 2009. Thermal management challenges hamper the ability to conduct missions in hot and cold environments. The Director of Operational Test and Evaluation reported that an alternative main engine fuel pump to remedy this problem is under development but will not be available before low-rate initial production lot 3, which is likely to affect operational testing. The test team aborted a test sortie in June because of high fuel temperatures. The JSF's advanced integrated support system aims to improve and streamline aircraft logistics and maintenance functions in order to reduce life cycle costs. The current integrated support system for the JSF prohibits operating two detachments from one squadron simultaneously. This limitation will severely affect current operating practices, which include dividing one squadron of aircraft into subgroups to deploy and operate at different locations. <5. Conclusions> The JSF is DOD's largest and most complex acquisition program and the linchpin of efforts to recapitalize our tactical air forces. It is now in its most challenging phase, at a crossroads of sorts. Challenges are many: continuing cost and schedule pressures; complex, extensive, and unproven software requirements; and a nascent, very aggressive test program with diminished flight test assets. Looking forward, the contractor plans to complete work expeditiously to deliver the test assets, significantly step up flight testing, begin verifying mission system capabilities, mature manufacturing processes, and quickly ramp up production of operational aircraft. As such, the credibility of the program's test plans, manufacturing capacity, and subsequent cost and schedule estimates should become more apparent. The program must move forward, but given all these challenges, accelerating procurement in a cost-reimbursement contract environment, where uncertainties in contract performance do not permit costs to be estimated with sufficient accuracy to use any type of fixed-price contract, places very significant financial risk on the government. Accelerating plans also does not equate to an ability to deliver to those plans. Because the program's manufacturing processes are still maturing and flight testing is still in its infancy, incorporating an accelerated production schedule introduces even more risk and uncertainty to the program.
Our past work has shown that programs that make production decisions before fully proving a system's design through testing and demonstrating mature manufacturing processes face an increased risk of design and production changes and retrofits of completed aircraft. Until the contractor demonstrates that it can produce aircraft in a timely and efficient manner, DOD cannot fully grasp future funding requirements. DOD needs tangible assurance from the prime contractor that it can meet development and production expectations. By accelerating low-rate production quantities before manufacturing and testing processes are mature, DOD accepts most of the cost of the contractor's production and manufacturing inefficiencies. At minimum, the contractor needs to develop a detailed plan demonstrating how it can successfully meet program development and production goals in the near future within cost and schedule parameters. With an improved contracting framework and a more reasoned look to the future, the JSF program can more effectively meet DOD and warfighter needs in a constrained budget environment. <6. Recommendations for Executive Action> Given the program's ongoing manufacturing problems and nascent flight test program, we believe that moving forward with an accelerated procurement plan is very risky. This risk is reflected in the extended use of cost reimbursement contracts for low-rate production quantities, a contract mechanism that places most of the cost risk on the government. As such, to enhance congressional oversight, increase the likelihood of more successful program outcomes, and maintain confidence that the program is on track to meet planned cost, schedule, and performance goals, we recommend that the Secretary of Defense take the following two actions: 1. Direct the Under Secretary of Defense for Acquisition, Technology and Logistics to report to the congressional defense committees by October 1, 2009. This report should include, at minimum, an explanation of the cost and other risks associated with a cost-reimbursable contract as compared to a fixed-price contract for JSF's future low-rate production quantities, the program's strategy for managing and mitigating risks associated with the use of cost contracts, and plans for transitioning to fixed-price contracts for production, to include time frames and criteria. 2. Direct the JSF Program Office to ensure that the prime contractor performs periodic schedule risk analyses for the JSF program to provide better insight into management reserve, production efficiencies, and schedule completion, allowing for corrections as early as possible. <7. Agency Comments and Our Evaluation> DOD provided us with written comments on a draft of this report. The comments are reprinted in appendix III. DOD substantively agreed with our first recommendation regarding a report to the Congress on contracting strategy, but believed that the Under Secretary of Defense for Acquisition, Technology and Logistics should be responsible for the report, not the JSF Program Office as stated in our draft. As the milestone decision authority, the Under Secretary is responsible for approving the contracting strategy, contract awards, and the transition to full-rate production. We agree with DOD and revised the recommendation accordingly. DOD also agreed with the second recommendation and will direct that the prime contractor perform periodic schedule risk analyses.
In coordination with the JSF Program Office, the department intends to determine an optimum schedule for the contractor that will provide insight into JSF cost and schedule to inform key milestones and decision making. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology To determine the Joint Strike Fighter (JSF) program's progress in meeting cost, schedule, and performance goals, we received briefings from program and contractor officials and reviewed financial management reports, budget documents, annual selected acquisition reports, monthly status reports, performance indicators, and other data. We compared reported progress with prior years' data, identified changes in cost and schedule, and obtained officials' reasons for these changes. We interviewed Department of Defense (DOD), JSF Program Office, and contractor officials to obtain their views on progress, ongoing concerns and actions taken to address them, and future plans to complete JSF development and accelerate procurement. This review was the fifth and final effort under the mandate of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005. We were provided sufficient information to make the assessments contained in this report. In assessing program cost estimates, we compared the official program cost estimate in the December 31, 2007, selected acquisition report to estimates developed by the JSF program and an independent joint estimating team for fiscal years 2010 through 2015. Because the fiscal year 2010 budget had not been submitted to the Congress at the time of the draft report, some of the report's findings are based on preliminary cost projections that existed at the time of our review rather than the official program of record. As such, the cost projections in this report may differ from the final fiscal year 2010 program of record. We interviewed members of the joint estimating team to obtain a detailed understanding of the methodology, data, and approach used in developing their cost estimate and schedule risk analysis of the JSF program. We also interviewed JSF program officials to understand the program's cost estimate methodology, assumptions, and results and to obtain their response to the joint estimating team's analysis. Based on this analysis, we were able to identify significant differences in the cost estimating methodologies and assumptions used by the joint estimating team and the program office and to determine major risk drivers in the program. To assess the program's plans and risks in manufacturing and its capacity to accelerate production from fiscal years 2010 through 2015, we analyzed manufacturing cost and work performance data to assess progress against plans. We compared budgeted program labor hours to actual labor hours, identified growth trends, and noted differences between future labor requirements and current plans to release engineering staff.
We reviewed data and briefings provided by the program and the Office of the Secretary of Defense (OSD) to assess supplier performance and the ability to support accelerated production from fiscal years 2010 through 2015. We also determined reasons for manufacturing delays, discussed program and contractor plans to improve, and projected expected impacts on development and operational tests. We also reviewed the prime contractor's schedule estimates and compared them with relevant best practices to determine the extent to which they reflect key estimating practices that are fundamental to having a reliable schedule. In doing so, we interviewed cognizant program officials to discuss their use of best practices in creating the program's current schedule and interviewed officials from Naval Air Systems Command to understand their approach and to obtain the results of their independent schedule risk analysis. To assess plans, progress, and risks in test activities, we examined program documents and interviewed DOD, program office, and contractor officials about current test plans and progress. To assess progress toward test plans, we compared the number of flight tests conducted as of October 2008 to those in the original test plan established in 2007. We also reviewed documents and interviewed prime contractor officials regarding flight testing, the integrated airborne test bed, and ground testing. To further assess the ground labs and test bed, we interviewed DOD and program officials and toured the testing labs and aircraft at the Lockheed Martin plant in Fort Worth, Texas. In performing our work, we obtained information and interviewed officials from the JSF Joint Program Office, Arlington, Virginia; Lockheed Martin Aeronautics, Fort Worth, Texas; Defense Contract Management Agency, Fort Worth, Texas; Defense Contract Management Agency, East Hartford, Connecticut; Naval Air Systems Command, Patuxent River, Maryland; Air Force Operational Test and Evaluation Center, Kirtland Air Force Base, New Mexico; Air Force Cost Analysis Agency, Arlington, Virginia; and the OSD offices of the Under Secretary of Defense for Acquisition, Technology and Logistics, the Director of Program Analysis and Evaluation and its Cost Analysis Improvement Group, and the Director of Operational Test and Evaluation in Washington, D.C. We conducted this performance audit from June 2008 to March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Assessment of Prime Contractor Schedule Management Processes <8. Prime Contractor's Scheduling Management Processes Meet Many Best Practices, but Program Risks Are Not Entirely Visible> The success of any program depends in part on having a reliable schedule of when the program's set of work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for systematic execution of a program, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability.
In general, best practices and related federal guidance call for a program schedule to identify, sequence, integrate, and resource all key activities to be performed, and to understand and proactively address activities that pose critical risks. More specifically, our research has identified nine practices associated with effective schedule estimating. These practices are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating schedule activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying float between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine dates. Of these nine practices, the JSF program either met or partially met eight, with only one not being met. The practice not met was performing a schedule risk analysis. Specifically, the JSF program has not conducted its own schedule risk analysis that would determine the level of confidence it has in meeting completion dates. Such an assessment is also critical to identifying and apportioning reserves for contingencies. Since the JSF program has not conducted its own schedule risk analysis, it has limited insight into areas of risk and uncertainty in the schedule. Naval Air Systems Command officials did accomplish an independent schedule risk analysis, which indicated that the program could slip more than 2 years based on the productivity risks associated with software development and assembly of the various airframes as well as the time needed to complete all flight testing. In addition to a schedule risk analysis not being performed, we found several other schedule management concerns that further reduce the visibility of manufacturing risks. First, the use of best scheduling practices at the subcontractor level is still being developed, potentially affecting the integration of subcontractor schedules into the integrated master schedule. Integrating prime and subcontractor schedules is critical to meeting program schedules and cost expectations. The prime contractor is working with subcontractors to increase their level of schedule maturity. Another area of concern is that out-of-station work made it difficult to identify specific span times for individual manufacturing tasks. As a result, the detailed information related to the manufacturing work was not visible in the master schedule. Furthermore, because of the program's enormous size and complexity, the schedule has been difficult to maintain, requiring manual validation processes to ensure its integrity and validity. Ongoing JSF schedule validity will be an area that needs careful attention, as it represents a potential weak point in the overall implementation of the integrated master schedule. Despite this shortcoming, it is also important to recognize the significant progress that the JSF program team has made in the area of schedule management. Since the previous Defense Contract Management Agency schedule review, both the schedule and the processes to manage it have greatly improved. For example, the schedule can track and verify activities associated with over 600 projects. It also successfully captures and sequences key activities and establishes the critical path between key activities, allowing the program to examine the impacts of schedule delays and determine schedule flexibility.
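To illustrate the one practice not met, the sketch below shows, in general terms, what a schedule risk analysis computes: each activity in a dependency network is assigned a range of possible durations, a Monte Carlo simulation propagates those durations through the network (the longest chain of predecessors at each step is, in effect, the critical path), and the resulting distribution of completion dates yields a level of confidence in meeting any target date. This is a minimal illustration only, not GAO's or the contractor's method; the task names, durations, and dependencies are hypothetical placeholders rather than JSF schedule data, and a real analysis would be run against the 600-plus projects in the integrated master schedule using a scheduling tool rather than a hand-coded network.

```python
# Minimal, illustrative schedule risk analysis (all data below is hypothetical).
import random

# Each task has a (minimum, most likely, maximum) duration in months -- placeholder values.
TASKS = {
    "build_test_aircraft": (10, 14, 22),
    "mission_sw_block":    (12, 16, 28),
    "flight_test":         (18, 24, 36),
    "verification_labs":   (14, 18, 26),
}

# Predecessors: a task can start only after all of its listed predecessors finish.
DEPS = {
    "flight_test":       ["build_test_aircraft", "mission_sw_block"],
    "verification_labs": ["mission_sw_block"],
}

def finish_time(task, durations, memo):
    """Finish time = latest predecessor finish time + this task's drawn duration."""
    if task not in memo:
        start = max((finish_time(p, durations, memo) for p in DEPS.get(task, [])), default=0.0)
        memo[task] = start + durations[task]
    return memo[task]

def schedule_risk_analysis(trials=20000, target_months=40):
    """Monte Carlo over task durations; returns 50th/80th percentile finish and confidence."""
    completions = []
    for _ in range(trials):
        # Draw a duration for every task from a triangular distribution.
        durations = {t: random.triangular(lo, hi, ml) for t, (lo, ml, hi) in TASKS.items()}
        memo = {}
        completions.append(max(finish_time(t, durations, memo) for t in TASKS))
    completions.sort()
    p50 = completions[trials // 2]
    p80 = completions[int(trials * 0.8)]
    confidence = sum(c <= target_months for c in completions) / trials
    return p50, p80, confidence

if __name__ == "__main__":
    p50, p80, conf = schedule_risk_analysis()
    print(f"50th percentile completion: {p50:.1f} months")
    print(f"80th percentile completion: {p80:.1f} months")
    print(f"Confidence of finishing within 40 months: {conf:.0%}")
```

The percentile completion dates and the confidence figure produced by such an analysis are precisely the kind of information the program currently lacks for gauging schedule slippage risk and sizing management reserves.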
<8.1. The Sheer Size and Complexity of the JSF Schedule Have Created Major Challenges to Ensuring Schedule Integrity and Validity> The JSF schedule is maintained in Microsoft Project and consists of over 600 individual projects. Because the size and complexity of the schedule are so immense, it has been difficult to maintain. As such, a number of manual validation processes are required to ensure its integrity and validity. To its credit, the contractor has developed custom processes and tools to help manage the program schedule. However, because of its enormous size and complexity, the JSF's ongoing schedule validity will be an area that needs careful attention, as it represents a potential weak point in the overall implementation of the integrated master schedule. Because the schedule was so large, we reviewed a subset of it, focusing on the delivery of one airframe for each variant of the F-35 being produced (i.e., BF4, AF1, and AF3). This subset schedule covered a time span from August 2006 through September 2014, and we analyzed it against our best practices for effective schedule estimating. See table 6 for the results of our analyses relative to each of the nine practices. Appendix III: Comments from the Department of Defense Appendix IV: GAO Contact and Staff Acknowledgments <9. Acknowledgments> In addition to the contact named above, the following staff members made key contributions to this report: Bruce Fairbairn, Assistant Director; Ridge Bowman; Charlie Shivers; Georgeann Higgins; Matt Lea; Karen Richey; Tim Boatwright; and Greg Campbell. Related GAO Products Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008. Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008. Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008. Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007. Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007. Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007. Tactical Aircraft: DOD's Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006. Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006. Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006.
Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005.
Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
Why GAO Did This Study

The Joint Strike Fighter (JSF) is the Department of Defense's (DOD) most complex and ambitious aircraft acquisition, seeking to simultaneously produce and field three different versions of the aircraft for the Air Force, Navy, Marine Corps, and eight international partners. The total investment required now exceeds $1 trillion--more than $300 billion to acquire 2,456 aircraft and $760 billion in life cycle operating and support costs, according to program estimates. The Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 requires GAO to review the JSF program annually for 5 years. This is the fifth and final report under the mandate, in which GAO (1) determines the program's progress in meeting cost, schedule, and performance goals; (2) assesses manufacturing results and schedule risks; and (3) evaluates development test plans, progress, and risks. GAO's work included analyses of a wide range of program documents and cost data, as well as interviews with defense and contractor officials.

What GAO Found

JSF development will cost more and take longer than reported to the Congress last year, and DOD wants to accelerate procurement. Two recent estimates project additional costs ranging from $2.4 billion to $7.4 billion and 1 to 3 more years to complete development. Despite cost and schedule troubles, DOD wants to accelerate JSF procurement by 169 aircraft from fiscal years 2010 through 2015; this could require up to $33.4 billion in additional procurement funding for those 6 years. DOD plans to procure hundreds of aircraft on cost-reimbursement contracts, magnifying the financial risk to the government. Ongoing manufacturing inefficiencies and parts problems have significantly delayed the delivery of test assets. The prime contractor has extended manufacturing schedules three times and delivered 2 of 13 test aircraft. The program is still recovering from earlier problems that resulted in design changes, late parts deliveries, and inefficient manufacturing. The contractor is taking positive steps to improve operations, the supplier base, and schedule management. Schedule risk analyses could further enhance management insight into problem areas and inform corrective actions. Officials expect to deliver all test aircraft and fix many problems by 2010. By then, DOD plans to have purchased 62 operational aircraft and will be ramping up procurement. Procuring large numbers of production jets while still working to deliver test jets and mature manufacturing processes does not seem prudent, and looming plans to accelerate procurement will be difficult to achieve cost effectively. DOD's revised test plan adds a year to the schedule, better aligns resources and availability dates, and lessens the overlap between development and operational testing, but it still allows little time for error discovery and rework. DOD's decision late in 2007 to reduce test aircraft and flight tests adds to risks, while any additional delays in delivering test aircraft will further compress the schedule. The revised plan relies on state-of-the-art simulation labs, a flying test bed, and desk studies to verify nearly 83 percent of JSF capabilities. Only 17 percent is to be verified through flight testing. Despite advances, the ability to so extensively substitute for flight testing has not yet been demonstrated. Significant overlap of development, test, and procurement results in DOD making substantial investments before flight testing proves that the JSF will perform as expected.
Under the accelerated procurement plan, DOD may procure 360 aircraft costing an estimated $57 billion before completing development flight testing.
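For rough perspective, and strictly as back-of-the-envelope arithmetic on the two figures above rather than an official unit-cost estimate:

\[
\frac{\$57\ \text{billion}}{360\ \text{aircraft}} \approx \$158\ \text{million per aircraft}
\]

That is, each aircraft bought under the accelerated plan would carry, on average, roughly $158 million in procurement funding committed before development flight testing is complete.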
<1. Highlights of Major Issues Related to the U.S. Government's Consolidated Financial Statements for Fiscal Years 2007 and 2006> As has been the case for the previous 10 fiscal years, the federal government did not maintain adequate systems or have sufficient and reliable evidence to support certain material information reported in the U.S. government's accrual basis consolidated financial statements. The underlying material weaknesses in internal control, which generally have existed for years, contributed to our disclaimer of opinion on the U.S. government's accrual basis consolidated financial statements for the fiscal years ended 2007 and 2006. Appendix I describes the material weaknesses that contributed to our disclaimer of opinion in more detail and highlights the primary effects of these material weaknesses on the accrual basis consolidated financial statements and on the management of federal government operations. The material weaknesses that contributed to our disclaimer of opinion were the federal government's inability to
satisfactorily determine that property, plant, and equipment and inventories and related property, primarily held by the Department of Defense (DOD), were properly reported in the consolidated financial statements;
implement effective credit reform estimation and related financial reporting processes at certain federal credit agencies;
reasonably estimate or adequately support amounts reported for certain liabilities, such as environmental and disposal liabilities, or determine whether commitments and contingencies were complete and properly reported;
support significant portions of the total net cost of operations, most notably related to DOD, and adequately reconcile disbursement activity at certain agencies;
adequately account for and reconcile intragovernmental activity and balances between federal agencies;
ensure that the federal government's consolidated financial statements were (1) consistent with the underlying audited agency financial statements, (2) properly balanced, and (3) in conformity with Generally Accepted Accounting Principles; and
identify and either resolve or explain material differences that exist between certain components of the budget deficit reported in Treasury's records, used to prepare the Reconciliation of Net Operating Cost and Unified Budget Deficit and Statement of Changes in Cash Balance from Unified Budget and Other Activities, and related amounts reported in federal agencies' financial statements and underlying financial information and records.
Due to the material weaknesses and the additional limitations on the scope of our work, as discussed in our audit report, there may also be additional issues that could affect the accrual basis consolidated financial statements that have not been identified. In addition to the material weaknesses that contributed to our disclaimer of opinion, which were discussed above, we found three other material weaknesses in internal control as of September 30, 2007. These weaknesses are discussed in more detail in appendix II, including the primary effects of the material weaknesses on the accrual basis consolidated financial statements and on the management of federal government operations.
These other material weaknesses were the federal government s inability to determine the full extent to which improper payments occur, identify and resolve information security control weaknesses and manage information security risks on an ongoing basis, and effectively manage its tax collection activities. Further, our audit report discusses certain significant deficiencies in internal control at the governmentwide level. These significant deficiencies involve the following areas: preparing the Statement of Social Insurance for certain programs, and monitoring and oversight regarding certain federal grants and entities that offer Medicare health plan options. Individual federal agency financial statement audit reports identify additional control deficiencies which were reported by agency auditors as material weaknesses or significant deficiencies at the individual agency level. We do not deem these additional control deficiencies to be material weaknesses at the governmentwide level. Regarding agencies internal controls, in December 2004, OMB revised OMB Circular No. A-123, Management s Responsibility for Internal Control, which became effective for fiscal year 2006. In fiscal year 2006, agencies began to implement the more rigorous requirements of the revised OMB Circular No. A-123, which include management identification, assessment, testing, correction, and documentation of internal controls over financial reporting for each account or group of accounts, as well as an annual assurance statement from the agency head as to whether internal control over financial reporting is effective. OMB recognized that due to the complexity of some agencies, implementation of these new requirements may span more than 1 year. Accordingly, certain agencies have adopted multiyear implementation plans. According to OMB s Federal Financial Management Report for 2007, 16 of the 24 CFO Act agencies have performed assessments required by OMB Circular No. A-123 for all key processes, while the remaining 8 CFO Act agencies are phasing in implementation of the requirements by testing a portion of the key processes and providing plans for testing the remaining processes within 3 years. Also, according to that report, to achieve its strategic goal of improving effectiveness of internal control over financial reporting, OMB has developed priority actions that include updating guidance, as necessary, based on lessons learned from agencies implementation of the circular. It will be important that OMB continue to monitor and oversee federal agencies implementation of these new requirements. <1.1. Addressing Major Impediments to an Opinion on the Accrual Basis Consolidated Financial Statements> Three major impediments to our ability to render an opinion on the U.S. government s accrual basis consolidated financial statements continued to be: (1) serious financial management problems at DOD, (2) the federal government s inability to adequately account for and reconcile intragovernmental activity and balances between federal agencies, and (3) the federal government s ineffective process for preparing the consolidated financial statements. Extensive efforts by DOD officials and cooperative efforts between agency chief financial officers, Treasury officials, and OMB officials will be needed to resolve these serious obstacles to achieving an opinion on the U.S. government s accrual basis consolidated financial statements. <1.1.1. 
Financial Management at DOD> Essential to further improving financial management governmentwide and ultimately to achieving an opinion on the U.S. government s consolidated financial statements is the resolution of serious weaknesses in DOD s business operations. DOD is one of the largest and most complex organizations in the world. Since the first financial statement audit of a major DOD component was attempted almost 20 years ago, we have reported that weaknesses in DOD s business operations, including financial management, not only adversely affect the reliability of reported financial data, but also the economy, efficiency, and effectiveness of its operations. DOD continues to dominate GAO s list of high-risk programs designated as vulnerable to waste, fraud, abuse, and mismanagement, bearing responsibility, in whole or in part, for 15 of 27 high-risk areas. Eight of these areas are specific to DOD and include DOD s overall approach to business transformation, as well as business systems modernization and financial management. Collectively, these high-risk areas relate to DOD s major business operations, including financial management, which directly support the warfighters, including their pay, the benefits provided to their families, and the availability and condition of equipment and supplies they use both on and off the battlefield. Successful transformation of DOD s financial management operations will require a multifaceted, cross-organizational approach that addresses the contribution and alignment of key elements, including sustained leadership, strategic plans, people, processes, and technology. Congress clearly recognized, in the National Defense Authorization Act for Fiscal Year 2008, the need for executive-level attention in ensuring that DOD was on a sustainable path toward achieving business transformation. This legislation codifies Chief Management Officer (CMO) responsibilities at a high level in the department assigning them to the Deputy Secretary of Defense and establishing a full-time Deputy CMO and designating CMO responsibilities within the military services. However, in less than a year, our government will undergo a change in administrations, which raises questions about the continuity of effort and the sustainability of the progress that DOD has made to date. As such, we believe the CMO position should be codified as a separate position from the Deputy Secretary of Defense in order to provide full-time attention to business transformation over the long term, subject to an extended term appointment. Because business transformation is a long-term and complex process, we have recommended a term of at least 5 to 7 years to provide sustained leadership and accountability. Importantly, DOD has taken steps toward developing and implementing a framework for addressing the department s long-standing financial management weaknesses and improving its capability to provide timely, reliable, and relevant financial information for analysis, decision making, and reporting, a key defense transformation priority. Specifically, this framework, which is discussed in both the department s Enterprise Transition Plan (ETP) and the Financial Improvement and Audit Readiness (FIAR) Plan, includes the department s Standard Financial Information Structure (SFIS) and Business Enterprise Information System (BEIS). 
DOD intends this framework to define and put into practice a standard DOD-wide financial management data structure as well as enterprise-level capabilities to facilitate reporting and comparison of financial data across the department. DOD's efforts to develop and implement SFIS and BEIS should help to improve the consistency and comparability of the department's financial information and reporting; however, a great deal of work remains before the financial management capabilities of DOD and its components' transformation efforts achieve financial visibility. Examples of work remaining include data cleansing; improvements to current policies, processes, procedures, and controls; and implementation of fully integrated systems. In 2007, DOD introduced refinements to its approach for achieving financial statement auditability. These refinements include the following:
Requesting audits of entire financial statements rather than attempting to build upon audits of individual financial statement line items.
Focusing on improvements in end-to-end business processes, or segments, that underlie the amounts reported on the financial statements.
Using audit readiness validations and annual verification reviews of segment improvements to help ensure sustainability of corrective actions and improvements.
Forming a working group to begin auditability risk assessments of financial systems at key decision points in their development and deployment life cycle to help ensure that the processes and internal controls support repeatable production of auditable financial statements.
We are encouraged by DOD's efforts and emphasize the necessity for consistent management oversight toward achieving financial management capabilities and reporting of meaningful and measurable transformation effort benchmarks and accomplishments. We will continue to monitor DOD's efforts to transform its business operations and address its financial management challenges as part of our continuing DOD business enterprise architecture and financial audit readiness oversight. <1.1.2. Intragovernmental Activity and Balances> Federal agencies are unable to adequately account for and reconcile intragovernmental activity and balances. OMB and Treasury require the chief financial officers (CFO) of 35 executive departments and agencies to reconcile, on a quarterly basis, selected intragovernmental activity and balances with their trading partners. In addition, these agencies are required to report to Treasury, the agency's inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts as of the end of each fiscal year. A substantial number of the agencies did not adequately perform the required reconciliations for fiscal years 2007 and 2006. For these fiscal years, based on trading partner information provided to Treasury via agencies' closing packages, Treasury produced a Material Difference Report for each agency showing amounts for certain intragovernmental activity and balances that significantly differed from those of its corresponding trading partners as of the end of the fiscal year. Based on our analysis of the Material Difference Reports for fiscal year 2007, we noted that a significant number of CFOs were unable to adequately explain the differences with their trading partners or did not provide adequate documentation to support their responses. The sketch below illustrates, in simplified form, the kind of trading partner comparison that underlies these difference reports.
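The example pairs each agency's reported intragovernmental balance with the reciprocal amount reported by its trading partner and flags gaps above a threshold. The agencies, account pairings, dollar amounts, and threshold are hypothetical and invented for illustration; the actual process works from agencies' audited closing package submissions and Treasury's own materiality criteria.

```python
# Simplified, hypothetical sketch of a trading partner comparison of the kind that
# underlies a Material Difference Report. All data below are invented for illustration.

reported = {
    # (reporting agency, trading partner, reciprocal account pair) -> balance ($ millions)
    ("Agency A", "Agency B", "receivable/payable"): 1_250.0,
    ("Agency B", "Agency A", "receivable/payable"): 1_100.0,
    ("Agency A", "Agency C", "transfers in/out"):     310.0,
    ("Agency C", "Agency A", "transfers in/out"):     310.0,
}

def material_differences(balances, threshold=100.0):
    """Return (agency, partner, account, gap) tuples where reciprocal amounts disagree."""
    flagged, seen = [], set()
    for (agency, partner, account), amount in balances.items():
        pair_key = (frozenset((agency, partner)), account)
        if pair_key in seen:   # compare each agency pair and account only once
            continue
        seen.add(pair_key)
        reciprocal = balances.get((partner, agency, account), 0.0)
        gap = amount - reciprocal
        if abs(gap) >= threshold:
            flagged.append((agency, partner, account, gap))
    return flagged

if __name__ == "__main__":
    for agency, partner, account, gap in material_differences(reported):
        print(f"{agency} vs. {partner} ({account}): unexplained difference of ${gap:,.1f} million")
```

Differences flagged in this way are what the CFOs are then expected to explain, for example as timing differences or accounting errors, and to document plans for resolving.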
For both fiscal years 2007 and 2006, amounts reported by federal agency trading partners for certain intragovernmental accounts were not in agreement by significant amounts. In addition, a significant number of CFOs cited differing accounting methodologies, accounting errors, and timing differences as causes of their material differences with their trading partners. Some CFOs simply indicated that they were unable to explain the differences with their trading partners, with no indication of when the differences would be resolved. As a result, the federal government's ability to determine the impact of these differences on the amounts reported in the accrual basis consolidated financial statements is significantly impaired. In 2006, OMB issued Memorandum No. M-07-03, Business Rules for Intragovernmental Transactions (Nov. 13, 2006), and Treasury issued the Treasury Financial Manual Bulletin No. 2007-03, Intragovernmental Business Rules (Nov. 15, 2006). This guidance added criteria for resolving intragovernmental disputes and major differences between trading partners for certain intragovernmental transactions and called for the establishment of an Intragovernmental Dispute Resolution Committee. OMB is currently working with the Chief Financial Officers Council to create the Intragovernmental Dispute Resolution Committee. Treasury is also taking steps to help resolve material differences in intragovernmental activity and balances. For example, Treasury is requiring federal agencies to provide a plan of action on how the agency is addressing certain of its unresolved material differences. Resolving the intragovernmental transactions problem remains a difficult challenge and will require a strong commitment by federal agencies to fully implement the recently issued business rules, as well as continued strong leadership by OMB and Treasury. <1.1.3. Preparing the Consolidated Financial Statements> Although further progress was demonstrated in fiscal year 2007, the federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles (GAAP). Treasury has shown progress by demonstrating that amounts in the Statement of Social Insurance were consistent with the underlying federal agencies' audited financial statements and that the Balance Sheet and the Statement of Net Cost were consistent with federal agencies' financial statements prior to eliminating intragovernmental activity and balances. However, Treasury's process for compiling the consolidated financial statements did not ensure that the information in the remaining three principal financial statements and notes was fully consistent with the underlying information in federal agencies' audited financial statements and other financial data. During fiscal year 2007, Treasury, in coordination with OMB, continued to develop and implement corrective action plans and milestones for short-term and long-range solutions for certain internal control weaknesses we have reported regarding the process for preparing the consolidated financial statements. Resolving some of these internal control weaknesses will be a difficult challenge and will require a strong commitment from Treasury and OMB as they execute and implement their corrective action plans. <2.
Federal Agencies Financial Management Systems> Under the Federal Financial Management Improvement Act of 1996 (FFMIA), as a part of the CFO Act agencies financial statement audits, auditors are required to report whether agencies financial management systems comply substantially with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger (SGL) at the transaction level. These factors, if implemented successfully, help provide a solid foundation for improving accountability over government operations and routinely producing sound cost and operating performance information. As shown in figure 1, 19 out of the 24 CFO Act agencies received an unqualified opinion on their financial statements in fiscal year 2007; however, 8 of these 19 agencies systems did not substantially comply with one or more of the three FFMIA requirements. This shows that irrespective of these unqualified clean opinions on the financial statements, many agencies still do not have reliable, useful and timely financial information with which to make informed decisions and ensure accountability on an ongoing basis. The modernization of federal financial management systems has been a long-standing challenge at many federal agencies. As shown in figure 1, auditors reported that 13 of the 24 CFO Act agencies systems did not substantially comply with one or more of the three FFMIA requirements for fiscal year 2007. This compares with 17 agencies for fiscal year 2006. Although the number of agencies reported as not substantially compliant has declined, the federal government s capacity to manage with timely and useful data remains limited, thereby hampering its ability to effectively administer and oversee its major programs. For fiscal year 2007, noncompliance with federal financial management systems requirements was the most frequently cited deficiency of the three FFMIA requirements. One of the federal financial management systems requirements is for agencies to have integrated financial management systems. Based on our review of the fiscal year 2007 audit reports, we identified the lack of integrated financial management systems to be one of the six problem areas for the 13 agency systems that are reported as not being substantially compliant with FFMIA. Figure 2 summarizes these six areas and the number of agencies with problems reported in each area. The lack of integrated financial management systems typically results in agencies expending major time, effort, and resources, including in some cases, hiring external consultants to develop information that their systems should be able to provide on a daily or recurring basis. In addition, nonintegrated systems are more prone to error which could result in information that is not reliable, useful, or timely. Figure 2 also shows that auditors for 11 CFO Act agencies had reported the lack of accurate and timely recording of financial information as a problem in fiscal year 2007. Accurate and timely recording of financial information is essential for effective financial management. Furthermore, the majority of participants at a recent Comptroller General s forum on improving financial management systems agreed that financial management systems are not able to provide, or provide little, information that is reliable, useful, and timely to assist management in their day-to-day decision making, which is the ultimate goal of FFMIA. 
Participants at the forum also discussed current financial management initiatives and the strategies for transformation of federal financial management. To reduce the cost and improve the outcome of federal financial management systems implementations, OMB continues to move forward on a key initiative, the financial management line of business (line of business), which leverages common standards and shared solutions. OMB anticipates that the line of business initiative will help achieve the goals of improving the cost, quality, and performance of financial management operations. OMB and the Financial Systems Integration Office have demonstrated continued progress toward implementation of the line of business initiative by issuing a common governmentwide accounting classification structure, a financial services assessment guide, and exposure drafts of certain standard business processes. However, as we previously recommended, OMB needs to continue defining standard business processes. A critical factor for success will be ensuring that agencies cannot continue developing and implementing their own stovepiped systems. Failure to do so may require additional work, increase the costs to adopt these standard business processes, and further delay the transformation of federal financial management systems. In a January 2008 memo, OMB recognized the risks associated with nonstandardized processes and updated its guidance on the line of business. Current plans are for the Financial Systems Integration Office to continue developing business standards, incorporate them into software requirements, and permit agencies and shared service providers to utilize only the certified products as configured. Along with these changes, continued high-priority and sustained top-level commitment by OMB and leaders throughout the federal government will be necessary to fully and effectively achieve the common goals of the line of business and FFMIA. <3. The Nation's Long-Term Fiscal Challenge> The nation's long-term fiscal challenge is a matter of utmost concern. The federal government faces large and growing structural deficits due primarily to rising health care costs and known demographic trends. There is a need to engage in a fundamental review of what the federal government does, how it does it, and how it is financed. Understanding and addressing the federal government's financial condition and the nation's long-term fiscal challenge are critical to maintaining fiscal flexibility so that policymakers can respond to current and emerging social, economic, and security challenges. While some progress has been made in recent years in addressing the federal government's short-term fiscal condition, the nation has not made progress on its long-term fiscal challenge. Moreover, even the reported short-term deficit is understated: it masks the fact that the federal government has been using the Social Security surplus to offset spending in the rest of government for many years. If the Social Security surplus is excluded, the on-budget deficit in fiscal year 2007 was more than double the size of the unified deficit. For example, Treasury reported a unified deficit of $163 billion and an on-budget deficit of $344 billion in fiscal year 2007. While the federal government's unified budget deficit has declined in recent years, its liabilities, contingencies and commitments, and social insurance responsibilities have increased. As of September 30, 2007, the U.S.
government reported in the 2007 Financial Report that it owed (i.e., liabilities) more than it owned (i.e., assets) by more than $9 trillion. Further, the Statement of Social Insurance in the Financial Report disclosed $41 trillion in social insurance responsibilities, including Medicare and Social Security, up more than $2 trillion from September 30, 2006. Information included in the Financial Report, such as the Statement of Social Insurance, along with long-term fiscal simulations and fiscal sustainability reporting, can help increase understanding of the federal government's long-term fiscal outlook. Over the next few decades, the nation's fiscal challenge will be shaped largely by rising health care costs and known demographic trends. As the baby boom generation retires, federal spending on retirement and health care programs (Social Security, Medicare, and Medicaid) will grow dramatically. The future costs of Social Security and Medicare commitments are reported in the Statement of Social Insurance in the Financial Report. We were able to render an unqualified opinion on the 2007 Statement of Social Insurance, a significant accomplishment for the federal government. The statement displays the present value of projected revenues and expenditures for scheduled benefits of social insurance programs. For Social Security and Medicare alone, projected expenditures for scheduled benefits exceed earmarked revenues (i.e., dedicated payroll taxes and premiums) by approximately $41 trillion over the next 75 years in present value terms. Stated differently, one would need approximately $41 trillion invested today to deliver on the currently promised benefits not covered by earmarked revenues for the next 75 years. Table 1 shows a simplified version of the Statement of Social Insurance by its primary components. Although these social insurance commitments dominate the long-term outlook, they are not the only federal programs or activities that bind the future. GAO developed the concept of fiscal exposures to provide a framework for considering the wide range of responsibilities, programs, and activities that may explicitly or implicitly expose the federal government to future spending. In addition to the social insurance commitments, the federal government's fiscal exposures include about $11 trillion in liabilities reported on the Balance Sheet and $1 trillion of other commitments and contingencies, as well as other potential exposures that cannot be quantified. So beyond dealing with Medicare and Social Security, policymakers need to look at other policies that limit the federal government's flexibility, not necessarily to eliminate all of them, but to at least be aware of them and make a conscious decision to reform them in a manner that will be responsible, equitable, and sustainable. Long-term fiscal simulations of future revenues and costs for all federal programs offer a comprehensive assessment of the federal government's long-term fiscal outlook. Since 1992, GAO has published long-term fiscal simulations of what might happen to federal deficits and debt levels under varying policy assumptions. GAO's simulations, which are neither forecasts nor predictions, continue to show ever-increasing long-term deficits resulting in a federal debt level that ultimately spirals out of control. The timing of deficits and the resulting debt buildup varies depending on the assumptions used. For example, figure 3 shows GAO's simulation of the deficit path based on recent trends and policy preferences.
In this simulation, we start with the Congressional Budget Office's (CBO) baseline and then assume that (1) all expiring tax provisions are extended through 2018 and then revenues are brought to their historical level as a share of gross domestic product (GDP) plus expected revenue from deferred taxes, (2) discretionary spending grows with the economy, and (3) no structural changes are made to Social Security, Medicare, or Medicaid. Over the long term, the nation's fiscal challenge stems primarily from rising health care costs and, to a lesser extent, the aging of the population. Absent significant changes on the spending side of the budget, the revenue side, or both, these long-term deficits will encumber a growing share of federal resources and test the capacity of current and future generations to afford both today's and tomorrow's commitments. Figure 4 looks behind the deficit path to the composition of federal spending. It shows that the estimated growth in the major entitlement programs leads to an unsustainable fiscal future. In this figure, the category "all other spending" includes much of what many think of as government: discretionary spending on such activities as national defense, homeland security, veterans' health benefits, national parks, highways and mass transit, and foreign aid, plus mandatory spending on the smaller entitlement programs such as Supplemental Security Income, Temporary Assistance for Needy Families, and farm price supports. The growth in Social Security, Medicare, Medicaid, and interest on debt held by the public dwarfs the growth in all other types of spending. A government that in one generation does nothing more than pay interest on its debt and mail checks to retirees and some of their health providers is unacceptable. The federal government's increased spending and rising deficits will drive a rising debt burden. At the end of fiscal year 2007, debt held by the public exceeded $5 trillion. Figure 5 shows that this growth in the federal government's debt cannot continue unabated without causing serious harm to the economy. In the last 200 years, only during and after World War II has debt held by the public exceeded 50 percent of GDP. But this is only part of the story. The federal government for years has been borrowing the surpluses in the Social Security trust funds and other similar funds and using them to finance federal government costs. When such borrowings occur, Treasury issues federal securities to these government funds that are backed by the full faith and credit of the U.S. government. Although borrowing by one part of the federal government from another may not have the same economic and financial implications as borrowing from the public, it represents a claim on future resources and hence a burden on future taxpayers and the future economy. If federal securities held by those funds are included, the federal government's total debt is much higher, about $9 trillion as of the end of fiscal year 2007. As shown in figure 6, total federal debt increased over each of the last four fiscal years. On September 29, 2007, the statutory debt limit had to be raised for the third time in 4 years in order to avoid being breached; between the end of fiscal year 2003 and the end of fiscal year 2007, the debt limit had to be increased by about one-third. It is anticipated that actions will need to be taken in fiscal year 2009 to avoid breaching the current statutory debt limit of $9,815 billion.
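The mechanics behind long-term simulations of this kind can be illustrated with a short, stylized calculation: project revenues and noninterest spending as shares of GDP, add interest on the accumulated debt, and let the debt ratio compound year over year. The sketch below is only a schematic of that logic; the starting ratio, growth rates, interest rate, and entitlement-cost drift are illustrative assumptions, not GAO's or CBO's actual model inputs (the revenue share is simply held near the historical average of about 18.3 percent of GDP cited later in this statement).

```python
def simulate_debt_path(years=50, debt_to_gdp=0.37, revenue_share=0.183,
                       noninterest_spending=0.19, entitlement_growth=0.0015,
                       interest_rate=0.05, gdp_growth=0.045):
    """Project debt held by the public as a share of GDP under fixed policy assumptions.

    All parameter values are illustrative; they are not official projections.
    """
    path = [debt_to_gdp]
    for year in range(1, years + 1):
        spending_share = noninterest_spending + entitlement_growth * year  # rising entitlement costs
        primary_deficit = spending_share - revenue_share                   # deficit before interest
        # Standard debt dynamics: last year's ratio compounds at the interest rate,
        # is diluted by GDP growth, and grows by this year's primary deficit.
        debt_to_gdp = debt_to_gdp * (1 + interest_rate) / (1 + gdp_growth) + primary_deficit
        path.append(debt_to_gdp)
    return path

if __name__ == "__main__":
    for year, ratio in enumerate(simulate_debt_path()):
        if year % 10 == 0:
            print(f"Year {year:2d}: debt held by the public at {ratio:.0%} of GDP")
```

Even with these modest illustrative assumptions, the combination of a persistent primary deficit and an interest rate that exceeds economic growth produces the kind of escalating debt path that figures 3 through 5 depict.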
A quantitative measure of the long-term fiscal challenge is the fiscal gap. The fiscal gap is the amount of spending reduction or tax increase that would be needed today to keep debt as a share of GDP at or below today's ratio. The fiscal gap is an estimate of the action needed to achieve fiscal balance over a certain time period, such as 75 years. Put another way, the fiscal gap is the amount of change needed to prevent the kind of debt explosion implicit in figure 5. The fiscal gap can be expressed as a share of the economy or in present value dollars. Under GAO's alternative simulation, closing the fiscal gap would require spending cuts or tax increases equal to 6.7 percent of the entire economy over the next 75 years, or about $54 trillion in present value terms. To put this in perspective, closing the gap would require an increase in today's federal tax revenues of about 36 percent or an equivalent reduction in today's federal program spending (i.e., in all spending except for interest on the debt held by the public, which cannot be directly controlled), maintained over the entire period; 6.7 percent of GDP is roughly 36 percent of federal revenues at their historical level of about 18.3 percent of GDP. Policymakers could phase in the policy changes so that the tax increases or spending cuts would grow over time and allow people to adjust. The size of these annual tax increases and spending cuts would be more than five times the fiscal year 2007 deficit of 1.2 percent of GDP. Delaying action would make future adjustments even larger. Under our alternative simulation, waiting even 10 years would require a revenue increase of about 45 percent or noninterest spending cuts of about 40 percent. This gap is too large for the federal government to grow its way out of the problem. To be sure, additional economic growth would certainly help the federal government's financial condition and ability to address the fiscal gap, but it will not eliminate the need for action. Understanding and addressing the federal government's financial condition and the nation's long-term fiscal challenge are critical to the nation's future. As we reported in December 2007, several countries have begun preparing fiscal sustainability reports to help assess the implications of their public pension and health care programs and other challenges in the context of the overall sustainability of government finances. European Union members also annually report on longer-term fiscal sustainability. The goal of these reports is to increase public awareness and understanding of the long-term fiscal outlook in light of escalating health care cost growth and population aging, to stimulate public and policy debates, and to help policymakers make more informed decisions. These countries used a variety of measures, including projections of future revenue and spending and summary measures of fiscal imbalance and fiscal gaps, to assess fiscal sustainability. Last year, we recommended that the United States prepare and publish a long-range fiscal sustainability report. I am pleased to note that the Federal Accounting Standards Advisory Board (FASAB) will soon issue a draft of a proposed standard on fiscal sustainability reporting. Here in the first half of 2008, the long-term fiscal challenge is not in the distant future. In fact, the oldest members of the baby boom generation are now eligible for Social Security retirement benefits and will be eligible for Medicare benefits in less than 3 years.
The budget and economic implications of the baby boom generation's retirement have already become a factor in CBO's 10-year budget projections, and that impact will only intensify as the baby boomers age. The financial markets also are noticing. Earlier this year, Moody's Investors Service issued its annual report on the United States. In that report, it noted that absent Medicare and Social Security reforms, the long-term fiscal health of the United States and the federal government's current Aaa sovereign credit rating were at risk. Likewise, Standard and Poor's noted in a recent report that Medicare and Social Security reform is necessary to prevent a much worse long-term fiscal deterioration. These comments serve to note the significant longer-term interest rate risk that the federal government faces absent meaningful action to address these long-range challenges. Higher longer-term interest costs would only serve to complicate the nation's fiscal, economic, and other challenges in future years. At some point, action will need to be taken to change the nation's fiscal course. The sooner appropriate actions are taken, the sooner the miracle of compounding will begin to work for the federal budget rather than against it. Conversely, the longer that action to deal with the nation's long-term fiscal outlook is delayed, the greater the risk that the eventual changes will be disruptive and destabilizing and that future generations will have to bear a greater burden of the cost. Simply put, the federal government is on an imprudent and unsustainable long-term fiscal path that is getting worse with the passage of time. <3.1. A Possible Way Forward> Meeting this long-term fiscal challenge overarches everything. It is the nation's largest sustainability challenge, but it is not the only one. Aligning the federal government to meet the challenges and capitalize on the opportunities of the 21st century will require a fundamental review of what the federal government does, how it does it, and how it is financed. In addressing the growing costs of the major entitlement programs and reexamining other major programs, policies, and activities, attention should be paid to both the spending and the revenue sides of the budget. Programs that run through the tax code, sometimes referred to as tax expenditures, must be reexamined along with those that run through the spending side. Moving forward, the federal government needs to start making tough choices in setting priorities and linking resources and activities to results. Meeting the nation's long-term fiscal challenge will require a multipronged approach: bringing people together to tackle health care, Social Security, and the tax system; strengthening oversight of programs and activities, including creating approaches to better facilitate the discussion of integrated solutions to crosscutting issues; and reengineering and reprioritizing the federal government's existing programs, policies, and activities to address 21st century challenges and capitalize on related opportunities. Regarding the tax system, although tax reform may need to play a role in meeting our challenges, any system will need to include design features and reasonable service and enforcement efforts to maximize compliance.
Under the current system, the tax gap (the difference between the tax amounts taxpayers pay voluntarily and on time and what they should pay under the law) contributes to the nation's long-term fiscal challenges and can undermine compliance if those who comply see their friends, neighbors, and business competitors avoiding their tax obligations. According to the latest Internal Revenue Service (IRS) estimates, for tax year 2001 the federal government fell $345 billion short of collecting all of the taxes owed before voluntary late payments and IRS enforcement actions are counted, and $290 billion short afterwards. Although the extent to which we can reduce the tax gap is unknown, meaningful reductions can contribute resources to dealing with our long-term challenges. There are also some process changes that might help the discussion by increasing the transparency and relevancy of key financial, performance, and budget reporting and of estimates that highlight the fiscal challenge. Stronger budget controls for both spending and tax policies to deal with both near-term and longer-term deficits may also be helpful. In summary, to effectively address the nation's long-term fiscal challenge, tackling health care cost growth and other existing entitlement programs will be essential. However, entitlement reform alone will not get the job done. The federal government also needs to reprioritize and constrain other spending and consider whether revenues at the historical average of 18.3 percent of GDP will be sufficient, a consideration that may involve discussion of the tax system. I am pleased that GAO has been able to offer you specific analysis and tools to assist you in this important work. However, only elected officials can and should decide which issues to address as well as how and when to address them. Addressing these problems will require tough choices, and the fiscal clock is ticking. <4. The Federal Financial Reporting Model> The Financial Report provides useful information on the government's financial position at the end of the fiscal year and the changes that have occurred over the course of the year. However, in evaluating the nation's fiscal condition, it is critical to look beyond the short-term results and consider the overall long-term financial condition and long-term fiscal challenge of the government, that is, the sustainability of the federal government's programs, commitments, and responsibilities in relation to the resources expected to be available. The current federal financial reporting model does not clearly, comprehensively, and transparently show the wide range of responsibilities, programs, and activities that may either obligate the federal government to future spending or create an expectation for such spending. Thus, it does not provide the best possible picture of the federal government's overall performance, financial condition, and future fiscal outlook. Accounting and financial reporting standards have continued to evolve to provide adequate transparency and accountability over the federal government's operations, financial condition, and fiscal outlook. However, after 11 years of reporting at the governmentwide level, it is appropriate to consider the need for further revisions to the current federal financial reporting model, which could affect both consolidated and agency reporting.
While the current reporting model recognizes some of the unique needs of the federal government, a broad reconsideration of the federal financial reporting model could address the following types of questions: What kind of information is most relevant and useful for a sovereign nation? Do traditional financial statements convey information in a transparent manner? What is the role of the balance sheet in the federal government reporting model? How should items that are unique to the federal government, such as social insurance commitments and the power to tax, be reported? In addition, further enhancements to accounting and financial reporting standards are needed to effectively convey the long-term financial condition of the U.S. government and annual changes therein. For example, the federal government s financial reporting should be expanded to disclose the reasons for significant changes during the year in scheduled social insurance benefits and funding. It should also include (1) a Statement of Fiscal Sustainability that provides a long-term look at the sustainability of social insurance programs in the context of all federal programs, and (2) other sustainability information, including intergenerational equity. The Federal Accounting Standards Advisory Board is currently considering possible changes to social insurance reporting and has initiated a project on fiscal sustainability reporting. Engaging in a reevaluation of the federal financial reporting model could stimulate discussion that would bring about a new way of thinking about the federal government s financial and performance reporting needs. To understand various perceptions and needs of the stakeholders for federal financial reporting, a wide variety of stakeholders from the public and private sector should be consulted. Ultimately, the goal of such a reevaluation would be reporting enhancements that can help the Congress deliberate on strategies to address the federal government s challenges, including its long-term fiscal challenge. <5. Closing Comments> In closing, it is important that the progress that has been made in improving federal financial management activities and practices be sustained by the current administration as well as the new administration that will be taking office next year. Across government, financial management improvement initiatives are underway, and if effectively implemented, they have the potential to greatly improve the quality of financial management information as well as the efficiency and effectiveness of agency operations. However, the federal government still has a long way to go before realizing strong federal financial management. For DOD, the challenges are many. We are encouraged by DOD s efforts toward addressing its long-standing financial management weaknesses, but consistent and diligent management oversight toward achieving financial management capabilities, including audit readiness is needed. Federal agencies need to improve the government s financial management systems. The civilian CFO Act agencies must continue to strive toward routinely producing not only annual financial statements that can pass the scrutiny of a financial audit, but also quarterly financial statements and other meaningful financial and performance data to help guide decision makers on a day-to-day basis. Addressing the nation s long-term fiscal challenge constitutes a major transformational challenge that may take a generation or more to resolve. 
GAO is committed to sustained attention to this fiscal challenge to help ensure that this is not the first generation to leave its children and grandchildren a legacy of failed fiscal stewardship and the hardships that would bring. Given the size of the projected deficit, the leadership and efforts of many people will be needed to put the nation on a more prudent and sustainable longer-term fiscal path. Given the federal government s current financial condition and the nation s long-term fiscal challenge, the need for the Congress and federal policymakers and management to have reliable, useful, and timely financial and performance information is greater than ever. Sound decisions on the current and future direction of vital federal government programs and policies are more difficult without such information. We will continue to stress the need for development of more meaningful financial and performance reporting on the federal government. Until the problems discussed in this testimony are effectively addressed, they will continue to have adverse implications for the federal government and the taxpayers. Finally, I want to emphasize the value of sustained congressional interest in these issues. It will be key that, going forward, the appropriations, budget, authorizing, and oversight committees hold agency top leadership accountable for resolving the remaining problems and that they support improvement efforts. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. <6. GAO Contacts and Acknowledgments> For further information regarding this testimony, please contact McCoy Williams, Managing Director; and Gary Engel, Director; Financial Management and Assurance at (202) 512-2600, as well as Susan Irving, Director; Federal Budget Analysis, Strategic Issues at (202) 512-9142. Key contributions to this testimony were also made by staff on the Consolidated Financial Statement audit team. Appendix I: Material Weaknesses Contributing to Our Disclaimer of Opinion on the Accrual Basis Consolidated Financial Statements The continuing material weaknesses discussed below contributed to our disclaimer of opinion on the federal government s accrual basis consolidated financial statements. The federal government did not maintain adequate systems or have sufficient reliable evidence to support information reported in the accrual basis consolidated financial statements, as described below. <7. Property, Plant, and Equipment and Inventories and Related Property> The federal government could not satisfactorily determine that property, plant, and equipment (PP&E) and inventories and related property were properly reported in the consolidated financial statements. Most of the PP&E and inventories and related property are the responsibility of the Department of Defense (DOD). As in past years, DOD did not maintain adequate systems or have sufficient records to provide reliable information on these assets. Other agencies, most notably the National Aeronautics and Space Administration, reported continued weaknesses in internal control procedures and processes related to PP&E. 
Without reliable asset information, the federal government does not fully know the assets it owns and their location and condition and cannot effectively (1) safeguard assets from physical deterioration, theft, or loss; (2) account for acquisitions and disposals of such assets; (3) ensure that the assets are available for use when needed; (4) prevent unnecessary storage and maintenance costs or purchase of assets already on hand; and (5) determine the full costs of programs that use these assets. <8. Loans Receivable and Loan Guarantee Liabilities> Federal agencies that account for the majority of the reported balances for direct loans and loan guarantee liabilities continue to have internal control weaknesses related to their credit reform estimation and related financial reporting processes. While progress in addressing these long-standing weaknesses was reported by certain federal credit agencies, certain deficiencies in the Department of Agriculture s credit reform processes contributed to its auditor being unable to obtain sufficient, appropriate evidence to support related accounts. As such, for fiscal year 2007, we have added this area to the list of material weaknesses contributing to our disclaimer of opinion on the accrual basis consolidated financial statements. These issues and the complexities associated with estimating the costs of lending activities significantly increase the risk that material misstatements in agency and governmentwide financial statements could occur and go undetected. Moreover, these weaknesses continue to adversely affect the federal government s ability to support annual budget requests for federal lending programs, make future budgetary decisions, manage program costs, and measure the performance of lending activities. <9. Liabilities and Commitments and Contingencies> The federal government could not reasonably estimate or adequately support amounts reported for certain liabilities. For example, DOD was not able to estimate with assurance key components of its environmental and disposal liabilities. In the past, DOD could not support a significant amount of its estimated military postretirement health benefits liabilities included in federal employee and veteran benefits payable. These unsupported amounts related to the cost of direct health care provided by DOD-managed military treatment facilities. This year, the auditor s report on the financial statements that include the estimated military postretirement health benefits liabilities had not been issued as of the date of our audit report. Further, the federal government could not determine whether commitments and contingencies, including those related to treaties and other international agreements entered into to further the U.S. government s interests, were complete and properly reported. Problems in accounting for liabilities affect the determination of the full cost of the federal government s current operations and the extent of its liabilities. Also, weaknesses in internal control supporting the process for estimating environmental and disposal liabilities could result in improperly stated liabilities as well as affect the federal government s ability to determine priorities for cleanup and disposal activities and to appropriately consider future budgetary resources needed to carry out these activities. In addition, if disclosures of commitments and contingencies are incomplete or incorrect, reliable information is not available about the extent of the federal government s obligations. <10. 
Cost of Government Operations and Disbursement Activity> The previously discussed material weaknesses in reporting assets and liabilities, material weaknesses in financial statement preparation, as discussed below, and the lack of adequate disbursement reconciliations at certain federal agencies affect reported net costs. As a result, the federal government was unable to support significant portions of the total net cost of operations, most notably related to DOD. With respect to disbursements, DOD and certain other federal agencies reported continued weaknesses in reconciling disbursement activity. For fiscal years 2007 and 2006, there was unreconciled disbursement activity, including unreconciled differences between federal agencies and Treasury s records of disbursements and unsupported federal agency adjustments, totaling billions of dollars, which could also affect the balance sheet. Unreliable cost information affects the federal government s ability to control and reduce costs, assess performance, evaluate programs, and set fees to recover costs where required. If disbursements are improperly recorded, this could result in misstatements in the financial statements and in certain data provided by federal agencies for inclusion in The Budget of the United States Government (hereafter referred to as the President s Budget ) concerning obligations and outlays. <11. Accounting for and Reconciliation of Intragovernmental Activity and Balances> Federal agencies are unable to adequately account for and reconcile intragovernmental activity and balances. OMB and Treasury require the chief financial officers (CFO) of 35 executive departments and agencies to reconcile, on a quarterly basis, selected intragovernmental activity and balances with their trading partners. In addition, these agencies are required to report to Treasury, the agency s inspector general, and GAO on the extent and results of intragovernmental activity and balances reconciliation efforts as of the end of the fiscal year. A substantial number of the agencies did not adequately perform the required reconciliations for fiscal years 2007 and 2006. For these fiscal years, based on trading partner information provided to Treasury via agencies closing packages, Treasury produced a Material Difference Report for each agency showing amounts for certain intragovernmental activity and balances that significantly differed from those of its corresponding trading partners as of the end of the fiscal year. Based on our analysis of the Material Difference Reports for fiscal year 2007, we noted that a significant number of CFOs were unable to adequately explain the differences with their trading partners or did not provide adequate documentation to support responses. For both fiscal years 2007 and 2006, amounts reported by federal agency trading partners for certain intragovernmental accounts were not in agreement by significant amounts. In addition, a significant number of CFOs cited differing accounting methodologies, accounting errors, and timing differences for their material differences with their trading partners. Some CFOs simply indicated that they were unable to explain the differences with their trading partners with no indication when the differences will be resolved. As a result of the above, the federal government s ability to determine the impact of these differences on the amounts reported in the accrual basis consolidated financial statements is significantly impaired. <12. 
<12. Preparation of Consolidated Financial Statements>
While further progress was demonstrated in fiscal year 2007, the federal government continued to have inadequate systems, controls, and procedures to ensure that the consolidated financial statements are consistent with the underlying audited agency financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles (GAAP). In addition, as discussed in the scope limitation section of our audit report, Treasury could not provide the final fiscal year 2007 accrual basis consolidated financial statements and adequate supporting documentation in time for us to complete all of our planned auditing procedures. During our fiscal year 2007 audit, we found the following:

Treasury has shown progress by demonstrating that amounts in the Statement of Social Insurance were consistent with the underlying federal agencies' audited financial statements and that the Balance Sheet and the Statement of Net Cost were consistent with federal agencies' financial statements prior to eliminating intragovernmental activity and balances. However, Treasury's process for compiling the consolidated financial statements did not ensure that the information in the remaining three principal financial statements and notes was fully consistent with the underlying information in federal agencies' audited financial statements and other financial data.

At the federal agency level, for fiscal year 2007, auditors for many of the CFO Act agencies reported material weaknesses or other significant deficiencies regarding agencies' financial reporting processes, which, in turn, could affect the preparation of the consolidated financial statements. For example, auditors for several agencies reported that a significant number of adjustments were required to prepare the agencies' financial statements. These and other auditors are also required to separately audit financial information sent by the federal agencies to Treasury via a closing package. In connection with preparing the consolidated financial statements, Treasury had to create adjustments to correct significant errors found in agencies' audited closing package information.

To make the fiscal years 2007 and 2006 consolidated financial statements balance, Treasury recorded net decreases of $6.7 billion and $11 billion, respectively, to net operating cost on the Statement of Operations and Changes in Net Position, which it labeled "Other Unmatched transactions and balances." An additional net $2.5 billion and $10.4 billion of unmatched transactions were recorded in the Statement of Net Cost for fiscal years 2007 and 2006, respectively. Treasury is unable to fully identify and quantify all components of these unreconciled activities.

The federal government could not demonstrate that it had fully identified and reported all items needed to reconcile the operating results, which for fiscal year 2007 showed a net operating cost of $275.5 billion, to the budget results, which for the same period showed a unified budget deficit of $162.8 billion.

Treasury's elimination of certain intragovernmental activity and balances continues to be impaired by the federal agencies' problems in handling their intragovernmental transactions. As previously discussed, amounts reported by federal agency trading partners for certain intragovernmental accounts differed by significant amounts.
This resulted in the need for intragovernmental elimination entries by Treasury that recorded the net differences between trading partners as "Other Unmatched transactions and balances" in order to force the Statements of Operations and Changes in Net Position into balance. In addition, differences in other intragovernmental accounts, primarily related to transactions with the General Fund, remain unreconciled and unresolved and total hundreds of billions of dollars. Therefore, the federal government continues to be unable to determine the impact of unreconciled intragovernmental activity and balances on the accrual basis consolidated financial statements.

We have consistently reported that certain financial information required by GAAP was not disclosed in the consolidated financial statements. In 2006, the Federal Accounting Standards Advisory Board issued a new standard that eliminated or lessened the disclosure requirements for the consolidated financial statements related to certain information that Treasury had not been reporting. While Treasury made progress in addressing some of the remaining omitted information, there continue to be disclosures required by GAAP that are excluded from the consolidated financial statements. Also, certain material weaknesses noted in this report, for example, those related to commitments and contingencies associated with treaties and other international agreements, preclude Treasury from determining whether a disclosure is required by GAAP in the consolidated financial statements and preclude us from determining whether the omitted information is material. Further, Treasury's ability to report information in accordance with GAAP will also remain impaired until federal agencies, such as DOD, can provide Treasury with the complete and reliable information required to be reported in the consolidated financial statements.

Other internal control weaknesses existed in Treasury's process for preparing the consolidated financial statements, involving inadequate or ineffective (1) documentation of certain policies and procedures; (2) management reviews of adjustments and key iterations of the financial statements, notes, and management's discussion and analysis provided to GAO for audit; (3) supporting documentation for certain adjustments made to the consolidated financial statements; (4) processes for monitoring the preparation of the consolidated financial statements; and (5) spreadsheet controls.

The consolidated financial statements include financial information for the executive, legislative, and judicial branches, to the extent that federal agencies within those branches have provided Treasury such information. However, as we have reported in past years, there continue to be undetermined amounts of assets, liabilities, costs, and revenues that are not included, and the federal government did not provide evidence or disclose in the consolidated financial statements that the excluded financial information was immaterial. As in previous years, Treasury did not have adequate systems and personnel to address the magnitude of the fiscal year 2007 financial reporting challenges it faced, such as the weaknesses in Treasury's process for preparing the consolidated financial statements noted above.
We found that personnel at Treasury's Financial Management Service had excessive workloads that required an extraordinary amount of effort and dedication to compile the consolidated financial statements; however, there were not enough personnel with specialized financial reporting experience to help ensure reliable financial reporting by the reporting date. In addition, the federal government does not perform quarterly compilations at the governmentwide level, which leads to almost all of the compilation effort being performed during a condensed time period at the end of the year.

<13. Components of the Budget Deficit>
Both the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities report a budget deficit for fiscal years 2007 and 2006 of $162.8 billion and $247.7 billion, respectively. The budget deficit is the amount by which actual budget outlays (outlays) exceed actual budget receipts (receipts). For several years, we have been reporting material unreconciled differences between the total net outlays reported in selected federal agencies' Statements of Budgetary Resources (SBR) and Treasury's central accounting records used to compute the budget deficit reported in the consolidated financial statements. OMB and Treasury have continued to work with federal agencies to reduce these material unreconciled differences. However, billions of dollars of differences still exist in this and other components of the deficit because the federal government does not have effective processes and procedures for identifying, resolving, and explaining material differences in the components of the deficit between Treasury's central accounting records and information reported in agency financial statements and underlying agency financial information and records. Until the federal government reconciles these differences in a timely manner, their effect on the U.S. government's consolidated financial statements will be unknown. In fiscal year 2007, we again noted that several agencies' auditors reported internal control weaknesses (1) affecting the agencies' SBRs and (2) relating to monitoring, accounting, and reporting of budgetary transactions. These weaknesses could affect the reporting and calculation of the net outlay amounts in the agencies' SBRs. In addition, such weaknesses also affect the agencies' ability to report reliable budgetary information to Treasury and OMB and may affect the unified budget outlays reported by Treasury in its Combined Statement of Receipts, Outlays, and Balances, and certain amounts reported in the President's Budget.

Appendix II: Other Material Weaknesses
The federal government did not maintain effective internal control over financial reporting (including safeguarding assets) and compliance with significant laws and regulations as of September 30, 2007. In addition to the material weaknesses discussed in appendix I that contributed to our disclaimer of opinion on the accrual basis consolidated financial statements, we found the following three other material weaknesses in internal control.

<14. Improper Payments>
Although showing progress under OMB's continuing leadership, agencies' fiscal year 2007 reporting under the Improper Payments Information Act of 2002 (IPIA) does not reflect the full scope of improper payments. For fiscal year 2007, federal agencies' estimates of improper payments, based on available information, totaled about $55 billion.
The increase from the prior year estimate of $41 billion was primarily attributable to a component of the Medicaid program reporting improper payments for the first time, totaling about $13 billion for fiscal year 2007, which we view as a positive step toward improving transparency over the full magnitude of improper payments. Major challenges remain in meeting the goals of the act and ultimately better ensuring the integrity of payments. For fiscal year 2007, four agency auditors reported noncompliance issues with IPIA related to agencies' risk assessments, sampling methodologies, implementation of corrective action plans, and recovery of improper payments. We also identified issues with agencies' risk assessments, such as not completing risk assessments of all programs and activities or not conducting annual reviews of any programs and activities. OMB's current guidance allows risk assessments to be conducted less often than annually (generally every 3 years) for programs where baselines are already established, are in the process of being measured, or are scheduled to be measured by an established date. For fiscal year 2007, we noted that 4 agencies were implementing a 3-year cycle for conducting risk assessments. Furthermore, selected agencies have not reported improper payment estimates for 14 risk-susceptible federal programs with total program outlays of about $170 billion for fiscal year 2007. Lastly, we found that major management challenges and internal control weaknesses continue to plague agency operations and programs susceptible to significant improper payments. For example, in the Department of Education's fiscal year 2007 Performance and Accountability Report, the Office of Inspector General reported that its recent investigations continue to uncover problems, including inadequate attention to improper payments and failure to identify and take corrective action to detect and prevent fraudulent activities by grantees.

<15. Information Security>
Although progress has been made, serious and widespread information security control weaknesses continue to place federal assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. GAO has reported information security as a high-risk area across government since February 1997. During fiscal year 2007, federal agencies did not consistently implement effective controls to prevent, limit, or detect unauthorized access to computing resources. Specifically, agencies did not always (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) apply encryption to protect sensitive data on networks and portable devices; (4) log, audit, and monitor security-relevant events; and (5) restrict physical access to information assets. In addition, agencies did not consistently configure network devices and services to prevent unauthorized access and ensure system integrity, such as by patching key servers and workstations in a timely manner; assign incompatible duties to different individuals or groups so that no one individual controls all aspects of a process or transaction; and maintain or test continuity of operations plans for key information systems.
Such information security control weaknesses unnecessarily increase the risk that the reliability and availability of data that are recorded in or transmitted by federal financial management systems could be compromised. A primary reason for these weaknesses is that federal agencies have not yet fully institutionalized comprehensive security management programs, which are critical to identifying information security control weaknesses, resolving information security problems, and managing information security risks on an ongoing basis. The administration has taken important actions to improve information security, such as issuing extensive guidance on information security and requiring agencies to perform specific actions to protect certain personally identifiable information. However, until agencies effectively and fully implement agencywide information security programs, federal data and systems, including financial information, will remain at risk.

<16. Tax Collection Activities>
During fiscal year 2007, material internal control weaknesses and systems deficiencies continued to affect the federal government's ability to effectively manage its tax collection activities, an issue that has been reported in our financial statement audit reports for the past 10 years. Due to errors and delays in recording taxpayer information, payments, and other activities, taxpayers were not always credited for payments made on their taxes owed, which could result in undue taxpayer burden. In addition, the federal government did not always follow up on potential unreported or underreported taxes and did not always pursue collection efforts against taxpayers owing taxes to the federal government. Moreover, the federal government did not have cost-benefit information, related cost-based performance measures, or a systematic process for ensuring that it is using its resources to maximize its ability to collect what is owed and minimize the disbursement of improper tax refunds. As a result, the federal government is vulnerable to loss of tax revenue and exposed to potentially billions of dollars in losses due to inappropriate refund disbursements.
Why GAO Did This Study The Congress and the President need to have reliable, useful and timely financial and performance information to make sound decisions on the current and future direction of vital federal government programs and policies. Unfortunately, except for the 2007 Statement of Social Insurance, GAO was again unable to provide assurance on the reliability of the consolidated financial statements of the U.S. government (CFS) due primarily to certain material weaknesses in the federal government's internal control. GAO has reported that unless these weaknesses are adequately addressed, they will, among other things, (1) hamper the federal government's ability to reliably report a significant portion of its assets, liabilities, costs, and other related information; and (2) affect the federal government's ability to reliably measure the full cost as well as the financial and nonfinancial performance of certain programs and activities. This testimony presents the results of GAO's audit of the CFS for fiscal year 2007 and discusses the federal government's long-term fiscal outlook. What GAO Found For the 11th consecutive year, three major impediments prevented GAO from rendering an opinion on the federal government's accrual basis consolidated financial statements: (1) serious financial management problems at the Department of Defense, (2) the federal government's inability to adequately account for and reconcile intragovernmental activity and balances between federal agencies, and (3) the federal government's ineffective process for preparing the consolidated financial statements. In addition, financial management system problems continue to hinder federal agency accountability. Although the federal government still has a long way to go, significant progress has been made in improving federal financial management. For example, audit results for many federal agencies have improved and federal financial system requirements have been developed. In addition, GAO was able to render an unqualified opinion on the 2007 Statement of Social Insurance. Further, for the first time, the federal government issued a summary financial report which is intended to make the information in the Financial Report of the U.S. Government (Financial Report) more accessible and understandable to a broader audience. It is important that this progress be sustained by the current administration as well as the new administration that will be taking office next year and that the Congress continues its oversight to bring about needed improvements to federal financial management. Given the federal government's current financial condition and the nation's long-term fiscal challenge, the need for the Congress and federal policymakers and management to have reliable, useful, and timely financial and performance information is greater than ever. Information included in the Financial Report, such as the Statement of Social Insurance along with long-term fiscal simulations and fiscal sustainability reporting, can help increase understanding of the nation's long-term fiscal outlook. The nation's long-term fiscal challenge is a matter of utmost concern. The federal government faces large and growing structural deficits due primarily to rising health care costs and known demographic trends. Simply put, the federal government is on an imprudent and unsustainable long-term fiscal path. Addressing this challenge will require a multipronged approach. 
Moreover, the longer that action is delayed, the greater the risk that the eventual changes will be disruptive and destabilizing. Finally, the federal government should consider the need for further revisions to the current federal financial reporting model to recognize the unique needs of the federal government. A broad reconsideration of issues, such as the kind of information that may be relevant and useful for a sovereign nation, could lead to reporting enhancements that might help provide the Congress and the President with more useful financial information to deliberate strategies to address the nation's long-term fiscal challenge.
<1. USPS's Financial Condition>
USPS faces a dire financial situation and does not have sufficient revenues to cover its expenses, putting its mission of providing prompt, reliable, and efficient universal services to the public at risk. USPS continues to incur operating deficits that are unsustainable, has not made required payments of $11.1 billion to prefund retiree health benefit liabilities, and has reached its $15 billion borrowing limit. Moreover, USPS lacks liquidity to maintain its financial solvency or finance needed capital investment. As presented in table 1, since fiscal year 2006, USPS has achieved about $15 billion in savings and reduced its workforce by about 168,000, while also experiencing a 25 percent decline in total mail volume and net losses totaling $40 billion. As a result of significant declines in volume and revenue, USPS reported that it took unprecedented actions to reduce its costs by $6.1 billion in fiscal year 2009. Also in 2009, a cash shortfall necessitated congressional action to reduce USPS's mandated payment to prefund retiree health benefits from $5.4 billion to $1.4 billion. In 2011, USPS's $5.5 billion required retiree health benefit payment was delayed until August 1, 2012. USPS missed that payment as well as the $5.6 billion payment that was due by September 30, 2012.

USPS continues to face significant decreases in mail volume and revenues as online communication and e-commerce expand. While remaining among USPS's most profitable products, both First-Class Mail and Standard Mail volumes have declined in recent years, as illustrated in figure 1. First-Class Mail, which is highly profitable and generates the majority of the revenues used to cover overhead costs, has declined 33 percent since it peaked in fiscal year 2001, and USPS projects a continued decline through fiscal year 2020. Standard Mail (primarily advertising) has declined 23 percent since it peaked in fiscal year 2007, and USPS projects that it will remain roughly flat through fiscal year 2020. Standard Mail is profitable overall, but it takes about three pieces of Standard Mail, on average, to equal the profit from the average piece of First-Class Mail. First-Class Mail and Standard Mail also face competition from electronic alternatives, as many businesses and consumers have moved to electronic payments over the past decade in lieu of using the mail to pay bills. For the first time, in 2010, fewer than 50 percent of all bills were paid by mail.

In addition to lost mail volume and revenue, USPS has incurred financial liabilities that totaled $96 billion at the end of fiscal year 2012, including unfunded pension and retiree health benefit liabilities. Table 2 shows the amounts of these liabilities over the last 6 fiscal years. One of these liabilities, USPS's debt to the U.S. Treasury, increased over this period from $4 billion to its statutory limit of $15 billion. Thus, USPS can no longer borrow to maintain its financial solvency or finance needed capital investment. USPS continues to incur unsustainable operating deficits; in this regard, the USPS Board of Governors recently directed postal management to accelerate restructuring efforts to achieve greater savings. These selected USPS liabilities increased from 83 percent of revenues in fiscal year 2007 to 147 percent of revenues in fiscal year 2012, as illustrated in figure 2. This trend demonstrates how USPS liabilities have become a large and growing financial burden.
USPS's dire financial condition makes paying for these liabilities highly challenging. In addition to reaching its limit in borrowing authority in fiscal year 2012, USPS did not make required prefunding payments of $11.1 billion for fiscal year 2011 and 2012 retiree health benefits. At the end of fiscal year 2012, USPS had $48 billion in unfunded retiree health benefit liabilities. Looking forward, USPS has warned that it suffers from a severe lack of liquidity. As USPS has reported: "Even with some regulatory and legislative changes, our ability to generate sufficient cash flows from current and future management actions to increase efficiency, reduce costs, and generate revenue may not be sufficient to meet all of our financial obligations." For this reason, USPS has stated that it continues to lack the financial resources to make its annual retiree health benefit prefunding payment. USPS has also reported that in the short term, should circumstances leave it with insufficient liquidity, it may need to prioritize payments to its employees and suppliers ahead of those to the federal government. For example, near the end of fiscal year 2011, in order to maintain its liquidity, USPS temporarily halted its regular contributions to the Federal Employees Retirement System (FERS) that are supposed to cover the cost of benefits being earned by current employees. However, USPS has since made up those missed FERS payments. USPS's statements about its liquidity raise the issue of whether USPS will need additional financial help to remain solvent while it restructures and, more fundamentally, whether it can remain financially self-sustainable in the long term.

USPS has also raised the concern that its ability to negotiate labor contracts is essential to maintaining financial stability and that failure to do so could have significant adverse consequences on its ability to meet its financial obligations. Most USPS employees are covered by collective bargaining agreements with four major labor unions, which have established salary increases, cost-of-living adjustments, and the share of health insurance premiums paid by employees and USPS. When USPS and its unions are unable to agree, binding arbitration by a third-party panel is used to establish agreement. There is no statutory requirement for USPS's financial condition to be considered in arbitration. In 2010, we reported that the time has come to reexamine USPS's 40-year-old structure for collective bargaining, noting that wages and benefits comprise 80 percent of its costs at a time of escalating losses and a dramatically changed competitive environment. We suggested that Congress consider revising the statutory framework for collective bargaining to ensure that USPS's financial condition is considered in binding arbitration.

<2. USPS Initiatives to Reduce Costs and Increase Revenues>
USPS has several initiatives to reduce costs and increase its revenues to curtail future net losses. In February 2012, USPS announced a 5-year business plan with the goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016. This plan included savings from a change in the delivery schedule; however, USPS has now put all changes in delivery service on hold, which will reduce its ability to achieve the full 5-year business plan savings.
USPS has begun implementing other parts of the plan, which includes initiatives to save: $9 billion in mail processing, retail, and delivery operations, including consolidation of the mail processing network and restructuring of retail and delivery operations; $5 billion in compensation and benefits and non-personnel costs; and $8.5 billion through proposed legislative changes, such as eliminating the obligation to prefund USPS's retiree health benefits.
o $2.7 billion of this $8.5 billion was estimated savings from moving to a 5-day delivery schedule for all types of mail.
o USPS subsequently proposed a modified reduction in its delivery schedule, maintaining package delivery on Saturday, with estimated annual savings of $2 billion, but as noted, USPS has now put even this proposed change in service delivery on hold.

Simultaneously, USPS's 5-year plan would further reduce the overall size of the postal workforce by roughly 155,000 career employees, with many of those reductions expected to result from attrition. According to the plan, half of USPS's career employees are currently eligible for full or early retirement. Reducing its workforce is vital because, as noted, compensation and benefits costs continue to generate about 80 percent of USPS's expenses. Compensation alone (primarily wages) exceeded $36 billion in fiscal year 2012, or close to half of its costs. Compensation costs decreased by $542 million in fiscal year 2012 as USPS offered separation incentives to postmasters and mail handlers to encourage more attrition. This fiscal year, separation incentives were offered to employees represented by the American Postal Workers Union (e.g., mail processing and retail clerks) to encourage further attrition as processing and retail operations are redesigned and consolidated to more closely correspond with workload.

Another key area of potential savings included in the 5-year plan focused on reducing compensation and benefit costs. USPS's largest benefit payments in fiscal year 2012 included: $7.8 billion in current-year health insurance premiums for employees, retirees, and their survivors (USPS's health benefit payments would have been $13.4 billion if USPS had paid the required $5.6 billion retiree health prefunding payment); $3.0 billion in FERS pension funding contributions; $1.8 billion in social security contributions; $1.4 billion in workers' compensation payments; and $1.0 billion in Thrift Savings Plan contributions. USPS has proposed administering its own health care plan for its employees and retirees and withdrawing from the Federal Employees Health Benefits (FEHB) program so that it can better manage its costs and achieve significant savings, which USPS has estimated could be over $7 billion annually. About $5.5 billion of the estimated savings would come from eliminating the retiree health benefit prefunding payment, and another $1.5 billion would come from reducing health care costs. We are currently reviewing USPS's proposal, including its potential financial effects on participants and USPS.
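As a quick arithmetic check, using only the figures cited above (the $2.7 billion delivery-schedule estimate is a component of the $8.5 billion legislative category, not a separate addend), the plan's savings categories and the proposed health plan savings reconcile with the stated totals:

\[
\$9.0\ \text{billion} + \$5.0\ \text{billion} + \$8.5\ \text{billion} = \$22.5\ \text{billion (annual cost-savings goal)}
\]
\[
\$5.5\ \text{billion} + \$1.5\ \text{billion} \approx \$7\ \text{billion (estimated annual savings from the proposed health plan)}
\]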
To increase revenue, USPS is working to increase use of shipping and package services. With the continued increase in e-commerce, USPS projects that shipping and package volume will grow by 7 percent in fiscal year 2013, after increasing 7.5 percent in fiscal year 2012. Revenue from these two product categories represented about 18 percent of USPS's fiscal year 2012 operating revenue. However, USPS does not expect that continued growth in shipping and package services will fully offset the continued decline of revenue from First-Class Mail and other products. We recently reported that USPS is pursuing 55 initiatives to generate revenue. Forty-eight initiatives are extensions of existing lines of postal products and services, such as offering Post Office Box customers a suite of service enhancements (e.g., expanded lobby hours and earlier pickup times) at selected locations and increasing public awareness of the availability of postal services at retail stores. The other seven initiatives included four involving experimental postal products, such as prepaid postage on the sale of greeting cards, and three that were extensions of nonpostal services that are not directly related to mail delivery. USPS offers 12 nonpostal services, including Passport Photo Services, the sale of advertising to support change-of-address processing, and others, which together generated net income of $141 million in fiscal year 2011. Another area of potential revenue generation is USPS's increased use of negotiated service agreements that offer competitively priced contracts, as well as promotions with temporary rate reductions that are targeted to retain mail volume. We are currently reviewing USPS's use of negotiated service agreements.

As USPS attempts to reduce costs and increase revenue, its mission to provide universal service continues. USPS's network serves more than 152 million residential and business delivery points. In May 2011, we reported that many of USPS's delivery vehicles were reaching the end of their expected 24-year operational life and that USPS's financial challenges pose a significant barrier to replacing or refurbishing its fleet. As a result, USPS's approach has been to maintain the delivery fleet until USPS determines how to address longer term needs, but USPS has been increasingly incurring costs for unscheduled maintenance because of breakdowns. The eventual replacement of its vehicle delivery fleet represents yet another financial challenge facing USPS. We are currently reviewing USPS's investments in capital assets.

<3. Actions Needed to Improve USPS's Financial Situation>
We have issued a number of reports on strategies and options for USPS to improve its financial situation by optimizing its network and restructuring the funding of its pension and retiree health benefit liabilities.

<3.1. Network Optimization>
To assist Congress in addressing issues related to reducing USPS's expenses, we have issued several reports analyzing USPS's initiatives to optimize its mail processing, delivery, and retail networks. In April 2012, we issued a report related to USPS's excess capacity in its network of 461 mail processing facilities. We found that USPS's mail processing network exceeds what is needed for declining mail volume. USPS proposed consolidating its mail processing network, a plan based on proposed changes to overnight delivery service standards for First-Class Mail and Periodicals. Such a change would have enabled USPS to reduce an excess of 35,000 positions and 3,000 pieces of mail equipment, among other things. We found, however, that stakeholder issues and other challenges could prevent USPS from implementing its plan for consolidating its mail processing network.
Although some business mailers and Members of Congress expressed support for consolidating mail processing facilities, other mailers, Members of Congress, affected communities, and employee organizations raised concerns. Key issues raised by business mailers were that closing facilities could increase their transportation costs and decrease service. Employee associations were concerned that reducing service could result in a greater loss of mail volume and revenue that could worsen USPS's financial condition. We reported that if Congress prefers to retain the current delivery service standards and associated network, decisions will need to be made about how USPS's costs for providing these services will be paid.

Over the past several years, USPS has proposed transitioning to a new delivery schedule. Most recently, in February of this year, USPS proposed limiting its delivery of mail on Saturdays to packages (a growing area for USPS) and to Express Mail, Priority Mail, and mail addressed to Post Office Boxes. Preserving Saturday delivery for packages would address concerns previously raised by some stakeholders, such as delivery of needed medications. USPS estimated that this reduced Saturday delivery would produce $2 billion in annual savings after full implementation, which would take about two years to achieve, and result in a mail volume decline of less than one percent. Based on our 2011 work and recent information from USPS on its February 2013 estimate, we note that the previous and current estimates are primarily based on eliminating city and rural carrier work hours on Saturdays. In our prior work, stakeholders raised a variety of concerns about these estimates, several of which are still relevant. For example, USPS's estimate assumed that most of the Saturday workload transferred to weekdays would be absorbed through more efficient delivery. USPS estimated that its current excess capacity should allow it to absorb the Saturday workload on Monday. If that is not the case, some of the projected savings may not be realized. Another concern stakeholders raised was that USPS may have underestimated the size of the potential volume loss from eliminating Saturday delivery due to the methodology used to develop its estimates. Since mail volume has declined from the prior estimate, the accuracy of the estimated additional impact of eliminating Saturday delivery is unclear. The extent to which USPS would be able to achieve its most recent estimate of $2 billion in annual savings depends on how well and how quickly it can realign its workforce and delivery operations. Nevertheless, we agree that such a change in USPS's delivery schedule would likely result in substantial savings.

A change to 5-day service would be similar to changes USPS has made in the past. USPS is required by law to provide prompt, reliable, and efficient services, as nearly as practicable. The Postal Regulatory Commission (PRC) has reported that delivery frequency is a key element of universal postal service. The Postal Service's universal service obligation is broadly outlined in multiple statutes and encompasses multiple dimensions, including delivery frequency. Other key dimensions include geographic scope, range of products, access to services and facilities, affordable and uniform pricing, service quality, and security of the mail. The frequency of USPS mail delivery has evolved over time to account for changes in communication, technology, transportation, and postal finances.
A move to 5-day service would continue that evolution. Until 1950, residential deliveries were made twice a day in most cities. Currently, while most customers receive 6-day delivery, some customers receive 5-day or even 3-day-a-week delivery, including businesses that are not open 6 days a week; resort or seasonal areas not open year-round; and areas not easily accessible, some of which require the use of boats, airplanes, or trucks.

Following USPS's most recent proposed change in delivery in February 2013, we issued a legal opinion concerning the proposal in response to a congressional request. As requested, we addressed whether a requirement, contained in USPS's annual appropriations acts for the past three decades and in its fiscal year 2012 appropriations act, that it continue 6-day delivery of mail at not less than the 1983 level was still in effect under the partial-year Continuing Appropriations Resolution. We concluded that the Continuing Resolution carried forward this requirement, explaining that absent specific legislative language, a continuing resolution maintains the status quo regarding government funding and operations. Although the 6-day delivery proviso is an operational directive, not an appropriation, we saw no language in the Continuing Resolution to indicate that Congress did not expect it to continue to apply. The full-year 2013 Continuing Resolution, which Congress enacted on March 21 (shortly after we issued our opinion) and which provided funding through the end of fiscal year 2013, likewise has continued the effectiveness of the 6-day proviso. On April 10, 2013, the USPS Board of Governors announced that, based on the language of the March 21, 2013, Continuing Resolution, it would delay implementation of USPS's proposed delivery schedule until legislation is passed that provides it with the authority to implement a financially appropriate and responsible delivery schedule. By statute, the Board directs the exercise of the power of the Postal Service, directs and controls the Postal Service's expenditures, and reviews its policies and practices. Thus, the Board, which has the lead responsibility for taking actions within the scope of the Postal Service's existing statutory authority to maintain its financial solvency, has determined that full 6-day service will continue for the present time.

In April 2012, we reported that USPS has taken several actions to restructure its retail network, which included almost 32,000 postal-managed facilities in fiscal year 2012, by reducing its workforce and its footprint while expanding retail alternatives. We also reported on concerns customers and other stakeholders have expressed regarding the impact of post office closures on communities, the adequacy of retail alternatives, and access to postal services, among others. We discussed challenges USPS faces, such as legal restrictions and resistance from some Members of Congress and the public, that have limited USPS's ability to change its retail network by moving postal services to more nonpostal-operated locations (such as grocery stores), similar to what other nations have done. The report concluded that USPS cannot support its current level of services and operations from its current revenues. We noted that policy issues remain unresolved related to what level of retail services USPS should provide, how the cost of these services should be paid, and how USPS should optimize its retail network.
In November 2011, we reported that USPS had expanded access to its services through alternatives to post offices in support of its goals to improve service and financial performance. We recommended that USPS develop and implement a plan, with a timeline, to guide efforts to modernize USPS's retail network, addressing both traditional post offices and retail alternatives. We added that the plan should also include (1) criteria for ensuring that the retail network continues to provide adequate access for customers as it is restructured; (2) procedures for obtaining reliable retail revenue and cost data to measure progress and inform future decision making; and (3) a method to assess whether USPS's communications strategy is effectively reaching customers, particularly those customers in areas where post offices may close.

In November 2012, we reported that although contract postal units (CPUs), independent businesses compensated by USPS to sell most of the same products and services as post offices at the same price, have declined in number, they have supplemented post offices by providing additional locations and hours of service. More than 60 percent of CPUs are in urban areas, where they can provide customers nearby alternatives when they face long lines at post offices. In fiscal year 2011, after compensating CPUs, USPS retained 87 cents of every dollar of CPU revenue. We found that limited interest from potential partners, competing demands on USPS staff resources, and changes to USPS's retail network posed potential challenges to USPS's use of CPUs.

<3.2. Addressing USPS Benefit Liabilities>
To assist Congress in addressing issues related to funding USPS's liabilities, we have also issued several reports that address USPS's retiree health benefit, pension, and workers' compensation liabilities. In December 2012, we reported that USPS's deteriorating financial outlook will make it difficult to continue the current schedule for prefunding postal retiree health benefits in the short term, and possibly to fully fund the remaining $48 billion unfunded liability over the remaining decades of the statutorily required actuarial funding schedule. However, we also reported that deferring funding could increase costs for future ratepayers and increase the possibility that USPS may not be able to pay for some or all of its liability. We stated that failure to prefund these benefits is a potential concern. Making affordable prefunding payments would protect the viability of USPS by not saddling it with bills later on, when employees are already retired and no longer helping it generate revenue; it can also make the promised benefits more secure. Thus, as we have previously reported, we continue to believe that it is important for USPS to prefund these benefits to the maximum extent that its finances permit. We also recognize that without congressional or further USPS actions to align revenue and costs, USPS will not have the finances needed to make annual payments and reduce its long-term retiree health unfunded liability. No funding approach will be viable unless USPS can make the required payments.
We reported on options with regard to the FERS surplus, noting the degree of uncertainty inherent in this estimate and reporting on the implications of alternative approaches to accessing this surplus. The estimated FERS surplus decreased from 2011 to 2012; at the end of fiscal year 2012, USPS had an estimated FERS surplus of $3.0 billion and an estimated CSRS deficit of $18.7 billion. In 2012, we reported on workers' compensation benefits paid to both postal and nonpostal beneficiaries under the Federal Employees' Compensation Act (FECA). USPS has large FECA program costs: 43 percent of FECA beneficiaries in 2010 were employed by USPS at the time of their injury. FECA provides benefits to federal workers who sustained injuries or illnesses while performing federal duties; these benefits are not taxed or subject to age restrictions. Various proposals to modify FECA benefit levels have been advanced. At the request of Congress, we have provided information to assist it in making decisions about the FECA program.

<4. Concluding Observations>
In summary, to improve its financial situation, USPS needs to reduce its expenses to close the gap between revenue and expenses, repay its outstanding debt, continue funding its retirement obligations, and increase capital for investment, such as replacing its aging vehicle fleet. In addition, as noted in prior reports, congressional action is needed to (1) modify USPS's retiree health benefit payments in a fiscally responsible manner; (2) facilitate USPS's ability to align costs with revenues based on changing workload and mail use; and (3) require that any binding arbitration resulting from collective bargaining take USPS's financial condition into account. As we have continued to underscore, Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS's financial viability. In previous reports, we have provided strategies and options, to both reduce costs and enhance revenues, that Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS's ability to reduce costs and improve efficiency; we have also reported on implications for addressing USPS's benefit liabilities. If Congress does not act soon, USPS could be forced to take more drastic actions that could have disruptive, negative effects on its employees, customers, and the availability of reliable and affordable postal services.

Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.

<5. GAO Contact and Staff Acknowledgments>
For further information about this statement, please contact Lorelei St. James, Director, Physical Infrastructure, at (202) 512-2834 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contact named above, Frank Todisco, Chief Actuary; Samer Abbas, Teresa Anderson, Barbara Bovbjerg, Kyle Browning, Colin Fallon, Imoni Hampton, Kenneth John, Hannah Laufe, Kim McGatlin, Amelia Shachoy, Andrew Sherrill, and Crystal Wesco made important contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study USPS is in a serious financial crisis as its declining mail volume has not generated sufficient revenue to cover its expenses and financial obligations. First-Class Mail--which is highly profitable and generates the majority of the revenues used to cover overhead costs--declined 33 percent since it peaked in fiscal year 2001, and USPS projects a continued decline through fiscal year 2020. Mail volume decline is putting USPS's mission of providing prompt, reliable, and efficient universal services to the public at risk. This testimony discusses (1) USPS's financial condition, (2) initiatives to reduce costs and increase revenues, and (3) actions needed to improve USPS's financial situation. The testimony is based primarily on GAO's past and ongoing work, its analysis of USPS's recent financial results, and recent information on USPS's proposal for a change in delivery service. In previous reports, GAO has provided strategies and options that USPS and Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS's ability to reduce costs and improve efficiency. GAO has also stated that Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS's financial viability. What GAO Found The U.S. Postal Service (USPS) continues to incur unsustainable operating deficits, has not made required payments of $11.1 billion to prefund retiree health benefits, and has reached its $15 billion borrowing limit. Thus far, USPS has been able to operate within these constraints, but now faces a critical shortage of liquidity that threatens its financial solvency and ability to finance needed capital investment. USPS had an almost 25 percent decline in total mail volume and net losses totaling $40 billion since fiscal year 2006. While USPS achieved about $15 billion in savings and reduced its workforce by about 168,000 over this period, its debt and unfunded benefit liabilities grew to $96 billion by the end of fiscal year 2012. USPS expects mail volume and revenue to continue decreasing as online bill communication and e-commerce expand. USPS has several initiatives to reduce costs and increase its revenues. To reduce costs, USPS announced a 5-year business plan in February 2012 with the goal of achieving $22.5 billion in annual cost savings by the end of fiscal year 2016, which included a proposed change in the delivery schedule. USPS has now put all changes in delivery service on hold, which will reduce its ability to achieve the full 5-year business plan savings. USPS has begun implementing other parts of the plan, which includes needed changes to its network. To achieve greater savings, USPS's Board of Governors recently directed postal management to accelerate these efforts. To increase revenue, USPS is pursuing 55 initiatives. While USPS expects shipping and package services to continue to grow, such growth is not expected to fully offset declining mail volume. USPS needs to reduce its expenses to avoid even greater financial losses, repay its outstanding debt, continue funding its retirement obligations, and increase capital for investment, including replacing its aging vehicle fleet. 
Also, Congress needs to act to (1) modify USPS's retiree health benefit payments in a fiscally responsible manner; (2) facilitate USPS's ability to align costs with revenues based on changing workload and mail use; and (3) require that any binding arbitration resulting from collective bargaining take USPS's financial condition into account. No one action in itself will address USPS's financial condition; GAO has previously recommended a comprehensive package of actions. If Congress does not act soon, USPS could be forced to take more drastic actions that could have disruptive, negative effects on its employees, customers, and the availability of postal services. USPS also reported that it may need to prioritize payments to employees and suppliers ahead of those to the federal government.
<1. Background>
Virtually all the cocaine destined for the United States originates in the Andean countries of Colombia, Peru, and Bolivia and travels through the transit zone. The United States provides significant counternarcotics assistance toward reducing coca cultivation, disrupting cocaine production, and preventing cocaine from reaching the United States. Drug traffickers move cocaine and other drugs to the United States through two main vectors or corridors. In recent years, approximately 90 percent of cocaine moving toward the United States has gone through the Central American/Mexican corridor and then over the border to the United States. The remainder, roughly 10 percent, transits the Caribbean and enters the United States through Florida, Puerto Rico, and other eastern locations. (See fig. 1.) More than 25 countries lie within the transit zone. The President has designated eight of these as major drug transit countries based on the estimated volume of illicit drugs that pass through their territory each year. These countries are the Bahamas, the Dominican Republic, Ecuador, Guatemala, Haiti, Jamaica, Panama, and Venezuela.

<1.1. Transit Zone Drug Trafficking Tactics>
Drug trafficking organizations and associated criminal networks are extremely adaptive. They shift routes and operating methods quickly in response to pressure from law enforcement organizations or rival traffickers. They ship drugs through the transit zone primarily by sea, though their methods have become more evasive in recent years. They typically use go-fast boats and fishing vessels to smuggle cocaine from Colombia to Central America and Mexico en route to the United States. Go-fast boats are capable of traveling at speeds over 40 knots, are difficult to detect in open water, and are often used at night. When drug traffickers travel in daylight, they often use boats painted blue, or boats that can be quickly covered with a blue tarpaulin, thereby becoming virtually impossible to see. Even when detected, go-fast boats can often outrun conventional ships deployed in the transit zone. Traffickers also use mother ships in concert with fishing vessels to transport illicit drugs into open waters and then distribute the load among smaller boats at sea. In addition, traffickers use evasive maritime routes and change them frequently. Some boats travel as far southwest as the Galapagos Islands in the Pacific Ocean before heading north toward Mexico, while others travel through Central America's littoral waters, close to shore, where they can hide among legitimate maritime traffic. In addition, the Joint Interagency Task Force-South (JIATF-South), under Defense's U.S. Southern Command, has reported an increase in suspicious flights, particularly flights departing from Venezuela. Traffickers also fly loads of cocaine to remote, ungoverned spaces, such as northern Guatemala near the Mexican border, and abandon the planes. Planes, however, generally carry much smaller loads than most maritime vessels used for drug trafficking. Traffickers also are using increasingly sophisticated concealment methods. For example, they have built fiberglass semisubmersible craft that can avoid both visual and sonar detection, hidden cocaine within the hulls of boats, and transported liquefied cocaine in fuel tanks. According to Defense's Office of Counternarcotics, Counterproliferation, and Global Threats, these shifts in drug trafficking patterns and methods have likely taken place largely in response to U.S.
and international counternarcotics efforts in the Pacific Ocean and Caribbean, although measuring causes and effects is imprecise. In addition, according to Defense, drug trafficking organizations and associated criminal networks commonly enjoy greater financial and material resources (including weapons as well as communication, navigation, and other technologies) than do governments in the transit zone.

<1.2. U.S. Counternarcotics Strategy>
The U.S. National Drug Control Strategy's goal is to reduce illegal drug usage in the United States. One priority is to disrupt the illegal drug trade abroad, including in the transit zone, by attacking the power structures and finances of international criminal organizations. This involves seizing large quantities of cocaine from transporters, disrupting major drug trafficking organizations, arresting their leaders, and seizing their assets. The strategy also calls for the United States to support democratic institutions and the rule of law in allied nations, strengthen these nations' prosecutorial efforts, and prosecute foreign traffickers. According to State's International Narcotics Control Strategy Report, the goal of U.S. counternarcotics assistance to other countries is to help their governments become full and self-sustaining partners in the fight against drugs. ONDCP produces the National Drug Control Strategy; establishes policies, priorities, and objectives for the nation's drug control program; and evaluates, coordinates, and oversees the counternarcotics efforts of executive branch agencies, including assistance to countries in the transit zone. State/INL manages and funds law enforcement assistance, including programs implemented by a variety of other U.S. agencies, such as DHS's Coast Guard, U.S. Customs and Border Protection (CBP), and Immigration and Customs Enforcement. State also administers security assistance programs generally implemented by Defense, including Foreign Military Financing and International Military Education and Training programs, which are intended to strengthen the overall capacity of foreign forces to address security threats, including violence and instability associated with drug trafficking. Defense, primarily through its Office of Counternarcotics, Counterproliferation, and Global Threats, provides guidance and oversight, as well as funds, for counternarcotics and related security activities in the transit zone. DEA works to disrupt drug trafficking operations and dismantle criminal organizations, bringing leaders to prosecution either in the United States or in other countries; it maintains offices in countries throughout the transit zone. USAID also supports the U.S. counternarcotics effort indirectly through its rule of law and alternative development programs. Table 1 shows assistance provided by State, DEA, Defense, and USAID to support counternarcotics-related programs and activities in transit zone countries for fiscal years 2003 through 2007.

As part of the Merida Initiative, the President has asked the Congress to provide $1.1 billion in fiscal years 2008 and 2009 to train and equip Mexican and Central American security forces to combat criminal organizations. Of this amount, $950 million would be dedicated to Mexico, and $150 million would be dedicated to Central American countries. This proposal is under consideration by the Congress. The Administration's proposal is for all programs to be administered by the State Department, although other U.S. agencies may be involved in implementation.
Overall, U.S. Assistance Has Enhanced International Cooperation in Disrupting Illegal Drug Markets> Since 2003, through U.S.-supported international counternarcotics programs, the United States and the eight major drug transit countries we reviewed, except Venezuela, have enhanced their cooperation in combating drug trafficking, primarily through improvements in investigations and intelligence gathering, maritime and land-based operations, and prosecutions of drug traffickers. Measuring the results of a wide variety of assistance programs across many countries over time is difficult because U.S. agencies have compiled limited and inconsistent performance data. Nevertheless, the improvements attained through these programs have contributed to the U.S. strategy of disrupting the illicit drug market through drug seizures, arrests, prosecutions, and drug crop eradication, according to information provided by State and DEA. <2.1. Intelligence-Gathering and Investigations> Actionable intelligence is a critical component of interdiction, and the United States often requires access to raw information and sources from partner nations to develop this intelligence. State/INL, DEA, and Defense have helped all eight partner nations we reviewed develop organizations and methods for gathering, analyzing, and sharing intelligence and information that have led to arrests and seizures of drugs and assets. These efforts have included establishing vetted and specialized investigative units; strengthening the investigative authority of local law enforcement; and installing data networks within and among countries to compile, analyze, and share information. DEA and State/INL have established vetted investigative units, staffed by local law enforcement officers, in all of the major drug transit countries we reviewed. These units have worked closely with U.S. officials to develop successful investigations. The United States provides these units with operational support, from money to pay agents and confidential sources to vehicles and surveillance equipment. For example, in the Dominican Republic, a vetted unit within the counternarcotics police used a U.S.-supported wire intercept program to conduct more than 730 wiretap operations in fiscal year 2007. The program provided daily support to numerous major investigations in the United States and abroad, including the investigation of eight priority target organizations. In Ecuador, DEA estimates that one vetted investigative unit has been responsible for 70 percent of all drug seizures in that country. In Jamaica, according to State, intelligence-driven operations coordinated with DEA and the vetted unit targeted major drug traffickers, and collaboration between Jamaican and international law enforcement agencies has resulted in significant seizures of cocaine, arrests of midlevel and major traffickers (including kingpins), and the dismantling of their organizations in Jamaica, the United States, the Bahamas, and Colombia. Since late 2004, almost all significant bilateral investigations with Jamaica have included a wire intercept component using DEA-funded facilities. In the Bahamas, according to State, intelligence gathering and surveillance equipment provided by State/INL enabled local law enforcement to dismantle two Bahamas-based drug trafficking organizations in 2006. DEA has also helped governments draft legislation to broaden the scope of investigative tools available to law enforcement organizations. 
For example, Guatemala's Organized Crime Bill, put into effect in 2007, authorizes wire intercepts and undercover operations, and allows drugs to be delivered under controlled circumstances in order to identify the traffickers involved, a practice referred to as controlled delivery. DEA also encouraged legislation in Jamaica to authorize wire intercepts and fingerprinting of suspects. Similarly, legislation in Venezuela enhanced police investigative powers in 2005 by allowing controlled deliveries. In addition, with U.S. technical and financial assistance, several countries are operating information centers to collect, analyze, and disseminate statistical and case-related data to aid local and foreign law enforcement officials in criminal investigations. In the Dominican Republic, for example, the DEA-sponsored Caribbean Center for Drug Information serves as a clearinghouse for narcotics-related intelligence for countries throughout the Caribbean and Latin America. According to DEA, Caribbean countries are both frequent contributors to and beneficiaries of the center's intelligence analysis services. In addition, Defense funded the installation of a computer network, the Cooperating Nation Information Exchange System, in countries throughout the transit zone, including six of the eight countries we reviewed; through this network, participating countries share information in real time regarding aircraft and vessels suspected of transporting drugs. Defense also posts liaisons throughout the region who facilitate the exchange of actionable intelligence between the United States and host nation counterparts to assist in planning counternarcotics operations. <2.2. Maritime and Land-Based Operations> The United States generally cannot intercept shipments of drugs and their precursors and apprehend traffickers in the sovereign territory of another nation without the consent, and often the active participation, of that country's government. The United States has reached cooperative agreements with several partner nations that expand U.S. authority and ability to conduct interdiction operations in the transit zone. In addition, assistance provided by State/INL, Defense, and U.S. law enforcement agencies has enabled the countries we reviewed to undertake or participate in land-based police, military, and other counternarcotics operations involving seizures, arrests, and eradication. <2.2.1. Maritime Operations> State/INL, Defense, and the Coast Guard have provided partner countries with equipment, such as new and refurbished boats; infrastructure, such as docks and piers; and training for maritime, littoral, and riverine patrol and interdiction operations. With this support, several countries have participated in short- and long-term maritime interdiction operations with the United States and other countries since 2003. For example, in the Bahamas, State/INL donated several fast-response boats, which the Bahamian police force has deployed throughout the country for use with U.S. helicopters and personnel under Operation Bahamas, Turks and Caicos. According to State, these boats have been used in interdiction missions, participating in the seizure of go-fast drug smuggling boats. Since 2003, the United States has also entered into maritime law enforcement cooperation agreements or procedures with four of the eight major drug transit countries we reviewed, affording U.S. forces improved access to suspect vessels in international and territorial waters. (See app. 
II for a listing of maritime law enforcement agreements with transit zone countries.) For example, in 2003, the Dominican Republic entered into a bilateral agreement granting the United States permanent over-flight rights for counternarcotics operations. In 2006, the United States and Ecuador negotiated operating procedures to facilitate interdiction of suspect Ecuadorian-flagged vessels. According to State, in fiscal year 2007, these new procedures enabled the United States to board seven Ecuadorian-flagged vessels and remove about 26 metric tons of cocaine. In addition, according to State, under the terms of maritime agreements, Guatemala and Panama have provided valuable support for international interdictions by permitting the Coast Guard to fly suspected drug traffickers to the United States. This has allowed U.S. assets to remain on station and continue pursuing drug interdiction and homeland security missions. In Panama, according to State, the Coast Guard's 2007 seizure of over 32 metric tons of cocaine, including the single largest maritime drug seizure in U.S. history, was directly related to cooperative efforts executed under provisions of the bilateral agreement between Panama and the United States. Bilateral maritime agreements have proven valuable in the other major drug transit countries as well. Even Venezuela, which has ceased to cooperate with the United States on many counternarcotics initiatives, continues to honor the provisions of its ship-boarding agreement, authorizing the United States to board Venezuelan-flagged vessels on the high seas that are suspected of engaging in narcotics trafficking. In addition, in Ecuador, the United States operates a counternarcotics forward operating location to support host nation and interagency drug detection and monitoring efforts in the transit zone. Facilities such as this permit the United States and allied nations to deploy interdiction assets closer to cocaine departure points in the source zone. According to U.S. officials at the forward operating location in Manta, Ecuador, this facility supported over 1,150 counternarcotics missions in 2007 by providing logistical support for U.S. aircraft that detect and monitor narcotics trafficking. <2.3. Land-Based Operations> In several countries, State/INL, in collaboration with DEA and DHS agencies, has helped finance the operations of special law enforcement units to target drug traffickers at airports, seaports, and other transit checkpoints using X-ray equipment, canines, and other methods and technologies. For example, since 2003, Jamaican authorities have arrested thousands of departing passengers at the country's two international airports on drug charges, aided by the use of drug detection equipment provided by the United States and Great Britain. With funding from State and USAID, DHS has deployed advisors and specialized teams for both short- and long-term details to provide training and technical assistance in such areas as customs documentation, airport/border/seaport interdiction operations, mail processing, container examination, security, firearms, and officer safety. In addition, due to Ecuador's proximity to the drug-producing regions of Colombia, Defense, State/INL, and DEA have provided extensive support for police and military counternarcotics operations there. 
According to State/INL, it and DEA have provided nearly all the logistical support for Ecuador's counternarcotics police, including construction or refurbishment of facilities and the provision of vehicles and equipment. State reported in 2007 that U.S.-supported canine units, which were deployed at airports and checkpoints, were involved in nearly all of Ecuador's drug interdictions. During 2007, the counternarcotics police conducted a series of interdiction operations throughout the country, which resulted in the largest land-based seizures in the country's history. With logistical support from Defense and State/INL, the Ecuadorian armed forces conducted nine operations in 2006 and 17 in 2007, which led to the discovery and destruction of 47 camps used by the Revolutionary Armed Forces of Colombia, 36 hectares of coca plants, and a number of cocaine-producing laboratories. In Guatemala, which has recently experienced a growth in illicit opium poppy cultivation, State/INL and Defense have provided aerial reconnaissance, transportation, and other logistical support for several large-scale, manual eradication missions. In 2007, Guatemalan authorities destroyed nearly 450 hectares of poppy, over half of the estimated area of cultivation. <2.4. Prosecution> Dismantling drug trafficking organizations requires the criminal prosecution of key traffickers. State/INL and USAID have supported judicial reforms within some partner nations intended to make judicial systems fairer, more impartial, and more efficient, and have strengthened the capacity of prosecutors to work effectively within those systems on drug-related cases. For example, in Ecuador and the Dominican Republic, State/INL and USAID sponsored training of police, prosecutors, and judges on the application of new criminal procedure codes. State/INL has also supported national task forces in several countries to prosecute drug-related crimes. In Guatemala, State/INL has worked with the country's Attorney General to support three task forces dealing with narcotics, corruption, and money-laundering cases. In 2004, the anticorruption prosecutor in Guatemala brought cases against over 380 individuals, including many high-ranking former public officials, army officers, and police. In Venezuela, until 2005, State/INL provided extensive logistical support, and DEA provided advice and supervision, to help develop the professional investigative and operational capability of the Prosecutors' Drug Task Force, which was composed of three dozen vetted prosecutors and investigators from three agencies. According to State and DEA, the work of this task force resulted in multiton seizures of drugs, the arrest of numerous traffickers (including at least one kingpin), and asset seizures. In many cases where prosecution in the United States is warranted and legal, partner nations have also transferred or extradited drug-related defendants. For example, in 2007, Haiti's President authorized the narcotics police to cooperate with DEA and CBP personnel in locating, arresting, and removing nine high-level drug trafficking defendants. Also, according to State, the Dominican Republic and Jamaica have been particularly cooperative with the U.S. Marshals Service in locating, extraditing, and deporting defendants. <2.5. 
Measuring Results Is Difficult, but Cocaine Interdiction in the Transit Zone Reflects International Cooperation> While State/INL, DEA, and others have reported the results of their assistance programs, they have not done so in a comprehensive and consistent manner among partner nations and over time. Reports we reviewed showed that some programs have helped disrupt drug markets through seizures and arrests. Other programs, such as alternative development, justice reform, and security service capacity building, are less directly related to drug interdiction operations but are designed to have longer-term and more systemic results, which are more difficult to measure. <2.5.1. Assistance Programs Are Diffuse, and Results Are Not Reported Comprehensively and Consistently> The Government Performance and Results Act of 1993 requires federal agencies to develop performance measures to assess progress in achieving their goals and to communicate their results to the Congress. The act requires agencies to set multiyear strategic goals in their strategic plans and corresponding annual goals in their performance plans, measure performance toward the achievement of those goals, and report on their progress in their annual performance reports. These reports are intended to provide important information to agency managers, policymakers, and the public on what each agency accomplished with the resources it was given. Moreover, the act calls for agencies to develop performance goals that are objective, quantifiable, and measurable, and to establish performance measures that adequately indicate progress toward achieving those goals. Our previous work has noted that the lack of clear, measurable goals makes it difficult for program managers and staff to link their day-to-day efforts to achieving the agency s intended mission. U.S.-funded transit zone counternarcotics assistance encompasses a wide variety of initiatives across many countries, but State/INL and other agencies have collected limited information on results. Records we obtained from State/INL and DEA, including State s annual International Narcotics Control Strategy Reports and End Use Monitoring Reports, provide information on the outcomes of these initiatives but do not do so comprehensively. For example, in our review of State s International Narcotics Control Strategy Reports for 2003 to 2007, we identified over 120 counternarcotics initiatives in the countries we reviewed, but for over half of these initiatives, the outcomes were unclear or not addressed at all in the reports. Table 2 depicts the range of U.S.-supported counternarcotics efforts in the countries we reviewed, including those described in State or DEA records as having negligible or unsatisfactory outcomes. State has attempted to measure the outcomes of counternarcotics programs in its annual mission performance reports, which report on a set of performance indicators for each country. However, these indicators have not been consistent over time or among countries. In our review of mission performance reports for four major drug transit countries covering fiscal years 2002 through 2006, we identified 86 performance indicators directly and indirectly related to counternarcotics efforts; however, over 60 percent of these indicators were used in only one or two annual reporting cycles, making it difficult to discern performance trends over time. 
Moreover, nearly 80 percent of these performance indicators were used for only one country, making it difficult to compare program results among countries. <2.5.2. Cocaine Seizures and Disruptions Reflect Cooperation with Partner Nations> Program specific information we reviewed indicates that these U.S. counternarcotics assistance programs, along with other efforts, have contributed to an active international interdiction effort in the transit zone. Data reported by the U.S. interagency counternarcotics community indicate that, since 2002, the United States and its partner nations have removed, through seizures and disruptions, between 22 and 38 percent of the estimated amount of cocaine flowing through the transit zone, excluding Mexico (see table 3). U.S. agencies have supported a wide variety of programs that relate to the counternarcotics effort indirectly, and results are therefore difficult to assess. These programs generally focus on root causes of drug-related crime, as well as strengthening the overall rule of law and security of partner nations. Since these programs are not directly associated with interdiction efforts and outcomes, and some are long-term efforts, their results and effect on the overall success of counternarcotics efforts are even more difficult to assess. State/INL has funded programs, including training and public awareness campaigns, which address some of the underlying causes of drug trafficking, such as local drug consumption and corruption. Some programs are also aimed at strengthening institutions, such as public health, educational, and financial accountability organizations, which can help prevent drug-related crime by fostering a culture that does not tolerate drug consumption and corruption. Very limited data were available in State reports to discern trends in either corruption or drug consumption that could be attributable to these programs. However, in several countries, State/INL has reported an increased willingness among local law enforcement entities to prosecute public officials. Several USAID programs combat narcotics trafficking indirectly in vulnerable populations by addressing underlying social problems, such as crime, inadequate public services, and lack of economic opportunities. In Jamaica, according to State/INL, anticrime and community policing programs contributed to a 16 percent reduction in crime in 2006, but the crime rate rose again in 2007. Development programs in Ecuador have helped stabilize communities along the border with Colombia most likely to become involved in drug trafficking by providing social services and productive infrastructure, including water and sanitation systems, bridges, roads, and irrigation canals. These programs have also helped strengthen local governments and promote citizen participation in a number of municipalities and parishes. While these programs have focused national and international development assistance on these vulnerable communities, their effect on the drug trade in the Colombian border region of Ecuador has not been evaluated. Defense and DHS have implemented many programs aimed at building the overall capacity and professionalism of military and security organizations through international cooperation. Defense officials in both Ecuador and Panama told us they considered all U.S. 
cooperative programs with the security forces of those countries to be counternarcotics-related because they help counter the threats posed by trafficking organizations, including incursions by the irregular armed forces of neighboring Colombia. However, because many defense assistance programs in partner nations do not have specific goals related to interdiction, it is difficult to assess the effectiveness of the programs for counternarcotics. Similarly, State/INL has funded training programs through DHS agencies to help improve overall immigration, customs, and coast guard operations. <3. Several Factors Impede the Effectiveness of Counternarcotics Efforts> Several factors relating to U.S. assistance programs have impeded international counternarcotics efforts in the transit zone. Partner nations have limited resources to devote to counternarcotics efforts, and many U.S.-supported counternarcotics initiatives are not self-sustaining but, rather, are dependent on continued U.S. funding. Limited political support for U.S.-funded initiatives, as well as corruption, has also kept these nations from becoming full partners in the international counternarcotics effort, a goal of U.S. assistance, according to State. In addition, the effect of U.S. cargo container security assistance on the counternarcotics effort has been limited. <3.1. Many Partner Nations Cannot Sustain U.S.-Supported Initiatives> The inability of transit zone countries to patrol their shores effectively and conduct other maritime operations presents a major gap in drug interdiction. In many of the countries we reviewed, State has reported that partner nations cannot operate U.S.-provided maritime assets for counternarcotics missions due to a lack of operations and maintenance resources. Some examples are as follows: In the Dominican Republic, the United States has provided a wide range of new and refurbished boats, including interceptor and patrol craft, that the Dominican Navy has been unable to employ due to a lack of fuel, fuel filters, and other routine maintenance supplies. Also, several U.S. vessels that were transferred to the Dominican Navy as excess defense articles are in poor condition due to a lack of preventive maintenance and funds for repairs. State reported in 2006 that the Navy's maintenance command lacked necessary equipment, parts, and training. In Haiti, State/INL and the U.S. Coast Guard provided substantial support to the Haitian coast guard, including interceptor boats, vessel overhauls and retrofitting, infrastructure improvements, and training and equipment. However, according to State, a lack of necessary equipment, maintenance, fuel, and logistical support has continued to impair the Haitian coast guard's ability to conduct maritime operations and combat drug trafficking effectively. In Guatemala, State/INL provided the counternarcotics police force with two fiberglass boats, one located on the Caribbean Coast and the other on the Pacific Coast, for limited counternarcotics operations. In 2007, State reported that both boats were inoperable because the police had not maintained the engines. In Jamaica, the United States donated several patrol vessels capable of intercepting go-fast boats. State reported in 2006 that the vessels had only limited operational capability because they were not in good working order. In Panama, the U.S. Coast Guard donated a 180-foot cutter. However, Panama's National Maritime Service, which is highly dependent on U.S. 
support for operations and maintenance resources, has been unable to keep the vessel seaworthy. U.S. agencies have not always planned for the sustainability of the counternarcotics-related assets they provided to partner nations. According to State officials we spoke to, when receiving these assets, country officials have typically signed agreements accepting the long-term responsibility of operating and maintaining them, including providing the necessary staff, as well as fuel, parts, and other maintenance resources, unless these are provided for by the United States. However, the long-term cost of operating and maintaining the assets and the source of funding are not typically included in such agreements, according to these officials. In 2007, Defense began providing additional boats to partner nations, including Panama, the Dominican Republic, Jamaica, and the Bahamas, under its Enduring Friendship program, for use in maritime security operations. However, Defense has not developed plans to address the long-term sustainability of these assets over their expected 10-year operating life. These interceptor boats were accompanied by support equipment, such as trucks and trailers for on-land mobility, radios, and infrared cameras, as well as training and a limited maintenance program, at a cost of between $6 million and $11 million for each country. However, Defense did not make provisions to ensure that the partner countries can fuel the donated boats and maintain them beyond an initial short-term maintenance contract period. For the boats provided to the Dominican Republic, for example, the agreement between Defense and the Dominican Republic did not specify the estimated costs or funding source of operating the boats and related equipment. The agreement indicated that the United States may provide some additional support for repair parts, contingent upon the availability of funds. The ability to provide the necessary resources to operate these assets over the long term is a concern, according to the U.S. and partner nation officials we spoke to in the Dominican Republic and Panama. According to the Coast Guard attaché in the Dominican Republic, although the Dominican Navy has added these and other boats to its fleet in recent years, it has not increased its budget for fuel since 2002, and the cost of fuel has since doubled. Similarly, according to a senior official of the Panamanian National Maritime Service, it has been operating under a static budget and fuel allotment, even as the number of assets and staff under its control has increased. State/INL officials in both Panama and the Dominican Republic told us that these countries have not effectively used the interceptor boats for counternarcotics purposes. In addition, the personnel operating the interceptor boats have limited maritime interdiction skills. The training included in the assistance package focused on the operation and maintenance of the boats and included some instruction in first aid, navigation, and communications, but only limited training on interdiction tactics. The memorandum of understanding between Defense and the Dominican Republic for this program indicated that Defense may provide some additional training, contingent upon the availability of funds, but did not specify the training likely to be needed for conducting counternarcotics operations or its cost. 
In some cases, however, State/INL has recognized the long-term sustainability challenges associated with providing capital assets and has planned and budgeted for operations and maintenance costs, thus helping to ensure the assets will be used effectively for counternarcotics operations. For example, State reported that it has funded maintenance contracts in the Bahamas that provide a means for keeping U.S.-provided boats operational for drug interdiction missions. Also, State/INL helps sustain U.S.-funded initiatives along the Ecuador-Colombia border by funding a large spare parts program, as well as maintenance training for the heavy trucks and other vehicles it has provided, and by budgeting for gas and maintenance costs over the long term. State/INL has also funded contractors to maintain the electrical and plumbing systems in the buildings constructed for the police in Ecuador. According to State, the counternarcotics police of several partner nations are dependent on the logistical and operational support of State/INL and DEA, as the following examples show: In the Dominican Republic, the police force's effectiveness in counternarcotics affairs is almost completely attributable to equipment, training, and close support provided by DEA and State/INL over several years. State also reported that the financial intelligence unit, begun with U.S. support in 2003, lacks the resources and institutional support to perform effectively and has reported no real successes in implementing its money-laundering legislation since the unit was established. In Ecuador, State/INL and DEA provide almost all logistical and operational support to the Ecuadorian National Police Anti-Drug Division. In Haiti, the lack of government resources makes the national police largely dependent upon DEA and State/INL for logistical and advisory support. <3.2. Some Countries' Limited Political Support Impairs U.S. Counternarcotics Initiatives> According to State, a few governments in the region have demonstrated limited political support for U.S. counternarcotics efforts. In particular, Venezuelan cooperation has declined dramatically in recent years, and in 2006 and 2007 State reported that Venezuela had failed demonstrably to make substantial efforts in the war on drugs. In 2005, the Venezuelan President accused DEA of espionage and planning a coup, and the government eventually withdrew from both U.S.-supported vetted units and has refused counternarcotics cooperation with the United States. State has also reported that Jamaica has shown limited political support for some U.S. counternarcotics-related initiatives. For example, the United States helped develop a corporate reform strategy for the Jamaican constabulary forces, but it was never implemented due to a combination of internal resistance to change and a lack of power to ensure implementation of the strategy's recommendations. In addition, the government of Jamaica has not enacted an initiative to permit extended data-sharing between U.S. and Jamaican law enforcement agencies concerning money-laundering cases. In Ecuador, even though the government has generally supported U.S.-funded counternarcotics initiatives, political developments may threaten future international cooperation in maritime operations. Ecuador's President has stated that he no longer supports a U.S. 
military presence in Ecuador and that his government will not renew the agreement allowing the United States to operate its forward operating location there when the agreement expires in November 2009. <3.3. Corruption Undermines Interdiction Efforts> The United States relies on the cooperation of partner nations law enforcement and security agencies in the transit zone to conduct successful counternarcotics operations. But, corruption in these agencies limits the extent to which U.S. law enforcement agencies can involve their counterparts in investigations. According to data compiled by Transparency International, a civil society organization that monitors corruption issues worldwide, corruption is a major problem in transit zone countries. Its Corruption Perception Index (CPI) ranks countries from 0 (highly corrupt) to 10 (highly clean) based on a series of indicators. Seven of the eight countries in our review received a score of 3.3 or lower (compared with a score of 7.2 for the United States and 8.7 for Canada). The eighth country, the Bahamas, was not reviewed. (See table 4.) In addition, U.S. officials have noted instances of official corruption particularly among military and police units that have limited the opportunities for and scope of cooperation with the United States and, in some cases, undermined specific interdiction operations. Some examples of this official corruption are as follows: Guatemala disbanded its antinarcotics police unit in 2002 in response to reports of widespread corruption within the agency and its general lack of effectiveness in combating the country s drug problem. The government reassigned most of the unit s law enforcement agents to the national civilian police, and the U.S. government suspended major joint operations in light of these circumstances. With U.S. assistance, Guatemala established a successor antinarcotics police force but, in 2005, in a joint operation with DEA, the Chief and Deputy of this agency were arrested in the United States for corruption. Later that year, DEA suspected that members of the antinarcotics force stole 475 kilograms of cocaine from an evidence storage facility. Further, according to DEA, Guatemalan antinarcotics agents misused intelligence leads provided by DEA to extort investigative targets. In Haiti, in 2003, State reported strong evidence of Haitian law enforcement officials leaking information on planned operations and trafficking drugs. DEA reported that planes under surveillance for drug shipments were met and off-loaded by heavily armed uniformed police officials with vehicles that transported the drugs. In 2007, the Haitian government removed both the National Police Director of Administration and Director of Logistics for suspected corruption. In the Dominican Republic, the government removed 24 judges from office for improperly handing out favorable sentences to known traffickers in 2006. Corruption has also hampered Dominican-based, money-laundering investigations, according to DEA. In Panama, in 2005, the head of a police counternarcotics unit was arrested and charged with corruption. In 2007, after years of lackluster counternarcotics cooperation from the National Maritime Service, the former head of this organization was also arrested on corruption charges. In the Bahamas, State reported in 2003 that it was reluctant to include Bahamian defense personnel in Operation Bahamas, Turks and Caicos and to share sensitive law enforcement information with them due to corruption concerns. 
In Ecuador, in 2002, the Deputy Chief of Operations of the Ecuadorian Army was arrested for facilitating the transshipment of drugs through cargo containers by providing trafficking organizations with false security seals. <3.4. Cargo Container Security Programs Have Had Limited Effect on the Counternarcotics Effort> According to DEA, drug smuggling on containerized cargo ships poses a significant threat to U.S. counternarcotics efforts. Both State/INL and DHS have provided cargo security assistance to countries in the transit zone. However, most of State/INL's initiatives have not been effective, and DHS has not routinely used its program of targeting and scanning cargo containers overseas to detect illicit drugs. State/INL has supported counternarcotics initiatives at cargo container ports in four of the eight countries we reviewed, and in three of those countries State's International Narcotics Control Strategy Reports indicate that these initiatives were largely ineffective. For example, in Guatemala, although State/INL has provided technical assistance, logistical support, and training for the country's port security program for several years, State reported in 2006 that the program had little interdiction success, and seizures were very low. In the Bahamas, State/INL supported a canine unit of the Bahamian Customs Department to help detect drug shipments at the Freeport container port but discontinued the assistance in 2004 due to high maintenance costs and its failure to produce expected results. In Venezuela, the United States funded a sophisticated container inspection facility at a large port known to be an embarkation point for multiton shipments of cocaine, but the Venezuelan government has not put it into operation. DHS, through CBP, has implemented the Container Security Initiative (CSI) overseas, which may have potential for greater use in counternarcotics operations. CBP initiated CSI under its existing authority in January 2002 to assist selected overseas ports in targeting suspicious containers and scanning their contents. The program subsequently received specific congressional authorization in 2006. By 2007, CSI operated in 58 ports, including seven in the transit zone. CBP officers stationed at foreign ports, collaborating with host-country partners, use intelligence and automated risk assessment information to target shipments and identify those that may contain weapons of mass destruction or other terrorist contraband. DHS has generally not used the technology to detect and interdict illicit drug shipments, though CSI technology can help detect illicit drugs. In fact, the automated targeting system that CSI uses to help target containers for inspection was originally designed for this purpose. The first time the CSI scanning equipment was used in the port of Caucedo in the Dominican Republic, operators detected a shipment of cocaine that was packed amid canned fruit. Ultimately, several metric tons of cocaine were seized, and suspected traffickers were arrested. CBP officials we spoke with noted that including routine container screening for drugs in CSI's scope of operations would be difficult and could conflict with the achievement of CBP's counterterrorism objectives. They indicated that CSI's budget and staffing are based on its counterterrorism-related workload, and that targeting drugs would require additional research and analysis resources. They said that, without more funding, DHS would have to shift its priorities away from counterterrorism activities. 
Furthermore, they said that expanding existing agreements with CSI participating countries would entail difficult and sensitive negotiations. According to these officials, proposing that CSI should search for illicit drugs could undermine the international political support for CSI and other CBP counterterrorism initiatives because additional container screening could cause transshipment delays and related economic costs and involve other concessions that participating countries may not be willing to accept. However, CBP officials acknowledged that they have not formally assessed the feasibility of conducting additional container targeting and inspection in selected major drug transit countries. In particular, CBP has neither calculated the related costs, including human resource requirements, nor has it consulted with State and Justice on the related diplomatic and security risks or the potential law enforcement benefits. Senior officials we spoke to at State/INL, DEA, and ONDCP indicated that it would be useful to examine the feasibility of a limited expansion of CSI on an interagency basis. <4. Conclusions> U.S. agencies, primarily State, DEA, Defense, DHS, and USAID, have supported initiatives that have fostered cooperation with partner nations in the transit zone, enabling these nations to engage in the counternarcotics effort in ways that the United States alone could not. Through these initiatives, U.S. law enforcement has been able to extend the scope and effectiveness of its drug interdiction activities by (1) gaining access to critical information and intelligence, (2) participating in seizure and eradication operations within the territory or jurisdiction of partner nations, and (3) bringing suspected drug traffickers to justice. The absence of comprehensive and consistent reporting on the results of these initiatives makes it difficult to monitor their outcomes over time, assess their relative effectiveness, and make resource allocation decisions based on results. However, available information concerning results indicates that the United States has not made significant progress toward its goal of assisting governments to become full and self-sustaining partners in the international counternarcotics effort. Partner nations are dependent on U.S. logistical, financial, and advisory support, and without this support many U.S. initiatives are not sustainable. U.S. agencies have funded initiatives and invested in assets, particularly for maritime operations, without planning for the long-term operations and maintenance of these assets, and partner nations have not utilized many of them to their maximum capacity. U.S. assistance in cargo container security has been largely ineffective for the international counternarcotics effort. However, DHS has invested in cargo container security programs overseas that in at least one instance helped detect illicit drugs being shipped in the transit zone. DHS has raised a number of concerns about using CSI routinely for this purpose, but has not assessed the feasibility of this program as another tool in the U.S. counternarcotics strategy. <5. 
Recommendations for Executive Action> To link U.S.-funded initiatives in transit zone countries to the priority of disrupting illicit drug markets and the goal of assisting nations to become full and self-sustaining partners in the international counternarcotics effort, we recommend that the Secretary of State, in consultation with the Director of ONDCP, the Secretaries of Defense and Homeland Security, the Attorney General, and the Administrator of USAID, report the results of U.S.-funded counternarcotics initiatives more comprehensively and consistently for each country in the annual International Narcotics Control Strategy Report. We recommend that the Secretary of State, in consultation with the Director of ONDCP, the Secretaries of Defense and Homeland Security, and the Attorney General, (1) develop a plan to ensure that partner nations in the transit zone can effectively operate and maintain all counternarcotics assets that the United States has provided, including boats and other vehicles and equipment, for their remaining useful life and report this plan to the Congress for the fiscal year 2010 appropriations cycle and (2) ensure that, before providing a counternarcotics asset to a partner nation, agencies determine the total operations and maintenance cost over its useful life and, with the recipient nation, develop a plan for funding this cost. To help maximize cargo container security assistance, we recommend that the Secretary of Homeland Security, in consultation with the Secretary of State and the Attorney General, determine the feasibility of expanding the Container Security Initiative to include routine targeting and scanning of containers for illicit drugs in major drug transit countries in the transit zone, and report the results to the Congress. Factors to be assessed should include the cost, workload and staffing ramifications, the potential benefits to international counternarcotics law enforcement efforts, the political support of CSI participating countries, statutory authority, and any risks associated with such an expansion. <6. Agency Comments and Our Evaluation> We provided a draft of this report to Defense, DHS, Justice/DEA, ONDCP, State, and USAID for their comment. DHS, Justice/DEA, ONDCP, and State provided written comments, which are reproduced in appendixes III through VI. All agencies provided technical corrections, which we incorporated into the report, as appropriate. State generally agreed with the report s conclusions, but disagreed with our recommendation on results reporting. State noted that it has already developed performance measures to reflect progress in achieving broad counternarcotics goals and development goals in general, though they do not necessarily capture program-specific results. We modified our recommendation to emphasize the need for more systematic reporting of program-specific results that would link U.S. counternarcotics efforts to State s broader performance goals and measures. State also noted that there is substantial variation in assistance programs in particular countries in terms of program types and funding levels. However, we observed that many programs in several countries are similar in nature and lend themselves to the comparison of results among countries. Developing a method of reporting these results more comprehensively and consistently across years and among country programs in the International Narcotics Control Strategy Report would address our concern. 
State partially agreed with our recommendation on sustainability planning and indicated that its project designs, agreements with recipient countries, and monitoring mechanisms are appropriate for addressing sustainability concerns, given the unpredictability of recipient countries long-term priorities and budgetary resources. Furthermore, State commented that it has limited ability to influence the coordinated sustainability planning of other agencies and has no influence over Defense s Enduring Friendship program. Given past experience, we question whether providing assets is justifiable without more specific and detailed plans that give better assurances that the recipient country and participating agencies are committed to funding specified operations and maintenance costs. State is in a unique position as the lead foreign affairs agency to ensure that all participating U.S. agencies involved in providing counternarcotics assets agree on a discrete sustainability plan. State, in particular, can influence Defense s sustainability planning when it approves security assistance programs, such as Enduring Friendship. Section 1206 of the National Defense Authorization Act of 2006, under which Enduring Friendship was authorized, requires State and Defense to jointly approve all projects and coordinate their implementation. State fully supports any consultation needed to determine the feasibility of expanding DHS s container security assistance program. DHS did not concur with our recommendation to study the feasibility of expanding the CSI program. According to DHS, expanding CSI to include narcotics interdiction would unnecessarily broaden the program s strategic goals and is inconsistent with its mandate to secure the international supply chain from high-risk shipments with a potential risk of terrorism and acts of terrorism. CSI s mandate does not prohibit leveraging the program s resources for other agency missions. In addition, the CSI Strategic Plan for 2006-2011 states that, at some point in the future, consideration should be given to potential expansion of the program from focusing on terrorism alone to encompassing other activities known to support terrorism, such as smuggling narcotics, violations of intellectual property rights and currency violations. A logical first step would be for relevant stakeholders to study the feasibility of enlisting CSI as a counternarcotics tool, formally assessing the program s statutory authority, among other factors. DHS also noted that 90 percent of cocaine moves through Mexico, but that no CSI ports are located in Mexico. However, as we reported, approximately 90 percent of the cocaine flowing toward the United States has gone through the Central American/Mexican corridor, in which four CSI ports are located. ONDCP accepted our recommendation that it assist other agencies in developing performance measures and sustainability plans for U.S.- provided counternarcotics assets. In addition, ONDCP strongly concurred with the recommendation to determine the feasibility of expanding CSI. DEA said that, while it is difficult to measure the outcome of all U.S. counternarcotics efforts, it has tracked statistical data to ensure that it is achieving its strategic goals and assists State and ONDCP in developing overall performance measures for U.S. counternarcotics programs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretaries of Defense, Homeland Security, and State; the Attorney General; the Administrator of USAID; the Director of ONDCP; and interested congressional committees. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix VII. Appendix I: Scope and Methodology Our review encompassed U.S.-funded counternarcotics initiatives during 2003 through 2007 in countries in the Caribbean Sea and Central America, as well as Ecuador and Venezuela. We specifically focused our efforts on eight countries designated by the President as major drug transit countries. Those countries are the Bahamas, the Dominican Republic, Ecuador, Guatemala, Haiti, Jamaica, Panama, and Venezuela. Although Mexico is also a major transit country, we excluded it from our review because we reviewed U.S. counternarcotics assistance to that country in 2007. To identify U.S.-funded assistance programs and initiatives, we reviewed the Department of State's (State) International Narcotics Control Strategy Reports (INCSR), Mission Performance Plans for fiscal years 2003 through 2007, and end use monitoring reports, as well as work plans, activity reports, and country summaries provided by the Drug Enforcement Administration (DEA). We also met with Washington, D.C.-based representatives of the White House's Office of National Drug Control Policy (ONDCP); State's Bureau of International Narcotics and Law Enforcement (State/INL); the Department of Justice's DEA and Criminal Division; the Department of Defense's (Defense) Office of the Deputy Assistant Secretary of Defense for Counternarcotics, Counterproliferation, and Global Threats; the Department of Homeland Security's (DHS) Office of Counternarcotics Enforcement, Immigration and Customs Enforcement, U.S. Coast Guard, and U.S. Customs and Border Protection (CBP); and the U.S. Agency for International Development (USAID). We also met with cognizant officials at the U.S. Southern Command in Miami, Florida, and the Joint Interagency Task Force (JIATF)-South in Key West, Florida. We included among the initiatives we reviewed those cooperative endeavors that may not have had any identifiable costs associated with them, including support for legislative reform in partner nations and efforts to reach agreements that enhance international cooperation in interdiction, including bilateral maritime law enforcement agreements and agreements to establish forward operating locations. To document the cost of U.S. counternarcotics support, we focused on fiscal years 2003 through 2007 by reviewing and analyzing program funding data from the various departments and agencies in Washington, D.C., including State, Defense, USAID, and DEA, for background purposes. While we performed some checks on the data, we did not perform a full reliability assessment of them. 
We believe the data provide a reasonable indication of spending on counternarcotics-related activities, but we recognize that the data the agencies gave us included funding for some activities that go beyond counternarcotics assistance and may include some U.S. interdiction-related activities. To assess program results and factors that have impeded counternarcotics efforts, we reviewed State s International Narcotics Control Strategy Reports and Mission Strategic Plans for fiscal years 2002 through 2007, End Use Monitoring reports, and other relevant State documents. At DEA, we reviewed Significant Action Reports and blue notes reports to the agency s Administrator on selected significant drug interdiction and other activities as well as all available rightsizing reports and all country and work plans for the eight major drug transit countries in our review. We also reviewed evaluations, midterm and final reports, and other program documents for those activities which USAID officials and State/INL documents had identified as related to the international counternarcotics effort. In addition, we reviewed Transparency International s Corruption Perception Index to evaluate the level of corruption in these countries. The index is based on the results of surveys of business people and citizens and analysis by country experts. It ranks countries by the degree of corruption perceived to exist in each country rather than by actual corruption, which is difficult to measure directly. In a previous GAO report, we determined that Transparency International s data were sufficiently reliable to provide a broad gauge of corruption and demonstrate that levels of corruption vary among countries. To obtain more detailed information on program results and impediments, we traveled to four of the eight major drug trafficking countries in our review: the Dominican Republic, Ecuador, Guatemala, and Panama. We chose these four countries based on: (1) the size of the U.S. assistance program; (2) the location within the various geographic regions of the transit zone, including Central America, South America, and the Caribbean Islands; (3) designation as a major money-laundering country; (4) posting of senior embassy officials representing State/INL, DEA, DHS, ICE, and CBP, and Defense; and (5) implementation of major USAID rule of law and alternative development initiatives. During our visits we obtained information from U.S. embassy officials, host government officials, and local program beneficiaries. While in country, we visited a number of project sites relating to maritime operations, port security, intelligence gathering, drug crop eradication, alternative development, and other activities. To identify trends in cocaine flow, seizures, and disruptions, we reviewed data from the Interagency Assessment on Cocaine Movement from 2002 through 2007, with limited updated data provided by ONDCP. In the course of previous work, we discussed how cocaine flow data were developed with officials from the Defense Intelligence Agency and the Central Intelligence Agency, Crime and Narcotics Center. In addition, we discussed how seizure and disruption data were developed with officials from ONDCP. Overall, the data have limitations, due in part to the illegal nature of the drug trade and the time lag inherent in collecting meaningful data. 
Notwithstanding the limitations of the drug production and seizure data, we determined that these data were sufficiently reliable to provide an overall indication of the magnitude and nature of the illicit drug trade since 2003. We supplemented these data with information about trends in drug trafficking, interdiction, and cooperation with transit zone countries obtained from officials at JIATF-South. Finally, the information and observations on foreign law in this report do not reflect our independent legal analysis but are based on interviews with cognizant officials and secondary sources. Appendix II: Counternarcotics Maritime Law Enforcement Agreements The United States has signed Counternarcotics Maritime Law Enforcement agreements with 25 countries in the transit and source zones. According to Coast Guard officials, these agreements have improved cooperation with nations in the region and increased the United States' and, in particular, the Coast Guard's capability to board suspect vessels and detain suspected drug traffickers. These bilateral agreements typically contain six provisions. The United States and the other countries negotiate each provision separately, which means that some countries may agree to some provisions and not others. The six parts provide for the following: Ship-boarding provisions establish an expedited process for U.S. law enforcement agencies to obtain authorization from the competent authority of a designated country to board and search a vessel flying its flag and suspected of being engaged in illicit traffic outside the territorial waters of any nation. In certain limited circumstances, U.S. vessels may stop, board, and search suspicious vessels flying the flag of a designated country without having specific permission. Ship-rider provisions permit countries to place law enforcement officials on each other's vessels. Pursuit provisions allow U.S. law enforcement agencies, under very limited circumstances, to pursue aircraft and vessels in a country's airspace and territorial waters. In particular, the provisions permit U.S. law enforcement agencies to stop, board, and search a suspect vessel if the country does not have a vessel or aircraft available to respond immediately. Entry-to-investigate provisions allow U.S. law enforcement agencies, under very limited circumstances, to enter a country's airspace or territorial waters to investigate aircraft or vessels suspected of illicit drug trafficking. Specifically, the provisions permit U.S. law enforcement agencies to board and search a suspect vessel if the country does not have a vessel or aircraft available to respond immediately. Over-flight provisions permit U.S. law enforcement aircraft to fly over the country's territorial waters, with appropriate notice to the country's coastal authorities. Relay order-to-land provisions allow U.S. law enforcement agencies to relay an order to land from the host country to the suspect aircraft. Moreover, an additional International Maritime Interdiction Support clause permits U.S. law enforcement agencies, principally the Coast Guard, to transport suspected drug traffickers through that country to the United States for prosecution and provides for expedited access to that country's dockside facility to search suspect vessels. Since 2003, the United States has entered into support clauses with five countries. 
Table 5 lists the law enforcement agreements, including the international maritime interdiction support clause, that the United States has negotiated with countries in the transit and source zones.

Appendix III: Comments from the Department of Homeland Security

The following are GAO's comments on the Department of Homeland Security's letter dated June 27, 2008.

<7. GAO Comments>

1. A decision on whether to enlist CSI as a counternarcotics tool should be based on a reasoned study of its feasibility by relevant stakeholders, formally assessing the program's statutory authority among other factors. ONDCP and State support this recommendation.

2. CSI's mandate does not prohibit leveraging the program's resources for other agency missions. In addition, DHS/CBP notes in the CSI Strategic Plan for 2006-2011 that, at some point in the future, consideration should be given to potential expansion of the program from focusing on terrorism alone to encompassing other activities known to support terrorism, such as smuggling narcotics, violations of intellectual property rights, and currency violations.

3. As noted in our report, seven CSI ports are located in the Caribbean and Central America. Four of those ports (Balboa, Colon, and Manzanillo in Panama and Puerto Cortes in Honduras) are located on the Central American isthmus and, as such, are included in the Central American/Mexican Corridor, through which nearly 90 percent of cocaine destined for the United States moves. (See fig. 1.)

4. As DHS notes in its letter, the United States has unique and unparalleled cooperation and information sharing between the CBP officers at the foreign seaports and the host government customs personnel. CSI's way of working with host country partners may serve as a model for combating corruption and gaining the political support necessary to make U.S. efforts successful.

5. We are not recommending that CSI target maritime containers destined for other countries. Rather, we are recommending that, in addition to the factors CSI currently uses to target containers for inspection, DHS study the feasibility of using potential narcotics trafficking as one of the targeting factors.

6. We added information on CBP's advisory program.

7. The focus of our review was on U.S. initiatives to assist selected major drug transit zone countries, and we intentionally did not address many U.S. interdiction operations DHS described. We have reported previously on CBP's activities. See GAO, Drug Control: Agencies Need to Plan for Likely Declines in Drug Interdiction Assets, and Develop Better Performance Measures for Transit Zone Operations, GAO-06-200 (Washington, D.C.: Nov. 15, 2005); and GAO, Drug Control: Difficulties in Measuring Costs and Results of Transit Zone Interdiction Efforts, GAO-02-13 (Washington, D.C.: Jan. 25, 2002).

Appendix IV: Comments from the Department of Justice, Drug Enforcement Administration

Appendix V: Comments from the Department of State

The following are GAO's comments on the Department of State's letter dated June 25, 2008.

<8. GAO Comments>

1. We modified our recommendation to emphasize the need for State to report the results of U.S.-funded counternarcotics initiatives more comprehensively and consistently for each country in its annual International Narcotics Control Strategy Report. Such information would complement and aid interpretation of the broad performance measures State includes in its performance plan.
2. While the broad performance measures State has developed are important, they do not capture the results of specific assistance programs, and, without other information, their usefulness in managing counternarcotics programs is limited. Reviewing program-specific results in a particular country over time or comparing results among countries with similar programs can help identify patterns and lessons learned that may be useful in evaluating and managing these programs more effectively. Furthermore, without consistently reported program-specific results information, State cannot assess the extent to which the results of specific programs have contributed to the overall progress reflected in State's overall performance measures.

3. While we agree that assistance programs vary, we observed that many programs in several countries are similar and comparable.

4. Past experience with U.S.-provided assets has shown that State's monitoring of nations' commitment alone has not been sufficient to ensure that such assets are utilized effectively. We question whether such U.S. investments are justifiable without stronger up-front assurances, beyond an agreement, that recipient countries or sponsoring U.S. agencies can afford the long-term operations and maintenance costs and are committed to providing those resources. We continue to believe that explicit sustainability plans are needed that include a projection of total asset ownership costs that have been considered and agreed upon by all relevant parties, with adequate contingency plans in case assumptions change about sustainability and commitment.

5. State is in a unique position as the lead foreign affairs agency to ensure that all participating U.S. agencies involved in providing counternarcotics assets agree on a discrete sustainability plan. State, in particular, can influence Defense's sustainability planning when it approves security assistance programs, such as Enduring Friendship. Section 1206 of the National Defense Authorization Act of 2006, under which Enduring Friendship was authorized, requires State and Defense to jointly approve all projects and coordinate their implementation.

Appendix VI: Comments from the Office of National Drug Control Policy

Appendix VII: GAO Contact and Staff Acknowledgments

<9. Staff Acknowledgments>

In addition to the individual named above, A. H. Huntington, III, Assistant Director; Joseph Carney; Miriam A. Carroll; and James Michels made key contributions to this report.
Why GAO Did This Study

Each year, criminal organizations transport hundreds of tons of illegal drugs from South America to the United States through a 6 million square mile "transit zone" including Central America, the Caribbean, the Gulf of Mexico, and the eastern Pacific Ocean. Since fiscal year 2003, the United States has provided over $950 million to support counternarcotics efforts in transit zone countries, which historically lacked the capacity to interdict drugs. GAO was asked to examine (1) how the United States has assisted transit zone countries in disrupting drug trafficking and (2) what factors have impeded these efforts. GAO analyzed relevant data, met with U.S. and foreign officials, and visited selected countries.

What GAO Found

U.S. government assistance has improved international counternarcotics cooperation with the eight major drug transit countries GAO reviewed, except Venezuela. First, assistance programs have helped partner nations gather, process, and share information and intelligence leading to arrests and drug seizures. Second, they have enabled these nations to participate in counternarcotics operations--both at sea and on land--by providing assets (such as interceptor boats and vehicles), logistical support, and training for police units. Third, U.S. assistance has helped strengthen the capacity of prosecutors to work more effectively on drug-related cases. Assessing the impact of such a wide variety of programs is difficult because some are indirectly related to drug interdiction, and because results reporting has been limited and inconsistent.

Despite gains in international cooperation, several factors, including resource limitations and lack of political will, have impeded U.S. progress in helping governments become full and self-sustaining partners in the counternarcotics effort--a goal of U.S. assistance. These countries have limited resources to devote to this effort, and many initiatives are dependent on U.S. support. Programs to build maritime interdiction capacity have been particularly affected, as partner nations lack fuel and other resources needed to operate and maintain U.S.-provided boats. Limited political support, particularly in Venezuela, and corruption have also hindered U.S. counternarcotics efforts. In addition, the Department of Homeland Security (DHS) has implemented a Container Security Initiative (CSI) that targets and scans containers for weapons of mass destruction and terrorist contraband. But CSI has not routinely been used for illicit drug detection, despite its applicability for this purpose.
<1. Background>

LANL is organized in a matrix that allows programs to draw on scientific, engineering, and experimental capabilities from throughout the laboratory. Programs are funded and managed out of LANL's 15 directorates, such as Weapons Physics or Chemistry, Life and Earth Sciences, but LANL's scientists and engineers work in 64 technical divisions that are discipline specific. These technical divisions, such as Applied Physics or Biology, accomplish the work of the laboratory and support its operations. Program managers in the directorates fund work in the technical divisions in order to meet milestones determined with NNSA or other work sponsors. To this end, employees in the technical divisions may support multiple programs with their work and may be called on to provide specific expertise to different programs. LANL's facilities are managed by its directorates and provide specific capabilities, such as high-performance computers, that LANL employees use for their work, as well as general office and meeting space.

When LANL was originally sited and constructed during the Manhattan Project, according to laboratory officials, its infrastructure was intentionally spread out as a safety and security precaution. What was once a benefit now makes LANL's management and operation complex. Spread across 40 square miles and including 155 miles of roads, 130 miles of electrical transmission lines, 90 miles of gas transmission lines, and 9.4 million square feet of facility space, LANL employs 12,000 to 14,000 people every day. LANL's approximately 2,700 structures are grouped together across the laboratory into 49 major technical areas that include major scientific and experimental facilities, environmental cleanup areas, and waste management locations (see fig. 1). However spread out the technical areas are, LANL considers less than 400 acres of its site to be highly suited for development because of the difficulty of developing the site's steep slopes and the need to maintain safety and security buffers around specific work activities. The most heavily developed area of the laboratory is Technical Area-3, LANL's core scientific and administrative area, which accounts for half of the laboratory's employees and total floor space.

While individual scientific and engineering directorates within LANL are responsible for managing and securing its facilities, multiple programs across these organizations share facilities to accomplish their objectives. For example, LANL's Chemistry and Metallurgy Research facility is managed by LANL's Chemistry, Life and Earth Sciences directorate. The facility, however, is occupied by over 500 employees to support a number of programs across LANL that require its analytical chemistry and materials property testing capabilities (see fig. 2). These programs include manufacturing nuclear weapon pits, experimenting with nuclear fuels for civilian energy production, and producing nuclear heat sources for National Aeronautics and Space Administration missions.

LANL's shared facilities are protected at different levels depending on the type and amount of classified resources they house or store. DOE Manual 470.4-2, Physical Protection, defines these different levels and the types of safeguards that must be in place to ensure that classified resources are adequately protected. Table 1 summarizes these security levels and appropriate safeguards from lowest to highest level of security.
To determine the overall effectiveness of LANL's implementation of DOE security requirements and the laboratory's security performance, two DOE organizations periodically conduct independent reviews. DOE's Office of Independent Oversight conducts assessments, typically every 18 months. These assessments identify the weaknesses of LANL's security program and produce findings that laboratory officials must take action to correct. NNSA's Los Alamos Site Office is also required to conduct surveys annually. These surveys are based on observations of performance, including compliance with DOE and NNSA security directives. While the two types of reviews categorize the topics and subtopics they cover differently, the reviews overlap substantially. They both address security program management, protective forces, physical security, classified information protection, control and accountability of nuclear materials, personnel security, and cyber security. Furthermore, they both use a color-coding system to rate each area of review as either Green (satisfactory or effective), Yellow (marginal or needs improvement), or Red (unsatisfactory or significant weakness). The results of these reviews affect LANS's ability to earn its performance-based award fee for successful management and operation of LANL.

Under the contract between LANS and NNSA for the management and operation of LANL, NNSA is to establish the work to be accomplished by LANL, set requirements to be met, and provide performance direction for what NNSA wants in each of its programs. NNSA does this by annually issuing a performance evaluation plan that documents the process and associated performance objectives, performance incentives, award term incentives, and associated measures and targets for evaluating LANS's performance. In the performance evaluation plans for fiscal years 2007 and 2008, performance objectives and award fee incentives were specifically provided for security performance. LANL's contract requires the development of a Contractor Assurance System to increase accountability and improve management and performance. The Contractor Assurance System, according to the LANL official responsible for its implementation, is an integrated performance-based management system that is designed to include independent assessment and that is available as a tool for federal oversight. Notwithstanding the development of the Contractor Assurance System, under the contract with LANS, NNSA preserves its right to conduct direct oversight, particularly in the area of security.

The Secretary of Energy has authority under 10 C.F.R. 824.4(b) of DOE's Procedural Rules for the Assessment of Civil Penalties for Classified Information Security Violations to issue compliance orders that direct management and operating contractors to take specific corrective actions to remediate deficiencies that contributed to security violations regarding classified information. On July 12, 2007, the Secretary of Energy issued a compliance order to LANS as a result of the security incident uncovered in October 2006, when a subcontractor employee removed classified information from LANL without authorization. Analysis of the incident identified numerous breakdowns in LANL's classified information protection program and concluded that these breakdowns were caused, in part, by poor security practices.
The Compliance Order directs LANS to take comprehensive steps to ensure that it identifies and addresses critical classified information and cyber security deficiencies at LANL. These steps must be completed by December 2008. Violation of the Compliance Order would subject LANS to civil penalties of up to $100,000 per violation per day until compliance is reached.

<2. LANL Conducts Over 175 Program Activities That Fall into Three Major and Two Support Program Categories>

LANL has three major program categories: Nuclear Weapons Science, Threat Reduction Science and Support, and Fundamental Science and Energy. Nuclear Weapons Science programs ensure the safety, performance, and reliability of the U.S. nuclear deterrent. Threat Reduction Science and Support programs support nonproliferation and counterproliferation efforts. Fundamental Science and Energy programs address other national security concerns, particularly energy security, and provide basic scientific capabilities that support laboratory missions. LANL has two support program categories: Environmental Programs and Safeguards and Security. Environmental Programs address the remediation and disposition of waste at LANL. Safeguards and Security programs provide LANL with physical and cyber security protection. In addition to activities across these program categories that are supported by DOE and NNSA, LANL conducts millions of dollars in work for other federal agencies on specific research projects.

<2.1. Nuclear Weapons Science Programs Ensure the Safety, Performance, and Reliability of the U.S. Nuclear Deterrent>

LANL's primary mission is to ensure the safety, performance, and reliability of nuclear weapons in the nation's stockpile without performing underground nuclear weapon tests. It is responsible for the design, evaluation, annual assessment, and certification of the United States' W76 and W88 submarine-launched ballistic missile warheads, the W78 intercontinental ballistic missile warhead, and the B61 nuclear bomb, and it works in cooperation with NNSA's other nuclear weapons design laboratories and production plants. Because the United States stopped conducting underground nuclear weapon tests in 1992, LANL weapons scientists and engineers are involved in hundreds of research projects in programs aimed at developing strong physics modeling and predictive capabilities that provide information about nuclear weapons performance. Of particular focus since 2001 has been the development of a common methodology, known as Quantification of Margins and Uncertainties, for quantifying critical design and engineering factors during the operation of a nuclear weapon and the margin for these factors above which the weapons could fail to perform as designed. Furthermore, LANL is involved in two ongoing life extension programs, for the W76 and B61, which are efforts to refurbish aging weapons and extend their lifetimes for 20 to 30 years. In addition, LANL builds, operates, and maintains the infrastructure necessary to carry out its nuclear weapons mission and to support other laboratory missions. In fiscal year 2007, LANL conducted work on 41 Nuclear Weapons Science programs supported by about 3,400 FTEs and with a budget from NNSA of about $1.5 billion, which represented over half of LANL's total budget and approximately 87 percent of the funds received from NNSA for all of LANL's major program categories. Appendix II provides additional detail on LANL's Nuclear Weapons Science programs.
Out of the $1.5 billion total budget for LANL's Nuclear Weapons Science programs, nearly $560 million, or 37 percent, was budgeted for the operation of the facilities that support these programs, as well as new line item construction projects. In addition, the following five other programs together represent another 45 percent of LANL's Nuclear Weapons Science budget:

Pit Manufacturing and Certification. Since 2001 LANL has been working to reconstitute the nation's capability to manufacture and certify pits, which was lost when DOE's Rocky Flats Plant near Denver, Colorado, closed in 1989. This program re-establishes an immediate capability to manufacture pits in support of the nuclear weapons stockpile, plans for long-term pit manufacturing capability, and manufactures specific quantities of W88 pits. In fiscal year 2007, the year LANL delivered the first war reserve W88 pits for the nation's stockpile, the budget for Pit Manufacturing and Certification was $226.9 million, and the program was supported by 599 FTEs.

Advanced Simulation and Computing. To compensate in part for the loss of underground nuclear testing as a means for gathering data on nuclear weapon performance, a program of advanced simulation and computing hardware, software, and code was implemented to provide predictive computer models, supported by aboveground experimental data and archived data from past underground nuclear tests, that simulate nuclear weapon performance. In fiscal year 2007, the budget for Advanced Simulation and Computing was $202.5 million, and the program was supported by 446 FTEs.

Stockpile Services. This program supports research, development, and production work that is applicable to multiple nuclear weapon systems rather than a specific weapon system. For example, scientists may conduct basic research on critical factors of nuclear weapon operations in this program or run tests on components shared by nuclear weapon systems. In fiscal year 2007, the budget for Stockpile Services was $140.7 million, and the program was supported by 361 FTEs.

Stockpile Systems. For each weapon type for which LANL is responsible, this program supports routine maintenance; periodic repair; replacement of components; and surveillance testing to assure the weapon type's continued safety, security, and reliability. In fiscal year 2007, the budget for Stockpile Systems was $67.4 million, and the program was supported by 162 FTEs.

Life Extension Program. This program extends the lifetimes of warheads or the components of these warheads to ensure that they continue to perform as designed. LANL is currently focused on programs to extend the lifetimes of the B61 and W76 weapon types by 20 and 30 years, respectively. In fiscal year 2007, the budget for LANL's life extension programs was $44.1 million, and the programs were supported by 120 FTEs.

LANL's directorate for Weapons Programs is responsible for the conduct of these programs and carries them out primarily through three associate directorates (Weapons Physics, Weapons Engineering, and Stockpile Manufacturing and Support), as well as an office of Weapons Infrastructure. These organizations draw upon scientific, engineering, and experimental capabilities from throughout the laboratory to answer specific points of inquiry and to solve problems related to the nuclear weapons stockpile.
For example, the Weapons Physics associate directorate has identified 10 key capabilities that it believes are necessary to ensure that it can execute its weapons program work, many of which also aid scientific work outside of Nuclear Weapons Science programs. These capabilities, which reside in technical organizations outside of the Weapons Program Directorate, include expertise in high-performance computing, dynamic model validation, and radiochemistry. This matrixed approach, according to LANL officials, allows LANL's technical staff to work among peers in their respective fields and to apply their expertise to Nuclear Weapons Science programs as the need arises.

<2.2. Threat Reduction Science and Support Programs Support Nonproliferation and Counterproliferation Efforts>

In addition to helping ensure the safety and reliability of the U.S. nuclear deterrent, LANL applies science and technology to reduce the global threat of weapons of mass destruction (WMD), the proliferation of WMD, and terrorism. LANL pursues this mission through programs in three areas. First, the laboratory's nuclear nonproliferation programs, primarily funded by NNSA, focus on ways to address nuclear and radiological threats domestically and internationally. Second, LANL scientists familiar with WMD support the work of the Intelligence Community. Third, LANL conducts research programs supported by federal agencies, such as the Departments of Defense and Homeland Security, that provide foundational science and technology solutions to defeat chemical, radiological, biological, and nuclear WMD. Programs in these latter two areas are conducted as work for other federal agencies and are discussed in more detail in a subsequent section of this report. In fiscal year 2007, NNSA supported 12 Threat Reduction Science and Support nuclear nonproliferation programs at LANL that relied on over 480 FTEs and had a budget of about $225 million. Of these 12 programs, 9 were budgeted at over $1 million each in fiscal year 2007. Appendix III provides additional detail on these Threat Reduction Science and Support programs. Over 60 percent of the budget NNSA provided to support Threat Reduction Science and Support programs was for two programs:

Nonproliferation and Verification Research and Development. This program conducts scientific research and development, provides monitoring, sensing, and measurement technologies to observe the earth from space-based satellites, and produces and updates data for ground-based systems in order to detect banned nuclear explosions. In particular, LANL produces electromagnetic pulse and radiation sensors that are integrated into U.S. Air Force satellites and develops algorithms used to process remote sensing data. In fiscal year 2007, the budget for Nonproliferation and Verification Research and Development was $95.5 million, and the program was supported by 254 FTEs.

U.S. Surplus Fissile Materials Disposition. NNSA funds efforts to dispose of the country's surplus plutonium and highly enriched uranium. LANL supports plutonium disposition efforts by developing the processing technologies that will be used in a facility currently planned for construction at the Savannah River Site in South Carolina. This facility will disassemble surplus nuclear weapon pits and convert the plutonium in them into a powder form that can later be fabricated into a fuel useable in commercial nuclear reactors.
In fiscal year 2007, LANL's budget for this plutonium disposition work was $43 million, and the work was supported by 117 FTEs.

LANL's Directorate for Threat Reduction is responsible for conducting the laboratory's Threat Reduction Science and Support programs. Those programs primarily supported by NNSA are carried out through the directorate's Nuclear Nonproliferation program office. This office employs scientific, engineering, and experimental capabilities from throughout the laboratory to accomplish program missions. According to LANL officials, these capabilities, such as nuclear device design and radiochemistry, were initially developed to support Nuclear Weapons Science missions but are now being leveraged to support Threat Reduction Science and Support missions. In turn, these officials told us, results from Threat Reduction Science and Support programs provide feedback to Nuclear Weapons Science programs. For example, information on techniques to disarm nuclear weapons that are learned in threat reduction work can be used to improve the safety and security of the U.S. nuclear weapons stockpile.

<2.3. Fundamental Science and Energy Programs Address Energy Security and Other Emerging National Security Challenges and Support Basic Scientific Research>

As a national security science laboratory, LANL's mission also includes the development and application of science and technology to solve emerging national security challenges beyond those presented by WMD. LANL's Fundamental Science and Energy programs are managed by the laboratory's Science, Technology and Engineering Directorate, and funds to support these programs come from multiple offices within DOE, as well as other federal agencies. In fiscal year 2007, DOE supported 40 programs focusing on energy security, specifically fossil energy, civilian nuclear energy, alternative energy, and fusion. In addition, DOE supported basic scientific work in such areas as advanced computing, biology, environmental science, nuclear physics, and materials science, as well as Laboratory-Directed Research and Development projects. In total, DOE provided $151 million for Fundamental Science and Energy programs that supported over 380 FTEs. Appendix IV describes, in detail, LANL's DOE-supported Fundamental Science and Energy programs. Work for other federal agencies and Laboratory-Directed Research and Development projects in Fundamental Science and Energy are discussed in a subsequent section of this report.

LANL officials told us the laboratory's Fundamental Science and Energy programs, in conjunction with its Nuclear Weapons Science and Threat Reduction Science and Support programs, provide an integrated approach to national security science because these programs leverage one another's scientific, engineering, and experimental capabilities. For example, according to a senior LANL Science, Technology and Engineering official, LANL's Nuclear Weapons Science researchers developed expertise in underground work, such as tunnel boring, to facilitate underground nuclear testing, and this expertise has been translated for use in fossil energy activities. Specifically, the scientists and engineers responsible for the nuclear weapon test readiness program work out of the Fundamental Science and Energy organization. Similarly, capabilities in high-performance computing and simulation utilized by Nuclear Weapons Science programs have been applied to many other national security and Fundamental Science and Energy applications.
Furthermore, a senior LANL Nuclear Weapons Science official told us that 7 of the 10 key capabilities identified for Weapons Physics work, such as high-performance computing, computational math and physics, and weapons material properties and characterization, are managed out of the same directorate responsible for LANL's Fundamental Science and Energy programs. More than one-quarter of LANL's career employees work in more than one of LANL's major program areas, and laboratory officials told us a substantial number of employees develop the critical skills needed for the Nuclear Weapons Science and Threat Reduction Science and Support programs by first working in Fundamental Science and Energy programs.

<2.4. Environmental Programs Address Remediation of Waste at LANL>

LANL's Environmental Programs support the laboratory's scientific work by addressing legacy contamination, legacy waste disposition, and new waste at the site produced as a function of programmatic work. This waste is categorized as either legacy (generated before 1998) or newly generated. DOE's Office of Environmental Management provides funding for activities to remediate legacy contaminated sites and to dispose of legacy waste, and NNSA provides funding for activities to dispose of newly generated waste. LANL charges program organizations for disposition of newly generated waste, providing an additional stream of funds to support Environmental Programs. In fiscal year 2007, DOE's Office of Environmental Management supported LANL's legacy remediation and waste activities with a budget of over $146 million that supported about 325 FTEs. Costs and FTEs associated with processing newly generated waste and managing and operating the facilities that process them are paid for by the Nuclear Weapons Science facilities and operations programs discussed above. This work generally amounts to $40 million per year, and 87 FTEs support newly generated waste-processing activities.

LANL's legacy contamination remediation activities focus on remediation of contaminated sites and decontamination and decommissioning of contaminated structures. LANL must complete its work on contaminated sites by 2015 to comply with a Consent Order from the state of New Mexico's Environment Department to remediate soil and groundwater contamination. According to the LANL official responsible for this work, as of May 2007, LANL had cleaned up 1,434 of the 2,194 contaminated sites; however, the remaining sites are more difficult to address. This LANL official estimated that between 2007 and 2015, remediation of all of the sites will cost approximately $900 million.

LANL's newly generated waste activities focus on liquid and solid waste processing and disposal. Radioactive liquid waste at LANL is processed at the laboratory's Radioactive Liquid Waste Treatment facility, a building that is 45 years old. Upgrades to the treatment facility are currently under way, and the upgraded facility is expected to be operational by 2010. Solid waste, typically comprising discarded rags, tools, equipment, soils, and other solid materials contaminated by man-made radioactive materials, is processed at LANL's Technical Area-54 Area G Disposal Site. Engineering and design work has begun on a replacement facility for processing solid waste, and the facility is expected to be operational in 2014.
<2.5. LANL's Safeguards and Security Program Provides Physical and Cyber Security Protection>

LANL's Safeguards and Security program aims to provide the laboratory with protection measures that are consistent with the threats and risks detailed in the laboratory's Site Safeguards and Security Plan. This plan, which NNSA reviews annually, details levels of protection that must be provided in different areas of the laboratory to ensure secure programmatic operations and covers such topics as protective forces, site perimeter security, accountability and control over special nuclear material, protection of hard copy and electronic classified information, alarms, intrusion detection systems, identification badges, and security clearances. In fiscal year 2007, $140 million and over 900 FTEs supported Safeguards and Security operations. In addition, construction projects provide new and upgraded security protection at key areas. Specifically, an additional $48 million was budgeted to support two construction projects in fiscal year 2007. The first is the second phase of the Nuclear Materials Safeguards and Security Upgrade project, which focuses on providing upgraded perimeter protection for the facility at LANL where pits are manufactured. The second project focuses on creating a more secure entry point for vehicle traffic at LANL by establishing access control stations and altering traffic patterns on public roads (see fig. 3).

While LANL employs security professionals, the technical divisions, in practice, have been responsible for securing their own classified resources by operating their own vault-type rooms, classified computer networks, and classified work areas. These divisions also operated accountability systems for maintaining control over classified resources. Professional security staff advise technical divisions on security requirements and check on whether established practices are appropriately implemented and managed. More recently, security professionals have been deployed to technical divisions to assist directly with security operations, and according to LANL officials, classified resource protection has been centralized to a greater extent through such actions as consolidating storage of all accountable classified documents into one location.

<2.6. LANL's Programs Include Millions of Dollars in Work for Other Federal Agencies and Laboratory-Directed Research and Development Projects>

<2.6.1. Work for Others>

According to LANL, the laboratory's budget for work for others projects in fiscal year 2007 was $462.4 million, or about 17 percent of the laboratory's total budgetary resources, and these projects relied on nearly 800 FTEs. NNSA's Site Office reported that LANL scientists and engineers conducted work on over 1,200 individual projects for other federal agencies and outside entities in fiscal year 2007. Of these 1,200 projects, only 93 had fiscal year 2007 budgets of $1 million or more, and the budgets for these 93 projects totaled about $270 million, or 58 percent of all projects' budgets in fiscal year 2007. Nearly 60 percent of the $270 million available for these 93 projects came from the following two sources:

Defense-related intelligence agencies sponsored 26 of the 93 projects. These projects are described by LANL as International Technology projects.

The Department of Homeland Security sponsored an additional 24 of the 93 projects. The largest of these projects supports the National Infrastructure Simulation and Analysis Center.
The National Infrastructure Simulation and Analysis Center applies LANL's expertise in computer-based modeling and simulation for national response to national security events, such as a nuclear or radiological device explosion or an outbreak of infectious disease. Other projects focus on research and development related to defeating chemical and biological weapons, detecting the movement of radioactive materials, and providing threat assessment capabilities. Work for others activities are concentrated in LANL's Threat Reduction Science and Support and Fundamental Science and Energy programs. In particular, 27 Threat Reduction Science and Support programs received several hundred million dollars in fiscal year 2007. Twenty Fundamental Science and Energy programs received about $162 million to conduct work for others activities in fiscal year 2007. Of this total, 41 percent came from other DOE entities, such as other national laboratories; 19 percent from the Department of Health and Human Services; 13 percent from the National Aeronautics and Space Administration; and 10 percent from universities and institutions.

<2.6.2. Laboratory-Directed Research and Development>

In addition to programs supported by NNSA, DOE, and other federal and nonfederal work sponsors, LANL supports a program of Laboratory-Directed Research and Development (LDRD) that focuses on forefront areas of science and technology that are relevant to NNSA and DOE missions but are not directly funded by specific NNSA or DOE programs. LDRD projects are largely self-initiated and are funded indirectly by LANL through contributions made by directly funded programs. As a result, funds allocated for use on LDRD projects are not a budgeted expense but do contribute to the cost of LANL's work. DOE guidance requires that the maximum funding level for LDRD projects not exceed 8 percent of a laboratory's total operating and capital equipment budget. In fiscal year 2007, LANL provided just under $130 million to conduct work on 199 LDRD projects involving approximately 470 FTEs. These projects ranged in scope from research on predictive climate modeling, to nanotechnology in semiconductors, to medical technologies, to plutonium sciences. DOE guidance requires that LDRD projects normally conclude within 36 months of inception.

<3. LANL's Nuclear Weapons Science Programs Rely on Classified Resources to Accomplish Their Missions to a Greater Extent Than Do Other LANL Programs>

To carry out their missions, LANL's major and support programs operate in a wide variety of shared facilities, ranging from office buildings, to laboratories, to manufacturing facilities for nuclear weapon pits and high explosives. In this regard, LANL officials identified 633 such facilities, which are protected at different security levels. Of these 633 facilities, 607 are used by LANL's major programs. Table 2 provides information on the different levels of security at which LANL's major and support program facilities are protected. Facilities with appropriate levels of security house or store a variety of classified resources, ranging from special nuclear material to classified documents. At least 365 facilities are protected in their entirety at the Limited Area level or above, which is sufficient to allow them to store classified documents or perform classified activities. In contrast, Category I special nuclear material will be found in a facility that has all of the protections provided by Limited, Exclusion, Protected, and Material Access Areas.
Table 3 provides information on the different types of classified resources housed or stored in these facilities. LANL's Nuclear Weapons Science programs rely on facilities that house classified resources to a much greater extent than do the laboratory's Threat Reduction Science and Support or Fundamental Science and Energy programs. In contrast, LANL's Environmental and Safeguards and Security support programs rely on facilities that house classified resources to a minor extent. Specifically, Nuclear Weapons Science programs use 322 facilities that require security protections for classified resources. Thirty-two of these 322 facilities are protected at the highest levels as Exclusion, Protected, and Material Access Areas. Nuclear Weapons Science programs are the primary users (meaning they use more space in a facility than any of the other major or support programs at LANL) of 28 of these 32 facilities, including LANL's single Category I special nuclear material facility, known as Plutonium Facility 4 at Technical Area-55. Threat Reduction Science and Support programs use 105 facilities that require security protections for classified resources, 31 of which are protected as Exclusion, Protected, and Material Access Areas. Of these 31, Threat Reduction Science and Support is the primary user of 14, including all of LANL's facilities for Sensitive Compartmented Information. Finally, Fundamental Science and Energy uses 103 facilities that require security protections for classified resources. While 15 of these are protected as Exclusion, Protected, and Material Access Areas, Fundamental Science and Energy is not the primary user of any of these 15 facilities. Overall, LANL's Nuclear Weapons Science programs are the primary users of facilities storing or housing different types of classified resources to a greater extent than are LANL's Threat Reduction Science and Support or Fundamental Science and Energy programs. Table 4 provides information on the primary-user facilities that house or store classified resources, as well as vault-type rooms.

<4. LANL Is Implementing Over Two Dozen Initiatives Officials Believe Will Reduce Security Risk and Improve Protection of Classified Resources>

LANL has initiatives under way that are principally aimed at reducing, consolidating, and better protecting classified resources, as well as reducing the physical footprint of the laboratory by closing unneeded facilities. LANL officials believe that these initiatives will reduce the risk of incidents that can result in the loss of control over classified resources. In concert with these actions, LANL is implementing a series of engineered and administrative controls to better protect and control classified resources.

<4.1. LANL Is Reducing and Consolidating Classified Resources and Its Physical Footprint>

According to NNSA security officials, the size and geographic dispersal of LANL's facilities create challenges for classified operations at the laboratory because classified resources must be shared among programs that use remote facilities. This condition increases the number of instances in which laboratory employees move and hand off classified resources, a situation that has created accountability problems.
To address this problem, LANL is reducing classified holdings at the laboratory; consolidating storage of and access to these resources in fewer facilities that are more centrally located and controlled; and, where possible, eliminating hard copies and classified removable electronic media by transferring the information to LANL's classified red computer network. Simultaneously, LANL is reducing the overall size of its physical footprint by eliminating facilities that are in poor or failing condition or are excess to mission needs.

<4.1.1. Classified Resources Reduction and Consolidation>

LANL is undertaking a number of initiatives that security officials believe will improve LANL's security posture and, thereby, reduce risk to the laboratory's operations. These initiatives are being managed in the short term by a Security Improvements Task Force, a multidisciplinary team chartered in January 2007 to improve physical security operations. The Task Force targeted six types of classified resources for immediate consolidation and reduction: (1) accountable classified removable electronic media; (2) classified removable electronic media that do not need to be tracked with an accountability system; (3) classified parts; (4) accountable classified documents; (5) classified documents that do not need to be tracked with an accountability system; and (6) vaults and vault-type rooms. With respect to each type of resource, LANL developed a baseline inventory, identified resources that could be destroyed (or, in the case of vaults and vault-type rooms, emptied), and consolidated remaining resources into fewer facilities. As of March 2008, the latest date for which data are available, LANL had significantly reduced and consolidated each of these resources, as described below:

Accountable classified removable electronic media. LANL reduced the number of pieces of accountable classified removable electronic media actively in use from a high of 87,000 pieces in 2003 to about 4,300 pieces.

Classified removable electronic media. LANL instituted a spring cleaning project in May 2007 that contributed to the destruction of 610 pieces of classified removable electronic media. According to a senior LANL security official, LANL completed an assessment of its classified removable electronic media holdings in February 2008 and estimates there are approximately 6,500 pieces of nonaccountable classified removable electronic media at the laboratory. Security officials said unneeded media will be destroyed during a second spring cleaning effort in May 2008.

Classified parts. LANL has allocated nearly $1.7 million for a project to inventory tens of thousands of classified nuclear weapon parts, destroy those that are no longer useful, and centrally manage those that remain. Through a laboratorywide effort, nearly 30,000 classified parts were identified and destroyed between February 2007 and March 2008 by melting the parts, grinding them into shapes that are no longer classified, or blowing them up. According to LANL officials, additional destruction of classified parts is under way.

Accountable classified documents. LANL completed consolidation of all accountable documents into a single storage library in November 2007. While accountable classified documents are created and destroyed on an ongoing basis, as of March 2008, LANL was managing just over 6,000 accountable classified documents.

Classified documents.
According to a senior LANL security official, the laboratory completed an assessment of nonaccountable classified documents in February 2008 and estimates there are approximately 9 million classified documents at the laboratory. From April 2007 through February 2008, LANL destroyed over 1.6 million pages of classified documents, and another destruction effort is planned for May 2008.

Vaults and vault-type rooms. LANL has reduced the number of vault-type rooms at the laboratory from 142 to 111 and plans to further reduce the number to 106. One LANL security official said he thought the laboratory could ultimately reduce the number of vault-type rooms to 100. Of the remaining vaults and vault-type rooms, LANL officials told us all have been comprehensively inspected and any security deficiencies remedied. During fiscal year 2007, LANL built a prototype super vault-type room, a model for future vault-type room operations, that consolidates classified resources in a highly secure, access-controlled environment staffed by security professionals. According to LANL officials, the super vault-type room has allowed LANL to consolidate 65 percent of its accountable classified removable electronic media holdings in one location. In addition to classified resource storage, the super vault-type room offers classified mailing, scanning, faxing, and printing services, thereby reducing the number of locations, equipment, and people handling classified resources in other parts of the laboratory.

In addition, LANL is taking steps to reduce the number of special nuclear material storage facilities that must be protected at the site. In 2000, there were 19 such nuclear facilities at LANL, and by 2006, this number had decreased to 11. LANL plans to further reduce the number of nuclear facilities at the site to five by 2016. The number of facilities that store Category I special nuclear material has already been reduced from nine to one. This remaining Category I facility, LANL's Plutonium Facility 4 at Technical Area-55 (see fig. 4), contains the nation's only plutonium research, development, and manufacturing facility and the laboratory's only Material Access Area. It is protected with a combination of safeguards that include fences, controlled access points, electronic sensors and surveillance, and armed guards.

According to the LANL Director, the laboratory has embarked on a multiyear transformation effort to reduce its facility footprint and better manage its infrastructure investments. Many facilities at LANL were built in the early 1950s and are beginning to show signs of structural or systems failure. Other structures at LANL, such as trailers, are temporary and do not provide quality office or laboratory space. Furthermore, the geographic separation of LANL's facilities makes effective collaboration difficult, according to LANL program managers. LANL officials told us that reducing the laboratory's physical footprint will save facility operation costs and reduce deferred maintenance costs, which LANL estimated at $321.5 million in fiscal year 2007. Officials said it will also enhance scientific collaboration and improve safety and security. LANL's goal in fiscal year 2007 was to reduce its existing facility footprint by 400,000 square feet and to reduce it by a further 1.6 million square feet in fiscal year 2008.
To determine which facilities would be reduced, several of LANL's directorates prepared footprint reduction plans targeting facilities that (1) have significant deferred maintenance costs, (2) are in poor or failing condition, (3) are expensive to maintain because they were not designed or built for energy efficiency, and (4) are considered excess to current and anticipated mission needs. In fiscal year 2007, LANL exceeded its footprint reduction goal by reducing existing facility square footage by just over 500,000 square feet. Seventy-seven facilities were reduced, contributing to this total. According to LANL and NNSA officials, the criteria used to determine whether a facility is considered to be reduced vary. Generally, a facility is considered reduced when it is closed, the utilities have been disconnected, and it is no longer occupied by laboratory employees. However, in at least one instance, LANL considered a portion of a facility to be reduced, while another portion remained occupied and building utilities were still connected. A reduced facility may still require environmental remediation and will eventually require disposition, either through demolition, transfer, or sale.

<4.2. LANL Is Introducing Engineered and Administrative Controls to Protect Classified Resources>

LANL is also introducing engineered and administrative controls to improve the physical security of its remaining classified resources and to reduce the security risks associated with their use. According to LANL, implementing these controls can help reduce errors in handling classified resources and, therefore, reduce risk. The super vault-type room is a solution engineered to address the risk of mishandling accountable classified resources by putting responsibility for these classified resources in the hands of security professionals. A senior LANL security official told us that the laboratory relies on these controls to influence and change laboratory employees' behavior. For example, a LANL official said increased mandatory and additional random searches of employees leaving vault-type rooms, an engineered control, should help raise employees' awareness of unauthorized removal of classified documents or media from vault-type rooms. Furthermore, simplifying security orders, an administrative control, should help LANL employees understand and implement their security obligations.

Examples of engineered controls, beyond the initiatives to reduce and consolidate the seven types of classified resources discussed above, include improving security perimeters around the laboratory and around specific facilities; adding to and reinforcing existing vehicle access control points; expanding a random drug testing program to include all new and existing LANL employees and subcontractors; increasing random searches performed by protective forces on individuals in secure areas to ensure they are not leaving with classified resources; expanding the classified red computer network to a greater number of facilities, further enabling the reduction of accountable and nonaccountable classified electronic media; significantly reducing laboratory computers' ability to create new accountable and nonaccountable classified removable electronic media; initiating a pilot program to attach radio frequency identification tags to cellular phones and two-way paging devices that set off an alarm when these devices are brought into restricted areas; and upgrading security alarm systems.
Examples of administrative controls include issuing manuals to formalize facility operations, maintenance, engineering, training, and safety requirements across LANL; updating and simplifying physical security orders to ensure requirements are easily understood and can be implemented; reinforcing the applicability of security requirements to subcontractors through a meeting and a new appendix to subcontractors' contracts; enhancing procedures for escorting individuals into vault-type rooms; eliminating the practice of allowing cleared individuals to hold the door for other cleared individuals entering restricted facilities, known as piggybacking, by requiring that all individuals entering restricted facilities swipe their badges; implementing Human Performance Assessments of security incidents that identify how a lack of engineered or administrative controls, which can be corrected, contributes to human errors; and reissuing work control policies emphasizing Integrated Safeguards and Security Management, a system intended to provide each LANL employee with a framework for performing work securely and fulfilling individual security responsibilities.

<5. While LANL's Initiatives Address Many Security Problems Identified in Prior External Evaluations, Other Significant Security Problems Have Received Insufficient Attention>

Many of the initiatives LANL is undertaking address security findings identified in external evaluations, particularly those conducted by DOE's Office of Independent Oversight and NNSA's Site Office. Some of these initiatives are being implemented in response to DOE's 2007 Compliance Order, which resulted from the October 2006 security incident. Despite these efforts, however, significant security problems have not been fully addressed. Furthermore, in fiscal year 2007 LANL's initiative to reduce the physical footprint of its site reduced maintenance costs more than it addressed facility security.

<5.1. Many of LANL's Initiatives Address Security Problems Identified by DOE's Office of Independent Oversight and NNSA's Site Office between Fiscal Years 2000 and 2008>

Between fiscal years 2000 and 2008, DOE's Office of Independent Oversight issued four complete assessments of security at LANL. Over the same period, NNSA's Los Alamos Site Office conducted seven surveys of laboratory security. These assessments and surveys identified a variety of security problems at LANL, many of which are being addressed through initiatives LANL is currently implementing. Some examples follow:

Inadequate accounting for classified documents. Issues with the adequacy of LANL's accounting for classified documents were raised by the Site Office in fiscal years 2005 and 2006 and by DOE's Office of Independent Oversight in fiscal year 2007. These issues related to the inconsistent handling of classified documents by document custodians in LANL's divisions and to the timeliness of updates to LANL's classified document and media accountability policies to ensure that they reflected DOE's policies. Several of LANL's ongoing security initiatives and engineered and administrative controls are intended to address these concerns by centrally storing and handling accountable classified documents in vaults, vault-type rooms, and the super vault-type room staffed by security professionals and by implementing an automated system to update classification guidance.

Inadequate accounting for classified nuclear weapon parts.
Findings about the adequacy of LANL's accounting for classified parts were raised by the Site Office in fiscal year 2001 and by DOE's Office of Independent Oversight in fiscal years 2003, 2007, and 2008. These findings related to improper marking of classified parts with their appropriate classification level and storage of classified parts in containers and facilities that are considered nonstandard, or out of compliance with DOE rules governing classified resource storage. These rules include requirements for building alarms, frequency of security guard patrols, and facility vulnerability assessments. Furthermore, the DOE Inspector General reviewed LANL's management of classified parts in 2007 and had additional findings about the inventory systems used to maintain accountability over classified parts. While LANL has not resolved issues related to nonstandard storage (see discussion in a subsequent section of this report), LANL officials told us that, having destroyed nearly 30,000 classified parts at the laboratory, they have established a goal of reducing the number of nonstandard storage facilities from 24 to 0 by the end of August 2008. LANL is also developing a new, centrally controlled inventory system for tracking classified parts and has created administrative procedures and guidance for the system's use.

Inconsistent efforts to reduce classified holdings. A finding about the consistency of LANL's efforts to reduce classified holdings was raised by the Site Office in fiscal year 2001. The Site Office noted that despite the existence of LANL procedures for regularly reviewing classified inventories to reduce them to the minimum necessary, routine review and reduction of classified inventories were not occurring. While other surveys and assessments did not discuss this finding, LANL's current initiatives to reduce accountable and nonaccountable documents and classified removable electronic media, which began in 2003, have significantly reduced holdings, and future classified holdings reduction targets are being developed. Through engineered controls, LANL is also attempting to limit the ability and the need to create new classified removable electronic media and to make the information previously stored on removable media available through the laboratory's classified computer network. Specifically, to prevent the creation of new media, LANL is removing functions on classified computers that would allow media to be created or copied and is deploying new classified computing systems that do not contain the capability to create removable electronic media. In addition, LANL has undertaken an effort to upload the information stored on classified removable electronic media to the laboratory's classified computer network before the media are either destroyed or permanently archived. LANL officials said this will reduce the risk that media could be mishandled, thus improving the laboratory's physical security. However, LANL officials also acknowledged that transferring information from classified media to a classified network represents a shift from physical security risk to cyber security risk. A senior LANL official told us this risk is minimized by ensuring that LANL's classified network is appropriately protected and access to the network is properly controlled.

Insufficient security at vault-type rooms.
Findings about the sufficiency of security at LANL s vault-type rooms were raised by the Site Office in fiscal year 2005 and by DOE s Office of Independent Oversight in fiscal years 2007 and 2008. These findings concerned the adequacy of security patrols, sensor detection, and unauthorized access. LANL has addressed concerns about vault-type room security through comprehensive physical assessments of all vault-type rooms, and a laboratory security official told us that all identified deficiencies have been remedied. Furthermore, the official told us that in the future LANL intends to recertify vault-type rooms every 2 years, instead of every 3 years. Finally, LANL has reduced the number of vault-type rooms in operation at the laboratory facilitating more frequent security patrols and has increased mandatory and random searches of individuals exiting vault-type rooms. LANL is also implementing security initiatives in response to the October 2006 security incident. Specifically, DOE s July 2007 Compliance Order, which resulted from this incident, required LANL to submit an integrated corrective action plan to address critical security issues at the laboratory, including many of those identified by the Site Office and Office of Independent Oversight since 1999. According to LANL s analysis of past information and cyber security findings, the root causes of 76 percent of these findings were related to inadequate policies, procedures, or management controls. Correspondingly, many of the administrative controls LANL is now implementing and that it included in its integrated corrective action plan address these policy, procedural, and management problems, including reissuing policies and guidance for improving implementation of Integrated Safeguards and Security Management, which LANL officials told us will help individual employees ensure they execute their security responsibilities as part of their regular work; providing Human Performance Assessments as a component of security incident reports to help managers identify challenges in their work environments that can be improved to reduce the likelihood and severity of security errors made by employees; revising policies for escorting visitors into vault-type rooms to ensure visitors access to classified resources is properly limited; and improving communication of security requirements to subcontractors by adding an additional exhibit to their contracts. <5.2. Not All Security Problems Are Being Fully Addressed> While many of the initiatives and engineered and administrative controls LANL is implementing address past security concerns, some significant security problems identified by DOE s Office of Independent Oversight and NNSA s Site Office have not been fully addressed. Specifically, LANL s storage of classified parts in unapproved storage containers and its process for ensuring that actions taken to correct security deficiencies are completed have been cited repeatedly in past external evaluations, but LANL has not implemented complete security solutions in these areas. In addition, LANL s actions to address other long-standing security concerns, such as the laboratory s process for conducting self-assessments of its security performance and its system for accounting for special nuclear material, have been planned but have not, as yet, been fully implemented. More specific examples include the following: Classified nuclear weapon parts storage. 
LANL uses General Services Administration-approved security containers for standard storage of classified resources. Classified resources that cannot be readily stored in approved containers (for example, because of their size) are stored in vaults, vault-type rooms, or nonstandard storage facilities. According to LANL officials, there are 24 nonstandard storage areas at the laboratory. Requests for nonstandard storage are made through a process approved by NNSA's Site Office. LANL management reviews all nonstandard storage requests, and requests are approved by LANL's Physical Security group. The approval process requires LANL to conduct risk assessments for these nonstandard storage areas. While the Site Office has never independently raised concerns about the adequacy of nonstandard storage areas in its surveys, the Office of Independent Oversight has consistently called attention to this issue. Specifically, in fiscal years 2003, 2007, and 2008, the Office of Independent Oversight noted problems with the safeguards LANL said were in place to protect nonstandard storage areas and questioned the risk assessment methodology LANL has used to determine appropriate protections. In 2007, the Chief of DOE's Office of Health, Safety and Security, which oversees independent assessments, testified that LANL is overly dependent on nonstandard storage for the protection of many of its classified nuclear weapon parts and that the overall impact of deficiencies in nonstandard storage arrangements on the protection of these parts is substantial. LANL officials told us their goal is to eliminate all 24 nonstandard storage areas at the laboratory by August 2008 and, in the interim, to continue applying for waivers to rules governing standardized storage through the Site Office's approval process. However, LANL's plans for eliminating specific nonstandard storage areas show that the elimination of one area is planned for the second quarter of fiscal year 2009, as much as 7 months later than LANL's August 2008 goal, and that four other areas will remain nonstandard storage areas. Furthermore, a recent status report on nonstandard storage area elimination activities showed that nearly all activities were at risk of schedule delay. Process for ensuring that corrective actions are completed. When evaluations result in findings of security deficiencies, LANL must prepare a corrective action plan that charts a path forward for resolving the finding. To resolve a deficiency and complete its corrective action plan, DOE requires LANL to conduct a root-cause analysis, risk assessment, and cost-benefit analysis to ensure that the corrective action implemented truly resolves the deficiency identified. In fiscal year 2007, the Office of Independent Oversight questioned the completeness of corrective action plans, some of which did not include the required risk assessments, leading to concerns about whether actions taken to address security deficiencies would in fact prevent recurrence. This concern is similar to our 2003 finding that corrective action plans were often inconsistent with DOE requirements. The fiscal year 2008 Office of Independent Oversight assessment noted that weaknesses in corrective action plans' causal analyses remain. Specifically, the Office of Independent Oversight found that some corrective action plans' root-cause analyses were insufficient to properly identify security deficiencies. 
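The completeness requirement described above can be expressed as a simple automated check. The following sketch, written in Python purely for illustration, shows the kind of screening an assessor could run against a set of corrective action plans; it is a hypothetical example, not a depiction of any DOE or LANL system, and the plan identifiers and field names are invented.

```python
# Hypothetical completeness check for corrective action plans. DOE requires each
# plan to include a root-cause analysis, a risk assessment, and a cost-benefit
# analysis; this sketch simply flags plans that lack any required element.

REQUIRED_ELEMENTS = ("root_cause_analysis", "risk_assessment", "cost_benefit_analysis")

# Invented example records; a real review would draw these from the plan documents.
plans = [
    {"id": "CAP-001", "root_cause_analysis": True, "risk_assessment": True, "cost_benefit_analysis": True},
    {"id": "CAP-002", "root_cause_analysis": True, "risk_assessment": False, "cost_benefit_analysis": True},
]


def missing_elements(plan: dict) -> list:
    """Return the required elements a plan lacks."""
    return [element for element in REQUIRED_ELEMENTS if not plan.get(element, False)]


for plan in plans:
    gaps = missing_elements(plan)
    if gaps:
        print(f"{plan['id']} is incomplete; missing: {', '.join(gaps)}")
    else:
        print(f"{plan['id']} contains all required elements")
```

A check of this kind addresses only whether the required elements are present, not whether the analyses themselves are adequate, which was the further concern the Office of Independent Oversight raised.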
According to LANL officials, in fiscal year 2008, LANL revised its self-assessment program to ensure that root-cause analyses are included in all corrective action plans and that these plans are sufficient. In fiscal year 2007, the Site Office and the Office of Independent Oversight raised concerns about the timeliness of LANL's submission of corrective action plans and the length of time it takes to close corrective action plans by resolving findings. The fiscal year 2007 Performance Evaluation Plan that NNSA developed to establish priorities for the laboratory provided LANS with financial incentives totaling over $1 million to complete LANL's corrective actions on schedule. While the Site Office noted significant improvement in the timeliness and closure of corrective action plans in its fiscal year 2007 survey, LANL did not meet the fiscal year 2007 performance milestone. NNSA's fiscal year 2008 Performance Evaluation Plan provides LANS with a $100,000 financial incentive to improve the timeliness of corrective action plan development and up to an additional $357,000 to close corrective action plans quickly and on time. Inadequate self-assessment. Concerns about the adequacy of LANL's assessments of its own security performance were raised by the Site Office in fiscal years 2003, 2005, 2006, and 2007 and by DOE's Office of Independent Oversight in fiscal year 2008. These concerns related to the comprehensiveness of LANL's self-assessments, the extent to which self-assessments included discussion of all internal findings, and the extent to which these findings were analyzed and addressed through corrective actions. NNSA provided LANS with a nearly $600,000 financial incentive under the fiscal year 2007 Performance Evaluation Plan to improve LANL's self-assessment program. According to NNSA's evaluation of LANL's fiscal year 2007 performance, LANL did not meet NNSA's goal but did make progress toward it by significantly improving self-assessment. The Office of Independent Oversight's fiscal year 2008 assessment also noted improvements but recommended further areas for attention. These recommendations included ensuring that self-assessments address all aspects of each assessment topic, such as classified information protection and physical security. LANL officials said training on conducting self-assessments is currently being developed. Control and accountability system for special nuclear material. DOE requires that LANL maintain a system for tracking special nuclear material inventories, documenting nuclear material transactions, issuing periodic reports, and detecting potential material losses. According to LANL and Site Office security officials, the system LANL uses, known as the Material Accountability and Safeguards System (MASS), is over 20 years old and was developed with a now outdated computer language. While LANL has not reported any incidents involving the loss or diversion of special nuclear material in recent years, the Site Office and Office of Independent Oversight raised concerns in fiscal years 2002, 2003, 2005, 2006, and 2007 related to LANL's system. Such concerns included the absence of controls in MASS to detect, in time to prevent the transfer, internal transfers of nuclear material that could result in safeguards category limits being exceeded. According to a senior LANL official, a project to upgrade the system was approved to proceed in January 2008 and is scheduled to be completed by February 2010 at a cost of $3 million. 
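The transfer-control concern lends itself to a brief illustration. The sketch below, again in Python and purely for illustration, shows the kind of pre-transfer limit check the evaluators described as missing; it is not a depiction of MASS, and the material balance areas, quantities, and limits are invented placeholders rather than actual DOE category thresholds.

```python
# Hypothetical pre-transfer check of the kind evaluators said MASS lacked:
# reject an internal transfer if it would push the receiving area's holdings
# past its safeguards category limit. All names, quantities, and limits are
# illustrative and do not reflect actual DOE thresholds or LANL data.

holdings_kg = {"MBA-A": 1.2, "MBA-B": 0.4}        # current inventory by material balance area
category_limit_kg = {"MBA-A": 2.0, "MBA-B": 0.5}  # illustrative limits


def transfer_allowed(source: str, destination: str, quantity_kg: float) -> bool:
    """Return True only if the transfer keeps the destination within its limit."""
    if quantity_kg <= 0 or holdings_kg[source] < quantity_kg:
        return False  # nothing to move, or the source area lacks the material
    return holdings_kg[destination] + quantity_kg <= category_limit_kg[destination]


def execute_transfer(source: str, destination: str, quantity_kg: float) -> None:
    if not transfer_allowed(source, destination, quantity_kg):
        raise ValueError("transfer blocked: destination would exceed its category limit")
    holdings_kg[source] -= quantity_kg
    holdings_kg[destination] += quantity_kg


execute_transfer("MBA-A", "MBA-B", 0.05)   # permitted: MBA-B rises to 0.45 kg, within its 0.5 kg limit
# execute_transfer("MBA-A", "MBA-B", 0.2)  # would raise: 0.45 + 0.2 exceeds the 0.5 kg limit
```

The point of the sketch is simply that the limit is evaluated before the transaction is posted, which is the timing gap the Site Office and the Office of Independent Oversight described.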
<5.3. LANL's Footprint Reduction Initiative Reduced Maintenance Costs More Than It Addressed Facility Security> LANL's initiative to reduce the physical footprint of its facilities focuses on eliminating facilities that are in poor and failing condition, thus reducing the laboratory's deferred maintenance burden, which, according to a LANL estimate, totaled over $320 million in fiscal year 2007. Additionally, the initiative focuses on facilities that have no enduring mission need, thus avoiding future operations costs. While the footprint reduction plans put together by LANL's Weapons Physics and Weapons Engineering directorates both state that security improvements would result from facility reduction, LANL officials responsible for setting priorities for reducing facilities told us that the facilities' security problems were not seriously considered when planning for footprint reduction. In that regard, we found that of the 77 facilities LANL counted toward meeting its footprint reduction goal of 400,000 square feet in fiscal year 2007, only 2 facilities contained any classified resources. Specifically, these two facilities included (1) a large, Limited Area administrative facility that contained six vault-type rooms, stored classified parts, and provided access to LANL's classified network; and (2) a Limited Area facility used for high explosives work that provided access to LANL's classified network. Closing vault-type rooms and eliminating classified network access points has the potential to improve security at LANL by reducing or consolidating the number of classified resources that require security protection. In the case of the administrative building described above, the facility was replaced by a newly constructed administrative building that has 11 vault-type rooms, 5 more than the original administrative building contained. However, in commenting on our report, LANL officials said that the new administrative building incorporates more modern safety and security standards than the original administrative building. On balance, the security benefits derived from LANL's fiscal year 2007 footprint reduction efforts are unclear. In commenting on our report, LANL officials noted that Security and Safeguards Requirement Integration Teams participate in footprint reduction projects to ensure that facilities and the classified information they house or store remain secure during the closure process. While subsequent documentation provided by the leader of LANL's physical security organization does show that Security and Safeguards Requirement Integration Teams assist with facility reduction efforts in this manner, it does not show that these teams evaluate facility security weaknesses as criteria for identifying which facilities at LANL should be closed. <6. LANL's and NNSA's Management Approaches to Sustain Security Improvements Over the Long-Term Are in the Early Stages of Development or Contain Weaknesses> DOE, NNSA, and even LANL officials have found that LANL has consistently failed to sustain past security initiatives. For example, in DOE's 2007 Compliance Order, the Secretary of Energy wrote that although some corrective steps were taken by the previous LANL contractor in response to security incidents, the October 2006 incident demonstrated that problems continued. Similarly, NNSA's Office of Defense Nuclear Security noted in 2007 that after each security incident at LANL, the laboratory has responded by changing policies and procedures and investing in new equipment and systems. 
The result, according to the Office of Defense Nuclear Security, had been a steady improvement in security through mitigation of immediate problems; however, the inability to halt what NNSA has characterized as a string of incidents involving the failure to account for classified information demonstrated that LANL had not identified and addressed the root causes of security incidents. In its own analysis of the October 2006 security incident, LANL determined that the incident s root cause was inconsistent and ineffective implementation of Integrated Safeguards and Security Management principles in its classified work, despite the fact that a DOE policy governing implementation of Integrated Safeguards and Security Management throughout the DOE complex had been in place since at least 2001. In acknowledging the problem of sustaining security improvements, LANL officials described three management approaches they intend to use to ensure that security improvements currently being implemented are sustained over the long-term: (1) DOE s July 2007 Compliance Order, (2) LANL s Contractor Assurance System, and (3) NNSA s annual performance evaluation plans. However, each management approach cited by LANL officials either contains weaknesses that will affect LANL s ability to fully ensure security initiatives are sustained or is in an early stage of development. Furthermore, our January 2007 findings regarding the NNSA Site Office s capacity to oversee security at LANL have not yet been addressed. <6.1. DOE s July 2007 Compliance Order Is Not Designed to Be a Tool for Management Change> LANL officials told us that completing the efforts required by DOE s July 2007 Compliance Order would ensure that security improvements are sustained. However, the Compliance Order is not designed to provide LANL with a management tool for sustaining long-term security initiatives or for future security improvement. Rather, it serves as a mechanism for DOE to enforce financial penalties against LANS should LANL fail to implement the required actions that address past security problems. Specifically, the actions required by the Compliance Order must be completed by December 2008. If they are not completed, LANS is subject to civil penalties of up to $100,000 per violation per day. In September 2007 LANL submitted an integrated corrective action plan to DOE in partial fulfillment of Compliance Order requirements. This plan outlined the 27 actions LANL intends to take to address seven critical security issues identified as having contributed to the October 2006 security incident and to meet the requirements of the Compliance Order. Of these seven critical security issues, five pertain to the physical security of classified information and resources. These five issues include the following: LANL has not consistently or effectively implemented the principles and functions of Integrated Safeguards and Security Management in the management of classified work; LANL s classified information security training is not fully effective; LANL has not provided effective leadership and management in protecting classified information; LANL s assurance system has not effectively resolved classified information protection issues; and LANL has not, in some cases, effectively sustained corrective actions. The majority of the actions LANL outlined in its plan to address these issues are discrete, rather than representing long-term efforts aimed at improving LANL s overall security performance. 
They include, for example, documenting that managers have met with employees to communicate and reinforce expectations with regard to integrating the principles of Integrated Safeguards and Security Management into daily work activities; implementing personnel actions with respect to the October 2006 security incident, such as placing formal reprimands in employees personnel files and putting employees on unpaid leave; and revising the laboratory s policy on escorting visitors into vault-type rooms. While actions of this type should contribute to security improvements in the short-term, discrete actions such as these do not ensure that security initiatives will be sustained over time. Moreover, while the Compliance Order provides a mechanism to assess financial penalties if LANL fails to implement the actions included in its integrated corrective action plan, the mechanism will no longer be available once actions are concluded in December 2008. <6.2. LANL s Contractor Assurance System Is Not Fully Developed or Deployed> LANL officials told us they expect to use the laboratory s new Contractor Assurance System to ensure that security improvements are sustained over time once actions under the Compliance Order are complete in December 2008. However, we found that the extent to which LANL will be able to rely on the Contractor Assurance System to ensure long-term sustainability of security improvements after December 2008 is unclear for two reasons. First, LANL officials told us that the system will not be fully developed or implemented by the time LANL completes its Compliance Order efforts in December 2008. Second, an internal assessment of the Contractor Assurance System found that (1) there is a lack of evidence that the system is being effectively deployed across the laboratory and (2) the measures included in the system may not be meaningful. LANL is designing the Contractor Assurance System to measure and track performance from the top down. Top-level measures, such as meeting program milestones set by NNSA or on-time delivery of products, are in place. Lower-level measures, such as measures of the work processes used to meet milestones and deliverables, are still in development. LANL officials responsible for designing the Contractor Assurance System told us that these lower-level measures are critical to the success of the system because they will provide the data that indicate where work processes are breaking down before milestones or deliverables are delayed. Officials also said that trend analysis from data associated with lower-level measures would indicate areas where security concerns are developing. During fiscal year 2008, LANL officials said they plan to focus on developing lower-level measures, but they will not complete these measures by December 2008. A senior official in NNSA s Site Office told us it could be another 3 to 4 years before the Contractor Assurance System is fully implemented. In its first internal assessment of the Contractor Assurance System completed in September 2007, LANL found that while the system was operational and met the requirements of the contract between NNSA and LANS, it contained significant weaknesses. For example, while upper-level management uses the system, there are gaps in its use across LANL s technical divisions and facilities. According to the assessment, these gaps could make the system ineffective. 
In addition, a LANL official told us that while managers are required to attend training on using the system, many do not yet recognize its usefulness. Moreover, the assessment found that because lower-level process measures have not yet been implemented, it may be difficult to use the system for its stated purpose to improve management and performance. For example, the assessment found that the Contractor Assurance System cannot yet measure key management and performance indicators, such as budget performance, fiscal accountability, and customer satisfaction or dissatisfaction with LANL products and services. In this regard, a LANL official told us that the Contractor Assurance System is not yet mature enough for laboratory officials to understand the best ways to use it and that LANL managers are still identifying which processes they need to measure in order to gather relevant performance data. In commenting on our report, LANL officials agreed with our assessment of the Contractor Assurance System and noted that efforts to improve its maturity are ongoing. <6.3. NNSA s Performance Evaluation Plans Principally Focus on Achieving Compliance with DOE Requirements and Do Not Sufficiently Reward LANL s Security Program for Improved Security Performance> LANL officials told us the laboratory also plans to realize sustained security improvements by meeting the security-related performance incentives in the annual performance evaluation plans NNSA uses to measure performance and determine an award fee. The fiscal year 2007 and fiscal year 2008 performance evaluation plans contain both objective and subjective measures of security performance that are tied to financial incentives. Objective measures of security performance use specific and discrete criteria that are not judgmental, such as achieving a particular score on a security evaluation, while subjective measures of security performance use broad criteria that are judgmental, such as effectiveness of security planning. According to NNSA s Site Office, the two sets of measures complement each other and allow NNSA to withhold incentive fees when its expectations for effective management and leadership are not met. Site Office officials told us it is possible LANL could achieve success in all of the objective security measures but fail to earn award fees on the basis of its performance assessed with subjective measures. We found that the objective measures included in the performance evaluation plans reward LANL for complying with existing DOE security requirements but do not sufficiently reward LANL for improving its security performance. Of the $51.3 million potentially available for LANS s total performance-based incentive fee in fiscal year 2008, only $1.43 million is associated with objective measures of security performance. Of this total, $1.4 million is an incentive for compliance with DOE security requirements, and only $30,000 is allocated to forward-looking and laboratorywide security improvement. According to a senior NNSA security official, compliance with DOE requirements does not assure that LANL s security program is functioning effectively, and actions to achieve compliance may not be valuable unless the actions also address management or operational needs. Specifically, in fiscal year 2008, we found the following objective provisions: $800,000 to achieve the milestones LANL sets in an annual security operating plan, which aligns LANL s security activities with its budget. 
The fiscal year 2008 annual security operating plan provides a roadmap for LANL security program compliance with DOE requirements and includes milestones such as submitting the Site Safeguards and Security Plan, conducting security training, publishing security policy, completing quarterly equipment maintenance requirements, and conducting inventories of special nuclear material. $200,000 to achieve an overall satisfactory rating on the Site Office s annual security survey. $400,000 to achieve 90 percent of the milestones associated with the ongoing Phase 2 Nuclear Materials Safeguards and Security Upgrade construction project. $30,000 to develop a forward-looking Safeguards and Security Modernization Plan, which according to a senior Site Office official, is in progress. This official said the Site Office expects LANL to deliver a plan that can begin to be implemented in fiscal year 2009, if the budget allows. However, the official also said the Site Office has not provided any criteria or guidance to LANL about what the plan should include. The objective measures for security performance established under the fiscal year 2007 Performance Evaluation Plan were similar to those established in fiscal year 2008. Specifically, for fiscal year 2007, we found the following incentive provisions: about $1.2 million to achieve the milestones in the fiscal year 2007 annual security operating plan, which were as compliance-oriented as they are in the fiscal year 2008 annual security operating plan; about $670,000 to ensure that inventories of special nuclear material accurately detected any gain or loss of material, excluding legacy material; about $560,000 if DOE validated that LANL s Safeguards and Security program was rated effective on five of seven ratings contained in the Office of Independent Oversight assessment and was rated overall satisfactory in the Site Office survey; and about $270,000 to achieve all of the milestones included in the fiscal year 2007 annual operating plan for cyber security. Financial incentives associated with objective measures of security performance totaled nearly $2.7 million in fiscal year 2007. The entire $2.7 million encouraged LANL to comply with existing DOE requirements for effective security operations. LANL earned $2.4 million of the $2.7 million potentially available, despite the occurrence of the October 2006 security incident. NNSA increased the potential performance award fee associated with subjective measures for laboratory performance in fiscal year 2007 as a result of the October 2006 security incident and also included subjective measures in the fiscal year 2008 Performance Evaluation Plan. These measures evaluate LANS s leadership in integrating programs, including security, across the laboratory and achieving exemplary overall laboratory performance. We found that these measures are neither compliance-based nor forward-looking, but rather focus on overall quality of performance. In fiscal year 2007, LANL received its lowest performance rating in this category, earning only 35 percent of the over $10 million potentially available. LANL s low performance rating directly reflected the occurrence of the October 2006 security incident. In fiscal year 2008, the award fee potentially available for successful achievement of subjective measures is $10.3 million, approximately $125,000 more than in fiscal year 2007. 
One of the 20 criteria NNSA will consider in determining the fiscal year 2008 award fee in this area is specific to overall performance, timeliness, and effectiveness of security commitments. A senior Site Office official told us that security performance will also be considered when NNSA evaluates overall laboratory leadership and management. However, according to Site Office officials, NNSA has not yet determined how it will weigh security against other criteria, such as Weapons or Threat Reduction program performance, when determining how much of the award fee LANS will earn for achieving subjective performance measures. <6.4. Prior Findings on the NNSA Site Office s Capacity to Oversee Security at LANL Have Not Yet Been Addressed> While it is important for LANL to continue to improve the performance of its security programs through the use of the management tools previously discussed, the Site Office must still directly oversee LANL s security program. Specifically, the Site Office is required to conduct a comprehensive annual survey of LANL s Safeguards and Security performance to assure DOE that the site is appropriately protected. These surveys must be validated through, among other things, document reviews, performance testing, direct observation, and interviews. To conduct these surveys, as well as routine oversight, the Site Office must be appropriately staffed with trained professionals. In our January 2007 report on the effectiveness of NNSA s management of its security programs, we found that NNSA s site offices including the Los Alamos Site Office suffered from shortages of security personnel, lacked adequate training resources and opportunities for site office security staff, and lacked data to determine the overall effectiveness of its Safeguards and Security program. We reported that these factors contributed to weakness in NNSA s oversight of security at its laboratories and production facilities. During the course of this review, senior Los Alamos Site Office officials confirmed that these problems persist. For example, they said NNSA has not developed a strategy for determining long-term staffing needs at the Site Office. As of October 2007, the Site Office employed 13 security staff enough for one person to oversee each of the topical areas the Site Office had to evaluate. This staffing level, officials said, was sufficient to cover only 15 percent of LANL s facilities. More recently, a senior security official at the Site Office said security staffing levels have decreased since October 2007. Furthermore, while NNSA had identified the need to train and certify Site Office security personnel in nuclear material control and accountability, vulnerability assessment, and personnel security, no specific funding for this training has been made available according to Site Office officials. According to the Los Alamos Site Office s Site Manager, the Site Office must employ expertise sufficient to determine, through effective oversight activities, whether LANL is implementing the policies and plans that it puts forward. <7. Conclusions> Accomplishing the mission of conducting world-class scientific work at Los Alamos National Laboratory requires the laboratory to maintain a security program that effectively addresses current security risks, anticipates future security risks, and ensures that initiatives to address both current and future risks are sustained over the long-term. 
While LANL has focused its attention on fixing current security risks in reaction to recent incidents and has implemented initiatives that address a number of previously identified security concerns, LANL has not developed the long-term strategic framework necessary to ensure that these fixes are sustained over time. In addition, some important security problems identified in external evaluations have not been fully addressed. Moreover, our review pointed out the potential for cyber security risks to increase as a result of actions to improve physical security. Consequently, while LANL security officials have indicated their desire to prevent future security incidents, we believe that only a long-term, integrated strategy can help ensure that they will succeed. Continuously implementing security improvement initiatives over the long- term and proactively addressing new security risks also requires an effective process for assessing contractor performance on security activities. We believe the relative immaturity of and weaknesses in the management approaches LANL and NNSA intend to use to ensure that security improvements are sustained may limit their effectiveness and result in a failure to sustain security improvement initiatives. Specifically, DOE s Compliance Order requires LANL to take immediate actions to improve security deficiencies, but the Compliance Order does not serve as a tool for ensuring these actions are sustained. In addition, we have doubts that LANL s Contractor Assurance System can sustain security improvement initiatives until it is sufficiently mature, which may take several years. Therefore, we believe performance evaluation plans hold the most promise for ensuring that security initiatives are sustained over the long-term. When the LANL management and operating contract was competed in 2005, laboratory security was a key consideration. NNSA stated that it intended to put a contract in place, along with an annual performance evaluation plan, that would communicate its priorities and provide incentives to accomplish those priorities. However, despite NNSA s persistent statements about the importance of security, we believe that the performance evaluation plans that NNSA has issued under the new LANS contract do not provide meaningful financial incentives for strategic security improvements or communicate to LANL that security is a top federal priority. Rather than reward LANL for principally complying with current DOE security requirements, in our view, financial incentives in performance evaluation plans should be focused on the long-term improvement of security program effectiveness to a greater extent. We believe that LANL needs to develop a strategic plan for laboratory security that is comprehensive, contains solutions to address all previously identified security findings, takes an integrated view of physical and cyber security, provides opportunities for periodic updates to ensure additional security risks are identified and addressed, and is tied to meaningful performance incentive fees. Finally, as LANL plans for further reductions in its facility footprint, it has an opportunity to assess facilities security weaknesses, as well as their deferred maintenance burdens and their anticipated contributions to future program missions, when it first determines which facilities should be reduced. 
In our view, including an assessment of facilities security weaknesses in this initial decision-making process would enhance the security benefits derived from the effort to reduce the footprint. <8. Recommendations for Executive Action> To improve security at Los Alamos National Laboratory, we recommend that the Secretary of Energy and the Administrator of NNSA require LANL to develop a comprehensive strategic plan for laboratory security that (1) addresses all previously identified security weaknesses, (2) contains specific and objective measures for developing and implementing solutions that address previously identified security weaknesses and against which performance can be evaluated, (3) takes an integrated view of physical and cyber security, (4) focuses on improving security program effectiveness, and (5) provides for periodic review and assessment of the strategic plan to ensure LANL identifies any additional security risks and addresses them. To ensure sustained improvement of LANL s security program, we recommend that the Administrator of NNSA provide meaningful financial incentives in future performance evaluation plans for implementation of this comprehensive strategic plan for laboratory security. To enhance security initiatives already under way at LANL, we recommend that NNSA require that future laboratory plans for footprint reduction include specific criteria for evaluating facilities security risks when making initial selections of facilities for footprint reduction. <9. Agency Comments and Our Evaluation> We provided NNSA with a copy of this report for review and comment. NNSA did not specifically comment on our recommendations. However, NNSA stated that while there is still much to be accomplished, NNSA believes that progress has been made in addressing reductions in classified parts, classified documents, vaults, and vault-type rooms, as well as with the implementation of engineered controls. While we acknowledge LANL s progress in our report, NNSA noted that several security problems at LANL addressed in the report specifically, nonstandard storage of classified parts and the maturation of contractor assurance systems are issues for the broader nuclear weapons complex as well. Overall, we continue to believe that the key issue is that NNSA and LANL cannot ensure that initiatives such as these will be sustained, or that changing security vulnerabilities will be identified and proactively addressed, without implementing our recommendations for a long-term strategic framework for security that effectively assesses contractor performance. NNSA s comments on our draft report are included in appendix V. NNSA also provided technical comments from LANL, which we have incorporated into this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Energy, and the Administrator of NNSA. We will also make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512- 3481 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology To identify Los Alamos National Laboratory s (LANL) major programs, we collected Department of Energy (DOE) and LANL budget, program, and activities documentation. This documentation included data on work LANL conducts for other federal agencies and nonfederal organizations, as well as projects LANL undertakes at its own direction. We used this documentation to identify major program categories and to group LANL s activities within them. Specifically, we identified three major program categories Nuclear Weapons Science, Threat Reduction Science and Support, and Fundamental Science and Energy; and two key support programs Environmental Programs and Safeguards and Security. LANL officials reviewed and validated our results, and based on feedback they provided, we made adjustments as needed. To determine the extent to which LANL s major and support programs rely on classified resources to meet their objectives, we collected information on classified resource use on a facility basis. Although we initially requested data on each program s use of classified resources, this data was not available because LANL maintains this data on a facility basis. LANL s facilities are shared in a matrix management approach by the laboratory s 64 technical divisions to execute programs. To enhance the accuracy and completeness of the facility-level information we collected, we developed a data collection instrument for LANL officials to complete that included specific data fields and definitions. To select the facilities for inclusion in this data collection instrument, we used LANL s real property catalogue, which lists each of the 1,283 facilities on the laboratory s campus. From this list, we excluded facilities containing only utility services, such as steam plants, and facilities with full-time occupancies of fewer than 10 people, unless the facility, based on its use for experiments, could potentially house or store classified resources. We also allowed like-facilities, such as individual bunkers used for high explosives testing, to be grouped together as one facility. Using these definitions, LANL officials determined that 633 facilities should be included in our review. We compared the facilities LANL had selected with the original real property list and agreed the 633 facilities selected by LANL represented the appropriate facilities for our analysis. Using the data collection instrument we had provided, LANL officials entered information on (1) the security protection level of each of the 633 facilities, as described by DOE Manual 470.4-2, Physical Protection, which defines different levels of security depending on the type and amount of classified resources these facilities store or house; (2) the types of classified resources housed or stored in each facility; (3) where practical, how many of each type of classified resource each facility stores or houses; (4) which of the laboratory s major and support programs rely on the classified resources in each facility; and (5) how much space each of the laboratory s major and support programs use in each facility as a percentage of that facility s gross square footage. 
We analyzed the data by aggregating facilities by program and apportioned classified resource usage according to three categories: (1) a program is the exclusive user of all of the space in a facility storing or housing classified resources, (2) a program is the primary user of space in a facility storing or housing classified resources because it uses more space than any of the other major or support programs at LANL, and (3) a program uses some space in a facility storing or housing classified resources. Because our analysis focused on facilities used for one of LANL s three major programs, we excluded facilities only used by laboratory support programs, resulting in final analysis of 607 of the original 633 facilities. To evaluate the completeness and accuracy of the information LANL officials provided, we compared the data with other documentary and testimonial evidence we collected during the course of our review to ensure that the data were consistent. For example, we had received briefings about the reduction of vault-type rooms at LANL, and we ensured that the total number of vault-type rooms LANL program managers had discussed with us during these briefings matched the total number of vault-type rooms identified in the facility data LANL provided. In addition, we compared the data provided on the security levels of specific facilities with our physical observations of security safeguards at these same facilities during site visits to determine whether the data LANL officials provided were consistent with our experiences at those facilities. We also conducted logic and electronic tests of the data and followed up with LANL officials to resolve discrepancies. We determined that these data were sufficiently reliable for our purposes. To identify the initiatives LANL is taking to consolidate its classified resources and reduce the scope of its physical footprint, we collected and reviewed data on LANL s plans for consolidating classified resources and interviewed key LANL, National Nuclear Security Administration (NNSA), and DOE officials. We also toured LANL facilities that house and store classified resources, such as vault-type rooms and the super vault-type room, and visited a facility where classified nuclear weapon parts are being destroyed. In addition, we identified the buildings that LANL was proposing to close as part of its footprint reduction effort and, using the information provided by LANL officials in response to our data collection instrument, determined whether closing these buildings could improve LANL s security posture by eliminating or consolidating the classified resources that may have been stored or housed in them as a result of footprint reduction. Finally, we visited sites currently undergoing closure and sites proposed for consolidation and reduction. To determine if LANL s security initiatives address previously identified security concerns, we reviewed security evaluations conducted by DOE s Office of Independent Oversight and NNSA s Site Office from fiscal years 2000 to 2008 and identified the security concerns raised by these evaluations. We then compared LANL s current initiatives with the results of our review of the security evaluations to determine if all of the security concerns were being addressed. We discussed the results of this analysis with DOE, NNSA headquarters, NNSA Site Office, and LANL contractor officials. In addition, we reviewed relevant DOE Office of Inspector General reports. 
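As a rough illustration of the apportionment approach described above, the sketch below groups hypothetical facility records by program and assigns each program an exclusive, primary, or partial usage category for the facilities that house classified resources. It is written in Python for illustration only; the record layout, facility names, and space shares are invented and do not reflect LANL's actual data or analysis tools.

```python
# Illustrative aggregation of facility-level data by program. Each record mirrors
# the kinds of fields collected: whether the facility houses classified resources
# and the share of its space each program uses. All values are invented.

facilities = [
    {"name": "Facility 1", "classified": True,
     "space_share": {"Nuclear Weapons Science": 1.0}},
    {"name": "Facility 2", "classified": True,
     "space_share": {"Nuclear Weapons Science": 0.6,
                     "Threat Reduction Science and Support": 0.4}},
    {"name": "Facility 3", "classified": False,
     "space_share": {"Fundamental Science and Energy": 1.0}},
]


def categorize(facility: dict) -> dict:
    """Assign each program an exclusive/primary/some-use category for one facility."""
    shares = facility["space_share"]
    top_program = max(shares, key=shares.get)
    categories = {}
    for program, share in shares.items():
        if share == 1.0:
            categories[program] = "exclusive user"
        elif program == top_program:
            categories[program] = "primary user"
        else:
            categories[program] = "uses some space"
    return categories


# Count, for each program, the classified-resource facilities in each category.
counts = {}
for facility in facilities:
    if not facility["classified"]:
        continue  # the analysis focuses on facilities housing or storing classified resources
    for program, category in categorize(facility).items():
        counts.setdefault(program, {}).setdefault(category, 0)
        counts[program][category] += 1

print(counts)
```

The categories in the sketch simply mirror the three usage groupings named in the text; the actual analysis worked from the space percentages LANL officials reported through the data collection instrument.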
To determine whether the management approach LANL is implementing under the new LANS contract is sufficient to ensure that LANL's security improvement initiatives are fully implemented and sustainable, we asked LANL and NNSA to identify how they intended to sustain security improvements and ensure the effectiveness of LANL's security. We reviewed the management approaches they identified, specifically (1) LANL's actions in response to DOE's July 2007 Compliance Order resulting from the October 2006 security incident, (2) the security-related aspects of the new Contractor Assurance System LANL is implementing, and (3) the incentives being used to improve security at LANL under the 2007 and 2008 Performance Evaluation Plans. As part of this review, we determined the extent to which each of these management approaches could sustain security improvement initiatives over the long-term and the extent to which these management approaches focused on either compliance with DOE security requirements or improved effectiveness of LANL's security program. We discussed these management approaches with LANL, NNSA headquarters, and NNSA Site Office officials. We conducted this performance audit from March 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: LANL's NNSA-Supported Nuclear Weapons Science Programs LANL conducted work on 41 Nuclear Weapons Science programs in fiscal year 2007, all of which were supported by NNSA. When program objectives are shared, they have been combined in the table below. The program objectives listed in that table include the following: supporting the operation and maintenance of facilities and infrastructure that support the accomplishment of Nuclear Weapons Science programmatic missions; re-establishing an immediate capability to manufacture pits in support of the nuclear weapons stockpile, planning for a long-term pit manufacturing capability, and manufacturing specific quantities of W88 pits; supporting the construction of new facilities and significant upgrades to existing facilities; providing the advanced computing infrastructure hardware, software, and code to simulate nuclear weapon performance; and conducting research, development, and production work that is applicable to multiple nuclear weapon systems, as opposed to a specific weapons system (for example, basic research on critical factors of nuclear weapon operations). Appendix III: LANL's NNSA-Supported Threat Reduction Science and Support Programs LANL conducted work on 12 Threat Reduction Science and Support programs in fiscal year 2007 that were supported by NNSA. Of these 12 programs, 9 had budgets in fiscal year 2007 that exceeded $1 million each. Information about these programs is in the table below. Appendix IV: LANL's Fundamental Science and Energy Programs Supported by DOE <10. Supports 35 programs at LANL that focus on research and development in carbon capture and sequestration, unconventional fuels, fuel utilization, climate, and predicting engineered natural systems> Appendix V: Comments from the National Nuclear Security Administration Appendix VI: GAO Contact and Staff Acknowledgments <11. 
Staff Acknowledgments> In addition to the individual named above, James Noel, Assistant Director; Nabajyoti Barkakati; Allison Bawden; Omari Norman; Rachael Schacherer; Rebecca Shea; Carol Herrnstadt Shulman; and Greg Wilshusen made key contributions to this report.
Why GAO Did This Study In 2006, a Los Alamos National Laboratory (LANL) contract employee unlawfully removed classified information from the laboratory. This was the latest in a series of high-profile security incidents at LANL spanning almost a decade. LANL conducts research on nuclear weapons and other national security areas for the National Nuclear Security Administration (NNSA). GAO was asked to (1) identify LANL's major programs and activities and how much they rely on classified resources; (2) identify initiatives LANL is taking to reduce and consolidate its classified resources and physical footprint and the extent to which these initiatives address earlier security concerns; and (3) determine whether its new management approaches will sustain security improvements over the long-term. To carry out its work, GAO analyzed LANL data; reviewed policies, plans, and budgets; and interviewed officials. What GAO Found With fiscal year 2007 budget authority of about $2.7 billion, LANL conducts work on over 175 programs that can be grouped into three major program categories--Nuclear Weapons Science, Threat Reduction Science and Support, and Fundamental Science and Energy--and two support program categories--Environmental Programs and Safeguards and Security. Respectively, LANL's major programs serve to ensure the safety, performance, and reliability of the U.S. nuclear deterrent; support nonproliferation and counterproliferation efforts; and address energy security and other emerging national security challenges. LANL's Nuclear Weapons Science programs are the primary users of the facilities housing classified resources. For example, the Nuclear Weapons Science programs are the primary users of 14 facilities that store special nuclear material while LANL's other major programs are the primary users of only 7 such facilities. LANL has over two dozen initiatives under way that are principally aimed at reducing, consolidating, and better protecting classified resources, as well as reducing the physical footprint of the laboratory by closing unneeded facilities. While many of these initiatives address security concerns identified through past external evaluations--such as efforts to consolidate storage of classified documents and media into fewer secure facilities and to destroy unneeded classified nuclear weapon parts--significant security problems at LANL have received insufficient attention. Specifically, LANL has not implemented complete security solutions to address either classified parts storage in unapproved storage containers or weaknesses in its process for ensuring that actions taken to correct security deficiencies are completed. LANL intends to use three management approaches to sustain the security improvements it has been able to achieve to this point over the long-term: (1) undertake management actions required of LANL under the Compliance Order issued by the Secretary of Energy as a result of the 2006 security incident, (2) develop a Contractor Assurance System to measure and improve LANL's performance and management, and (3) implement annual performance evaluation plans NNSA uses to measure LANL's performance and determine a contract award fee. These approaches contain weaknesses that raise doubts about their ability to sustain security improvements over the long-term. Specifically, the actions LANL has proposed to take to meet the terms of the Compliance Order are only short-term--with completion planned for December 2008. 
Further, according to LANL officials, the Contractor Assurance System is not fully deployed and the measures it includes may not be fully effective. Finally, the annual performance evaluation plans do not sufficiently reward improving long-term security program effectiveness.
<1. Background> The 8(a) program, administered by SBA s Office of Minority Enterprise Development, is one of the federal government s primary vehicles for developing small businesses that are owned by minorities and other socially and economically disadvantaged individuals. Firms that enter the program are eligible to receive contracts that federal agencies designate as 8(a) contracts without competition from firms outside the program. During fiscal year 1995, 6,002 firms participated in the 8(a) program. SBA data show that during fiscal year 1995, 6,625 new contracts and 25,199 contract modifications, totaling about $5.82 billion were awarded to 8(a) firms. To be eligible for the 8(a) program, a firm must be a small business that is at least 51-percent owned and controlled by one or more socially and economically disadvantaged persons. A business is small if it meets the SBA standard for size established for its particular industry. Members of certain ethnic groups, such as black and hispanic Americans, are presumed to be socially disadvantaged. To be economically disadvantaged as well, socially disadvantaged individuals cannot have personal net worth (excluding equity in a personal residence and ownership in the firms) exceeding $250,000. In addition, the firm must be an eligible business and possess a reasonable prospect for success in the private sector. Firms can participate in the 8(a) program for a maximum of 9 years. The Business Opportunity Development Reform Act of 1988 marked the third major effort by the Congress to improve SBA s administration of the 8(a) program and to emphasize its business development aspects. The legislation affirmed that the measure of success for the 8(a) program would be the number of firms that leave the program without being unreasonably reliant on 8(a) contracts and that are able to compete on an equal basis in the mainstream of the American economy. Over the years, reports by GAO, SBA s Inspector General, and others have identified continuing problems with SBA s administration of the program and/or with the program s ability to develop firms that could successfully compete in the marketplace after leaving the program. <2. Percentage of Competitively Awarded 8(a) Contract Dollars Was About the Same> To help develop firms and better prepare them to compete in the commercial marketplace after they leave the program, the act requires that 8(a) program contracts be awarded competitively to 8(a) firms when the total contract price, including the estimated value of contract options, exceeds $5 million for manufacturing contracts or $3 million for all other contracts. Of the approximately $3.13 billion in new 8(a) contracts awarded in fiscal year 1995, about $610 million, or 19.5 percent of the total dollar amount, was awarded competitively. In comparison, in fiscal year 1994, about $380 million, or 18.5 percent of the $2.06 billion in new 8(a) contracts, was awarded competitively. Between fiscal years 1991 and 1995, the total dollar value of new 8(a) contract awards increased by about 96 percent, while the value of contracts awarded competitively increased by about 190 percent. Appendix I shows the number and the dollar value of 8(a) contracts awarded competitively in fiscal years 1991 through 1995. SBA s June 1995 revisions to the 8(a) program regulations closed a major loophole involving the competitive award of indefinite delivery, indefinite quantity (IDIQ) contracts. 
IDIQ contracts are used when an agency does not know the precise quantity of supplies or services to be provided under a contract. As the agency identifies a specific need for goods or services, it modifies the IDIQ contract to reflect the actual costs associated with providing that quantity of goods or services, up to the maximum amount specified in the contract. Before the June 1995 revisions, SBA s 8(a) program regulations required that an agency, when determining whether an IDIQ contract should be offered on a competitive or noncompetitive (sole-source) basis, consider only the guaranteed minimum value of the contract rather than the estimated total contract amount. According to SBA, IDIQ contracts were often improperly used simply to avoid the need for competition, and wide differences often occurred between the guaranteed minimum values of IDIQ contracts and the amount eventually spent by agencies under the contracts. To avoid this problem, the June 1995 regulations require that for all 8(a) program contracts SBA accepts after August 7, 1995, including IDIQ contracts, the procuring agency must consider the total estimated value of the contract, including the value of contract options, when determining whether the contract should be awarded competitively. <3. Contract Dollars Continued to Be Concentrated in a Small Percentage of Firms> The concentration of 8(a) contract dollars among relatively few firms is a long-standing condition that continued in fiscal year 1995. SBA data show that in fiscal year 1995, 50 firms less than 1 percent of the 6,002 total firms in the 8(a) program during the fiscal year received about $1.46 billion, or about 25 percent of the $5.82 billion in total 8(a) contracts awarded. In fiscal year 1994, 50 firms about 1 percent of the 5,155 firms then in the program also received about 25 percent of the $4.37 billion in total 8(a) contract dollars awarded during the fiscal year. Twelve firms that were among the top 50 in fiscal year 1995 were also among the top 50 firms in the previous year. Furthermore, 22 firms that were among the top 50 in fiscal year 1994 were also among the top 50 firms in fiscal year 1993. Appendix II contains a table that shows the range of total contracts dollars awarded to the top 50 firms for fiscal years 1992 through 1995. While 8(a) contract dollars continue to be concentrated in a relatively few firms, many economically disadvantaged firms do not receive any 8(a) program contracts. SBA data show that of the 6,002 firms in the program during fiscal year 1995, 3,267 firms, about 54 percent, did not receive any program contracts during the fiscal year. In comparison, in fiscal year 1994, 56 percent of the 8(a) firms did not receive any program contracts. As we testified in April 1995, a key reason for the continuing concentration of contract dollars among a relatively few firms is the conflicting objectives confronting procuring officials, according to SBA officials. In SBA s view, the primary objective of procuring officials is to accomplish their agency s mission at a reasonable cost; for these officials, the 8(a) program s business development objectives are secondary. At the same time, the agency s procurement goals for the 8(a) program are stated in terms of the dollar value of contracts awarded. According to SBA, the easiest way for agencies to meet these goals is to award a few large contracts to a few firms, preferably firms with which the agencies have had experience and whose capabilities are known. 
In addition, according to SBA the concentration of firms receiving 8(a) contracts is no different than the concentration among firms that occurs in the normal course of federal procurement. However, while this may be true for federal procurement overall, the Congress in amending the 8(a) program in 1988 sought to increase the number of competitive small businesses owned and controlled by socially and economically disadvantaged individuals through the fair and equitable distribution of federal contracting opportunities. In 1995, SBA made several efforts to increase the award of 8(a) contracts to firms that had never received contracts. SBA required its district offices to develop action plans to increase the number of 8(a) contract opportunities offered to a greater percentage of 8(a) firms. These action plans were to include specific initiatives for marketing the program to federal procurement offices in their jurisdictions. In addition, the Departments of Defense and Veterans Affairs agreed to give special emphasis to 8(a) firms that had never received contracts. Although SBA has not assessed the impact of these activities on increasing contract awards, SBA officials believe that these steps have helped in getting 8(a) contracts to firms that had never received them. At the same time, in the view of SBA officials, the fact that some firms do not receive any 8(a) contracts may not be a problem because not all firms enter the program to receive 8(a) contracts. Rather, some firms, according to SBA officials, seek 8(a) certification in order to qualify as disadvantaged firms for other federal programs, such as the highway construction program funded by the Department of Transportation, or state and city programs that set aside contracts for disadvantaged firms. <4. Larger Percentage of Firms Met Target Levels of Non-8(a) Business> To increase the program s emphasis on business development and the viability of firms leaving the program, the act directed SBA to establish target levels of non-8(a) business for firms during their last 5 years in the program. The non-8(a) target levels increase during each of the 5 years, from a minimum of 15 percent of a firm s total contract dollars during its fifth year to a minimum of 55 percent in the firm s ninth or final program year. SBA field offices, as part of their annual reviews of firms, are responsible for determining whether firms achieve these target levels. In April 1995, we testified that SBA data showed that while 72 percent of the firms in their fifth year that had 8(a) sales met or exceeded the minimum 15-percent non-8(a) target established for the fifth year, only 37 percent of the firms in their ninth or final program year that had 8(a) sales met or exceeded the minimum 55-percent target established for that year. The data also showed that of the 1,038 firms in the fifth through the ninth year of their program term that had 8(a) sales, 37 percent did not meet the minimum targets. SBA data for fiscal year 1995 showed that of the 8(a) firms in their fifth year that had 8(a) sales during the fiscal year, about 85 percent met or exceeded the minimum non-8(a) business target of 15 percent established for that year. In comparison, of the 8(a) firms in their ninth or final program year that had 8(a) sales during the fiscal year, 58 percent met or exceeded the minimum non-8(a) business target of 55 percent established for that year. Appendix III shows the extent to which firms met their target levels for fiscal year 1995. 
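The business-mix test that SBA field offices apply during annual reviews reduces to a simple share calculation, and a minimal sketch (in Python) may help make it concrete. The 15 percent fifth-year and 55 percent final-year targets come from the act as described above; the intermediate-year targets are assumed here to rise in even annual steps, and the firm in the example is hypothetical.

# Illustrative non-8(a) business-mix check; intermediate-year targets and the
# example figures are assumptions, not SBA's published schedule.
NON_8A_TARGETS = {5: 0.15, 6: 0.25, 7: 0.35, 8: 0.45, 9: 0.55}

def meets_business_mix_target(program_year, total_revenue, revenue_8a):
    """Return True if the firm's non-8(a) revenue share meets the target for its program year."""
    target = NON_8A_TARGETS.get(program_year)
    if target is None or total_revenue <= 0:
        return True  # no target applies before the fifth program year
    non_8a_share = (total_revenue - revenue_8a) / total_revenue
    return non_8a_share >= target

# A ninth-year firm with $10 million in revenue, $6 million of it from 8(a) contracts,
# has a 40 percent non-8(a) share and misses the 55 percent target.
print(meets_business_mix_target(9, 10_000_000, 6_000_000))  # False

Under this kind of check, the difficult question is what SBA does when the test fails, which is the enforcement gap the Inspector General describes next.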
In a September 1995 report, SBA's Inspector General (IG) discussed SBA's problems in enforcing the business-mix requirements. According to the IG, over one-third of the 8(a) firms in the last 5 years of their program term did not meet the business-mix requirements, yet they accounted for about $1.4 billion (63 percent) of total 8(a) contract revenues of all firms subject to the requirements. The IG noted that SBA's regulations identify a range of remedial actions that the agency can take to improve firms' compliance with the requirements, including reducing or eliminating sole-source 8(a) contract awards, and that SBA personnel have discretion in selecting which remedial actions to impose. The IG found, however, that SBA personnel often took minimal or no action when firms did not meet the requirements, and firms continued to obtain 8(a) contracts even though they were not complying with the regulations to develop non-8(a) business. To address this problem, the IG recommended that SBA limit the dollar value of new 8(a) contracts awarded to firms that do not meet their non-8(a) business target levels. SBA concurred with this recommendation and in March 1996 stated that it was exploring two options: eliminating all new 8(a) contracts to firms that do not meet their non-8(a) business levels, or placing a limit on the dollar value of 8(a) contracts awarded to such firms. In September 1996, an SBA official told us that the agency could not propose regulations implementing such restrictions until the Department of Justice finalizes its regulations regarding federal affirmative action programs. The IG's September 1995 report also concluded that SBA could not measure the success of the 8(a) program as defined by the Congress, namely the number of firms that leave the program without being unreasonably reliant on 8(a) contracts and that are able to compete on an equal basis in the mainstream of the American economy. The IG reported that SBA's procedures did not provide for compiling and reporting data on (1) the number of companies that met their business-mix requirements while in the program and (2) the companies that remained in business after they no longer had 8(a) revenues. As a result, the IG concluded that neither SBA nor the Congress could determine whether the 8(a) program was accomplishing its intended purpose or whether any changes to the program were needed. To address these problems, the IG recommended that SBA annually compile data on the numbers of firms that leave the 8(a) program that are unreasonably reliant on 8(a) contracts and those that are not. The IG also recommended that SBA (1) track former 8(a) firms after they have completed all 8(a) contracts to determine whether they are still in business and (2) annually determine how many of the firms that are still in business were unreasonably reliant on 8(a) contracts when they left the program. With regard to this recommendation, the IG noted that responses to a questionnaire it sent to former 8(a) firms that had been out of the program for approximately 1.5 to 5.5 years showed that many firms still had substantial revenues from carryover 8(a) contracts. For example, 23 percent of the respondents reported that more than 50 percent of their total revenues were from 8(a) contracts. In March 1996, SBA stated that it would begin to annually compile data on the number of firms leaving the 8(a) program that met or did not meet the business-mix requirements and, as a result, were or were not unreasonably reliant on 8(a) program contracts.
SBA also stated that it was currently tracking 8(a) graduates to determine their current status and levels of revenues. Finally, SBA announced that it was developing a more thorough survey to track graduates and was considering using external data sources, such as Dun and Bradstreet, for this information. As of September 1996, SBA had not developed this survey. According to an SBA official, work on this project has been delayed by several factors, including the furloughs of SBA staff and the turnover of a top SBA official. <5. Few Firms Graduate From Program> SBA's regulations provide that any firm that (1) substantially achieves its business development goals and objectives before completing its program term and (2) has demonstrated the ability to compete in the marketplace without 8(a) program assistance may be graduated from the 8(a) program. According to the regulations, factors SBA is to consider in deciding whether to graduate a firm include the firm's sales, net worth, working capital, overall profitability, access to credit and capital, and management capacity and capability. SBA may also consider whether the firm's business and financial profile compares positively with the profiles of non-8(a) firms in the same area or a similar line of business. A determination of whether a firm should be graduated is a part of SBA's annual review of each firm. A firm has the option to appeal SBA's determination that it graduate from the 8(a) program. After graduating, a firm is no longer eligible to receive 8(a) contracts. According to SBA data, during fiscal year 1995, SBA graduated three firms from the program, the first graduations in the program's history, according to SBA officials. The data also show that during fiscal year 1995, SBA terminated another 160 firms from the program for various reasons, including failure to comply with program requirements, and 250 more firms left the program because their program terms had expired during the fiscal year. According to SBA officials, SBA usually does not require that a firm graduate because of anticipated appeals and the difficulty in enforcing the graduation requirement, especially if the firm disagrees with SBA's decision. SBA's IG has identified companies that should have been, but were not, graduated from the 8(a) program. For example, the IG reported in September 1994 that its examination of 50 of the larger 8(a) firms found that most of these firms were larger and more profitable than firms not in the program. Specifically, the IG's review showed that 32 of the 50 8(a) firms exceeded their respective industries' averages for the following five performance factors: business assets, revenues, gross profits, working capital, and net worth. The IG concluded that allowing such firms to continue in the program deprived other truly economically disadvantaged firms of 8(a) assistance and understated the 8(a) program's overall success because firms that had demonstrated success were not graduated. In May 1995, as a result of the IG's review, SBA established requirements for its field staff to (1) compare annually five financial performance factors of 8(a) firms with the industry averages for companies in the same line of business and (2) consider graduation from the program for any 8(a) firm that meets or exceeds three of the averages. However, a February 1996 evaluation by SBA of annual reviews conducted by SBA field staff of 8(a) firms raises questions about the ability of the field staff to conduct such analysis.
SBA noted that the staff's financial analyses were very poor, that staff members did not fully understand the concepts of economic disadvantage, the financial condition of the firm, and access to capital, and that the annual reviews contained few comparisons of the condition of 8(a) firms with similar businesses. To address this problem, SBA recommended that field staff receive training in financial analysis and guidance on the concept of continuing economic disadvantage. As of September 1996, SBA planned to provide this training during a national meeting planned for October or November 1996. <6. Applications Processed and Management and Technical Assistance Provided in Fiscal Year 1995> I would now like to provide some overall statistics regarding SBA's disposition of applications made to the 8(a) program during fiscal year 1995, and the amount of management and technical assistance provided during the year. <6.1. Applications Processed> SBA data show that during fiscal year 1995, SBA processed 1,306 8(a) program applications. SBA approved 696 of the applications and initially denied the remaining 610. Among the reasons cited for denying the 610 applications were the following (some applications were denied for more than one reason): The firm lacked potential for success (367 applications). The socially and economically disadvantaged individual did not own or control the firm (364 applications). The individual who owned and controlled the firm was not socially or economically disadvantaged (263 applications). The firm was a type of business that is not eligible to participate in the program (78 applications). Of the 610 applications that SBA initially denied, 323 were reconsidered and 189 were subsequently approved, bringing to 885 the total number of applications approved during fiscal year 1995. In comparison, SBA ultimately approved 1,107 of the 1,536 applications it processed in fiscal year 1994, and 540 of the 819 applications it processed in fiscal year 1993. <6.2. Management and Technical Assistance> As small businesses, 8(a) firms are eligible to receive management and technical assistance from various sources to aid their development. SBA's primary source of such assistance has been its 7(j) program. Authorized by section 7(j) of the Small Business Act, as amended, the 7(j) program provides seminars and individual assistance to 8(a) firms. The 8(a) firms are also eligible to receive assistance from SBA's Executive Education Program, which is designed to provide the owners/managers of 8(a) firms with executive development training at a university. SBA may also provide 7(j) assistance to socially and economically disadvantaged individuals whose firms are not in the 8(a) program, firms located in areas of high unemployment, and firms owned by low-income individuals. In fiscal year 1995, SBA spent about $7.6 million for 7(j) assistance to 4,604 individuals. This figure included individuals from 1,785 8(a) firms that received an aggregate of 9,452 days of assistance, and 190 firms that received executive training under SBA's Executive Education Program. In fiscal year 1996, SBA changed the focus of the 7(j) program to provide only executive-level training. The individual assistance and seminar training previously provided under the 7(j) program will instead be provided by SBA's Small Business Development Centers and Service Corps of Retired Executives. This concludes my prepared statement. I would be glad to respond to any questions that you or the Members of the Committee may have.
Appendix I: 8(a) Contracts and Dollars Awarded Competitively for Fiscal Years 1991 Through 1995
Appendix II: Range of Total 8(a) Contract Dollars Awarded to Top 50 8(a) Firms for Fiscal Years 1992 Through 1995
Appendix III: Analysis of 8(a) Firms' Compliance With Their Non-8(a) Business Requirements for Fiscal Year 1995
Why GAO Did This Study GAO discussed the Small Business Administration's (SBA) 8(a) Minority Business Development Program, focusing on SBA progress in: (1) requiring the competitive award of high-value 8(a) contracts; (2) distributing 8(a) contracts to a larger number of firms; (3) ensuring that firms rely less on 8(a) contracts as they move through the 8(a) program; and (4) graduating from the program firms that have demonstrated that they can survive without 8(a) contracts. What GAO Found GAO noted that: (1) while the dollar amount of 8(a) contracts awarded competitively during fiscal year (FY) 1995 increased over FY 1994, the percentage of contract dollars awarded competitively remained at about 19 percent; (2) SBA revisions closed a major loophole that allowed the use of indefinite delivery, indefinite quantity contracts to avoid competition; (3) although SBA made several efforts to more widely distribute 8(a) contracts, the concentration of 8(a) program dollars to relatively few firms continued in FY 1995; (4) during FY 1995, a larger percentage of 8(a) firms in their final year of the program achieved the required level of non-8(a) business than was reported for previous years; (5) during FY 1995, SBA graduated 3 firms from the 8(a) program, the first graduations in the program's history, and terminated another 160 firms for various reasons, and 250 firms left the program; (6) during FY 1995, SBA approved 885 8(a) applications; and (7) SBA provided management and technical assistance to 8(a) firms through its 7(j) program.
<1. Background> Title XVII of EPAct 2005 Incentives for Innovative Technologies authorized DOE to guarantee loans for projects that satisfy all three of the following criteria: (1) decrease air pollutants or man-made greenhouse gases by reducing their production or by sequestering them (storing them to prevent their release into the atmosphere); (2) employ new or significantly improved technologies compared with current commercial technologies; and (3) have a reasonable prospect of repayment. Title XVII identifies 10 categories of projects that are eligible for a loan guarantee, such as renewable energy systems, advanced fossil energy technologies, and efficient end-use energy technologies. Appendix II provides a list of these categories. The LGP office is under DOE s Office of the Chief Financial Officer. LGP s actions are subject to review and approval by a Credit Review Board. The Board met for the first time in April 2007; it approves major policy decisions of the LGP, reviews LGP s recommendations to the Secretary of Energy regarding the issuance of loan guarantees for specific projects, and advises the Secretary on loan guarantee matters. DOE first received appropriated funds for the LGP s administrative costs in early 2007 and began processing preapplications in response to the August 2006 solicitation and at the same time began to obtain staff and take other steps to initiate the program. During 2007, it reviewed preapplications for 143 projects and in October 2007 invited 16 of the preapplicants to submit full applications for loan guarantees. Appendix II includes information on the 16 projects invited to submit full applications. In general, according to DOE, the processing of full applications will require DOE to have numerous interactions with the applicants and private lenders. It will also require financial, technical, environmental, and legal advisors to assist with underwriting, approving, and issuing a loan guarantee. DOE estimated that the time between receiving an application and completing negotiations for a loan guarantee contract would range from 9 to 25 months, with additional time at the beginning to prepare and issue the solicitation and at the end to close the loan. On April 11, 2008, DOE issued a fiscal year 2008 implementation plan for $38.5 billion in solicitations, to respond to a requirement that DOE provide Congress information about future solicitations 45 days prior to issuing them. On June 30, 2008, DOE simultaneously issued three solicitations that total $30.5 billion on (1) efficiency, renewable energy, and electric transmission ($10 billion), (2) nuclear power facilities ($18.5 billion), and (3) nuclear facilities for the front end of the nuclear fuel cycle ($2 billion). DOE plans to subsequently issue a fourth solicitation in late summer 2008 for advanced fossil energy projects ($8 billion). DOE is also required to annually provide Congress a report on all activities under Title XVII and issued the first report on June 15, 2007. Figure 1 shows a timeline of these and other key program events since 2005 that illustrate the status of the LGP through June 2008. <2. DOE Issued Regulations That Contained Required Elements, but One Key Aspect Is Not Clear> On October 23, 2007, DOE s final regulations for the LGP were published in the Federal Register. DOE had previously issued program guidelines in August 2006. 
The final regulations contain requirements for preapplication and application submissions; programmatic, technical and financial evaluation factors for applications; and lender eligibility and servicing requirements. The regulations incorporate and further clarify requirements of Title XVII related to eligibility, fees, default conditions, and audit documentation. The regulations also generally incorporate requirements set forth in OMB Circular A-129 Policies for Federal Credit Programs and Non-Tax Receivables, which prescribes policies and procedures for federal credit programs, such as applicant screening, lender eligibility, and corrective actions. Because loan guarantee programs pose significant financial risks, it is important to include appropriate mechanisms to help protect the federal government and American taxpayers from excessive or unnecessary losses. DOE changed some key aspects of the initial program guidelines in its final regulations to help make the program more attractive to lenders and potentially reduce financing costs for projects. These changes included increasing the maximum guarantee percentage, allowing the lender to separate or strip the nonguaranteed portion of the debt, and revising its interpretation of a Title XVII requirement that DOE have superior right to project assets pledged as collateral. Other important changes relate to increased specificity in key definitions and a requirement for independent engineering reports. Specifically, we found the following: Guarantee percentage. The final regulations allow for loan guarantees of up to 100 percent of the loan amount, which is limited to no more than 80 percent of the project costs, provided that, for a 100 percent guarantee, the loan must be disbursed by the Federal Financing Bank (FFB). The use of the FFB is required, in part, because a private lender may exercise less caution when underwriting and monitoring a loan with a 100 percent guarantee. The guidelines stated that DOE preferred not to guarantee more than 80 percent of the loan amount, which was limited to no more than 80 percent of the project costs. Because the regulations increased the maximum guarantee percentage, this change increases the risk that the government is willing to assume on a project by project basis. Stripping the nonguaranteed portion. When DOE guarantees 90 percent or less of a loan, the final regulations allow the nonguaranteed portion of a loan to be separated or stripped from the guaranteed portion. This change allows lenders greater flexibility in selling portions of a loan on the secondary market and could reduce overall funding costs for projects. In contrast, the guidelines and the proposed regulations did not allow stripping. Superiority of rights. Title XVII requires DOE to have superior rights to project assets pledged as collateral. In the proposed regulations, DOE interpreted this provision to require DOE to possess first lien priority to assets pledged as collateral. Therefore, holders of nonguaranteed portions of loans would be subordinate to DOE in the event of a default. In the final regulations, DOE changed its interpretation to allow proceeds received from the sale of project assets to be shared with the holders of nonguaranteed portions of loans in the event of a default. As noted in public comments on the proposed regulations, this practice is an established norm in project lending. 
DOE stated that it retains superiority of rights, as required by Title XVII, because DOE has sole authority to determine whether, and under what terms, the project assets will be sold at all. Key definitions. In the context of innovative technologies, the final regulations added a definition that clarified the definition of what constitutes a new or significantly improved technology, considerably expanded the definition of commercial technology already in use, and clearly linked the definitions to each other. According to the regulations, a new or significantly improved technology is one that has only recently been developed or discovered and involves a meaningful and important improvement in productivity or value in comparison with the commercial technology in use. DOE s regulations define a commercial technology as being in general use if it is employed by three or more commercial projects in the United States for at least 5 years. Independent engineering report. The final regulations require the applicant to provide an independent engineering report on the project, which was not required under the guidelines. According to the regulations, the engineering report should assess the project, including its site information, status of permits, engineering and design, contractual requirements, environmental compliance, testing and commissioning, and operations and maintenance. Although the final regulations generally address requirements from applicable guidance, we identified one key aspect related to equity requirements that is not clear. The final regulations state that DOE will evaluate whether an applicant is contributing significant equity to the project. The regulations define equity as cash contributed by the borrowers and other principals. Based on this definition, it appears that non-cash contributions, such as land, would not be considered equity. However, the LGP director told us that land and certain other non-cash contributions could be considered equity. As a result, the regulations do not fully reflect how DOE is interpreting equity and potential applicants may not have a full understanding of the program s equity requirements. <3. DOE Has Not Fully Implemented Activities Necessary for Effective and Accountable Program Management> DOE may not be well positioned to manage the LGP effectively and maintain accountability because it has not completed a number of management and internal control activities key to carrying out the program. As a result, DOE may not be able to process applications efficiently and effectively, even though DOE has begun to review its first application, and officials told us they will begin reviewing other applications as soon as they are submitted. The key activities that DOE has not sufficiently completed include (1) clearly defining its key milestones and its specific resource needs, (2) establishing policies and procedures for operating the program, and (3) agreeing upon key measures to evaluate program progress. The nature and characteristics of the LGP expose the government to substantial inherent risk; implementing these management and internal control tools is a means of mitigating some risks. <3.1. DOE Has Begun Its Application Review Process before Clearly Defining Program Milestones and Specific Resource Needs> According to our work on leading performance management practices, agencies should have plans for managing their programs that identify goals, strategies, time frames, resources, and stakeholder involvement in decision making. 
In January 2008 DOE completed a concept of operations document that contains, among other things: information on the LGP s organizational structure; mission, goals, and objectives; and timelines, milestones, and major program activities that must be accomplished and their sequence. However, LGP officials told us they do not consider the concept of operations a strategic or performance planning document. In addition, it is unclear whether LGP plans to set other timelines and milestones that would be available to stakeholders, such as applicants and Congress. Without associating key activities with the time frames it aims to meet, it is unclear how DOE can adequately gauge its progress or establish and maintain accountability to itself and stakeholders. As of March 2008, 14 of the 16 companies invited to submit full applications reported that they plan to submit their applications to DOE by the end of September 2008, and the other 2 plan to submit by the end of January 2009. DOE received one application in April 2008, which it has begun to review, and DOE officials told us they will begin reviewing other applications as soon as they are submitted. This influx of applications could cause a surge in workload, but it is not clear that DOE has obtained the resources it needs to carry out its application review activities. Although it is critical for agencies to determine the timing and type of resources needed, DOE has not determined the number and type of contractor resources it will need to review the applications, which could lead to delays. For example, DOE expects to need legal, engineering, environmental, and financial contracting expertise but has not completed plans describing the types of expertise needed, estimated when the expertise will be required, or determined to what extent each type of expertise will be needed. According to the LGP director, much of this expertise will have to be acquired through new contracts that DOE must negotiate and that generally take some months to put into place. To the extent that these resources are not available when needed, DOE could experience delays in reviewing the applications. In early April 2008, the LGP director said that his office is working with other DOE offices to develop these contracts and considers this activity high priority; while the completion date for an acquisition and contract vehicles strategy was initially set for the end of April, the timetable DOE includes in its agency comments letter indicates an August 2008 completion date. In addition, as of April the LGP office was 7 staff short of its authorized level of 16 for fiscal year 2008; the director told us it has faced delays in hiring permanent staff, although he indicated that the office has enough permanent staff to review the first 16 applications. He also said that the permanent and contractor staff LGP has hired have many years of project finance or loan guarantee experience at other institutions. <3.2. DOE Has Not Completed Key Policies and Procedures> Management has a fundamental responsibility to develop and maintain effective internal controls to help ensure that programs operate and resources are used efficiently and effectively to achieve desired objectives and safeguard the integrity of their programs. 
As of May 2008, DOE had not completed policies and procedures to select loans, identify eligible lenders and monitor loans and lenders, estimate the costs of the program, or account for the program, despite reporting to Congress in June 2007 that it would have completed most of these activities by the end of fiscal year 2007. <3.2.1. Selecting Loans> OMB Circular A-129 calls for agencies to develop policies and procedures to select loans, including appropriate applicant screening standards to determine eligibility and creditworthiness. In this regard, from August 2006 through October 2007, DOE conducted a preapplication process to help it develop final regulations; develop and test policies, criteria, and procedures for reviewing preapplications; and determine which projects it would invite to apply for loan guarantees. Conducting the preapplication process also enabled DOE to respond to congressional interest in launching the program, according to DOE officials. We found that, during its preapplication review process, DOE did not always sufficiently document why it ultimately selected projects that reviewers did not score highly or recommend initially. DOE documented the results of the selection process, including its technical and financial reviews for individual projects, its joint technical-financial reviews for categories of projects, and its decisions made during its secondary review process. However, we found that DOE's documentation for deciding which projects to recommend to the Credit Review Board did not always provide sufficient justification. While our discussions with DOE officials helped clarify the documentation for 6 of the 16 invited projects, they did not for 2 of those projects. According to DOE officials, they gave greater weight to the technical merit than the financial merit of the projects during the preapplication selection process. In addition, a consultant DOE hired to review the preapplication process found that although the files were in good working order, DOE did not consistently conduct and document its technical evaluations and did not document financial evaluations in depth. The consultant recommended that DOE take steps to establish standards for these evaluations and increase the level of transparency in the preapplication evaluation process. We also found that the financial and technical criteria DOE used to review the preapplications were not sufficiently defined in some cases. For example, a requirement that is central to a project's overall eligibility, whether the project employs a technology that is innovative (that is, new or significantly improved), was difficult to determine, according to several program managers and reviewers. After the initial review process was completed, DOE further defined what it considers new and significantly improved in its final regulations, but has not correspondingly updated the review criteria. In addition, when DOE conducted its financial reviews, it evaluated projects by assigning scores between zero and four, with zero being the weakest score and four the strongest. However, DOE did not define what the possible scores signified (see the illustrative sketch below). Moreover, 60 percent of a preapplicant's financial score was based on creditworthiness; yet, DOE did not require preapplicants to submit pertinent financial and credit information such as audited financial statements or credit histories. DOE has not fully developed detailed internal policies and procedures, including criteria, for selecting applications.
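For illustration only, the following sketch (in Python) shows the kind of defined scoring rubric that was missing: each factor receives a zero-to-four score with a stated meaning, and factor weights are fixed in advance. Only the 60 percent weight on creditworthiness is taken from the preapplication process described above; the other factors, their weights, and the score definitions are assumptions rather than DOE's actual criteria.

# Hypothetical weighted financial-scoring rubric; only the 60 percent
# creditworthiness weight comes from the preapplication process described above.
SCORE_MEANINGS = {
    0: "no support in the application",
    1: "weak support",
    2: "adequate support",
    3: "strong support",
    4: "very strong, independently verifiable support",
}

FACTOR_WEIGHTS = {
    "creditworthiness": 0.60,      # from the preapplication process
    "equity_contribution": 0.20,   # assumed
    "project_cash_flows": 0.20,    # assumed
}

def financial_score(factor_scores):
    """Weighted average of zero-to-four factor scores."""
    assert abs(sum(FACTOR_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(FACTOR_WEIGHTS[name] * factor_scores[name] for name in FACTOR_WEIGHTS)

# Example: strong creditworthiness (3) but weak equity (1) and adequate cash flows (2).
print(financial_score({"creditworthiness": 3, "equity_contribution": 1, "project_cash_flows": 2}))  # 2.4

Writing down score meanings and weights in this way is the kind of documented selection criterion that would let different reviewers, and later auditors, reproduce why a given project was or was not recommended.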
To review the first 16 projects, DOE officials told us they will use criteria developed for the preapplication process. For projects that apply in response to future solicitations, DOE plans to amend current preapplication criteria and develop additional evaluation factors that will be specific to certain technology areas or sectors. According to DOE officials, as of May 2008, DOE has also hired one staff person to develop credit policies and procedures specific to LGP, and to fully establish its credit policy function. They also said that these credit policies and procedures would provide internal guidance related to some aspects of application review. DOE officials told us they also expect the application process guidance they developed for companies to also serve as internal review policies and procedures. This guidance provides instructions on the content and format applicants should adhere to when applying for a guarantee, such as background information; a project description; and technical, business, and financing plans. The guidance generally aligns with information in the final regulations on the factors DOE plans to review and should make it easier for companies to develop applications. However, in some cases the guidance lacks specificity for applicants. In addition, when considering the guidance for use as internal policies and procedures, as DOE has indicated it will be used, we determined that it does not contain criteria or guidance that would be sufficient for DOE reviewers. Specifically, it lacks instruction and detail regarding how DOE will determine project eligibility and review applications, such as roles and responsibilities, criteria for conducting and documenting analyses, and decision making. For example, we found the following: Project eligibility. DOE does not delineate how it will evaluate project eligibility that is, how each project achieves substantial environmental benefits and employs new or significantly improved technologies. The guidance requires applicants to submit background information on the technologies and their anticipated benefits but does not require enough detail for DOE to assess the information. Without such detail, it is unclear how DOE will measure each project s contribution to the program. Independent engineer s report. DOE s guidance does not provide sufficient detail on the technical information applicants should submit in this report, even though the guidance requires that the report comprehensively evaluate five technical elements as well as contractual requirements and arrangements. DOE officials told us that applicants generally develop this report for investors and that the reports will likely be of varying quality and detail. DOE officials also expect that, in developing a separate report that assesses this information, they will likely need to fill considerable gaps and conduct additional analyses. While DOE recognizes these reports serve an important due diligence function, DOE has not provided applicants with specific instructions on what to include. As a result, DOE is likely to lose efficiency and effectiveness when it uses the reports to aid in evaluating loan guarantee applications. Creditworthiness. For a company to be eligible for a loan guarantee, a reasonable prospect of repayment must exist and the applicant cannot have delinquent federal debt, which is critical to determine at the beginning of the review process to assess whether an applicant is even eligible. 
Therefore, a sound assessment of creditworthiness is essential. However, the criteria DOE has established to evaluate creditworthiness, which it used during the preapplication process and plans to use for future applications, did not take into account the more meaningful and thorough information required for the full application process. In addition, while DOE's guidance requests that applicants submit more complete information, such as a credit assessment, it does not provide details regarding how DOE will evaluate the information to determine creditworthiness. Project cost information. DOE's guidance for the application process instructs applicants to indicate if their cost estimates are firm or subject to change, but it does not ask applicants to report a level of confidence in their total project cost estimates. GAO has reported that for management to make good decisions and determine if a program is realistically budgeted, the estimate must quantify the uncertainty so that a level of confidence can be given about the estimate. For example, an uncertainty analysis could inform DOE management that there is a 60 percent chance that a project's cost will be greater than estimated. Without requiring information on the uncertainty in project cost estimates and specifying how it will assess that information, DOE may not be able to appropriately determine a project's feasibility and identify projects that could eventually require substantially more investment or loans for completion. Without sufficient internal policies and procedures that correspond to application components, DOE's application review process will lack transparency and it will be difficult for DOE to consistently, thoroughly, and efficiently evaluate project applications. <3.2.2. Identifying Eligible Lenders and Monitoring Loans and Lenders> OMB Circular A-129 calls for agencies to establish policies and procedures to identify eligible lenders and to monitor loans and lenders. DOE has hired a director of monitoring and, according to DOE officials, is currently developing policies and procedures that will include (1) processes for identifying eligible lenders through a competitive process, as well as an associated checklist and guide for evaluating potential lenders, and (2) loan servicing and monitoring guidelines. These policies and procedures may build upon the monitoring policies of the Overseas Private Investment Corporation (OPIC). Implementing rigorous monitoring policies and procedures will help DOE ensure the success of the loan guarantee program. According to DOE officials, these policies and procedures will be completed before DOE issues the first loan guarantees. <3.2.3. Estimating Subsidy Costs> As required by the LGP's fiscal years 2007 and 2008 appropriation, DOE plans to charge borrowers fees to cover subsidy costs, as permitted by Title XVII. However, estimating the subsidy cost for the LGP will be difficult because of inherent risks due to the nature and characteristics of the program. To the extent that DOE underestimates the costs and does not collect enough fees from borrowers, taxpayers will ultimately be responsible for any shortfall. Therefore, it is critical that DOE have a sound and comprehensive methodology to develop its cost estimates. Guidance on preparing subsidy cost estimates lists procedures necessary to estimate subsidy costs, such as the development of a cash flow model; the review and approval process; and documentation of the cash flow model and underlying assumptions.
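To make the idea of a cash flow model concrete, the sketch below (in Python) computes a subsidy cost in the credit reform sense: the present value of expected default losses net of recoveries, less the fee collected from the borrower. It is a simplified illustration, not DOE's or OMB's credit subsidy model, and every rate and amount in it is a hypothetical assumption.

# Simplified, hypothetical subsidy cost calculation in the spirit of FCRA credit
# reform accounting; not DOE's or OMB's model, and all inputs are assumptions.
def present_value(cash_flows, discount_rate):
    """Discount a list of year-end cash flows at a single flat rate."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows, start=1))

def subsidy_cost(guaranteed_amount, annual_default_prob, recovery_rate,
                 upfront_fee_rate, term_years, discount_rate):
    surviving = 1.0                      # probability the loan is still performing
    expected_losses = []
    for _ in range(term_years):
        defaults = surviving * annual_default_prob
        expected_losses.append(defaults * guaranteed_amount * (1 - recovery_rate))
        surviving -= defaults
    fee = upfront_fee_rate * guaranteed_amount   # collected at closing
    return present_value(expected_losses, discount_rate) - fee

# Example: a $1 billion guarantee, 2 percent annual default probability, 50 percent
# recovery on defaults, a 3 percent upfront fee, a 20-year term, and a 4.5 percent rate.
cost = subsidy_cost(1_000_000_000, 0.02, 0.50, 0.03, 20, 0.045)
print(f"Estimated subsidy cost: ${cost / 1e6:.0f} million")

If the fee is set below the discounted expected losses, the calculation returns a positive cost, which is precisely the shortfall that would fall to taxpayers.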
OMB Circular A-129 requires agencies to develop models to estimate subsidy costs before obligating direct loans and committing loan guarantees. According to LGP officials, DOE has submitted a draft subsidy cost model to OMB for approval and has drafted documentation for the subsidy calculation process. <3.2.4. Estimating Administrative Costs> Title XVII requires DOE to collect fees from borrowers to cover applicable administrative costs. Such costs could include costs associated with evaluating applications; offering, negotiating, and closing guarantees; and servicing and monitoring the guarantees. The federal accounting standard for cost accounting states that cost information is an important basis for setting fees and reimbursements and that entities should report the full cost of programs, including the costs of (1) resources the office uses that directly or indirectly contribute to the program, and (2) identifiable supporting services other offices provide within the reporting entity. While DOE has prepared a schedule of fees to be charged for the first solicitation, it could not provide support for how it calculated the fees. DOE officials stated that they used professional judgment as a basis for the fee structure. However, DOE has not developed policies and procedures to estimate administrative costs, including a determination of which costs need to be tracked. For example, DOE has not tracked administrative costs associated with the time general counsel staff have spent working on issues related to the LGP. Therefore, DOE lacks assurance that the fees it collects will fully cover applicable administrative costs, particularly support costs from offices outside of the LGP office, such as the general counsel. According to DOE officials, some element of judgment must be used at this time in the determination of fees and, as more experience is gained, they will be able to develop policies and procedures designed to ensure that adequate fees are collected to cover administrative costs. <3.2.5. Accounting for the Loan Guarantee Program> In April 2008, DOE officials told us that policies and procedures to account for the LGP are nearly complete. Under the LGP regulations, DOE may issue loan guarantees for up to 100 percent of the loan amount as long as FFB disburses the loan. OMB Circular A-11, Preparation, Submission and Execution of the Budget, calls for credit issued by FFB to be budgeted for as a direct loan. Because the accounting treatment mirrors the budgeting, DOE would also account for such loans as direct loans. Accordingly, DOE has indicated that the policies and procedures will cover accounting for both direct loans and loan guarantees. <3.3. DOE Has Not Completed Its Framework for Evaluating Program Progress> DOE has also not completed the measures and metrics it will use to evaluate program progress. DOE included some of these in its fiscal year 2009 budget request and its concept of operations document, but LGP's director told us the measures and metrics have not been made final because DOE and OMB have not yet agreed on them. In assessing the draft measures and metrics, we observed the following shortcomings: DOE intends to measure outcomes directly tied to overall program goals (installing new capacity, reducing greenhouse gas emissions, and reducing air pollution) and has said it will develop baselines or benchmarks for these outcomes.
However, it has not yet gathered and analyzed the necessary data on, for example, existing capacity or current emission levels for categories of LGP project technologies. DOE included a measure for the recovery of administrative costs but not one for the recovery of subsidy costs, which will most likely be the more significant program cost. DOE's metric to assess the effectiveness of financing decisions, containing the loss rate to 5 percent, may not be realistic; it is far lower than the estimated loss rate of over 25 percent that we calculated using the assumptions included in the fiscal year 2009 president's budget. <4. Inherent Risks Will Make Estimating Subsidy Costs Difficult and May Introduce Self-Selection Biases in the Projects That Ultimately Receive Loan Guarantees> The nature and characteristics of the LGP will make estimating the program's subsidy costs difficult even if DOE develops a sound and comprehensive methodology. Evaluating the risks of individual projects applying for loan guarantees will be difficult because the LGP targets innovative energy technologies and because projects will likely have unique characteristics varying in size, technology, and experience of the project sponsor. For the first solicitation alone, the technologies range from a modest energy efficiency project to multiyear advanced coal projects, and estimated project costs range from around $25 million to more than $2 billion. In fiscal year 2008, DOE plans to further diversify the types of technology projects that it will consider for its loan portfolio, including nuclear power facilities, whose project costs may be more than $5 billion for each facility. Further, DOE will not gain significant experience in each technology because the program's objective is to commercialize a limited number of each type of innovative technology. Therefore, the types of projects will, by design, evolve over time, and the experience and data that DOE gains may not be applicable to evaluating the risks of projects applying in the future. The composition of DOE's eventual portfolio will even further limit the data available to help DOE evaluate project risks. Unlike an agency that provides a high volume of loan guarantees for relatively similar purposes, such as student loans or home loans, DOE will likely approve a small number of guarantees each year, leaving it with relatively little experience to help inform estimates for the future. In addition, DOE's loan guarantees will probably be for large dollar amounts, several of which could range from $500 million to more than $1 billion each. As a result, if defaults occur, they will be for large dollar amounts and will likely not take place during easily predicted time frames. Recoveries may be equally difficult to predict and may be affected by the condition of the underlying collateral. In addition, project risks and loan performance could depend heavily on regulatory and legislative actions, as well as future economic conditions, including energy prices and economic growth, which generally cannot be predicted accurately. These factors combine to make it difficult for DOE to prepare reliable estimates of subsidy costs. To the extent that DOE underestimates the costs of the LGP and does not collect enough fees from borrowers, taxpayers will ultimately have to pay for any shortfalls. Under FCRA, DOE is required to update, or reestimate, the subsidy costs of LGP to reflect actual loan performance and changes in expected future loan performance.
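As a simplified, hypothetical illustration of what such a reestimate involves (the figures below are invented and are not an actual DOE calculation), the subsidy rate estimated at origination is replaced with a rate re-derived from observed and newly expected loan performance, and any increase becomes an upward reestimate.

# Hypothetical FCRA-style reestimate; all figures are illustrative assumptions.
guaranteed_amount = 1_000_000_000        # $1 billion in guaranteed loans
original_subsidy_rate = 0.05             # 5 percent estimated at origination
reestimated_subsidy_rate = 0.12          # 12 percent after observing performance

original_cost = original_subsidy_rate * guaranteed_amount
reestimated_cost = reestimated_subsidy_rate * guaranteed_amount
shortfall = reestimated_cost - original_cost

print(f"Upward reestimate (shortfall): ${shortfall / 1e6:.0f} million")  # $70 million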
Shortfalls identified in annual reestimates are automatically funded by the federal government under the terms of the FCRA and are not subject to congressional scrutiny during the annual appropriation process. The likelihood of misestimates and the practice of charging fees to cover all the estimated costs may lead to biases in the projects that ultimately receive loan guarantees and tilt the portfolio of loan guarantees toward those that will not pay for themselves. In general, potential borrowers will know more about their projects and creditworthiness than DOE. As a result, borrowers will be more likely to accept loan guarantee offers if they believe DOE has underestimated the projects risks and therefore set the fee too low, than if they believe DOE has overestimated risks. Underestimated fees amount to an implicit subsidy. The CBO reported that such a bias in applicants acceptance of loan guarantees increases the likelihood that DOE s loan guarantee portfolio will have more projects for which DOE underestimated the fee. CBO evaluated the cost of the LGP and estimated that DOE would charge companies, on average, at least 1 percent lower than the likely costs of the guarantees. To the extent that DOE underestimates the fee, and does not collect enough fees from borrowers to cover the actual subsidy costs, taxpayers will bear the cost of any shortfall. Even if DOE estimates the subsidy cost with a reasonable degree of accuracy and charges the applicants fees to cover the true costs, there is a potential for a self-selection bias in the companies participating in the program toward those for which the fee is small relative to the expected benefits of the loan guarantee (such as more favorable loan terms or a lower interest rate). As CBO recently reported about the LGP, a loan guarantee would improve a project s financial viability if the cost of the guarantee is shifted to the federal government. However, when the borrower pays a fee to cover the subsidy cost, as is the case with the LGP, the cost and most of the risk stay with the project and the viability of the project may not be substantially improved. Therefore, for such projects, there is a practical limit to how large the fee can be without jeopardizing the project s financial prospects; these constraints add to the challenge of setting fees high enough to compensate for uncertainties. To the extent that some projects targeted by Title XVII are not financially viable without some form of federal assistance or favorable treatment by regulators, these projects will not pursue loan guarantees even though they are otherwise eligible. As a result, if this financial viability is not distributed evenly across technologies targeted by Title XVII, the projects that ultimately receive loan guarantees may not represent the full range of technologies targeted by Title XVII. DOE officials noted that the borrower pays option may cause the more risky potential borrowers that would be required to pay a higher fee to either (1) contribute more equity to their projects to lower the fee or (2) abandon their projects and not enter the program. If potential borrowers contribute more equity, this could decrease default risk or improve potential recoveries in the event of a default. <5. Conclusions> More than a year has passed since DOE received funding to administer the LGP and we recommended steps it should take to help manage the program effectively and maintain accountability. 
We recognize that it takes some time to create a new office and hire staff to implement such a program. However, instead of working to ensure that controls are in place to help ensure the program s effectiveness and to mitigate risks, DOE has focused its efforts on accelerating program operations. Moreover, because loan guarantee programs generally pose financial risk to the federal government, and this program has additional inherent risks, it is critical that DOE complete basic management and accountability activities to help ensure that it will use taxpayer resources prudently. These include establishing sufficient evaluation criteria and guidance for the selection process, resource estimates, and methods to track costs and measure program progress. Without completing these activities, DOE is hampering its ability to mitigate risks of excessive or unnecessary losses to the federal government and American taxpayers. The difficulties DOE will face in estimating subsidy costs could increase LGP s financial risk to the taxpayer. If DOE underestimates costs, the likely end result will be projects that do not fully pay for themselves and an obligation to taxpayers to make up the difference. Furthermore, the inherent risks of the program, along with the expectation that borrowers will cover the costs of their loan guarantees, may lead to self-selection bias that tilts the portfolio of projects toward those for which costs have been underestimated. Neither we nor DOE will be able to fully evaluate the extent or magnitude of the potential financial costs to the taxpayer until DOE has developed some experience and expertise in administering the program. Expanding the LGP at this juncture, when the program s risks and costs are not well understood, could unnecessarily result in significant financial losses to the government. Self-selection bias may also under certain conditions lead to less than the full range of projects of technologies targeted by Title XVII represented in the LGP. The likely costs to be borne by taxpayers and the potential for self-selection biases call into question whether the program can fully pay for itself; they also call into question whether the program will be fully effective in promoting the commercialization of a broad range of innovative energy technologies. It is important to note that, while we found that inherent risks and certain features of the program may lead to unintended taxpayer costs and that self-selection biases may reduce the scope of participation in the program, this is not an indication that the overall costs of the program outweigh the benefits. Rather, it simply means that the costs may be higher and the benefits lower than expected. Finally, the extent to which these costs and benefits will differ from expectations over the life of the program is something that cannot be reasonably estimated until DOE gains some experience in administering the LGP. Even at the current planned pace of the program, it will take a number of years before we can observe the extent to which unintended taxpayer costs are incurred or the benefits of innovative energy technologies emerge. <6. 
Matter for Congressional Consideration> To the extent that Congress intends for the program to fully pay for itself, and to help minimize the government's exposure to financial losses, we are suggesting that Congress may wish to consider limiting the amount of loan guarantee commitments that DOE can make under Title XVII until DOE has put into place adequate management and internal controls. We are also making recommendations to assist DOE in this regard. <7. Recommendations for Executive Action> To improve the implementation of the LGP and to help mitigate risk to the federal government and American taxpayers, we recommend that the Secretary of Energy direct the Chief Financial Officer to take the following steps before substantially reviewing LGP applications: (1) complete detailed internal loan selection policies and procedures that lay out roles and responsibilities and criteria and requirements for conducting and documenting analyses and decision making; (2) clearly define needs for contractor expertise to facilitate timely application reviews; (3) amend application guidance to include more specificity on the content of independent engineering reports and on the development of project cost estimates to provide the level of detail needed to better assess overall project feasibility; (4) improve the LGP's tracking of the program's full administrative costs by developing an approach to track and estimate costs associated with offices that directly and indirectly support the program and including those costs as appropriate in the fees charged to applicants; (5) further develop and define performance measures and metrics to monitor and evaluate program efficiency, effectiveness, and outcomes; and (6) clarify the program's equity requirements to the 16 companies invited to apply for loan guarantees and in future solicitations. <8. Agency Comments and Our Evaluation> We provided a draft of this report to the Secretary of Energy for review and comment. DOE generally disagreed with our characterization of its progress to date in implementing the LGP. DOE stated that two of our six recommendations were inapplicable to the LGP, indicated it has largely accomplished the remaining four recommendations, and disagreed with our matter for congressional consideration. DOE further stated that our report contains flawed logic, significant inaccuracies, and omissions; however, DOE did not provide evidence to support these assertions. Our evaluation of DOE's comments follows. A more detailed analysis is presented in appendix III. In particular, DOE stated that we placed disproportionate emphasis on activities that should be completed for a fully implemented loan guarantee program rather than one that is currently being implemented, and that we overlooked DOE's accomplishments to date. We disagree. We believe that our report accurately assesses the LGP in its early development stage and focused our report's analysis and recommendations on activities that should be completed before DOE begins to substantively review any applications. DOE states that it will have completed many of these activities before it issues loan guarantees, but we continue to believe these activities should be completed before DOE reviews applications and negotiates with applicants so that it can operate the program prudently. In several cases, DOE cites as complete documents and activities that were, and still are at the time of this report, in draft form. For example, in several instances DOE states that it has implemented its credit subsidy model.
However, as of June 24, 2008, DOE indicated that OMB had not approved its model. Further, the updated timetable DOE provides in appendix B of its comment letter shows that a majority of these activities are not yet complete and that several will not be complete until the end of calendar year 2008. DOE s entire letter, including its appendixes, is reproduced as appendix III of this report. Regarding our recommendation on policies and procedures for conducting reviews, DOE cites policies and procedures that it believes are adequate for continuing program implementation. We disagree. DOE is developing credit policies and procedures, but it does not have complete internal application policies and procedures, which it should have as it begins to review and negotiate its first loan guarantee applications. DOE also lacks any substantive information in its external application guidance on how it will select technologies. DOE has indicated that some of this information will be included in future solicitations. DOE partially agreed with our recommendation to define the expertise it will need to contract for and stated that it is developing descriptions of necessary contractor expertise on a solicitation-specific basis. Although DOE may plan to complete such descriptions and other preparatory work for future solicitations, DOE did not provide us with any information on contractor expertise for the 2006 solicitation. DOE s timetable in appendix B indicates an August 2008 completion date for its acquisition strategy and contract vehicles; this target may be in time for future solicitations, but it is not in time for the applications that companies are now submitting and DOE is reviewing. DOE also states that it is not possible to develop generic definitions of needed contractor expertise because the department s needs will vary from solicitation to solicitation. We continue to believe it is both reasonable and feasible for DOE to develop estimates of the timing and type of resources the department will require. To be transparent and consistent in its review and negotiation processes, DOE s statements of work within and across sectors should have similar frameworks and rationale. Specifically, DOE may need assistance in areas common to all technologies, such as cost and risk analysis, project management, and engineering and design reviews. DOE should be able to start defining these and other areas on the basis of past experience. DOE disagreed with our recommendation to provide more specific application guidance on the content of independent engineering reports. DOE stated that this specificity is not required, necessary, or appropriate for LGP implementation. We disagree. Providing companies with more specificity on DOE s expectations for an application s content and basic information about how it will review the projects will help companies develop higher quality application materials and help ensure thorough, consistent, and efficient evaluations. Taking this step is also likely to decrease the number of requests for more analyses or information from the applicant. We also continue to believe it is reasonable for DOE to provide more specificity on how to develop project cost estimates, including a level-of-confidence estimate, so that it can better evaluate project cost estimates. DOE disagreed with our recommendation that it track the administrative costs associated with the LGP.
DOE stated that it is appropriate to track the costs of the LGP office and that it plans to develop a methodology for doing so, but that there is no reason to track the costs of certain support activities. We disagree. Title XVII requires DOE to charge and collect fees that the Secretary determines are sufficient to cover applicable administrative expenses. The federal accounting standard for managerial cost accounting requires agencies to determine and report the full costs of government goods and services, including both direct and indirect costs associated with support activities. Therefore, we believe it is appropriate for DOE to consider costs associated with support activities, such as the cost of the time general counsel staff spend working on issues related to the LGP, to be applicable administrative costs. If DOE does not consider support costs when setting fees, it cannot be assured that the fees it collects will fully cover all administrative costs incurred to operate the LGP (a simple illustration follows at the end of this section). Regarding our recommendation to further develop and define performance measures and metrics before substantially reviewing LGP applications, DOE stated that it has developed initial draft performance measures and metrics with the aim of completing them by the end of calendar year 2008. We continue to believe such measures and metrics should be developed as soon as possible for the 16 projects DOE invited to apply for guarantees. In addition, DOE has emphasized its focus on selecting technologies and projects that will produce significant environmental benefits, in particular the avoidance of air pollutants and greenhouse gases. However, it is unclear how DOE will do so without gathering data to establish baseline measures and metrics associated with these benefits. DOE stated that it did not need to take additional action to implement our recommendation that it clarify the LGP s equity requirements with the 16 companies invited to apply and in future solicitations because it had already informed the 16 invited companies of its equity position. However, DOE officials told us that they communicated this information orally and did not provide specific documentation to the 16 companies. We believe it is reasonable to provide potential applicants with key information, such as the LGP s equity requirement, in writing to help ensure that all potential applicants receive the same information. Furthermore, we continue to believe that this is appropriate information to include in future solicitations. In commenting on our matter for congressional consideration, DOE disagreed with our finding that the LGP does not have adequate management and internal controls in place to proceed and stated that it is well on the way to implementing the accepted recommendations contained in our report. We disagree. DOE has been slow to recognize the inefficiencies and inconsistencies it may face in not having key activities, policies, and procedures completed or in place before proceeding with its operations. While it is important that DOE make meaningful progress in accomplishing its mission under Title XVII, it is also important to operate the program prudently, given that billions of taxpayer dollars are at risk. DOE also made minor technical suggestions, which we incorporated as appropriate. DOE s written comments and our more detailed responses are provided in appendix III.
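To make the full-cost principle above concrete, the following minimal sketch, written in Python with entirely hypothetical cost figures (they are not drawn from DOE budget data), illustrates how excluding indirect support costs, such as general counsel time, understates the per-application fee needed to recover all administrative expenses:

```python
# Minimal illustration of full-cost fee setting; all figures are hypothetical.
def full_cost_fee(direct_costs, indirect_costs, expected_applications):
    """Per-application fee needed to recover total administrative costs."""
    total_cost = sum(direct_costs.values()) + sum(indirect_costs.values())
    return total_cost / expected_applications

# Hypothetical annual administrative costs, in dollars.
direct = {"LGP office salaries": 3_000_000, "contracted reviews": 1_500_000}
indirect = {"general counsel support": 400_000, "CFO office support": 250_000}

fee_direct_only = full_cost_fee(direct, {}, expected_applications=16)
fee_full_cost = full_cost_fee(direct, indirect, expected_applications=16)

print(f"Fee covering direct costs only: ${fee_direct_only:,.0f}")
print(f"Fee covering full costs:        ${fee_full_cost:,.0f}")
# The difference is the amount per application that would go unrecovered
# if support costs were left out of the fee calculation.
```

Under these hypothetical figures, ignoring the support offices leaves roughly $40,000 per application unrecovered, which is the kind of gap that full-cost accounting for fee setting is intended to prevent.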
We are sending copies of this report to congressional committees with responsibilities for energy and federal credit issues; the Secretary of Energy; and the Director, Office of Management and Budget. We are also making copies available to others upon request. This report will be available at no charge on GAO s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Frank Rusco at 202-512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Scope and Methodology To assess the Department of Energy s (DOE) progress in issuing final regulations that govern the loan guarantee program (LGP), we reviewed and analyzed relevant provisions of Title XVII of the Energy Policy Act of 2005; the LGP s August 2006 guidelines and solicitation; its 2007 notice of proposed rulemaking; public comments on the proposed rulemaking; and final regulations published in the Federal Register. We compared the final regulations to applicable requirements contained in Title XVII and OMB Circular A-129 Policies for Federal Credit Programs and Non-Tax Receivables, which prescribes policies and procedures for federal credit programs. We also discussed the final regulations with DOE officials. To assess DOE s progress in taking actions to help ensure that the program is managed effectively and to maintain accountability, we reviewed documentation related to DOE s implementation of the LGP. Specifically, we reviewed and analyzed the LGP s concept of operations, technical and financial review criteria for the preapplication process, DOE s Application Process Overview Guidance, Preapplication Evaluation Procedural Guidance, minutes of Credit Review Board meetings held between April 2007 and February 2008, and other relevant documents. As criteria, we used our Standards for Internal Control in the Federal Government and budget and accounting guidance. Further, to assess DOE s progress to develop measures and metrics, we applied GAO s Government Performance and Results Act guidance and analyzed information in Title XVII, DOE s budget request documents and other relevant documents. When DOE had completed its preapplication review process, we obtained documentation from DOE s decision files related to the 140 preapplications for 143 projects. We reviewed all decision files DOE provided to us and analyzed the documentation for the preapplications that DOE considered responsive to the August 2006 solicitation to determine if DOE conducted its review process consistently and documented its decisions sufficiently. Responsive decision files generally contained a summary of the technology; separate technical and financial review scoring sheets; minutes documenting results of joint technical- financial meetings; and a DOE summary of its secondary review process. We also reviewed other preapplication materials that DOE provided to us. We did not evaluate the financial or technical soundness of the projects that DOE invited to submit full applications. Further, we interviewed cognizant DOE officials from the LGP office, detailees from the Department of the Treasury, and contractor personnel assisting DOE with the preapplication process, the development of policies and procedures, and the implementation of the program. 
In addition, we interviewed officials from DOE s Office of General Counsel; Office of the Chief Financial Officer; and program offices that participated in the technical reviews of the preapplications, including the Office of Energy Efficiency and Renewable Energy, the Office of Fossil Energy, the Office of Nuclear Energy, and the Office of Electricity Delivery and Energy Reliability. We also spoke with officials from the Departments of Agriculture and Transportation to discuss policies and procedures for managing their loan guarantee programs. To examine the inherent risks associated with the LGP, including the borrower pays option of Title XVII, we reviewed our prior work on federal loan guarantee programs, including programs under the Maritime Administration, the Federal Housing Administration, and the Small Business Administration. We interviewed officials at and reviewed reports by the Congressional Budget Office. We also discussed risks with DOE officials. We conducted this performance audit from August 2007 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Title XVII Categories, DOE s First Solicitation, and Projects DOE Invited to Submit Applications for Loan Guarantees The Energy Policy Act of 2005 (EPAct 2005) listed 10 categories of projects that would be eligible to apply for loan guarantees under Title XVII. In August 2006, DOE issued a solicitation inviting companies to submit preapplications for projects eligible to receive loan guarantees under Title XVII. The solicitation listed categories falling within 8 of the 10 Title XVII categories. The solicitation did not invite projects for two Title XVII categories: advanced nuclear energy facilities, and refineries, meaning facilities at which crude oil is refined into gasoline. Table 1 shows the 10 categories. On October 4, 2007, DOE announced that it had invited 16 projects to submit full applications for loan guarantees. Table 2 includes the projects sponsors, types, descriptions, and their current proposed locations. Appendix III: Comments from the Department of Energy The following are GAO s comments on the Department of Energy s letter dated June 13, 2008. <9. GAO Comments> 1. See Agency Comments and Our Evaluation, pages 27-30 of this report. 2. DOE s comments incorrectly cite GAO s finding. We specifically refer to DOE s determination of the type or timing of contractor resources. As we stated in the draft report, LGP s director told us he has enough resources for reviewing and negotiating the loan guarantee applications related to the 2006 solicitation that companies are submitting. 3. We recognize DOE is in the process of hiring experienced staff. Nevertheless, the nature of the program may not allow DOE to develop significant expertise for any particular technology. 4. DOE has not yet developed final metrics and measures or gathered the data necessary to establish meaningful sector-specific baselines for its 2006 solicitation, from which it formally invited 16 solar, biomass, advanced fossil energy coal, and other projects to apply for loan guarantees. 5. We do not imply that DOE may be biased toward underestimating the subsidy costs of the program. 
Rather, we point out that the LGP s inherent risks due to its nature and characteristics could cause DOE to underestimate its subsidy costs and therefore not collect sufficient fees from borrowers. 6. We do not believe that our report creates the impression that DOE could choose not to develop a methodology to calculate the credit subsidy cost. On the contrary, we state that it is critical that DOE develop a sound and comprehensive methodology to estimate subsidy costs because inherent risks due to the nature and characteristics of the program will make estimating subsidy costs difficult. 7. DOE did not provide us with a detailed presentation of the LGP s credit subsidy model. On several occasions, the LGP director told us that we would be given a detailed presentation once the Office of Management and Budget (OMB) approved the credit subsidy model. As of June 24, 2008, DOE stated that OMB had not approved the model. 8. We believe that our report and the Congressional Budget Office (CBO) report DOE cites adequately explain the rationale for potential biases in applicants acceptance of loan guarantees that may increase the likelihood that DOE s loan portfolio will have more projects for which DOE underestimated the fee. 9. The fiscal year 2009 President s budget states that the assumptions related to the LGP reflect an illustrative portfolio; that is, the assumptions do not apply to a specific loan. Nevertheless, the 25- percent loss rate assumption from the budget does call into question whether the 5-percent loss rate draft metric DOE established to assess the effectiveness of financing decisions is realistic. 10. We have not inaccurately characterized the operation of the Federal Credit Reform Act of 1990 (FCRA). Instead, we specifically discuss reestimates to explain that even though DOE is proceeding with LGP under the provision that borrowers pay for the subsidy cost of the program, taxpayers will bear the cost of any shortfall, depending on the extent to which DOE underestimates the risks (subsidy cost) and therefore does not collect sufficient fees from borrowers. DOE correctly states that reestimates that increase the subsidy costs are funded by permanent indefinite budget authority, but DOE does not explain that these funds come from taxpayers. Furthermore, because of the nature and characteristics of the program, we believe it is unlikely that the program as a whole will result in savings associated with the subsidy cost because, to the extent that any loans default, the cost of the default will likely be much larger than the fee collected. Lastly, we did not discuss modifications under FCRA because DOE has not completed its policies and procedures on estimating subsidy costs. We would expect one component of these policies and procedures to explain how DOE will identify, estimate the cost of, and fund modifications. 11. If a project defaults, the cost of the default will likely be greater than the fee collected, thus creating a shortfall. Under FCRA, this shortfall would be identified during the reestimate process and would ultimately be subsidized by taxpayers. 12. OMB Circular A-11, Preparation, Submission and Execution of the Budget, describes the budgetary treatment for credit programs under FCRA requirements. While DOE explains that the financing accounting is nonbudgetary (its transactions are excluded from the budget totals), DOE fails to explain the sources of the financing account funds. 
According to OMB Circular A-11, an upward reestimate indicates that insufficient funds had been paid to the financing account, so the increase is paid from the program account to the financing account to make it whole. The program account is a budgetary account, and its transactions do affect the deficit and may require Treasury to borrow from the public. 13. We recognize that DOE plans to take steps to assess risk and develop mitigation strategies; however, we continue to believe that the nature and characteristics of the LGP result in certain inherent risks that, by definition, DOE is unlikely to be able to mitigate or accurately quantify. As a result, there are likely to be many cases in which the risks will not be covered by the borrower fee or a risk reserve. In addition, even in instances where DOE s estimates of subsidy costs are reasonably accurate, the borrower pays option may cause some potential borrowers to not pursue loan guarantees because the fee is too high relative to the benefits to the borrower of the loan guarantee. 14. As stated in the report, the inherent risks of the program, along with the expectation that borrowers will cover the costs of their loan guarantees, may lead to self-selection bias that tilts the portfolio of projects toward those for which costs have been underestimated. To the extent that some projects targeted by Title XVII are not financially viable without some form of federal assistance or favorable treatment by regulators, these projects will not pursue loan guarantees even though they are otherwise eligible. As a result, if this financial viability is not distributed evenly across technologies targeted by Title XVII, the projects that ultimately receive loan guarantees may not represent the full range of technologies targeted by Title XVII. 15. We changed "clearly" to "sufficiently." We distinguish between the technical and financial reviews that staff conducted and the rationale and clarity of documentation that management provided for its decision-making processes. We observed from our file review that, when preapplications contained sufficient information, reviewers applied the criteria LGP provided, and in some cases applied additional criteria in their assessments. These assessments were specific to the preapplication process, not the application process. At times the preapplications lacked meaningful information for reviewers to assess. The cases we highlight in our report are those in which the LGP office did not provide sufficient justification for inviting projects. GAO welcomes the LGP office s efforts to establish formal standards and procedures. We also believe that our recommendation that LGP complete its measures and metrics associated with achieving benefits and employing new and significantly improved technologies will help inform future selection processes. 16. DOE did not require preapplications to include pro forma financial statements. Rather, preapplicants were required to submit financing plans, estimated project costs, and a financial model detailing the projected cash flows over the life cycle of the project. We believe that audited financial statements and credit ratings would be more useful in assessing creditworthiness. In addition, when evaluating preapplications, DOE did not combine technical and financial scores. Therefore, it is accurate to state that creditworthiness comprised 60 percent of the preapplicant s financial score. 17. DOE erroneously refers to the preapplication process here.
This analysis on project evaluation is specific to our discussion of project eligibility, and DOE s use of external guidance as a proxy for internal policies and procedures for applications. 18. The statement DOE cites is in context with the prior sentence, While DOE recognizes these reports serve an important due diligence function, DOE has not provided applicants with specific instructions on what to include. This sentence is also prefaced with as a result in the draft report. We changed the word underwriting to evaluating and added applications after loan guarantees to clarify our statement. 19. We generally agreed with the consultant s finding. Specifically, we found that DOE program offices used Credit Review Board-approved criteria as well as other criteria. In one case, these criteria were appropriate to differentiate projects in accordance with Title XVII. We could not fully determine whether the use of these additional criteria had any impact on the selection process. 20. See also comment 17. DOE s response does not address our report s analysis; specifically, we are referring to DOE s application guidance. In addition, while DOE s final rule states what applicants should submit, it and the application guidance do not indicate how DOE will evaluate these submissions. 21. Federal loan guarantees do help borrowers obtain more favorable terms than they may otherwise obtain. For example, a borrower may be able to get a lower interest rate, an extended grace period, or a longer repayment period when the loan is guaranteed by the federal government. 22. For clarification, we revised the report to indicate that DOE needs to identify eligible lenders. 23. For clarification, we incorporated DOE s suggested revision. 24. We revised the report to reflect this update of information. 25. We revised the report to state According to DOE, as of May 2008, DOE has hired one staff person to develop credit policies and procedures specific to LGP, and to fully establish its credit policy function. Appendix IV: GAO Contact and Staff Acknowledgments <10. Staff Acknowledgments> In addition to the individuals named above, Marcia Carlsen and Karla Springer, Assistant Directors; Abe Dymond; Richard Eiserman; Jeanette M. Franzel; Carol Henn; Jason Kirwan; Kristen Kociolek; Steve Koons; Sarah J. Lynch; Tom McCool; Madhav Panwar; Mehrunisa Qayyum; Carol Herrnstadt Shulman; Emily C. Wold; and Barbara Timmerman made key contributions to this report.
Why GAO Did This Study Title XVII of the Energy Policy Act of 2005 established DOE's loan guarantee program (LGP) for innovative energy projects that should decrease air pollutants or greenhouse gases and that have a reasonable prospect of repayment. For fiscal years 2008 and 2009, Congress authorized the use of borrower fees to pay the costs of loan guarantees through Title XVII's "borrower pays" option, under which DOE will limit loan guarantees to $38.5 billion. Congress mandated that GAO review DOE's progress in implementing the LGP. GAO assessed DOE's progress in (1) issuing final regulations and (2) taking actions to help ensure that the program is managed effectively and to maintain accountability. GAO also assessed how inherent risks due to the nature of the LGP may affect DOE's ability to achieve intended program outcomes. GAO analyzed DOE's regulations, guidance, and program documents and files; reviewed Title XVII; and interviewed DOE officials. What GAO Found In October 2007, DOE issued regulations that govern the LGP and include requirements for application submissions, project evaluation factors, and lender eligibility and servicing requirements. The regulations also generally address requirements set forth in applicable guidance. Some key aspects of the initial LGP guidelines were revised in the regulations to help make the program more attractive to lenders and potentially reduce financing costs for projects. For example, the maximum loan guarantee percentage increased from 80 to 100 percent of the loan. In addition, the regulations define equity as "cash contributed by the borrowers," but DOE officials told us they also plan to consider certain non-cash contributions, such as land, as equity. As a result, applicants may not fully understand the program's equity requirements. DOE is not well positioned to manage the LGP effectively and maintain accountability because it has not completed a number of key management and internal control activities. As a result, DOE may not be able to process applications efficiently and effectively, although it has begun to do so. DOE has not sufficiently determined the resources it will need or completed detailed policies, criteria, and procedures for evaluating applications, identifying eligible lenders, monitoring loans and lenders, estimating program costs, or accounting for the program--key steps that GAO recommended DOE take over a year ago. DOE also has not established key measures to use in evaluating program progress. Risks inherent to the LGP will make it difficult for DOE to estimate subsidy costs, which could lead to financial losses and may introduce biases in the projects that receive guarantees. The nature and characteristics of the LGP and uncertain future economic conditions increase the difficulty in estimating the LGP's subsidy costs. Because the LGP targets innovative technologies and the projects will have unique characteristics--varying in size, technology, and experience of the project sponsor--evaluating the risks of individual projects will be complicated and could result in misestimates. The likelihood that DOE will misestimate costs, along with the practice of charging fees to cover the estimated costs, may lead to biases in the projects that receive guarantees. Borrowers who believe DOE has underestimated costs and has consequently set fees that are less than the risks of the projects are the most likely to accept guarantees. 
To the extent that DOE underestimates the costs and does not collect sufficient fees from borrowers to cover the full costs, taxpayers will ultimately bear the costs of shortfalls. Even if DOE's estimates of subsidy costs are reasonably accurate, some borrowers may not pursue a guarantee because they perceive the fee to be too high relative to the benefits of the guarantee, affecting the project's financial viability. To the extent that this financial viability is not distributed evenly across the technologies targeted by Title XVII, projects in DOE's portfolio may not represent the range of technologies targeted by the program.
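The shortfall and self-selection mechanics summarized above can be illustrated with a short sketch. The Python example below uses hypothetical projects, default probabilities, and recovery assumptions rather than actual DOE or OMB estimates; it simply shows that when the fee reflects an underestimated subsidy cost, the riskier-than-estimated project is the one most likely to accept the guarantee, and the gap between its true expected loss and the fee it paid is ultimately borne by taxpayers:

```python
# Illustration of fee underestimation and self-selection; all values are hypothetical.
def expected_loss(guaranteed_amount, p_default, loss_given_default):
    """Expected loss to the government on a guaranteed loan."""
    return guaranteed_amount * p_default * loss_given_default

# (guaranteed amount, true default probability, estimated default probability)
projects = {
    "Project A": (500_000_000, 0.10, 0.04),  # risk underestimated
    "Project B": (300_000_000, 0.05, 0.08),  # risk overestimated
}
loss_given_default = 0.5
portfolio_shortfall = 0.0

for name, (amount, p_true, p_est) in projects.items():
    fee = expected_loss(amount, p_est, loss_given_default)         # fee charged up front
    true_cost = expected_loss(amount, p_true, loss_given_default)  # cost later revealed
    accepts = fee < true_cost  # a borrower offered a "cheap" guarantee is likeliest to accept
    if accepts:
        portfolio_shortfall += true_cost - fee
    status = "accepts" if accepts else "declines"
    print(f"{name}: fee ${fee/1e6:.1f}M, true expected loss ${true_cost/1e6:.1f}M, {status}")

print(f"Expected shortfall borne by taxpayers: ${portfolio_shortfall/1e6:.1f}M")
```

In this stylized portfolio, only the underpriced project ends up in the program, so its shortfall is not offset by the overpriced project that declined; that asymmetry is the self-selection effect described above.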
<1. Background> The Federal Payment Reauthorization Act of 1994 requires that the mayor of the District of Columbia submit to Congress a statement of measurable and objective performance goals for the significant activities of the District government (i.e., the performance accountability plan). After the end of each fiscal year, the District is to report on its performance (i.e., the performance accountability report). The District s performance report is to include a statement of the actual level of performance achieved compared to each of the goals stated in the performance accountability plan for the year, the title of the District of Columbia management employee most directly responsible for the achievement of each goal and the title of the employee s immediate supervisor or superior, and a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders. The law also requires that GAO, in consultation with the director of the Office of Management and Budget, review and evaluate the District s performance accountability report and submit its evaluation to your committees not later than April 15. Our June 2001 report on the District s fiscal year 2000 performance accountability report included recommendations that the District (1) settle on a set of results-oriented goals that are more consistently reflected in its performance planning, reporting, and accountability efforts, (2) provide specific information in its performance reports for each goal that changed, including a description of how, when, and why the change occurred, and (3) adhere to the statutory requirement that all significant activities of the District government be addressed in subsequent performance accountability reports. Our review had determined that the District s fiscal year 2000 report was of limited usefulness because the District had introduced new plans, goals, and measures throughout the year, the goals and measures were in a state of flux due to these changes, and the report did not cover significant activities, such as the District s public schools, an activity that accounted for more than 15 percent of the District s budget. In response, the District concurred with our findings and acknowledged that additional work was needed to make the District s performance management system serve the needs of its citizens and Congress. The comments stated that the District planned, for example, to consolidate its goals and expand the coverage of its fiscal year 2001 report to more fully comply with its mandated reporting requirements. <2. Objectives, Scope, and Methodology> We examined the progress the District has made in developing its performance accountability report and identified areas where improvements are needed. Specifically, the objectives of this report were to examine (1) the extent to which the District s performance accountability report includes its significant activities, (2) how well the District reports progress toward a consistent set of goals and explains any changes in the goals, (3) the extent to which the report adheres to the statutory requirements, and (4) areas for future improvement. To meet these objectives, we reviewed and analyzed the information presented in the District s fiscal year 2001 performance accountability report and interviewed key District officials.
To examine the extent to which the District s performance accountability report included significant activities, we compared the information in the 2001 performance and accountability report with budget information on actual expenditures presented in the District s budget. To determine how well the District reported progress toward a consistent set of goals, we compared the report s goals with those contained in the District s fiscal year 2002 Proposed Budget and Financial Plan which served as the District s 2001 performance plan and then reviewed any changes. To determine the extent to which the report adhered to the statutory requirements, we analyzed the information contained in the District s report in conjunction with the requirements contained in the Federal Payment Reauthorization Act of 1994. We also reviewed the performance contracts for the District s cabinet-level officials. To identify areas for future improvement, we compared the fiscal year 2001 report with the District s fiscal year 2000 and 1999 performance accountability reports to identify baseline and trend information. We based our analysis on the information developed from work addressing our other objectives, recommendations from our June 8, 2001, report commenting on the District s fiscal year 2000 report, and our other recent work related to performance management issues. We conducted our work from December 2001 through April 2002 at the Office of the Mayor of the District of Columbia, Washington, D.C., in accordance with generally accepted government auditing standards. In accordance with requirements contained in P.L. 103-373, we consulted with a representative of the director of the Office of Management and Budget concerning our review. We did not verify the accuracy or reliability of the performance data included in the District s report, including information on the court orders in effect for fiscal year 2001. We provided a draft of this report to the mayor of the District of Columbia for review and comment. The deputy mayor/city administrator provided oral and written comments that are summarized at the end of this report, along with our response. The written comments are reprinted in their entirety in appendix III. <3. The 2001 Report Included Most of the District s Significant Activities> The fiscal year 2001 performance accountability report includes most of the District s significant activities, providing performance information for 66 District agencies that represent 83 percent of the District s total expenditures of $5.9 billion during that year. The District included 26 additional agencies in this year s report, compared with 40 in its prior report for fiscal year 2000. Appendix I lists the 66 agencies included in the District s 2001 performance accountability report, along with the 2001 actual expenditures for each of these agencies. However, the absence of goals and measures related to educational activities remains the most significant gap. The District reports that it is continuing its efforts to include performance information on its significant activities in its performance accountability reports. For example, the 2001 performance accountability report notes that the District of Columbia Public Schools (DCPS) did not include performance goals or measures because they were in the early stages of a long-term strategic planning process initiated by the newly installed school board. 
DCPS accounted for about 14 percent of the District s fiscal year 2001 actual expenditures, and public charter schools, which also were not included, accounted for another 2 percent of the District s 2001 expenditures. The 2001 report states that in lieu of a formal performance accountability report for DCPS, the District included a copy of the Superintendent s testimony before the Subcommittee on the District of Columbia, Committee on Government Reform, U.S. House of Representatives. The District acknowledged that the inclusion of this information does not fully comply with the statutory requirement and set forth a plan to include DCPS performance goals and measures in the fiscal year 2003 proposed budget and financial plan that will serve as the basis for the DCPS performance accountability report for fiscal year 2002. The 2001 report lists another 10 agencies that were not included, primarily, according to the report, because they did not publish performance goals and measures in the fiscal year 2002 proposed budget. These 10 agencies accounted for about $330 million in fiscal year 2001 actual expenditures, or about 6 percent of the District s total fiscal year 2001 actual expenditures. These agencies included the Child and Family Services Agency, which was under receivership until June 15, 2001 (with fiscal year 2001 actual expenditures of $189 million) and public charter schools (with fiscal year 2001 expenditures of $137 million). Although it may not be appropriate to include agency performance information in some cases, the performance accountability report should provide a rationale for excluding them. For example, Advisory Neighborhood Commissions, according to the deputy mayor, have a wide range of agendas that cannot be captured in a single set of meaningful measures. Table 3 lists these 10 agencies and their fiscal year 2001 actual expenditures. In addition to these 10 agencies, the District also did not specifically include other areas constituting 11 percent of the District s fiscal year 2001 actual expenditures. In view of the District s interest in tying resources to results, the District could further improve its performance accountability reports by linking these budget activities as appropriate to the agencies that are responsible for these expenditures or provide a rationale for exclusion. For example, the Department of Employment Services administers the unemployment and disability funds (with fiscal year 2001 expenditures totaling about $32 million). Similarly, the Office of the Corporation Counsel administers the settlement and judgments fund, which was set up to settle claims and lawsuits and pay judgments in tort cases entered against the District (with fiscal year 2001 expenditures of about $26 million). Table 4 contains a list of these budget activities and fiscal year 2001 actual expenditures. <4. The District s 2001 Plan and Report Addressed a Consistent Set of Performance Goals> The goals in the fiscal year 2001 performance accountability report were consistent with the goals in the District s 2001 performance plan. Using a consistent set of goals enhanced the understandability of the report by demonstrating how performance measured throughout the year contributed toward achieving the District s goals. The District also used clear criteria for rating performance on a five-point scale and reported that these ratings were included in the performance evaluations of cabinet agency directors who had performance contracts with the mayor. 
In addition, according to a District official, the District will be able to provide information on any future changes made to its performance goals through its new performance management database. The District has made substantial progress in improving its performance planning and reporting efforts by focusing on measuring progress toward achieving a consistent set of goals. In our June 2001 review of the District s 2000 performance accountability report, we had raised concerns that the District s performance management process was in flux, with goals changing continually throughout the year. Further, the District did not discuss the reasons for these changes. This year, the goals were consistent and the District provided some information about upcoming changes that could be anticipated in fiscal year 2002 goals. In addition, according to the 2001 report, the District has developed a performance measures database to allow it to document changes to individual goals and measures that are proposed in the agencies fiscal year 2003 budget submissions. One of the District s enhancements to its 2001 performance accountability report was reporting on a five-point performance rating scale, as compared to the three-point performance rating scale it used in its fiscal year 2000 report. The five-point scale was designed to be consistent with the rating scale used in the District s Performance Management Program, under which management supervisory service, excepted service, and selected career service personnel develop individual performance plans against which they are evaluated at the end of the year. The five ratings are: (1) below expectations, (2) needs improvement, (3) meets expectations, (4) exceeds expectations, and (5) significantly exceeds expectations. According to the fiscal year 2001 performance accountability report, this scale was used to evaluate the performance of cabinet agency directors who held performance contracts with the mayor. It stated that 60-percent of each director s performance rating was based on the agency-specific goals included in the agency s performance accountability report, with the other 40-percent based on operational support requirements such as responsiveness to customers, risk management, and local business contracting. Our work has found that performance agreements can become an increasingly vital part of overall efforts to improve programmatic performance and better achieve results. We found that the use of results-oriented performance agreements: strengthened alignment of results-oriented goals with daily operations, fostered collaboration across organizational boundaries, enhanced opportunities to discuss and routinely use performance information to make program improvements, provided a results-oriented basis for individual accountability, and maintained continuity of program goals during leadership transitions. <5. The Report Generally Adhered to Statutory Requirements> The District s fiscal year 2001 performance accountability report reflected improvement in adhering to the statutory requirements in the Federal Payment Reauthorization Act. The District s 2001 report was timely and included information on the level of performance achieved for most goals listed. It included the titles of the District management employee most directly responsible for the achievement of each of the goals and the title of that employee s immediate supervisor, as required by the statute. 
We also found that the names and titles on the performance contracts of the cabinet level officials we reviewed matched the names in the performance report as the immediate supervisor for all of the goals. Although the report contains information on certain court orders, the report could be improved by providing clearer and more complete information on the steps the District government has taken during the reporting year to comply with those orders and by including updated information on the court orders applicable to the District as required by the act. <5.1. The Report Identified Performance Levels Achieved toward Most of the District s Goals> The District identified the level of performance achieved for most of the goals in its 2001 report. The report contains a total of 214 performance goals that are associated with the 66 agencies covered. Of these 214 performance goals, 201 goals (or 94 percent) include information on whether or not the goal was achieved, and only 13 did not include information on the level of performance. As shown in table 1, the 13 goals that did not include the level of performance were associated with eight agencies. For example, the District s State Education Office did not provide this information for four of its seven goals because the reports and information needed to achieve the goals had not been completed. <5.2. Information the District Included on Court Orders Has Limitations> Although the District s 2001 performance accountability report included some information on certain court orders imposed upon the District and the status of its compliance with those orders, the act calls for a statement of the status of any court orders applicable to the District of Columbia government during the year and the steps taken by the government to comply with such orders. The 2001 report contains information on the same 12 court orders involving civil actions against the District reported on for fiscal years 1999 and 2000. Among these 12 orders are 2 orders that the fiscal year 2001 report lists as no longer in effect in 2001. One of these court orders involved a receivership that terminated in May 2000. The other involved a maximum-security facility that closed at the end of January 2001. The 2001 report does not disclose whether or not any new court orders were imposed on the District during fiscal year 2001. The summaries that the District provides on the status of these court orders could be more informative if they contained clearer and more complete information on the steps taken by the District government to comply with the court orders. For example, according to the District s 2001 report, the case Nikita Petties v. DC relates to DCPS transportation services to special education students and the timely payment of tuition and related services to schools and providers. The report s summary on the status of this case states: The School system has resumed most of the transportation responsibilities previously performed by a private contractor. A transportation Administrator with broad powers had been appointed to coordinate compliance with Court orders. He has completed his appointment and this position has been abolished. This summary does not provide a clear picture of what steps the school system is taking to comply with the requirements resulting from this court order. The act, however, calls for the District to report on the steps taken by the government to comply with such orders. <6. 
Steps Are Needed to Improve Future Performance Accountability Reports> The District recognized in its 2001 performance and accountability report that its performance management system is a work-in-progress and stated that there are several fronts on which improvements can be made. In the spirit of building on the progress that the District has made in improving its performance accountability reports over the last 2 years, there are three key areas where we believe that improvements in future performance accountability reports are needed. First, the District needs to be more inclusive in reporting on court orders to more fully comply with the act s requirements. Second, as part of the District s emphasis on expanding its performance-based budgeting approach, the District needs to validate and verify the performance data it relies on to measure performance and assess progress, present this information in its performance accountability reports, and describe its strategies to address any known data limitations. Finally, the District needs to continue its efforts to include goals and measures for its major activities, and it should include related expenditure information to provide a more complete picture of the resources targeted toward achieving an agency s goals and therefore help to enhance transparency and accountability. <6.1. The District Should Be More Inclusive in Reporting on Court Orders> Since this is the third year that the District has had to develop performance and accountability reports, the District has had sufficient time to determine how best to present information on the status of any court orders that are applicable to the District of Columbia during the fiscal year and the steps taken to comply with those orders. However, the District has continued to report on the same 12 court orders for fiscal years 1999, 2000, and 2001. By limiting its presentation to the same 12 court orders, the District s current report does not provide assurance that the information in its performance accountability report reflects court orders applicable during the fiscal year. Court orders have an important effect on the District s performance, as reflected by the chief financial officer s statement that the District s unforeseen expenses are often driven by new legislative imperatives, court-ordered mandates, and suits and settlements. As another indication of their importance, 1 of the 11 general clauses in performance contracts with agency directors addresses the directors responsiveness to court orders. To make future reports more useful, the District should include information on the status of court orders it has not previously reported on as well as those applicable during the fiscal year, including those that may have been vacated during the fiscal year and the steps taken to comply with them. The District should establish objective criteria for determining the types of court orders for which it will provide specific compliance information for future performance accountability reports, and it should consider ways to provide summary information related to any other court orders. In establishing objective criteria, the factors could include the cost, time, and magnitude of effort involved in complying with a court order. If the District government has not acted to comply with a court order it should include an explanation as to why no action was taken. 
The District s 2001 report contains a statement that Following the publication of the FY 1999 Performance Accountability Report, GAO and the District s Office of Corporation Counsel agreed upon a list of 12 qualifying orders that should be included in the District s future Performance Accountability Reports. We did not intend to limit future reporting to only the 12 court orders first reported by the District for fiscal year 1999. We agreed on the list of 12 court orders because, at that time, the District had difficulty identifying all the court orders as required by statute. However, we believe that the District now has had time to develop criteria and a system for ensuring that updated and accurate information on the status of applicable court orders can be presented in its future performance accountability reports. Therefore, we are recommending that the mayor ensure that such steps are taken. <6.2. The District Faces Challenges in Verifying and Validating Its Performance Information> The District has identified data collection standards as one of the areas it is working to improve. As with federal agencies, one of the biggest challenges the District faces is developing performance reports with reliable information to assess whether goals are being met or how performance can be improved. Data must be verified and validated to ensure the performance measures used are complete, accurate, consistent, and of sufficient quality to document performance and support decision making. Data verification and validation are key steps in assessing whether the measures are timely, reliable, and adequately represent actual performance. The District s performance and accountability reports should include information obtained from verification and validation efforts and should discuss strategies to address known data limitations. As reported in our June 2001 report on the District s fiscal year 2000 performance accountability report, the District had planned to issue performance review guidelines by the end of the summer of 2001. These guidelines were to be issued in response to an Inspector General s finding that the agencies did not maintain records and other supporting documentation for the accomplishments they reported regarding the fiscal year 2000 performance contracts. The District included information in its fiscal year 2003 budget instructions regarding performance measures emphasizing the importance of high quality data. Although not required for agencies budget submissions, the guidance called for every agency to maintain, at a minimum, documentation on how it calculated each measure and the data source for each measure. In its 2001 performance accountability report, the District said it plans to address the development of data collection standards. The District plans to begin developing manuals to document how data for each performance measure is collected, how the measure is calculated, and who is responsible for collecting, analyzing, and reporting the data. A further step the District can consider is ensuring that these data are independently verified and validated. A District official acknowledged that validating and verifying performance information is something the District would deal with in the future. Credible performance information is essential for accurately assessing agencies progress toward the achievement of their goals and pinpointing specific solutions to performance shortfalls. Agencies also need reliable information during their planning efforts to set realistic goals. 
Decision makers must have reliable and timely performance and financial information to ensure adequate accountability, manage for results, and make timely and well-informed judgments. Data limitations should also be documented and disclosed. Without reliable information on costs, for example, decision makers cannot effectively control and reduce costs, assess performance, and evaluate programs. Toward that end, the District must ensure that its new financial management system is effectively implemented to produce crucial financial information, such as the cost of services at the program level, on a timely and reliable basis. <6.3. The District Should Enhance Its Efforts to Include Goals, Measures, and Related Expenditure Information> Although the District has made progress in presenting program performance goals and measures, the 2001 report did not contain goals and measures for all of its major activities and it did not include information on other areas that accounted for 11 percent of its annual expenditures. The District could enhance the transparency and accountability of its reports by continuing its efforts to ensure that agencies establish goals and measures that they will use to track performance during the year and by taking steps to ensure that agencies responsible for other budget activities (as shown in table 4) include these areas in their performance reports. The District did not include, for example, goals and measures for DCPS, although it did provide a copy of a testimony and stated that this was included, at least in part, to address concerns we had raised in our June 2001 report that the District s fiscal year 2000 performance accountability report did not cover DCPS. The District also did not include another 10 agencies in its 2001 performance accountability report and indicated that it is taking steps to include relevant goals and measures for some of these agencies in the next year s report. In addition to including goals and measures for the District s significant activities, the District should consider including related expenditure information to help ensure transparency and accountability. We found, for example, that the Department of Employment Services administers the unemployment and disability funds but this information was not linked in the District s 2001 performance accountability report. By linking expenditures to agencies that are responsible for them, the District can further improve its future performance accountability reports by providing a more complete picture of performance. <7. Conclusions> The District, like several federal agencies, has found that it needed to change its performance goals in some cases substantially as it learned and gained experience during the early years of its performance measurement efforts. The District has continued to make progress in implementing a more results-oriented approach to management and accountability and issuing a timely and more complete performance accountability report. As we have seen with federal agencies, cultural transformations do not come quickly or easily, and improvements in the District s performance management system are still underway. Despite the important progress that has been made, opportunities exist for the District to strengthen its efforts as it moves forward. <8. 
Recommendations> In order to more fully comply with the Federal Payment Reauthorization Act of 1994, which requires the District to provide a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders, the mayor should ensure that the District establish objective criteria to determine the types of court orders for which it will provide specific compliance information for future performance accountability reports. In establishing objective criteria, the factors could include the cost, time, and magnitude of effort involved in complying with these court orders. If the District government has not acted to comply with the court orders it should include an explanation as to why no action was taken. In addition, the District should provide summary information related to other applicable court orders in its performance accountability reports. The Mayor of the District of Columbia should also ensure that future performance accountability reports include information on the extent to which its performance measures and data have been verified and validated and discuss strategies to address known data limitations, and include goals and performance measures for the District s significant activities and link related expenditure information to help ensure transparency and accountability. <9. Agency Comments and Our Evaluation> On April 2, 2002, we provided a draft of our report to the mayor of the District of Columbia for his review. In response to our request, the deputy mayor/city administrator met with us on April 4 to discuss the draft and provided us with written comments on April 8. His written comments appear in appendix III. Overall, the deputy mayor stated that he agreed with the findings of the report and concurred with the report s recommendations. He stated that clear and meaningful performance reports are essential to communicate the extent to which the District has or has not met its goals and commitments to make those improvements. Further, he stated that the findings and recommendations in this report were consistent with the District government s intent of further improving its public reporting. The deputy mayor stated that the District would adopt our recommendation to develop objective criteria to determine the types of court orders for which it will provide specific compliance information for future performance accountability reports. Our recommendation also stated that the District should more fully comply with the statute by reporting information on the steps taken by the District government to comply with these orders. The deputy mayor said that they would provide such additional information although he stated that the statute does not specifically require that this information be provided. However, the Federal Payment Reauthorization Act of 1994 (P.L. 103-373) section 456(b)(C) requires that the District s performance accountability report contain a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders. We encourage the District government to comply with this requirement and concur with its comment that providing this information would make the report more informative and useful to Congress and the general public. 
The deputy mayor also concurred with our recommendation that the District s future performance reports include information on the extent to which its performance data have been validated and verified. The deputy mayor said that seven District agencies participating in the District s performance based budgeting pilot would be developing data collection manuals this summer. We encourage the District to proceed with this effort as well as to develop and report on strategies for addressing limitations in its data collections efforts. We have suggested in prior reports that when federal agencies have low quality or unavailable performance data, they should discuss how they plan to deal with such limitations in their performance plans and reports. Assessments of data quality do not lead to improved data for accountability and program management unless steps are taken to respond to the data limitations that are identified. In addition, alerting decisionmakers and stakeholders to significant data limitations allows them to judge the data s credibility for their intended use and to use the data in appropriate ways. Regarding the independent verification of performance data, the deputy mayor stated that the District's ability to secure independent verification of more than selected goals and measures is limited by the resources available to the District's Office of the Inspector General (OIG). He said that the OIG conducted spot-check audits of selected scorecard goals in the fiscal year 2000 performance accountability report and although these limited audits allowed the District to determine the validity of only those particular measures, this effort provided valuable observations and suggestions on how District agencies could improve its data collection practices. He also said that his office has discussed initiating additional spot-check audits of selected goals and measures with the OIG during fiscal year 2002. We agree that such spot checks would be useful. The knowledge that the OIG will be spot-checking some performance data during each fiscal year provides a good incentive to develop and use accurate, high-quality data. In our prior work, we have encouraged federal agencies to use a variety of strategies to verify and validate their performance information, depending upon the unique characteristics of their programs, stakeholder concerns, performance measures, and data resources. In addition to relying on inspector general assessments of data systems and performance measures, the District can use feedback from data users and external stakeholders to help ensure that measures are valid for their intended use. Other approaches can include taking steps to comply with quality standards established by professional organizations and/or using technical or peer review panels to ensure that performance data meet quality specifications. The District can also test the accuracy of its performance data by comparing it with other sources of similar data, such as data obtained from external studies, prior research, and program evaluations. The deputy mayor said that the District would be making efforts to include additional agencies and budget activities in future performance reports. We encourage the District to proceed with these efforts. 
Of the 10 agencies that were not included in the fiscal year 2001 performance report, the District has already included 3 agencies (the Office of Asian and Pacific Islander Affairs, the Child and Family Services Agency, and the Office of Veteran Affairs) in its fiscal year 2002 performance plan issued in March 2002. In addition, the deputy mayor stated that three additional agencies (the Office of the Secretary, the Housing Finance Agency, and the National Capital Revitalization Corporation) would be included in the District s consensus budget to be submitted to the Council of the District of Columbia in June 2002. With regard to the budget activities that were not included in the District s fiscal year 2001 performance report, the deputy mayor agreed that it would be appropriate to develop performance measures for six funds, such as settlements and judgments and administration of the disability compensation fund. The deputy mayor acknowledged that establishing performance measures for administering an additional six funds, such as the Public Benefit Corporation, would have been appropriate but they no longer exist. The deputy mayor said that the District of Columbia Retirement Board manages two funds that had relevant performance measures in the District s 2001 report. We noted, however, that these two retirement funds were not specifically identified in the 2001 performance accountability report. We are sending copies of this report to the Honorable Anthony A. Williams, Mayor of the District of Columbia. We will make copies available to others upon request. Key contributors to this report were Katherine Cunningham, Steven Lozano, Sylvia Shanks, and Susan Ragland. Please contact me or Ms. Ragland on (202) 512-6806 if you have any questions on the material in this report. Expenditures by Agencies Included in the District s Fiscal Year 2001 Performance Accountability Report The District s fiscal year 2001 performance accountability report included 66 agencies accounting for 83 percent of the District s operating budget for fiscal year 2001. Table 2 lists these agencies and their fiscal year 2001 actual expenditures. Agencies and Budget Activities Not Included in the District s Fiscal Year 2001 Performance Accountability Report The District s fiscal year 2001 performance accountability report did not include 10 District agencies primarily because they did not publish performance goals in the District s 2001 performance plan. Table 3 lists these agencies and their fiscal year 2001 actual expenditures. In addition to these 10 agencies, we identified several budget activities accounting for 11 percent of the District s total fiscal year 2001 actual expenditures that were not included in the fiscal year 2001 performance accountability report. Table 4 lists these activities and related fiscal year 2001 actual expenditures.
Why GAO Did This Study This report examines the progress the District of Columbia has made with its fiscal year 2001 performance accountability report and highlights continuing challenges facing our nation's capital. The District must submit a performance accountability plan with goals for the coming fiscal year and, at the end of the fiscal year, a performance accountability report on the extent to which it achieved these goals. What GAO Found GAO found that the District's Performance Accountability Report for Fiscal Year 2001 provided a more complete picture of its performance and made progress in complying with statutory reporting requirements by using a consistent set of goals. This allowed the District to measure and report progress toward the goals in its 2001 performance plan. Specifically, it reported information on the level of performance achieved, the titles of managers and their supervisors responsible for each goal, and described the status of certain court orders. The District has made progress over the last three years in its performance accountability reports and established positive direction for enhancements in court orders, its fiscal year 2003 performance based budgeting pilots, and performance goals and measures.
<1. Background> The federal government spends more than $80 billion on IT annually, with more than $2 billion of that amount spent on acquiring cloud-based services. This amount is expected to rise in coming fiscal years, according to OMB. A goal of these investments is to improve federal IT systems by replacing aging and duplicative infrastructure and systems that are costly and difficult to maintain. Cloud computing helps do this by giving agencies the ability to purchase a broad range of IT services in a utility-based model that allows an agency to pay for only the IT services it uses. According to NIST, an application should possess five essential characteristics to be considered cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Essentially, cloud computing applications are network-based and scalable on demand. According to OMB, cloud computing is economical, flexible, and fast: Economical: cloud computing can be a pay-as-you-go approach, in which a low initial investment is required to begin and additional investment is needed only as system use increases. Flexible: IT departments that anticipate fluctuations in user demand no longer need to scramble for hardware and software to meet increasing need. With cloud computing, capacity can be added or subtracted quickly. Fast: cloud computing eliminates long procurement and certification processes, while providing a wide selection of services. In addition, according to NIST, cloud computing offers three service models: Infrastructure as a service: the agency has the capability to provision processing, storage, networks, and other fundamental computing resources and run its own software, including operating systems and applications. The agency does not manage or control the underlying infrastructure but controls and configures operating systems, storage, deployed applications, and, possibly, selected networking components (e.g., host firewalls). Platform as a service: the agency deploys its own or acquired applications created using programming languages and tools supported by the provider. The agency does not manage or control the underlying infrastructure, but controls and configures the deployed applications. Software as a service: the agency uses the service provider's applications, which are accessible from various client devices through an interface such as a Web browser (e.g., a Web-based e-mail system). The agency does not manage or control the underlying infrastructure or the individual application capabilities. As can be seen in figure 1, each service model offers unique functionality, with consumer control of the environment decreasing from infrastructure to platform to software. NIST has also defined four deployment models for providing cloud services: private, community, public, and hybrid. In a private cloud, the service is set up specifically for one organization, although there may be multiple customers within that organization and the cloud may exist on or off the customer's premises. In a community cloud, the service is shared by organizations with similar requirements. The cloud may be managed by the organizations or a third party and may exist on or off an organization's premises. A public cloud is available to the general public and is owned and operated by the service provider. A hybrid cloud is a composite of two or more other deployment models (private, community, or public) that are bound together by standardized or proprietary technology.
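To make the division of responsibility concrete, the sketch below models the three service models and four deployment models as simple Python structures and lists which layers the consuming agency, rather than the provider, configures under each service model. It is an illustration only; the layer names and control assignments paraphrase the descriptions above and are not part of NIST's or OMB's definitions.

```python
from enum import Enum

class ServiceModel(Enum):
    IAAS = "Infrastructure as a Service"
    PAAS = "Platform as a Service"
    SAAS = "Software as a Service"

class DeploymentModel(Enum):
    PRIVATE = "private"
    COMMUNITY = "community"
    PUBLIC = "public"
    HYBRID = "hybrid"

# Which party controls each layer of the stack under each service model,
# paraphrasing the NIST descriptions above.
CONTROL_BY_LAYER = {
    ServiceModel.IAAS: {
        "underlying infrastructure": "provider",
        "operating systems and storage": "agency",
        "deployed applications": "agency",
    },
    ServiceModel.PAAS: {
        "underlying infrastructure": "provider",
        "operating systems and storage": "provider",
        "deployed applications": "agency",
    },
    ServiceModel.SAAS: {
        "underlying infrastructure": "provider",
        "operating systems and storage": "provider",
        "deployed applications": "provider",
    },
}

def agency_controlled_layers(model: ServiceModel) -> list[str]:
    """Return the layers the consuming agency configures under a given service model."""
    return [layer for layer, owner in CONTROL_BY_LAYER[model].items() if owner == "agency"]

if __name__ == "__main__":
    for model in ServiceModel:
        print(model.value, "->", agency_controlled_layers(model) or ["(provider manages all layers)"])
```

Reading down the output reproduces the point illustrated in figure 1: the consumer's control decreases from infrastructure to platform to software.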
According to federal guidance, these deployment models determine the number of consumers and the nature of other consumers data that may be present in a cloud environment. A public cloud should not allow a consumer to know or control other consumers of a cloud service provider s environment. However, a private cloud can allow for ultimate control in selecting who has access to a cloud environment. Community clouds and hybrid clouds allow for a mixed degree of control and knowledge of other consumers. <1.1. OMB Has Undertaken Initiatives and Issued Guidance to Increase Agency Adoption of Cloud Computing Services> According to OMB, the federal government needs to shift from building custom computer systems to adopting cloud technologies and shared services, which will improve the government s operational efficiencies and result in substantial cost savings. To help agencies achieve these benefits, OMB required agencies in 2010 to immediately shift to a Cloud First policy and increase their use of available cloud and shared services whenever a secure, reliable, and cost-effective cloud service exists. In February 2011, OMB issued the Federal Cloud Computing Strategy, as called for in its 25-Point Plan. The strategy provided definitions of cloud computing services; benefits of cloud services, such as accelerating data center consolidations; a decision framework for migrating services to a cloud environment; case studies to support agencies migration to cloud computing services; and roles and responsibilities for federal agencies. For example, the strategy stated that NIST s role is to lead and collaborate with federal, state, and local government agency chief information officers, private sector experts, and international bodies to identify standards and guidance and prioritize the adoption of cloud computing services. In addition, the strategy stated that agency cloud service contracts should include SLAs designed to meet agency requirements. In a December 2011 memo, OMB established the Federal Risk and Authorization Management Program (FedRAMP), a government-wide program intended to provide a standardized approach to security assessment, authorization, and continuous monitoring for cloud computing products and services. All federal agencies must meet FedRAMP requirements when using cloud services and the cloud service providers must implement the FedRAMP security requirements in their cloud environment. To become authorized, cloud service providers provide a security assessment package to be reviewed by the FedRAMP Joint Authorization Board, which may grant a provisional authorization. Federal agencies can leverage cloud service provider authorization packages for review when granting an agency authority to operate, where this reuse is intended to save time and money. Further, at the direction of OMB, the Chief Information Officers Council and the Chief Acquisition Officers Council issued, in February 2012, guidance to help agencies acquire cloud services. In particular, the guidance highlights that SLAs are a key factor for ensuring the success of cloud based services and that federal agencies should include an SLA when creating a cloud computing contract or as a reference. The guidance provides important areas of an SLA to be addressed; for example, it states that an SLA should define performance with clear terms and definitions, demonstrate how performance is being measured, and identify what enforcement mechanisms are in place to ensure the conditions are being met. 
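The elements the guidance calls for (clear terms and definitions, a stated measurement method, and an enforcement mechanism) lend themselves to a simple template. The sketch below is a hypothetical, simplified SLA record with invented service names and threshold values; it is not language from any federal contract or from the guidance itself, only one way the three elements might be captured side by side.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceLevelObjective:
    """One measurable commitment inside an SLA."""
    name: str
    definition: str      # clear term or definition of what is promised
    target: float        # threshold the provider must meet
    measurement: str     # how, and by whom, performance is measured
    enforcement: str     # remedy or penalty if the target is missed

@dataclass
class ServiceLevelAgreement:
    service_name: str
    objectives: list[ServiceLevelObjective] = field(default_factory=list)

# Hypothetical example values, for illustration only.
example_sla = ServiceLevelAgreement(
    service_name="Example e-mail service",
    objectives=[
        ServiceLevelObjective(
            name="availability",
            definition="percentage of 5-minute intervals in a month in which the service responds",
            target=99.9,
            measurement="provider monitoring reports, independently spot-checked by the agency",
            enforcement="service credit for each 0.1 point below target",
        ),
        ServiceLevelObjective(
            name="breach notification",
            definition="hours allowed between detection of a security breach and agency notification",
            target=1.0,
            measurement="incident tickets reviewed during quarterly audits",
            enforcement="escalation to the contracting officer; penalties per contract terms",
        ),
    ],
)
```

An SLA written this way makes it straightforward for an agency to check, objective by objective, whether each element the guidance names is actually present.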
In addition, NIST, in its role designated by OMB in the Federal Cloud Computing Strategy, collaborated with private sector organizations to release cloud computing guidance, which affirms the importance of using an SLA when acquiring cloud computing services. Moreover, a number of other public and private sector organizations have issued research on the incorporation of an SLA in a cloud computing contract. According to these studies, an SLA is important because it ensures that services are being performed at the levels specified in the cloud computing contract, can significantly contribute to avoiding conflict, and can facilitate the resolution of an issue before it escalates into a dispute. The studies also highlight that a typical SLA describes levels of service using various attributes, such as availability, serviceability, or performance, and specifies thresholds and financial penalties associated with a failure to comply with those thresholds. <1.2. Agencies Are Taking Steps to Implement Prior GAO-Identified Improvements for Cloud-based Computing Services> We have previously reported on federal agencies' efforts to implement cloud computing services and on the progress that oversight agencies have made in helping federal agencies with those efforts. These include the following: In May 2010, we reported on the efforts of multiple agencies to ensure the security of government-wide cloud computing services. We noted that, while OMB, the General Services Administration (GSA), and NIST had initiated efforts to ensure secure cloud computing services, OMB had not yet finished a cloud computing strategy; GSA had begun a procurement for expanding cloud computing services for its website that served as a central location for federal agencies to purchase cloud services, but had not yet developed specific plans for establishing a shared information security assessment and authorization process; and NIST had not yet issued cloud-specific security guidance. We recommended that OMB establish milestones to complete a strategy for federal cloud computing and ensure it addressed information security challenges. These include having a process to assess vendor compliance with government information security requirements and division of information security responsibilities between the customer and vendor. OMB agreed with our recommendations and subsequently published a strategy in February 2011 that addressed the importance of information security when using cloud computing, but it did not fully address several key challenges confronting agencies, such as the appropriate use of attestation standards for control assessments of cloud computing service providers, and division of information security-related responsibilities between customer and provider. We also recommended that GSA consider security in its procurement for cloud services, including consideration of a shared assessment and authorization process. GSA generally agreed with our recommendations and has since developed the FedRAMP program. Finally, we recommended that NIST issue guidance specific to cloud computing security. NIST agreed with our recommendations and has since issued multiple publications that address such guidance. In April 2012, we reported that more needed to be done to implement OMB's 25-Point Plan and measure its results. Among other things, we reported that, of the 10 key action items that we reviewed, 3 had been completed and 7 had been partially completed by December 2011.
In particular, OMB's and agencies' cloud-related efforts only partially addressed requirements. Specifically, agencies' plans were missing key practices, such as a discussion of needed resources, a migration schedule, and plans for retiring legacy systems. As a result, we recommended, among other things, that the Secretaries of Homeland Security and Veterans Affairs, and the Attorney General, direct their respective CIOs to complete practices missing from the agencies' plans for migrating services to a cloud computing environment. Officials from each of the agencies generally agreed with our recommendations and have taken steps to implement them. In July 2012, we reported on the efforts of seven agencies to implement three services by June 2012, including the challenges associated with doing so. Specifically, we reported that selected federal agencies had made progress in implementing OMB's Cloud First policy. Seven agencies had implemented 21 cloud computing solutions and had spent a total of $307 million for cloud computing in fiscal year 2012, about 1 percent of their total IT budgets. While each of the seven agencies had submitted plans to OMB for implementing their cloud services, a majority of the plans were missing required elements. Agencies also identified opportunities for future cloud service implementations, such as moving storage and help desk services to a cloud environment. Agencies also shared seven common challenges that they experienced in moving services to cloud computing. We made recommendations to the agencies to develop planning information, such as estimated costs and legacy IT systems retirement plans, for existing and planned services. The agencies generally agreed with our recommendations and have taken actions to implement them. In September 2014, we reported on the aforementioned seven agencies' efforts to implement additional cloud computing services, any reported cost savings as a result of implementing those cloud services, and challenges associated with the implementation. All of the seven federal agencies we reviewed had added more cloud computing services; the number of cloud services implemented by them had increased from 21 to 101 between fiscal years 2012 and 2014. In addition, agencies had collectively doubled the share of their IT budgets spent on cloud computing from 1 to 2 percent during the fiscal year 2012 through 2014 period. Further, the agencies reported a collective cost savings of about $96 million through fiscal year 2013. We made recommendations to the agencies to assess their IT investments that had yet to be evaluated for suitability for cloud computing services. For the most part, the agencies generally agreed with our recommendations and have taken actions to implement them. <2. Key Practices for Cloud Computing Service Level Agreements Can Help Agencies Manage Services More Effectively> Based on our analysis of practices recommended by the ten organizations with expertise in the area of SLAs and by OMB, we compiled the following list of ten practices that are key for federal agencies to incorporate into a contract to help ensure cloud computing services are performed effectively, efficiently, and securely. The key practices are organized by the following management areas: roles and responsibilities, performance measures, security, and consequences. Roles and responsibilities: (1) Define the roles and responsibilities of the major stakeholders involved in the performance of the SLA and cloud contract.
These definitions would include, for example, the persons responsible for oversight of the contract, audit, performance management, maintenance, and security. (2) Define key terms, such as activation date and performance, and identify any ambiguities in the definitions of cloud computing terms in order to provide the agency with the level of service it can expect from its cloud provider. Without clearly defined roles, responsibilities, and terms, the agency may not be able to appropriately measure the cloud provider's performance. Performance measures: (1) Define the performance measures of the cloud service, including who is responsible for measuring performance. These measures would include, among other things, the availability of the cloud service; the number of users that can access the cloud at any given time; and the response time for processing a customer transaction. Providing performance parameters gives both the agency and the service provider a well-defined set of instructions to be followed. (2) Specify how and when the agency would have access to its data, including how data and networks will be managed and maintained throughout the life cycle of the service. Provide any data limitations, such as who may or may not have access to the data and whether there are any geographic limitations. (3) Specify management requirements, for example, how the cloud service provider would monitor the performance of the cloud and report incidents, and how and when it would plan to resolve them. In addition, identify how and when the agency would conduct an audit to monitor the performance of the service provider, including access to the provider's performance logs and reports. (4) Provide for disaster recovery and continuity of operations planning and testing. This includes, among other things, performing a risk management assessment; how the cloud service would be managed by the provider in the case of a disaster; how data would be recovered; and what remedies would apply during a service failure. (5) Describe applicable exception criteria for when the cloud provider's service performance measures do not apply, such as during scheduled cloud maintenance or when updates occur. Without any type of performance measures in place, agencies would not be able to determine whether the cloud services under contract are meeting expectations. Security: (1) Specify the security performance requirements that the service provider is to meet. This would include describing security performance metrics for protecting data, such as data reliability, data preservation, and data privacy. Clearly define the access rights of the cloud service provider and the agency, as well as their respective responsibilities for securing the data, applications, and processes to meet all federal requirements. (2) Describe what would constitute a breach of security and how and when the service provider is to notify the agency when the requirements are not being met. Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructures they support, may be lost; information, including sensitive personal information, may be compromised; and the agency's operations could be disrupted. Consequences: Specify a range of enforceable consequences, including the terms under which a range of penalties and remedies would apply for non-compliance with the SLA performance measures. Identify how such enforcement mechanisms would be imposed or exercised by the agency.
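Taken together, the four management areas and ten practices above can be read as a checklist. The following minimal Python sketch shows one way an agency could flag which practices a given contract leaves out; the practice wording paraphrases the discussion above, and the gap-checking function and example contract are purely illustrative, not part of OMB or agency guidance.

```python
# The ten key practices, grouped by management area, paraphrased from the
# discussion above for illustration only.
KEY_PRACTICES = {
    "roles and responsibilities": [
        "define stakeholder roles and responsibilities",
        "define key terms such as activation date and performance",
    ],
    "performance measures": [
        "define performance measures and who measures them",
        "specify how and when the agency can access its data",
        "specify service management, monitoring, and audit requirements",
        "provide for disaster recovery and continuity of operations",
        "describe exception criteria when measures do not apply",
    ],
    "security": [
        "specify security performance requirements and metrics",
        "define what constitutes a breach and how the agency is notified",
    ],
    "consequences": [
        "specify enforceable penalties and remedies for non-compliance",
    ],
}

def missing_practices(contract_practices: set[str]) -> dict[str, list[str]]:
    """Return, by management area, the key practices a contract does not address."""
    gaps = {}
    for area, practices in KEY_PRACTICES.items():
        absent = [p for p in practices if p not in contract_practices]
        if absent:
            gaps[area] = absent
    return gaps

# Example: a hypothetical contract that covers everything except consequences.
example_contract = set(
    p for area, ps in KEY_PRACTICES.items() if area != "consequences" for p in ps
)
print(missing_practices(example_contract))  # -> {'consequences': [...]}
```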
Without penalties and remedies, the agency may lack leverage to enforce compliance with contract terms when situations arise. <2.1. OMB Guidance Addresses Seven of the Ten Key Practices> Guidance issued in February 2012, at the direction of OMB, highlighted SLAs as a key factor for ensuring the success of cloud-based services and advised that federal agencies should include an SLA, or a reference to one, when creating a cloud computing contract. The guidance provides areas of an SLA to be addressed; for example, it states that an SLA should define performance with clear terms and definitions, demonstrate how performance is being measured, and identify what enforcement mechanisms are in place to ensure the conditions are being met. However, the guidance addressed only seven of the ten key practices listed in table 1 that could help agencies better track performance and thus ensure the effectiveness of their cloud services. Specifically, the guidance did not specify how and when the agency would have access to its data, provide for disaster recovery and continuity of operations planning, or describe any exception criteria. OMB staff members said that, although the guidance drafted by the Chief Information Officers Council and the Chief Acquisition Officers Council was a good start, including all ten key practices should be considered. Without complete guidance from OMB, there is limited assurance that agencies will incorporate all the key SLA practices into their cloud computing contracts, and agencies therefore may be unable to hold contractors accountable when performance falls short of goals. <3. Selected Agencies Incorporated Most of the Key Practices, but Differed in Addressing Them> Many of the 21 cloud service contracts we reviewed at the five selected agencies incorporated a majority of the key practices, but the number of practices differed among contracts. Specifically, seven of the cloud service contracts reviewed met all 10 of the key practices. This included three from DHS, three from Treasury, and one from VA. The following figure shows the total cloud service contracts reviewed and the number that met the 10 key practices at the five selected agencies. Of the remaining 14 cloud service contracts, 13 incorporated five or more of the key practices, and 1 did not meet any of the key practices. Figure 3 shows each of the cloud service contracts we reviewed and the extent to which the agency had included key practices in its SLA contracts. Appendix II includes our analysis of all the cloud services we reviewed, by agency. A primary reason that the agencies did not include all of the practices was that they lacked guidance addressing these SLA practices. Of the five agencies, only DOD had developed cloud service contracting guidance that addressed some of the practices. More specifically, DOD's guidance only addressed three of the key practices: disaster recovery and continuity of operations planning, metrics on security performance requirements, and notifying the agency when there is a security breach. In addition, the guidance partially addressed the practice on access to agency data, specifically, with regard to transitioning data back to the agency in case of exit or termination of service.
Agency officials responsible for the cloud services that did not meet or only partially met key practices provided the following additional reasons for not including all ten practices: Officials from DOD's Office of the Chief Information Officer told us that the reason key practices were not always fully addressed is that, when the contracts and associated SLAs were developed, they did not have the aforementioned DOD guidance on cloud service acquisition and use, namely, the agency's memorandum on acquiring cloud services that was released in December 2014, and the current Defense Federal Acquisition Regulation Supplement, which was finalized in August 2015. However, as previously stated, this updated guidance addressed three of the ten key practices, and part of one other. Officials from DHS's Office of the Chief Information Officer stated that the Infrastructure as a Service cloud service addressed the partially met and not met key practices but did not provide supporting documentation to show that the practices were in place. If key practices have not been incorporated, the system may have decreased performance and the cloud service may not meet its intended goals. HHS officials from the National Institutes of Health attributed unmet or partially met practices for four cloud services (Remedy Force, Medidata, the BioMedical Imaging and BioEngineering website, and the Drug Abuse public website) to the fact that they evaluate the cloud vendor's ability to meet defined agency needs, rather than negotiate with vendors on SLA requirements. While this may explain why the agency did not address all SLA key practices, it may be placing its systems at risk of not conducting adequate service level measurements, which may result in decreased service levels. HHS officials from the Administration for Children and Families stated that the reason key practices were partially addressed or not addressed for the Grant Solutions cloud service was that these practices were being managed by HHS personnel using other tools and plans, rather than via the SLA established for this service. For example, according to the officials, they are using a management information system to monitor performance of the cloud provider. In addition, with respect to disaster management, the officials said that they have their own disaster recovery plan. Nonetheless, leading studies show that these practices should still be incorporated as part of the cloud service contract to ensure agencies have proper control over their cloud services. Treasury officials said that one reason the SLAs for Treasury Web Services and the IRS Portal Environment only partially met certain key practices was that the practices were being performed by support contractors hired by the cloud service provider and were not directly subject to the SLAs established between Treasury and the cloud service provider. Nonetheless, while having contractors perform practices is an acceptable approach, Treasury officials were unable to provide supporting documentation to show that support contractors were assisting with the practices in question. Officials from VA's Office of Information and Technology said the reason the key practice associated with penalties and remedies was not included in the Terremark SLA was that penalties were addressed within other parts of the contract; however, officials were not able to provide documentation identifying such penalties.
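To illustrate what an enforceable remedy can look like when it is written directly into an SLA rather than left to other parts of a contract, the following sketch computes a service credit from a measured availability shortfall. The tiers, percentages, and monthly charge are hypothetical and are not drawn from any of the contracts discussed in this report.

```python
def service_credit(measured_availability: float,
                   target: float = 99.9,
                   monthly_charge: float = 10_000.0) -> float:
    """Return a credit owed to the agency for missing the availability target.

    The tiered credit percentages below are hypothetical, for illustration only.
    """
    if measured_availability >= target:
        return 0.0
    shortfall = target - measured_availability
    if shortfall <= 0.4:        # availability of at least 99.5 percent
        credit_rate = 0.10
    elif shortfall <= 0.9:      # availability of at least 99.0 percent
        credit_rate = 0.25
    else:                       # availability below 99.0 percent
        credit_rate = 0.50
    return round(monthly_charge * credit_rate, 2)

# A provider that delivered 99.2 percent availability against a 99.9 percent
# target would owe a 25 percent credit on this hypothetical monthly charge.
print(service_credit(99.2))     # -> 2500.0
```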
With regard to an SLA for eKidney, officials told us they had not addressed any of the key practices due to the fact that an SLA was not developed between the agency and cloud service provider. Without including an SLA in cloud service contracts, the agency runs the risk of not having the mechanisms in place to effectively evaluate or control contractor performance. Until these agencies develop SLA guidance and incorporate all key practices into their cloud computing contracts, they may be limited in their ability to measure the performance of the services, and, therefore, may not receive the services they require. <4. Conclusions> Although OMB has provided agencies guidance to better manage contracts for cloud computing services, this guidance does not include all the key practices that we identified as necessary for effective SLAs. Similarly, Defense, Homeland Security, Health and Human Services, Treasury, and Veterans Affairs have incorporated many of the key practices in the cloud service contracts they have entered into. Overall, this is a good start towards ensuring that agencies have mechanisms in place to manage the contracts governing their cloud services. However, given the importance of SLAs to the management of these million-dollar service contracts, agencies can better protect their interests by incorporating the pertinent key practices into their contracts in order to ensure the delivery and effective implementation of services they contract for. In addition, agencies can improve management and control over their cloud service providers by implementing all recommended and applicable SLA key practices. <5. Recommendations for Executive Action> To ensure that agencies are provided with more complete guidance for contracts for cloud computing services, we recommend that the Director of OMB include all ten key practices in future guidance to agencies. To help ensure continued progress in the implementation of effective cloud computing SLAs, we recommend that the Secretary of Defense direct the appropriate officials to ensure key practices are fully incorporated for cloud services as the contracts and associated SLAs expire. These efforts should include updating the DOD memorandum on acquiring cloud services and current Defense Acquisition Regulations System to more completely include the key practices. To help ensure continued progress in the implementation of effective cloud computing SLAs, we recommend that the Secretaries of Health and Human Services, Homeland Security, Treasury, and Veterans Affairs direct appropriate officials to develop SLA guidance and ensure key practices are fully incorporated as the contract and associated SLAs expire. <6. Agency Comments and Our Evaluation> In commenting on a draft of this report, four of the agencies DOD, DHS, HHS, and VA agreed with our recommendations; and OMB and one agency (Treasury) had no comments. The specific comments from each agency are as follows: In an e-mail received on March 25, 2016, OMB staff from the Office of E-Government and Information Technology stated that the agency had no comments at this time. In written comments, the Department of Defense concurred with our recommendation and described actions it plans to take to address the recommendation. Specifically, DOD stated that it will update its cloud computing guidance and contracting guidance as appropriate. The Department of Defense s comments are reprinted in appendix III. 
In written comments, the Department of Homeland Security concurred with our recommendation and described actions it plans to take to address the recommendation. Specifically, the department will establish common cloud computing service level agreement guidance. DHS also provided technical comments, which we have incorporated in the report as appropriate. The Department of Homeland Security's comments are provided in appendix IV. In written comments, the Department of Health and Human Services concurred with our recommendation, but noted that it was not directed by a federal mandate. We acknowledge that our recommendation is not directed by a mandate; however, implementing leading practices for cloud computing can result in significant benefits. The department also provided technical comments, which we have incorporated in the report as appropriate. The Department of Health and Human Services' comments are provided in appendix V. In an e-mail received on March 18, 2016, an audit liaison from the Department of the Treasury's Office of the CIO stated that the department had no comment. In written comments, the Department of Veterans Affairs concurred with our recommendation and described planned actions to address it. For example, the department will develop service level agreement guidance to include the 10 key practices. The Department of Veterans Affairs' comments are provided in appendix VI. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, Health and Human Services, Homeland Security, the Treasury, and Veterans Affairs; the Director of the Office of Management and Budget; and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) identify key practices used in cloud computing service level agreements (SLA) to ensure service is performed at specified levels and (2) determine the extent to which federal agencies have incorporated such practices into their cloud computing service level agreements. To identify key practices used in cloud computing service level agreements, we analyzed SLA research, studies, and guidance developed and used by federal agencies and private entities. We then performed a comparative analysis of the practices to identify the practices that were recommended by at least two sources. Specifically, we analyzed information from publications and related documentation issued by the following ten public and private organizations to determine key SLA practices: the Federal Chief Information Officer Council; the Chief Acquisition Officers Council; the National Institute of Standards and Technology; the European Commission Directorate General for Communications Networks, Content and Technology; the Office of Management and Budget; Gartner; the MITRE Corporation; the Cloud Standards Customer Council; the International Organization for Standardization; and the International Electrotechnical Commission. Next, we organized these practices into management areas and validated our analysis through interviews with experts from these organizations.
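As a minimal sketch of the comparative step described above, the following example keeps as candidate key practices only those recommended by at least two sources; the source and practice names are invented for illustration and do not reflect the actual content of the ten organizations' publications.

```python
from collections import Counter

# Hypothetical mapping of sources to the practices each recommends; the real
# analysis drew on publications from the ten organizations listed above.
recommendations = {
    "source_a": {"define roles", "performance measures", "security metrics"},
    "source_b": {"performance measures", "penalties and remedies"},
    "source_c": {"define roles", "performance measures", "disaster recovery"},
}

counts = Counter(
    practice for practices in recommendations.values() for practice in practices
)

# Keep only practices recommended by at least two sources.
candidate_key_practices = sorted(p for p, n in counts.items() if n >= 2)
print(candidate_key_practices)   # -> ['define roles', 'performance measures']
```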
We also had officials from the Office of Management and Budget (OMB) review and validate that these practices are the ones the office expects federal agencies to follow. In cases where experts disagreed, we analyzed their responses, including the reasons they disagreed, and made changes as appropriate. These actions resulted in our list of key practices for cloud service SLAs. To determine the extent to which federal agencies have incorporated key practices into their cloud computing contracts, we selected five agencies to review based, in part, on the size of their fiscal year 2015 IT budgets and their planned spending on cloud computing services. The agencies selected were the Departments of Defense (DOD), Homeland Security (DHS), Health and Human Services (HHS), Treasury, and Veterans Affairs (VA). We selected these agencies based on the following two factors. First, they have the largest planned IT budgets for fiscal year 2015. Their budgets, which collectively totaled $57 billion, represent about 72 percent of the total federal IT budget ($78 billion). Second, these agencies plan to spend relatively large amounts on cloud computing. Specifically, based on our analysis of OMB's fiscal year 2015 budget data, each of the five departments was in the top 10 for the largest amount budgeted for cloud computing, and collectively they planned to spend $1.2 billion on cloud computing, which represents about 57 percent of the total amount that federal agencies plan to invest in cloud computing ($2.1 billion). To select and review the cloud services used by the agencies, we obtained an inventory of cloud services for each of the five agencies, and then, for each agency, we listed its cloud services in random order and selected the first two cloud services in the list for each of the three major cloud service models (infrastructure, platform, and software). In certain cases, the agency did not have two cloud services for a service model, so the number chosen for that service model was less than two. This resulted in a non-generalizable sample of 23 cloud services. However, near the end of our engagement, agencies identified 2 of the services as being in a pilot stage (one from DHS and one from HHS), and thus not operational. We excluded these services from our analysis, as our methodology was to assess only operational cloud services. Due to the stage of the engagement, we were unable to select additional services for review. Further, because no computer-generated data were used, we determined that there were no data reliability issues. For each of the selected services, we compared its cloud service contract (if one existed) and any associated SLA documentation to our list of key practices to determine if there were variances and, if so, their cause and impact. To do so, two team analysts independently reviewed the cloud service contracts against the key practices using the following criteria: Met: all aspects of the key practices were fully addressed. Partially met: some key practices were addressed. Did not meet: no key practices were addressed. In cases where analysts differed on the assessments, we discussed what the rating should be until we reached a consensus. We also interviewed agency officials to corroborate our analysis and identify the causes and impacts of any variances. We conducted this performance audit from January 2015 to April 2016 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Analysis of Agencies' Cloud Service SLAs against Key Practices The following tables show the cloud services we assessed at each of the five agencies (DOD, DHS, HHS, Treasury, and VA) and our analysis of each contract for cloud services against the key practices. In cases where the SLA partially met a practice, the analysis also includes discussion of the rationale for why that assessment was provided. With regard to those services that partially met key practices: The Integrated Risk Information System partially addressed one key practice on how and when the agency was to have access to its data and networks. It included how the data would be transitioned, but did not specify how access to data and networks was to be managed or maintained. The Case Tracking cloud service partially included the practice on specifying metrics for security performance requirements. It specified how security needs were to be met but did not give specific metrics for doing so. Email as a Service partially addressed two key practices. For the practice on specifying service management requirements, it specified how the cloud service provider was to monitor performance, but did not address how the provider was to report performance or how the agency was to confirm the performance. For the other practice on specifying metrics for security performance requirements, it included how security needs were to be met but did not specify the security metrics. The Web Portal partially incorporated two key practices. For the practice on how and when the agency was to have access to its data and networks, it specified how the data was to be transitioned, but not how access to data and networks was to be managed or maintained. For the other practice on specifying metrics for security performance requirements, it included monitoring of the contractor regarding security, but did not specify security metrics. Infrastructure as a Service partially incorporated two key practices. For the practice on how and when the agency was to have access to its data and networks, it specified how and when the agency was to have access to its data and networks, but did not specify how data and networks were to be transitioned back to the agency in case of an exit. For the other practice on service management requirements, it described how the cloud service provider was to monitor performance, but did not specify how and when the agency was to confirm audits of the service provider's performance. With regard to those services that partially met key practices, the National Institutes of Health's Remedy Force partially addressed one key practice on defining measurable performance objectives. It included various performance objectives, such as levels of service and availability of the cloud service, capacity and capability, and measures for response time, but it did not include which party was to be responsible for measuring performance. The National Institutes of Health's Medidata Rave partially incorporated two key practices. It defined measurable performance objectives; specifically, it specified levels of service, capacity and capability of the service, and response time, but did not specify the period of time over which they were to be measured.
For the other practice on specifying a range of enforceable consequences, it specified remedies, but did not identify any penalties related to non-compliance with performance measures. The National Institute on Drug Abuse public website partially addressed two key practices. For the practice on specifying how and when the agency is to have access to its data and networks, it specified how and when the agency was to have access to its data and networks, but did not identify how data and networks were to be managed throughout the duration of the SLA. For the other practice on specifying a range of enforceable consequences, it included a number of remedies, but did not specify a range of enforceable penalties. HHS's Grant Solutions partially incorporated one key practice on specifying service management requirements. It provided for when and how the agency was to confirm cloud provider performance, but did not specify how the cloud service provider was to monitor performance and report results. With regard to those services that partially met key practices, Treasury's Internal Revenue Service Portal Environment partially included one key practice on specifying how and when the agency was to have access to its data and networks. It specified how and when the agency was to have access to its data and networks, but it did not specify how data and networks were to be transitioned back to the agency in case of an exit. Treasury's Web Solutions partially addressed two key practices. For the practice on specifying how and when the agency was to have access to its data and networks, it specified how and when the agency was to have access to its data and networks, but it did not specify how data and networks would be transitioned back to the agency in case of an exit. For the other practice on specifying a range of enforceable consequences, it did not provide detailed information on a range of enforceable penalties and remedies for non-compliance with SLA performance measures. Appendix III: Comments from the Department of Defense Appendix IV: Comments from the Department of Homeland Security Appendix V: Comments from the Department of Health & Human Services Appendix VI: Comments from the Department of Veterans Affairs Appendix VII: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact name above, individuals making contributions to this report included Gary Mountjoy (assistant director), Gerard Aflague, Scott Borre, Nancy Glover, Lori Martinez, Tarunkant Mithani, Karl Seifert, and Andrew Stavisky.
Why GAO Did This Study Cloud computing is a means for delivering computing services via IT networks. When executed effectively, cloud-based services can allow agencies to pay for only the IT services used, thus paying less for more services. An important element of acquiring cloud services is a service level agreement that specifies, among other things, what services a cloud provider is to perform and at what level. GAO was asked to examine federal agencies' use of SLAs. GAO's objectives were to (1) identify key practices in cloud computing SLAs and (2) determine the extent to which federal agencies have incorporated such practices into their SLAs. GAO analyzed research, studies, and guidance developed by federal and private entities to develop a list of key practices to be included in SLAs. GAO validated its list with the entities, including OMB, and analyzed 21 cloud service contracts and related documentation of five agencies (with the largest fiscal year 2015 IT budgets) against the key practices to identify any variances, their causes, and impacts. What GAO Found Federal and private sector guidance highlights the importance of federal agencies using a service level agreement (SLA) in a contract when acquiring information technology (IT) services through a cloud computing services provider. An SLA defines the level of service and performance expected from a provider, how that performance will be measured, and what enforcement mechanisms will be used to ensure the specified performance levels are achieved. GAO identified ten key practices to be included in an SLA, such as identifying the roles and responsibilities of major stakeholders, defining performance objectives, and specifying security metrics. The key practices, if properly implemented, can help agencies ensure services are performed effectively, efficiently, and securely. Under the direction of the Office of Management and Budget (OMB), guidance issued to agencies in February 2012 included seven of the ten key practices described in this report that could help agencies ensure the effectiveness of their cloud services contracts. GAO determined that the five agencies and the 21 cloud service contracts it reviewed had included a majority of the ten key practices. Specifically, of the 21 cloud service contracts reviewed from the Departments of Defense, Health and Human Services, Homeland Security, Treasury, and Veterans Affairs, 7 had fulfilled all 10 of the key practices, as illustrated in the figure. The remaining 13 contracts had incorporated 5 or more of the 10 key practices and 1 had not included any practices. Agency officials gave several reasons for why they did not include all elements of the key practices into their cloud service contracts, including that guidance directing the use of such practices had not been created when the cloud services were acquired. Unless agencies fully implement SLA key practices into their SLAs, they may not be able to adequately measure the performance of the services, and, therefore, may not be able to effectively hold the contractors accountable when performance falls short. What GAO Recommends GAO recommends that OMB include all ten key practices in future guidance to agencies and that Defense, Health and Human Services, Homeland Security, Treasury, and Veterans Affairs implement SLA guidance and incorporate applicable key practices into their SLAs. In commenting on a draft of this report, OMB and one agency had no comment, the remaining four agencies concurred with GAO's recommendations.
<1. Background> Driving is a complex task that depends on visual, cognitive, and physical functions that enable a person to see traffic and road conditions; recognize what is seen, process the information, and decide how to physically act to control the vehicle. Although the aging process affects people at different rates and in different ways, functional declines associated with aging can affect driving ability. For example, vision declines may reduce the ability to see other vehicles, traffic signals, signs, lane markings, and pedestrians; cognitive declines may reduce the ability to recognize traffic conditions, remember destinations, and make appropriate decisions in operating the vehicle; and physical declines may reduce the ability to perform movements required to control the vehicle. A particular concern is older drivers with dementia, often as a result of illnesses such as Alzheimer s disease. Dementia impairs cognitive and sensory functions causing disorientation, potentially leading to dangerous driving practices. Age is the most significant risk factor for developing dementia approximately 12 percent of those aged 65 to 84 are likely to develop the condition while over 47 percent of those aged 85 and older are likely to be afflicted. For drivers with the condition, the risk of being involved in a crash is two to eight times greater than for those with no cognitive impairment. However, some drivers with dementia, particularly in the early stages, may still be capable of driving safely. Older drivers experience fewer fatal crashes per licensed driver compared with drivers in younger age groups; however, on the basis of miles driven, older drivers have a comparatively higher involvement in fatal crashes. Over the past decade, the rate of older driver involvement in fatal crashes, measured on the basis of licensed drivers, has decreased and, overall, older drivers have a lower rate of fatal crashes than drivers in younger age groups (see fig. 1). Older drivers fatal crash rate per licensed driver is lower than corresponding rates for drivers in younger age groups, in part, because older drivers drive fewer miles per year than younger drivers, may hold licenses even though they no longer drive, and may avoid driving during times and under conditions when crashes tend to occur, such as during rush hour or at night. However, on the basis of miles traveled, older drivers who are involved in a crash are more likely to suffer fatal injuries than are drivers in younger age groups who are involved in crashes. As shown in figure 2, drivers aged 65 to 74 are more likely to be involved in a fatal crash than all but the youngest drivers (aged 16 to 24), and drivers aged 75 and older are more likely than drivers in all other age groups to be involved in a fatal crash. Older drivers will be increasingly exposed to crash risks because older adults are the fastest-growing segment of the U.S. population, and future generations of older drivers are expected to drive more miles per year and at older ages compared with the current older-driver cohort. The U.S. Census Bureau projects that the population of adults aged 65 and older will more than double, from 35.1 million people (12.4 percent of total population) in 2000 to 86.7 million people (20.7 percent of total population) in 2050 (see fig. 3). Intersections pose a particular safety problem for older drivers. Navigating through intersections requires the ability to make rapid decisions, react quickly, and accurately judge speed and distance. 
As these abilities can diminish through aging, older drivers have more difficulties at intersections and are more likely to be involved in a fatal crash at these locations. Research shows that 37 percent of traffic-related fatalities involving drivers aged 65 and older occur at intersections, compared with 18 percent for drivers aged 26 to 64. Figure 4 illustrates how fatalities at intersections represent an increasing proportion of all traffic fatalities as drivers age. DOT, through FHWA and NHTSA, has a role in promoting older driver safety, although states are directly responsible for operating their roadways and establishing driver licensing requirements. FHWA focuses on roadway engineering and has established guidelines for designers to use in developing engineering enhancements to roadways to accommodate the declining functional capabilities of older drivers. NHTSA focuses on reducing traffic-related injuries and fatalities among older people by promoting, in conjunction with nongovernmental organizations, research, education, and programs aimed at identifying older drivers with functional limitations that impair driving performance. NHTSA has developed several guides, brochures, and booklets for use by the medical community, law enforcement officials, older drivers' family members, and older drivers themselves that provide guidance on what actions can be taken to improve older drivers' capabilities or to compensate for lost capabilities. Additionally, NIA supports research related to older driver safety through administering grants designed to examine, among other issues, how impairments in sensory and cognitive functions affect driving ability. These federal initiatives support state efforts to make roads safer for older drivers and establish assessment practices to evaluate the fitness of older drivers. The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), signed into law in August 2005, establishes a framework for federal investment in transportation and has specific provisions for older driver safety. SAFETEA-LU authorizes $193.1 billion in Federal-Aid Highway Program funds to be distributed through FHWA for states to implement road preservation, improvement, and construction projects, some of which may include improvements for older drivers. SAFETEA-LU also directs DOT to carry out a program to improve traffic signs and pavement markings to accommodate older drivers. To fulfill these requirements, FHWA has updated or plans to update its guidebooks on highway design for older drivers, plans to conduct workshops on designing roads for older drivers that will be available to state practitioners, and has added a senior mobility series to its bimonthly magazine that highlights advances and innovations in highway/traffic research and technology. Additionally, SAFETEA-LU authorizes NHTSA to spend $1.7 million per year (during fiscal years 2006 through 2009) in establishing a comprehensive research and demonstration program to improve traffic safety for older drivers. <2. FHWA Has Recommended Practices and Made Funding Available to Make Roads Safer for Older Drivers, but States Generally Give Higher Priority to Other Safety Issues> FHWA has recommended practices for designing and operating roadways to make them safer for older drivers and administers SAFETEA-LU funds that states, which own and operate most roadways under state or local government authority, may use for road maintenance or construction projects to improve roads for older drivers.
To varying degrees, states are implementing FHWA's older driver practices and developing plans and programs that consider older drivers' needs. However, responses to our survey indicated that other safety issues, such as railway-highway crossings and roadside hazard elimination, are of greater concern to states, and states generally place a higher priority on projects that address these issues than on projects targeted only toward older drivers. <2.1. FHWA Has Recommended Road Design and Operating Practices and Funds Programs to Improve Older Driver Safety> FHWA has issued guidelines and recommendations to states on practices that are intended to make roads safer for older drivers, such as the Highway Design Handbook for Older Drivers and Pedestrians. The practices emphasize cost-effective construction and maintenance measures involving both the physical layout of the roadway and the use of traffic control devices such as signs, pavement markings, and traffic signals. The practices are specifically designed to improve conditions at sites known to be unsafe for older drivers: intersections, interchanges, curved roads, construction work zones, and railroad crossings. While these practices are designed to address older drivers' needs, implementing them can make roads safer for all drivers. Intersections. Recognizing that intersections are particularly problematic for older drivers, FHWA's top priority in its Highway Design Handbook for Older Drivers and Pedestrians is intersection improvements. Practices to improve older drivers' ability to navigate intersections include using bigger signs with larger lettering to identify street names, placing lane use signs and arrow pavement markings consistently, aligning lanes to improve drivers' ability to see oncoming traffic, and using reflective markers on medians and island curbs at intersections to make them easier to see at night. See figures 5 through 8 for these and additional intersection improvement practices. Interchanges. Practices to aid older drivers at interchanges include using signs and pavement markings to better identify right and wrong directions of travel and configuring on-ramps to provide a longer distance for accelerating and merging into traffic. See figure 9 for these and additional interchange improvement practices. Road curves. Practices to assist older drivers on curves include using signs and reflective markers, especially on tight curves, to clearly delineate the path of the road. See figure 10 for these and additional curve improvement practices. Construction work zones. Practices to improve older driver safety in construction work zones include increasing the length of time messages are visible on changeable message signs; providing easily discernible barriers between opposing traffic lanes in crossovers; using properly sized devices (cones and drums) to delineate temporary lanes; and installing temporary reflective pavement markers to make lanes easier to navigate at night. Railroad crossings. Practices to help older drivers are aimed at making the railroad crossing more conspicuous by using reflective materials on the front and back of railroad crossing signs and delineating the approach to the crossing with reflective posts. See figure 11 for these and additional railroad crossing improvement practices. FHWA is continuing to research and develop practices to make roads safer for older drivers.
FHWA also promotes the implementation of these practices by sponsoring studies and demonstration projects, updating its Highway Design Handbook for Older Drivers and Pedestrians, and training state and local transportation officials. For example, FHWA is supporting a research study, to be conducted over the next 3 to 5 years, on the effectiveness of selected low-cost road improvements in reducing the number and severity of crashes for all drivers. With the findings of this and other studies, FHWA plans to update its guidelines to refine existing practices or recommend new ones for improving older driver safety. In addition, FHWA is considering changes to its Manual on Uniform Traffic Control Devices (MUTCD), to be published in 2009, that will enhance older driver safety by updating standards related to sign legibility and traffic signal visibility. Under SAFETEA-LU, FHWA provides funding that states may use to implement highway maintenance or construction projects that can enhance older driver safety. However, because projects to enhance older driver safety can be developed under several different SAFETEA-LU programs, it is difficult to determine the amount of federal funding dedicated to highway improvements for older drivers. While older driver safety is generally not the primary focus of projects funded through SAFETEA-LU programs, improvements made to roads may incorporate elements of FHWA's older driver safety practices. For example, under SAFETEA-LU's Highway Safety Improvement Program (HSIP), states submit a Strategic Highway Safety Plan (SHSP) after reviewing crash and other data and determining what areas need to be emphasized when making safety improvements. If older driver safety is found to be an area of emphasis, a state may develop projects to be funded under the HSIP that provide, for example, improved traffic signs, pavement markings, and road layouts consistent with practices listed in FHWA's Highway Design Handbook for Older Drivers and Pedestrians. <2.2. Some States Have Implemented FHWA's Recommended Practices and Considered Older Drivers in Highway Safety Plans and Programs, but Other Safety Issues Generally Receive Greater Priority> State DOTs have, to varying degrees, incorporated FHWA's older driver safety practices into their design standards; implemented the practices in construction, operations, and maintenance activities; trained technical staff in applying the practices; and coordinated with local agencies to promote the use of the practices. The states' responses to our survey indicate the range of states' efforts. Design standards. Nearly half of the states have incorporated about half or more of FHWA's practices into their design standards, as follows: 24 state DOTs reported including about half, most, almost all, or all of the recommendations; 20 reported including some of the recommendations; and 6 reported including few or none of the recommendations. Construction, operations, and maintenance activities. Even though most state DOTs have not incorporated all FHWA practices into their design standards, the majority of states have implemented some FHWA practices in construction, operations, and maintenance activities, particularly in the areas of intersections and work zones (see table 1). Training. Nearly one-fourth of state DOTs have provided training on FHWA practices to half or more of their technical staff, as follows: 12 state DOTs reported having trained about half, most, almost all, or all of their technical staff; 32 have trained some of their technical staff;
and 7 have trained few or none of their technical staff. Coordination with local agencies. Because state transportation agencies do not own local roads, which may account for the majority of roads in a state, coordination with local governments is important in promoting older driver safety in the design, operation, and maintenance of local roads. The states reported using a variety of methods in their work with local governments to improve older driver safety (see table 2). States also varied in their efforts to consult stakeholders on older driver issues in developing highway safety plans (defined in the state SHSP) and lists of projects in their Statewide Transportation Improvement Programs (STIP). According to our survey, 27 of the 51 state DOTs have established older driver safety as a component of their SHSPs, and our survey indicated that, in developing their SHSPs, these states were more likely to consult with stakeholders concerned about older driver safety than were states that did not include an older driver component in their plans. Obtaining input from stakeholders concerned about older driver safety, from both governmental and nongovernmental organizations, is important because they can contribute additional information, and can sometimes provide resources, to address older driver safety issues. For example, the Michigan State Safety Commission identified elderly mobility as an emerging issue and, in February 1998, funded the Southeast Michigan Council of Governments (SEMCOG) to convene a statewide, interdisciplinary Elderly Mobility and Safety Task Force. SEMCOG coordinated with various stakeholder groups (Michigan DOT, Michigan Department of State, Michigan Office of Highway Safety Planning, Michigan Department of Community Health, Office of Services to the Aging, University of Michigan Transportation Research Institute, agencies on aging, and AAA Michigan, among others) in developing a statewide plan to address older driver safety and mobility issues. This plan, which outlines recommendations in the areas of traffic engineering, alternative transportation, housing and land use, health and medicine, licensing, and education and awareness, forms the basis for the strategy defined in Michigan's SHSP to address older drivers' mobility and safety. Even though 27 state DOTs have reported establishing older driver safety as a component of their SHSPs, only 4 state DOTs reported including older driver safety improvement projects in their fiscal year 2007 STIPs. However, state STIPs may contain projects that will benefit older drivers. For example, 49 state DOTs reported including funding for intersection improvements in their STIPs. Because drivers are increasingly likely to be involved in an intersection crash as they age, older drivers, in particular, should benefit from states' investments in intersection safety projects, which generally provide improved signage, traffic signals, turning lanes, and other features consistent with FHWA's older driver safety practices. Although older driver safety could become a more pressing need in the future as the population of older drivers increases, states are applying their resources to areas that pose greater safety concerns.
In response to a question in our survey about the extent to which resources (defined to include staff hours and funds spent on research, professional services, and construction contracts) were invested in different types of safety projects, many state DOTs indicated that they apply resources to a great or very great extent to safety projects other than those concerning older driver safety (see table 3). Survey responses indicated that resource constraints are a significant factor limiting states' implementation of FHWA's older driver safety practices and their development of strategic plans and programs that consider older driver concerns. <3. More than Half of States Have Implemented Some Assessment Practices for Older Drivers, and NHTSA Is Sponsoring Research to Develop More Comprehensive Assessments> More than half of state licensing agencies have implemented assessment practices to support licensing requirements for older drivers that are more stringent than requirements for younger drivers. These requirements, established under state licensing procedures, generally involve more frequent renewals (16 states), mandatory vision screening (10 states), in-person renewals (5 states), and mandatory road tests (2 states). However, in no state is the assessment of driver fitness comprehensive, because cognitive and physical functions are generally not evaluated to the same extent as visual function. Furthermore, the effectiveness of the assessment practices used by states is largely unknown. Recognizing the need for better assessment tools, NHTSA is developing more comprehensive practices to assess driver fitness and intends to provide technical assistance to states in implementing these practices. <3.1. Over Half of the States Have More Stringent Licensing Requirements for Older Drivers, but Assessment Practices Are Not Comprehensive> Over half of the states have procedures that establish licensing requirements for older drivers that are more stringent than requirements for younger drivers. These requirements generally include more frequent license renewal, mandatory vision screening, in-person renewals, and mandatory road tests. In addition, states may also consider input from medical advisory boards, physician reports, and third-party referrals in assessing driver fitness and making licensing decisions. (See fig. 12 and app. II for additional details.) Accelerated renewal. Sixteen states have accelerated renewal cycles for older drivers that require drivers older than a specific age to renew their licenses more frequently. Colorado, for example, normally requires drivers to renew their licenses every 10 years, but drivers aged 61 and older must renew their licenses every 5 years. Vision screening. Ten states require older drivers to undergo vision assessments, conducted by either the Department of Motor Vehicles or their doctor, as part of the license renewal process. These assessments generally test for visual acuity, or sharpness of vision. The average age at which mandatory vision screening begins is 62, with some states beginning this screening as early as age 40 (Maine and Maryland) and others beginning as late as age 80 (Florida and Virginia). In-person renewal. Five states (Alaska, Arizona, California, Colorado, and Louisiana) that otherwise allow license renewal by mail require older drivers to renew their licenses in person. Arizona, California, and Louisiana do not permit mail renewal for drivers aged 70 and older.
Alaska does not allow mail renewal for drivers aged 69 and older, while Colorado requires in-person renewal for those over age 61. Road test. Two states, New Hampshire and Illinois, require older drivers to pass road examinations upon reaching age 75 and at all subsequent renewals. In addition, states have adopted other practices to assist licensing agencies in assessing driver fitness and identifying older drivers whose driving fitness may need to be reevaluated. Medical Advisory Boards. Thirty-five states and the District of Columbia rely on Medical Advisory Boards (MAB) to assist licensing agencies in evaluating people with medical conditions or functional limitations that may affect their ability to drive. A MAB may be organizationally placed within a state's transportation, public safety, or motor vehicle department. Board members (practicing physicians or health care professionals) are typically nominated or appointed by the state medical association, motor vehicle administrator, or governor's office. Some MABs review individual cases, typically compiled by case workers who collect and review medical and other evidence, such as accident reports, that is used to make a determination about a person's fitness to drive. The volume of cases reviewed by MABs varies greatly across states. For example, seven state MABs review more than 1,000 cases annually, while another seven MABs review fewer than 10 cases annually. Physician reports. While all states accept reports of potentially unsafe drivers from physicians, nine states require physicians to report physical conditions that might impair driving skills. For example, California specifically requires doctors to report a diagnosis of Alzheimer's disease or related disorders, including dementia, while Delaware, New Jersey, and Nevada require physicians to report cases of epilepsy and those involving a person's loss of consciousness. However, not all states assure physicians that such reports will be kept confidential, so physicians may choose not to report patients if they fear retribution in the form of a lawsuit or the loss of the patient's business. Third-party referrals. In addition to reports from physicians, all states accept third-party referrals of concerns about drivers of any age. Upon receipt of the referral, the licensing agency may choose to contact the driver in question to assess the person's fitness to drive. A recent survey of state licensing agencies found that nearly three-fourths of all referrals came from law enforcement officials (37 percent) and physicians or other medical professionals (35 percent). About 13 percent of all referrals came from drivers' families or friends, and 15 percent came from crash and violation record checks, courts, self-reports, and other sources. However, the assessment practices that state licensing agencies use to evaluate driver fitness are not comprehensive. For example, our review of state assessment practices indicates that all states screen for vision, but we did not find a state with screening tools to evaluate physical and cognitive functions. Furthermore, the validity of the assessment practices used by states is largely unknown. While research indicates that in-person license renewal is associated with lower crash rates, particularly for those aged 85 and older, other assessment practices, such as vision screening, road tests, and more frequent license renewal cycles, are not always associated with lower older driver fatality rates.
According to NHTSA, there is insufficient evidence on the validity and reliability of any driving assessment or screening tool. Thus, states may have difficulty discerning which tools to implement. <3.2. NHTSA Is Developing More Comprehensive Practices to Assess Driver Fitness> NHTSA, supported by the NIA and by partner nongovernmental organizations, has promoted research and development of mechanisms to assist licensing agencies and other stakeholders (medical providers, law enforcement officers, social service providers, and family members) in better identifying medically at-risk individuals; assessing their driving fitness through a comprehensive evaluation of visual, physical, and cognitive functions; and enabling their driving for as long as safely possible. In the case of older drivers, NHTSA recognizes that only a fraction of older drivers are at increased risk of being involved in an accident, and it focuses its efforts on providing appropriate research-based materials and information to the broad range of stakeholders who can identify and influence the behavior of at-risk drivers. Initiatives undertaken by NHTSA and its partner organizations include: Model Driver Screening and Evaluation Program. Initially developed by NHTSA in partnership with the American Association of Motor Vehicle Administrators (AAMVA), and supported with researchers funded by NIA, the program provides a framework for driver referral, screening assessment, counseling, and licensing actions. The guidance is based on research that relates an individual's functional abilities to driving performance and reflects the results of a comprehensive research project carried out in cooperation with the Maryland Motor Vehicle Administration. Recent research supported under this program and with NIA grants evaluated a range of screenings related to visual, physical, and cognitive functions that could be completed at a licensing agency and may effectively identify drivers at an increased risk of being involved in a crash. Physician's Guide to Assessing and Counseling Older Drivers. Developed by the American Medical Association to raise awareness among physicians, the guide cites relevant literature and expert views (as of May 2003) to assist physicians in judging patients' fitness to drive. The guide is based on NHTSA's earlier work with the Association for the Advancement of Automotive Medicine. This work, a detailed literature review, summarized knowledge about various categories of medical conditions, their prevalence, and their potential impact on driving ability. Countermeasures That Work: A Highway Safety Countermeasure Guide for State Highway Safety Offices. Developed with the Governors Highway Safety Association, this publication describes current initiatives in the areas of communications and outreach, licensing, and law enforcement (and the associated effectiveness, use, cost, and time required for implementation) that state agencies might consider for improving older driver safety. NHTSA Web site. NHTSA maintains an older driver Web site with content for drivers, caregivers, licensing administrators, and other stakeholders to help older drivers remain safe. NIA research. NIA is supporting research on several fronts in studying risk factors for older drivers and in developing new tools for driver training and driver fitness assessment. A computer-based training tool is being developed to help older drivers improve the speed with which they process visual information.
This tool is a self-administered, interactive variation of validated training techniques that have been shown to improve visual processing speed. The tool is being designed as a cost-effective mechanism that can be broadly implemented, at social service organizations, for example, and made accessible to older drivers. Driving simulators are being studied as a means of testing driving ability and retraining drivers in a manner that is more reliable and consistent than on-road testing. Virtual reality driving simulation is a potentially viable means of testing that could more accurately identify cognitive and motor impairments than on-road tests, which are comparatively less safe and more subjective. Research is ongoing to evaluate the impacts of hearing loss on cognitive functions in situations, such as driving, that require multitasking. Results of the research may provide insights into what level of auditory processing is needed for safe driving and may lead to the development of future auditory screening tools. Studies that combine a battery of cognitive function and road/driving simulator tests are being conducted to learn how age-related changes lead to hazardous driving. Results of these studies may prove useful in developing screening tests to identify functionally impaired drivers, particularly those with dementia, who are at risk of being involved in a crash and may be unfit to drive. NHTSA is also developing guidelines to assist states in implementing assessment practices. To date, NHTSA's research and model programs have had limited impact on state licensing practices. For example, according to NHTSA, no state has implemented the guidelines outlined in its Model Driver Screening and Evaluation Program. Furthermore, there is insufficient evidence on the validity and reliability of driving assessments, so states may have difficulty discerning which assessments to implement. To assist states in implementing assessment practices, NHTSA, as authorized under SAFETEA-LU section 2017, developed a plan to, among other things, (1) provide information and guidelines to people (medical providers, licensing personnel, law enforcement officers) who can influence older drivers and (2) improve the scientific basis for licensing decisions. In its plan, NHTSA notes that the most important work on older driver safety that needs to occur in the next 5 years is refining screening and assessment tools and getting them into the hands of the users who need them. As an element of its plan, NHTSA is cooperating with AAMVA to create a Medical Review Task Force that will identify areas where standards of practice to assess the driving of at-risk individuals are possible and develop strategies for implementing guidelines that states can use in choosing which practices to adopt. The task force will, in areas such as vision and cognition, define existing practices used by states and identify gaps in research to encourage consensus on standards. NHTSA officials said that work is currently under way to develop neurological guidelines, which will cover issues related to cognitive assessments, and they anticipate that the task force will report its findings in 2008. <4. Selected States Have Implemented Coordinating Groups and Other Initiatives to Promote Older Driver Safety>
Of the six states we visited, five (California, Florida, Iowa, Maryland, and Michigan) have active multidisciplinary coordination groups, which may include government, medical, academic, and social service representatives, among others, to develop strategies and implement efforts to improve older driver safety. Each of these states identified its coordination group as a key initiative in improving older driver safety. As shown in table 4, the coordinating groups originated in different ways and vary in size and structure. For example, Florida's At-Risk Driver Council was formally established under state legislation, while Maryland's group functions on an ad hoc basis with no statutory authority. The approaches taken by these groups in addressing older driver safety issues vary as well. For example, California's large task force broadly reaches several state agencies and partner organizations, and the task force leaders oversee the activity of eight work groups in implementing multiple action items to improve older driver safety. In contrast, Iowa's Older Driver Target Area Team is a smaller group that operates through informal partnerships among member agencies and is currently providing consulting services to the Iowa Department of Transportation on the implementation of older driver strategies identified in Iowa's Comprehensive Highway Safety Plan. Members of the coordination groups we spoke with said that their state could benefit from information about other states' practices. For example, coordinating group members told us that sharing information about leading road design and licensing practices, legislative initiatives, research efforts, and model training programs that affect older drivers could support decisions about whether to implement new practices. Furthermore, group members said that identifying the research basis for practices could help them assess the benefits to be derived from implementing a particular practice. While some mechanisms exist to facilitate information exchanges on some topics, such as driver fitness assessment and licensing through AAMVA's Web site, there is no mechanism for states to share information on the broad range of efforts related to older driver safety. In addition to coordinating groups, the six states have ongoing efforts to improve older driver safety in the areas of strategic planning, education and awareness, licensing and driver fitness assessment, engineering, and data analysis. The following examples highlight specific initiatives and leading practices in each of these categories. Strategic planning. Planning documents establish recommended actions and provide guidance to stakeholders on ways to improve older driver safety.
The Michigan Senior Mobility Action Plan, issued in November 2006, builds upon the state's 1999 plan (Elderly Mobility & Safety: The Michigan Approach) and outlines additional strategies, discusses accomplishments, and sets action plans, in the areas of planning, research, education and awareness, engineering countermeasures, alternative transportation, housing and land use, and licensing, designed to (1) reduce the number and severity of crashes involving older drivers and pedestrians, (2) increase the scope and effectiveness of alternative transportation options available to older people, (3) assist older people in maintaining mobility safely for as long as possible, and (4) plan for a day when driving may no longer be possible. In implementing this plan, officials are exploring the development of a community-based resource center that seniors can use to find information on mobility at a local level. Traffic Safety among Older Adults: Recommendations for California, developed through a grant from California's Office of Traffic Safety and published in August 2002, offers a comprehensive set of recommendations and provides guidance to help agencies and communities reduce traffic-related injuries and fatalities among older adults. The Older Californian Traffic Safety Task Force was subsequently established to coordinate the implementation of the report's recommendations. Education/awareness. Education and public awareness initiatives enable outreach to stakeholders interested in promoting older driver safety. Florida GrandDriver, based on a program developed by AAMVA, takes a multifaceted approach to public outreach through actions such as providing Web-based information related to driver safety courses and alternative transportation; training medical, social service, and transportation professionals; offering safety talks at senior centers; and sponsoring CarFit events. According to the Florida Department of Highway Safety and Motor Vehicles, a total of 75 training programs and outreach events were conducted under the GrandDriver program between 2000 and 2006. California, through its Older Californian Traffic Safety Task Force, annually holds a Senior Safe Mobility Summit that brings subject-matter experts and recognized leaders together to discuss issues and heighten public understanding of the long-term commitments needed to help older adults drive safely longer. Assessment/licensing. Assessment and licensing initiatives are concerned with developing better means for stakeholders (license administrators, medical professionals, law enforcement officers, and family members) to determine driver fitness and provide remedial assistance to help older people remain safe while driving. California's Department of Motor Vehicles is continuing to develop a progressive three-tier system for determining drivers' wellness through nondriving assessments in the first two tiers and estimating driving fitness in a third-tier road test designed to assess the driver's ability to compensate for driving-relevant functional limitations identified in the first two tiers. The system, currently being tested at limited locations, is being developed to keep people driving safely for as long as possible by providing a basis for a conditional licensing program that can aid drivers in improving their driving-relevant functioning and in adequately compensating for their limitations.
Oregon requires physicians and other designated medical providers to report drivers with severe and uncontrollable cognitive or functional impairments that affect the person's ability to drive safely. Oregon Driver and Motor Vehicle Services (ODMVS) evaluates each report and determines if immediate suspension of driving privileges is necessary. A person whose driving privileges have been suspended needs to obtain medical clearance and pass ODMVS vision, knowledge, and road tests in order to have his or her driving privileges reinstated. In cases where driving privileges are not immediately suspended, people will normally be given between 30 and 60 days to pass ODMVS tests or provide medical evidence indicating that the reported condition does not present a risk to their safe driving. Maryland was the first state to establish a Medical Advisory Board (MAB); created by state legislation in 1947, it is currently one of the most active boards in the United States. Maryland's MAB manages approximately 6,000 cases per year, most involving older drivers. Drivers are referred from a number of sources (including physicians, law enforcement officers, friends, and relatives), and the MAB reviews screening results, physician reports, and driving records, among other information, to determine driving fitness. The MAB's opinion is then considered by Maryland's Motor Vehicle Administration in making licensing decisions. The Iowa Department of Motor Vehicles can issue older drivers restricted licenses that limit driving to daylight hours, specific geographic areas, or low-speed roads. Restricted licensing, also referred to as graduated de-licensing, seeks to preserve the driver's mobility while protecting the health of the driver, passengers, and others on the road by limiting driving to low-risk situations. About 9,000 older drivers in Iowa have restricted licenses. Iowa license examiners may travel to test older drivers in their home towns, where they feel most comfortable driving. Engineering. Road design elements, such as those recommended by FHWA, are implemented to provide a driving environment that accommodates older drivers' needs. A demonstration program in Michigan, funded through state, county, and local government agencies, along with AAA Michigan, made low-cost improvements at over 300 high-risk, urban, signalized intersections in the Detroit area. An evaluation of 30 of these intersections indicated that the injury rate for older drivers was reduced by more than twice as much as the rate for drivers aged 25 to 64. The next phase of the program is the development of a municipal tool kit for intersection safety, for use by municipal leaders and planners, to provide a template for implementing needed changes within their jurisdictions. The Iowa Department of Transportation (IDOT) has undertaken several initiatives in road operations, maintenance, and new construction to enhance the driving environment for older drivers.
Among its several initiatives, IDOT is using more durable pavement markings on selected roads and servicing all pavement markings on a performance-based schedule to maintain their brightness; adding paved shoulders with the edge line painted in a shoulder rumble strip to increase visibility and alert drivers when their vehicles stray from the travel lane; converting 4-lane undivided roads to 3-lane roads with a dedicated left-turn lane to simplify turning movements; encouraging the use of more dedicated left-turn indications (arrows) on traffic signals on high-speed roads; installing larger street name signs; replacing warning signs with ones that have a fluorescent yellow background to increase visibility; converting to Clearview fonts on Interstate signs for increased sign legibility; demonstrating older driver and pedestrian-friendly enhancements on a roadway corridor in Des Moines; and promoting local implementation of roadway improvements to benefit older drivers by providing training to city and county engineers and planners. The Transportation Safety Work Group of the Older Californian Traffic Safety Task Force provided engineering support in updating California's highway design and traffic control manuals to incorporate FHWA's recommended practices for making travel safer and easier for older drivers. Technical experts from the work group coordinated with the Caltrans design office in reviewing the Caltrans Highway Design Manual and updating elements related to older driver safety. Additionally, the work group managed an expedited process to have the California Traffic Control Devices Committee consider and approve modifications to signing and pavement marking standards in the California Manual on Uniform Traffic Control Devices that benefit older drivers. Data analysis. Developing tools to accurately capture accident data enables trends to be identified and resources to be directed to remediating problems. Iowa has a comprehensive data system that connects information from multiple sources, including law enforcement records (crash reports, traffic citations, truck inspection records) and driver license and registration databases, and can be easily accessed. For example, the system allows law enforcement officers to electronically access a person's driving record and license information at a crash scene and to enter their crash reports into the data system on-scene. Data captured through this process, including the location of all crashes, are less prone to error and can be geographically referenced to identify safety issues. In the case of older driver safety, several universities are utilizing Iowa crash data in research efforts. For example, University of Northern Iowa researchers used crash data and geospatial analysis to demonstrate how older driver crash locations could be identified and how roadway elements could subsequently be modified to improve safety for older drivers. University of Iowa researchers have used the data in behavioral research to study the actions of older drivers and learn where changes in roadway geometrics, signing, or other roadway elements could assist older drivers with their driving tasks. Also, Iowa State University's Center for Transportation Research and Education (CTRE) has used the data to study a number of older driver crash characteristics and supports other older driver data analysis research projects with the Iowa Traffic Safety Data Service.
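To make the kind of data linkage described above more concrete, the sketch below shows one hypothetical way crash records could be joined to driver license data and grouped by location to flag sites with frequent older-driver crashes. The file layout, field names, and thresholds are illustrative assumptions, not details of Iowa's actual system.

```python
import csv
from collections import Counter

OLDER_DRIVER_AGE = 65  # illustrative threshold; not a program parameter

def load_licenses(path):
    """Build a hypothetical driver_id -> age lookup from a license extract."""
    with open(path, newline="") as f:
        return {row["driver_id"]: int(row["age"]) for row in csv.DictReader(f)}

def older_driver_crash_sites(crash_path, license_path, min_crashes=3):
    """Count crashes involving older drivers at each geocoded location
    and return the sites that meet a review threshold."""
    ages = load_licenses(license_path)
    counts = Counter()
    with open(crash_path, newline="") as f:
        for row in csv.DictReader(f):
            age = ages.get(row["driver_id"])
            if age is not None and age >= OLDER_DRIVER_AGE:
                # latitude/longitude fields are assumed to be present in the crash report
                counts[(row["latitude"], row["longitude"])] += 1
    return {site: n for site, n in counts.items() if n >= min_crashes}

# Example call with hypothetical files:
# hotspots = older_driver_crash_sites("crashes.csv", "licenses.csv")
```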
Florida is developing a Mature Driver Database (MDDB) that will collect several types of data (vision renewal data, crash data, and medical review data) to be accessible through the Department of Highway Safety and Motor Vehicles (DHSMV) Web site. According to DHSMV officials, this database is intended to be used across agencies to facilitate strategic planning. DHSMV may use the database, for example, to track driver performance on screenings and analyze the effectiveness of screening methods. Planned MDDB enhancements include providing links to additional data sources, such as census and insurance databases. <5. Conclusion> Older driver safety is not a high-priority issue in most states and, therefore, receives fewer resources than other safety concerns. However, the aging of the American population suggests that older driver safety issues will become more prominent in the future. Some states, with federal support, have adopted practices to improve the driving environment for older road users and have implemented assessment practices to support licensing requirements for older drivers that are more stringent than requirements for younger drivers. However, information on the effectiveness of these practices is limited, and states have been reluctant to commit resources to initiatives whose effectiveness has not been clearly demonstrated. Some states have also implemented additional initiatives to improve older driver safety, such as establishing coordination groups involving a broad range of stakeholders and developing initiatives in the areas of strategic planning, education and outreach, assessment and licensing practices, engineering, and data analysis. NHTSA and FHWA also have important roles to play in promoting older driver safety, including conducting and supporting research on standards for the driving environment and on driver fitness assessment. While states hold differing views on the importance of older driver safety and have adopted varying practices to address older driver safety issues, it is clear that there are steps states can take to prepare for the anticipated increase in the older driver population and simultaneously improve safety for all drivers. However, state resources are limited, so information on other states' initiatives or on federal efforts to develop standards for the driving environment and driver fitness assessment practices could assist states in implementing improvements for older driver safety. <6. Recommendation for Executive Action> To help states prepare for the substantial increase in the number of older drivers in the coming years, we recommend that the Secretary of Transportation direct the FHWA and NHTSA Administrators to implement a mechanism that would allow states to share information on leading practices for enhancing the safety of older drivers. This mechanism could also include information on other initiatives and guidance, such as FHWA's research on the effectiveness of road design practices and NHTSA's research on the effectiveness of driver fitness assessment practices. <7. Agency Comments and Our Evaluation> We provided a draft of this report to the Department of Health and Human Services and to the Department of Transportation for review and comment. The Department of Health and Human Services agreed with the report and offered technical suggestions, which we have incorporated as appropriate. (See app. III for the Department of Health and Human Services' written comments.)
The Department of Transportation did not offer overall comments on the report or its recommendation. The department did offer several technical comments, which we incorporated where appropriate. We are sending copies of this report to interested congressional committees. We are also sending copies of this report to the Secretary of Transportation and the Secretary of Health and Human Services. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Objectives, Scope, and Methodology. This report addresses (1) what the federal government has done to promote practices to make roads safer for older drivers and the extent to which states have implemented those practices, (2) the extent to which states assess the fitness of older drivers and what support the federal government has provided, and (3) what initiatives selected states have implemented to improve the safety of older drivers. To determine what the federal government has done to promote practices to make roads safer for older drivers, we interviewed officials from the Federal Highway Administration (FHWA) within the U.S. Department of Transportation (DOT) and the American Association of State Highway and Transportation Officials (AASHTO) and reviewed manuals and other documentation to determine what road design standards and guidelines have been established, the basis for their establishment, and how they have been promoted. We also reviewed research and interviewed a representative of the National Cooperative Highway Research Program (NCHRP) to gain perspective on federal initiatives to improve the driving environment for older drivers. Finally, to determine trends in accidents involving older drivers, we reviewed and analyzed crash data from the U.S. DOT's Fatality Analysis Reporting System database and General Estimates System database. To obtain information on the extent to which states are implementing these practices, we surveyed and received responses from DOTs in each of the 50 states and the District of Columbia. We consulted with NCHRP, FHWA, and AASHTO in developing the survey. The survey was conducted from the end of September 2006 through mid-January 2007. During this time period, we sent two waves of follow-up questionnaires to nonrespondents in addition to the initial mailing. We also made phone calls and sent e-mails to a few states to remind them to return the questionnaire. We surveyed state DOTs to learn the extent to which they have incorporated federal government recommendations on road design elements into their own design guides and implemented selected recommendations in their construction, operations, and maintenance activities. We also identified reasons that state DOTs rejected recommendations and determined the proportion of practitioners trained in each state to implement the recommendations. In addition, we asked state DOTs to evaluate the extent to which they have developed plans (defined in Strategic Highway Safety Plans) and programmed projects (listed in Statewide Transportation Improvement Programs) for older driver safety as provided for by SAFETEA-LU legislation.
Before fielding the questionnaire, we reviewed the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) and prior highway legislation to identify the framework for states to develop and implement older driver safety programs. Additionally, we conducted separate in-person pretests with officials from three state DOTs and revised our instrument as a result of the information obtained during those pretests. We took steps in developing the questionnaire and in collecting and analyzing the data to minimize errors that could occur during those stages of the survey process. A copy of the questionnaire and detailed survey results are available at www.gao.gov/cgi-bin/getrpt?GAO-07-517SP. To determine the extent to which states assess the fitness of older drivers and what support the federal government has provided, we interviewed officials and reviewed relevant documents from the National Highway Traffic Safety Administration within the U.S. DOT; the National Institute on Aging and the Administration on Aging within the U.S. Department of Health and Human Services; and the American Association of Motor Vehicle Administrators (AAMVA), a nongovernmental organization that represents state driver licensing agencies. We determined the extent to which the guidelines and model programs of these agencies addressed the visual, physical, and cognitive deficits that may afflict older drivers. We also reviewed federal, state, and nongovernmental Web sites that contained information on states' older driver licensing practices and analyzed their content so that we could compare practices across states. To obtain information on the activities of partner nongovernmental organizations in researching and promoting practices to assess older driver fitness, among other initiatives, we interviewed officials from AAA, AARP, the Insurance Institute for Highway Safety, and the Governors Highway Safety Association. To learn of states' legislative initiatives concerning driver fitness assessment and licensing, we interviewed a representative of the National Conference of State Legislatures. We also interviewed officials from departments of motor vehicles in select states to report on their efforts in developing, implementing, and evaluating older driver screening and licensing programs. To obtain information on initiatives that selected states have implemented, we conducted case studies in six states (California, Florida, Iowa, Maryland, Michigan, and Oregon) that transportation experts identified as progressive in their efforts to improve older driver safety. We chose our case study states based on input from an NCHRP report highlighting states with leading practices in the areas of education/awareness, assessment/licensing, engineering, agency coordination, strategic planning, and data analysis. We compared practices across the six states to identify common themes. We also identified and determined, to the extent possible, key practices based on our analysis. The scope of our work focused on older driver safety. Prior GAO work addressed the associated issue of senior mobility for those who do not drive. We conducted our review from April 2006 through April 2007 in accordance with generally accepted government auditing standards. We requested official comments on this report from the U.S. Department of Transportation and the U.S. Department of Health and Human Services.
States' Licensing Requirements for Older Drivers
Tables 5 through 7 list older driver licensing requirements in effect in certain states.
Comments from the Department of Health and Human Services
GAO Contact and Staff Acknowledgments
<8. GAO Contact>
<9. Staff Acknowledgments>
In addition to the individual named above, Sara Vermillion, Assistant Director; Michael Armes; Sandra DePaulis; Elizabeth Eisenstadt; Joel Grossman; Bert Japikse; Leslie Locke; Megan Millenky; Joshua Ormond; and Beverly Ross made key contributions to this report.
Why GAO Did This Study As people age, their physical, visual, and cognitive abilities may decline, making it more difficult for them to drive safely. Older drivers are also more likely to suffer injuries or die in crashes than drivers in other age groups. These safety issues will increase in significance because older adults represent the fastest-growing U.S. population segment. GAO examined (1) what the federal government has done to promote practices to make roads safer for older drivers and the extent to which states have implemented those practices, (2) the extent to which states assess the fitness of older drivers and what support the federal government has provided, and (3) what initiatives selected states have implemented to improve the safety of older drivers. To conduct this study, GAO surveyed 51 state departments of transportation (DOT), visited six states, and interviewed federal transportation officials. What GAO Found The Federal Highway Administration (FHWA) has recommended practices--such as using larger letters on signs--targeted to making roadways easier for older drivers to navigate. FHWA also provides funding that states may use for projects that address older driver safety. States have, to varying degrees, adopted FHWA's recommended practices. For example, 24 states reported including about half or more of FHWA's practices in state design guides, while the majority of states reported implementing certain FHWA practices in roadway construction, operations, and maintenance activities. States generally do not place high priority on projects that specifically address older driver safety but try to include practices that benefit older drivers in all projects. More than half of the states have implemented licensing requirements for older drivers that are more stringent than requirements for younger drivers, but states' assessment practices are not comprehensive. For example, these practices primarily involve more frequent or in-person renewals and mandatory vision screening but do not generally include assessments of physical and cognitive functions. While requirements for in-person license renewals generally appear to correspond with lower crash rates for drivers over age 85, the validity of other assessment tools is less clear. The National Highway Traffic Safety Administration (NHTSA) is sponsoring research and other initiatives to develop and assist states in implementing more comprehensive driver fitness assessment practices. Five of the six states GAO visited have implemented coordination groups to assemble a broad range of stakeholders to develop strategies and foster efforts to improve older driver safety in areas of strategic planning, education and awareness, licensing and driver fitness assessment, roadway engineering, and data analysis. However, knowledge sharing among states on older driver safety initiatives is limited, and officials said states could benefit from knowledge of other states' initiatives.
<1. Background> A reverse mortgage is a loan against the borrower's home that the borrower does not need to repay for as long as the borrower meets certain conditions. These conditions, among others, require that borrowers live in the home, pay property taxes and homeowners insurance, maintain the property, and retain the title in the borrower's name. Reverse mortgages typically are "rising debt, falling equity" loans, in which the loan balance increases and the home equity decreases over time. As the borrower receives payments from the lender, the lender adds the principal and interest to the loan balance, reducing the homeowner's equity. This is the opposite of what happens in forward mortgages, which are characterized as "falling debt, rising equity" loans. With forward mortgages, monthly loan payments made to the lender add to the borrower's home equity and decrease the loan balance (see fig. 1). A simple numerical sketch of this dynamic appears below. There are two primary types of reverse mortgages: Home Equity Conversion Mortgages (HECM) and proprietary reverse mortgages. The Housing and Community Development Act of 1987 (P.L. 100-242) authorized the Department of Housing and Urban Development (HUD) to insure reverse mortgages and established the HECM program. According to industry officials, HECMs account for more than 90 percent of the market for reverse mortgages. Homeowners aged 62 or older with a significant amount of home equity are eligible, as long as they live in the house as the principal residence, are not delinquent on any federal debt, and live in a single-family residence. If the borrower has any remaining balance on a forward mortgage, this generally must be paid off first (typically, taken up-front from the reverse mortgage). In addition, the condition of the house must meet HUD's minimum property standards, but a portion of the HECM can be set aside for required repairs. The borrower makes no monthly payments, and there are no income or credit requirements to qualify for the mortgage. Lenders have offered non-HECM, or proprietary, reverse mortgages in the past, but these products have largely disappeared from the marketplace due, in part, to the lack of a secondary market for these mortgages. Typically, proprietary reverse mortgages have had higher loan limits than HECMs but paid out a lower percentage of the home value to borrowers. The volume of HECMs made annually has grown from 157 loans in fiscal year 1990 to more than 112,000 loans in fiscal year 2008. The HECM program has experienced substantial growth, as the number of HECMs insured by the Federal Housing Administration (FHA) has nearly tripled since 2005 (see fig. 2). Additionally, the potential liability of loans insured by FHA has doubled in the last 2 years (see fig. 3). The potential liability is the sum of the maximum claim amounts for all active HECMs since the program's inception. Finally, recent years have seen a rapid increase in the number of lenders participating in the HECM program (see fig. 4). However, the bulk of HECM business is concentrated among a relatively small percentage of lenders. In fiscal year 2008, roughly 80 percent of all HECMs were originated by fewer than 300 lenders, or about 10 percent of HECM lenders.
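The following is the simple numerical sketch referred to above, illustrating the "rising debt, falling equity" dynamic: each monthly payment drawn by the borrower and the interest that accrues are added to the loan balance, and the remaining equity is the home value minus that balance. The draw amount, interest rate, and home value are assumptions chosen only for illustration, not HECM program terms, and the home value is held flat for simplicity.

```python
def reverse_mortgage_balance(monthly_draw, annual_rate, months, home_value):
    """Illustrative only: accrue monthly draws plus interest on a reverse
    mortgage balance and report the homeowner's remaining equity."""
    monthly_rate = annual_rate / 12
    balance = 0.0
    for _ in range(months):
        balance += monthly_draw          # payment to the borrower adds principal
        balance *= (1 + monthly_rate)    # interest accrues on the growing balance
    return balance, home_value - balance  # rising debt, falling equity

balance, equity = reverse_mortgage_balance(
    monthly_draw=1_000, annual_rate=0.06, months=120, home_value=300_000)
print(f"Loan balance after 10 years: ${balance:,.0f}; remaining equity: ${equity:,.0f}")
```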
Lenders can participate in the HECM market through wholesale or retail channels. Wholesale lenders fund loans originated by other entities, including mortgage brokers and loan correspondents. Retail lenders originate, underwrite, and close loans without reliance on brokers or loan correspondents. Most lenders participate in the HECM market through retail lending, although some participate through the wholesale process, and a few have both a retail and a wholesale HECM business. There is a secondary market for HECMs, as most lenders prefer not to hold the loans on their balance sheets. Fannie Mae has purchased 90 percent of HECM loans and holds them in its portfolio. In 2007, Ginnie Mae developed and implemented a HECM Mortgage-Backed Security product, in which Ginnie Mae-approved issuers pool and securitize a small proportion of HECMs. Fannie Mae's and Ginnie Mae's involvement in the HECM secondary market helps to provide liquidity so that lenders can continue offering HECM loans to seniors. The amount of loan funds available to the borrower is determined by several factors (see fig. 5). First, the loan amount is based on the maximum claim amount, which is the highest sum that HUD will pay to a lender for an insurance claim on a particular property. It is determined by the lesser of the appraised home value or the HECM loan limit. In the past year, Congress has raised the HUD loan limit for HECMs twice: the Housing and Economic Recovery Act of 2008 (HERA) established for the first time a national limit for HECMs, which was set at $417,000, and the American Recovery and Reinvestment Act of 2009 (ARRA) raised the national limit again to $625,500 through December 31, 2009. Prior to HERA, the loan limits for HECMs varied by location and generally were set at 95 percent of the local area median house price. Second, to manage its insurance risk, HUD limits the loan funds available to the borrower by applying a principal limit factor to the maximum claim amount. HUD developed a principal limit factor table using assumptions about loan termination rates (which are influenced by borrower mortality and move-out rates) and long-term house price appreciation rates, and indexed the table by (1) the borrower's age and (2) the expected interest rate (the 10-year Treasury rate plus the lender's margin). The lender determines which factor to use by inputting the borrower's current age and the current interest rate information. The older the borrower, the higher the loan amount; the greater the expected interest rate of the loan, the smaller the loan amount. Third, the funds available to the borrower are further reduced by a required servicing fee set-aside and by the up-front costs (which include a mortgage insurance premium and the origination fee), because borrowers can choose to finance them. HUD allows lenders to charge up to $35 as a monthly HECM servicing fee. The lender calculates the servicing fee set-aside by determining the total net present value of the monthly servicing fees that the borrower would pay between loan origination and when the borrower reaches age 100. The set-aside limits the loan funds available but is not added to the loan balance at origination. If borrowers choose to finance up-front costs as part of the loan, the loan funds available are reduced by these costs.
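A minimal sketch of the calculation just described, under stated assumptions: the maximum claim amount is the lesser of the appraised value and the loan limit, a principal limit factor is applied to it, and the servicing fee set-aside (the net present value of the monthly servicing fee through age 100) and any financed up-front costs are then subtracted. The principal limit factor, discount rate, borrower age, and cost figures below are illustrative assumptions; actual factors come from HUD's tables.

```python
def servicing_set_aside(monthly_fee, annual_discount_rate, borrower_age, horizon_age=100):
    """Net present value of monthly servicing fees from origination until the
    borrower reaches the horizon age (the set-aside described above)."""
    months = (horizon_age - borrower_age) * 12
    r = annual_discount_rate / 12
    return sum(monthly_fee / (1 + r) ** m for m in range(1, months + 1))

def available_loan_funds(appraised_value, loan_limit, principal_limit_factor,
                         financed_upfront_costs, monthly_fee=35.0,
                         annual_discount_rate=0.06, borrower_age=75):
    # Maximum claim amount: lesser of appraised value and the HECM loan limit.
    max_claim = min(appraised_value, loan_limit)
    # Principal limit: factor (indexed by age and expected rate in HUD's table;
    # the value passed in here is assumed) applied to the maximum claim amount.
    principal_limit = principal_limit_factor * max_claim
    set_aside = servicing_set_aside(monthly_fee, annual_discount_rate, borrower_age)
    return principal_limit - set_aside - financed_upfront_costs

# Illustrative values only.
funds = available_loan_funds(appraised_value=300_000, loan_limit=417_000,
                             principal_limit_factor=0.60,
                             financed_upfront_costs=10_000)
print(f"Funds available to the borrower: ${funds:,.0f}")
```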
Since the implementation of HERA, HECM borrowers are charged an origination fee calculated as 2 percent of the maximum claim amount up to $200,000 plus 1 percent of the maximum claim amount over $200,000, with a maximum fee of $6,000 and a minimum fee of $2,500.
Mortgage insurance premium: Borrowers are charged an up-front mortgage insurance premium equal to 2 percent of the maximum claim amount. While the maximum claim amount is always higher than the initial amount a borrower can receive in HECM payments from the lender, FHA charges the mortgage insurance premium based on this amount because the loan balance (with accumulated interest and fees) could exceed the amount a borrower receives in payments and potentially reach the maximum claim amount. Additionally, borrowers are charged a monthly mortgage insurance premium on their loan balance at an annual rate of 0.5 percent.
Interest: Borrowers are charged interest, which generally includes a base interest rate plus a fixed lender margin rate, on the loan balance. Lenders can offer HECMs with fixed, annually adjustable, or monthly adjustable base interest rates. The adjustable rates can be tied to either the 1-Year Constant Maturity Treasury Rate or the 1-Year London Interbank Offered Rate Index. Most HECMs have adjustable interest rates.
HECM counseling fee: The HECM program requires prospective borrowers to receive counseling to ensure an understanding of the loan. HUD allows counseling providers to charge borrowers up to $125 for HECM counseling.
Loan servicing fee: Borrowers pay a monthly servicing fee of up to $35.
Closing costs: HECMs also have other up-front closing costs, such as appraisal and title search fees.
FHA's insurance for HECMs protects borrowers and lenders in four ways. First, lenders can provide borrowers with higher loan amounts than they could without the insurance. Second, when the borrower is required to repay the loan to the lender, if the proceeds from the sale of the home do not cover the loan balance, FHA will pay the lender the difference. Third, if the lender is unable to make payments to the borrower, FHA will assume responsibility for making these payments. Fourth, if the loan balance reaches 98 percent of the maximum claim amount, the lender may assign the loan to FHA, and FHA will continue making payments to the borrower if the borrower has remaining funds in a line of credit or still is receiving monthly payments. To cover expected insurance claims, FHA charges borrowers insurance premiums, which go into an insurance fund. HECM loans originated since the inception of the program through 2008 are supported by FHA's General Insurance and Special Risk Insurance Fund, which includes a number of FHA mortgage insurance programs for single-family and multifamily housing and hospitals. Pursuant to HERA, FHA moved the HECM program and other insurance programs for single-family housing into FHA's Mutual Mortgage Insurance Fund. The Federal Credit Reform Act (FCRA) requires federal agencies that provide loan guarantees to estimate the expected cost of programs by estimating their future performance and reporting the costs to the government in their annual budgets. Under credit reform procedures, the cost of loan guarantees, such as mortgage insurance, is the net present value of all expected cash flows, excluding administrative costs. This is known as the credit subsidy cost.
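As an illustration of how the pieces described in this section fit together (the maximum claim amount, the principal limit factor, the servicing fee set-aside, and the up-front costs), the following sketch assembles the funds available to a hypothetical borrower. The post-HERA fee formulas follow the text above; the principal limit factor and the servicing set-aside discounting are simplified placeholders rather than HUD's actual tables or formulas.

```python
# Sketch of how a HECM's up-front costs and available loan funds are assembled.
# The fee formulas reflect the post-HERA rules described in the text; the
# principal limit factor and the servicing-fee discounting are simplified
# placeholders, not HUD's actual tables or method.

def max_claim_amount(appraised_value: float, loan_limit: float = 417_000) -> float:
    """Maximum claim amount is the lesser of the appraised value and the loan limit."""
    return min(appraised_value, loan_limit)

def origination_fee_post_hera(mca: float) -> float:
    """2% of the first $200,000 of the MCA plus 1% of the remainder,
    with a $2,500 floor and a $6,000 cap."""
    fee = 0.02 * min(mca, 200_000) + 0.01 * max(mca - 200_000, 0)
    return min(max(fee, 2_500), 6_000)

def upfront_mip(mca: float) -> float:
    """Up-front mortgage insurance premium of 2% of the maximum claim amount."""
    return 0.02 * mca

def servicing_set_aside(age: int, expected_rate: float, monthly_fee: float = 35.0) -> float:
    """Present value of monthly servicing fees from origination to age 100
    (simplified monthly discounting; HUD's actual set-aside formula differs)."""
    months = max((100 - age) * 12, 0)
    r = expected_rate / 12
    return sum(monthly_fee / (1 + r) ** m for m in range(1, months + 1))

def available_loan_funds(appraised_value, age, expected_rate, principal_limit_factor):
    """Funds available if the borrower finances the up-front costs in the loan.
    The principal limit factor here is a hypothetical input, not a HUD table value."""
    mca = max_claim_amount(appraised_value)
    principal_limit = principal_limit_factor * mca
    costs = origination_fee_post_hera(mca) + upfront_mip(mca)
    return principal_limit - servicing_set_aside(age, expected_rate) - costs

# Example with illustrative inputs: a 75-year-old borrower, a $300,000 home,
# a 5.5% expected rate, and a hypothetical principal limit factor of 0.61.
print(f"Available loan funds: ${available_loan_funds(300_000, 75, 0.055, 0.61):,.0f}")
```

In HUD's actual table the factor falls as the expected interest rate rises, which is why a higher lender margin rate, all else equal, reduces the funds available to the borrower; in the sketch this would appear as passing a smaller principal_limit_factor.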
For loan guarantees, cash inflows consist primarily of fees and premiums charged to insured borrowers and recoveries on assets, and cash outflows consist mostly of payments to lenders to cover the cost of claims. Annually, agencies estimate credit subsidy costs by cohort, or all the loans the agency is committing to guarantee in a given fiscal year. The credit subsidy cost can be expressed as a rate. For example, if an agency commits to guarantee loans totaling $1 million and has estimated that the present value of cash outflows will exceed the present value of cash inflows by $15,000, the estimated credit subsidy rate is 1.5 percent. When estimated cash inflows exceed estimated cash outflows, the program is said to have a negative credit subsidy rate. When estimated cash outflows exceed estimated cash inflows, the program is said to have a positive credit subsidy rate and therefore requires appropriations. Generally, agencies are required to produce annual updates of their subsidy estimates known as re-estimates of each cohort based on information about the actual performance and estimated changes in future loan performance. This requirement reflects the fact that estimates of subsidy costs can change over time. Beyond changes in estimation methodology, each additional year provides more historical data on loan performance that may influence estimates of the amount and timing of future claims. Economic assumptions also can change from one year to the next, including assumptions on home prices and interest rates. FCRA recognized the difficulty of making subsidy cost estimates that mirrored actual loan performance and provides permanent and indefinite budget authority for re-estimates that reflect increased program costs. <2. Most HECM Lenders View the Overall Effect of the HERA Provisions as Neutral or Positive for Their Reverse Mortgage Business> In combination, HERA s changes to the HECM loan limit and origination fee calculation have had a positive to neutral influence on most lenders plans to start or continue offering HECMs. Other factors have had varying influences on lenders planned participation. Current economic conditions have had a moderate upward influence on lenders plans; however, secondary market conditions have had a downward influence on about one-third of lenders plans to start or continue offering HECMs. Finally, the HERA changes have not influenced most lenders plans to offer proprietary non-HECM products. <2.1. HERA s Changes and Other Factors Have Had Varying Effects on Lenders Planned Participation in the HECM Market> HERA s changes to the HECM program have had varying effects on HECM lenders planned participation in the HECM market. On the basis of questionnaire responses from a random sample of HECM lenders, we estimate that for 50 percent of lenders, the combined effect of these changes has had an upward influence on their plans to start or continue to offer HECMs (see fig. 6). For 42 percent of lenders, the combination of HERA s changes to the origination fee and loan limits for the HECM program have had little to no influence on their plans to offer HECMs, while for 8 percent of lenders, HERA s changes have had a downward influence. Some industry participants we interviewed stated that the changes were a good compromise that benefited borrowers by limiting the origination fee and increasing the loan limit, thereby increasing the money borrowers could receive from a HECM. 
Additionally, officials at NRMLA and MBA said the changes benefited lenders by making the product more attractive to individuals with higher-value homes. Taken separately, the two HERA provisions have had differing effects on lenders plans to offer HECMs. We estimate that for about 70 percent of lenders, HERA s increase in HECM loan limits has had an upward influence on the likelihood of offering HECMs. The loan limit increase has had little to no influence on almost all of the remaining lenders plans to offer HECMs. We estimate that 86 percent of lenders expect that HERA s creation of a single national loan limit of $417,000 will somewhat or greatly increase consumer demand for HECMs. Although the increase in the loan limit has generally had an upward influence on lenders plans, the change to the calculation of the origination fee has had a different effect. We estimate that changing how the fee is calculated has had a downward influence on plans to offer HECMs for 22 percent of HECM lenders, little to no influence for 65 percent of lenders, and an upward influence for 11 percent of lenders. Consistent with these views, 65 percent of lenders expect the change in origination fee to have no effect on consumer demand for HECMs. An estimated 26 percent of lenders expect the change in the origination fee to increase consumer demand, while only a few lenders expect the change to decrease consumer demand. We estimate that only 2 percent of HECM lenders do not plan to continue to offer HECMs. Of the respondents in our sample, three lenders indicated that they did not plan to continue offering HECMs. None of these were large HECM lenders, as they each originated from 40 to 160 HECMs in fiscal year 2008. Each of these lenders participated in the HECM market solely through their retail business. These three lenders varied in the amount of time that they have offered the HECM product. A representative of one lender indicated that HERA s changes to the loan limits and origination fee had a great upward influence on the likelihood that it would offer HECMs, but nonetheless planned to discontinue offering HECMs. The other two lenders indicated that HERA and other economic factors had little to no influence on their decision to discontinue offering HECMs, and one of these lenders noted on the survey that it had discontinued offering HECMs before the enactment of the HERA. As part of our survey, we asked lenders how various economic and legislative factors influenced their plans to start or continue offering HECMs. Two factors had an upward influence on most lenders plans to offer HECMs in 2009. For an estimated 67 percent of HECM lenders, the implementation of the HECM for Purchase program (authorized by HERA) has had an upward influence on their plans to offer HECMs, and it has had little to no influence on almost all of the remaining lenders HECM origination plans. Some industry participants told us that the HECM for Purchase program likely will make HECMs attractive to a broader range of seniors. Additionally, current economic conditions have had an upward influence on the plans to offer HECMs for about 52 percent of lenders. NRMLA officials explained that seniors are seeking additional revenue because they have less available income from traditional sources, such as interest and dividend payments and retirement accounts, which is partially attributable to poor economic and financial market conditions. Additionally, two other factors have had an upward influence on some lenders plans to offer HECMS. 
For about one-third of lenders, both (1) reduced opportunities in the forward mortgage market and (2) HERA s prohibition on the participation of non-FHA approved entities in the origination of HECMs has had a moderate or great upward influence on their plans to offer HECMs. In contrast, three factors had more of a downward influence on some lenders planned participation in the HECM market. First, we estimate from our survey that house price trends have had a downward influence on the HECM origination plans of 38 percent of lenders; however, house price trends had little or no influence on plans for about 50 percent of lenders. Some industry participants told us that the recent decline in house prices has prevented some seniors from obtaining a HECM either because they lack the equity in their home to qualify for the loan, or because they would not receive enough funds from the HECM to have any cash remaining after they deduct HECM fees and pay off any existing mortgage debt. Second, we estimate that the availability of secondary market options has had a downward influence on the plans of about one-third of lenders to offer HECMs. The secondary market for HECMs plays an important role in maintaining availability of loans because lenders prefer not to hold HECMs on their balance sheets. There are currently two primary options in the secondary market Fannie Mae and Ginnie Mae. Fannie Mae officials stated that Fannie Mae bought and held more than 90 percent of HECMs in its portfolio in 2008 and was the principal secondary market purchaser of HECM loans. However, Fannie Mae s regulator the Federal Housing Finance Agency recently required it to reduce the mortgage assets it holds in portfolio. Fannie Mae officials told us that as a result, they are making changes to their HECM business, which will attract other investors to the secondary market for HECMs, in order to decrease their share of the market. Recently, Fannie Mae lowered the price it pays lenders for HECMs and implemented a live pricing system that requires lenders to commit to the volume of HECMs they will sell to Fannie Mae. We estimate that approximately 90 percent of lenders viewed secondary market pricing requirements and the transition to live pricing as important factors in recent margin rate increases on HECMs. Fannie officials explained that as the price they pay lenders for HECMs falls, the margin rate the lenders charge the consumers generally increases. Some lenders we surveyed noted that margin rate increases stemming from pricing changes could make HECMs less attractive to borrowers because they would not be able to obtain as much cash from their HECM. Some lenders noted that live pricing complicates their relationship with borrowers because the interest rate can change between loan application and closing, which may result in the senior being able to receive less money from their HECM than originally quoted. Ginnie Mae developed and guarantees a HECM Mortgage Backed Security (HMBS) that aims to expand the availability of HECMs from multiple lenders, reduce borrowing costs, and create a broader secondary market for HECM loans. Ginnie Mae officials stated that they were poised to take on extra volume in the HECM secondary market by guaranteeing securities issued by lenders. AARP officials noted that Ginnie Mae s HMBS product could help introduce competition into the secondary market for reverse mortgages, lowering margin rates for seniors. 
However, industry participants point to several issues with the Ginnie Mae product that could limit its appeal to lenders. First, Ginnie Mae requires HMBS issuers to buy back the HECM when the loan balance reaches 98 percent of the loan s maximum claim amount. Second, issuers are required to pay interest shortfalls to investors when the loan is terminated mid-month. Some HECM lenders have noted that both of these provisions expose them to extra risk on the loan, as compared to the alternative of selling the HECM outright as they had when selling to Fannie Mae. Third, for an estimated 29 percent of lenders, HERA s prohibition on lender-funded counseling has had a downward influence on plans to offer HECMs. Industry participants said that this prohibition is a problem for the HECM industry because counseling is required for borrowers to obtain a HECM, but borrower-paid counseling can be a deterrent for seniors who are still deciding if they want a HECM, or for those who have limited financial means to pay for counseling. In contrast to these comments, we estimate that the prohibition on lender-funded counseling had little or no influence on the plans of 60 percent of lenders. Our survey of HECM lenders asked about two other factors HERA s restrictions on selling other financial products in conjunction with HECMs and the current availability of wholesale lending partners that could influence lenders plans to start or continue to offer HECMs. In general, these factors had little or no influence on lenders plans (see fig. 6). <2.2. HERA Has Not Influenced Most Lenders Plans to Offer Non-HECM Reverse Mortgages> In 2008, several non-HECM reverse mortgages referred to as jumbo or proprietary reverse mortgages were available in the marketplace. Proprietary reverse mortgages offered loan limits that were greater than the HECM loan limit. For example, Financial Freedom, a large reverse mortgage lender, offered a product called the Cash Account Advantage Plan, which was not subject to the HECM loan limits, and in some cases provided more cash than a HECM to borrowers with higher-value homes. Based on our survey results, we estimate that approximately 43 percent of HECM lenders made non-HECM reverse mortgages in 2008. However, towards the end of 2008, almost all of the non-HECM reverse mortgage products were withdrawn from the market due to the lack of a secondary market to support them. Nonetheless, from our survey results, we estimate that 36 percent of HECM lenders plan to offer a non-HECM reverse mortgage in 2009. We estimate that HERA s changes to the calculation of the origination fee and loan limit have had little or no influence on 68 percent of lenders plans to originate non-HECM reverse mortgages (see fig. 7). However, for an estimated 29 percent of HECM lenders, HERA s change to the loan limits has had an upward influence on their plans to offer non-HECM reverse mortgages. Additionally, we estimate that for 32 percent of lenders, the implementation of the HECM for Purchase program had an upward influence on their plans to offer these loans. We estimate that current economic conditions have had an upward influence on plans to offer non-HECM reverse mortgages for 29 percent of lenders, little to no influence for 34 percent of lenders, and a downward influence for 17 percent of lenders. Our survey of HECM lenders asked about several other factors (see fig. 7) that could influence lenders plans to offer a non-HECM reverse mortgage product in 2009. 
Generally, these factors have had little or no influence on lenders' plans. Our survey results did not indicate that secondary market conditions had a downward influence on the plans of most lenders. However, several lenders we interviewed said that while they hoped to offer a non-HECM reverse mortgage in 2009, their ability to do so would depend on the availability of funding in the secondary market.
<3. HERA Provisions Will Affect Borrower Costs and Loan Amounts Differently Depending on Home Value and Other Factors>
HERA's provisions will affect borrowers in varying ways depending primarily on home value and whether HERA's increase in loan limit will change the maximum claim amount of the loan. HERA's changes to HECM origination fees and loan limits are likely to change the up-front costs (origination fee and up-front mortgage insurance premium) and the loan funds available for most new borrowers. Our analysis of data on borrowers who took out HECMs in 2007 shows that had the HERA provisions been in place, most borrowers would have paid less or the same amount in up-front costs, and most would have had more or the same amount of loan funds available. Additionally, about 28 percent of HECM borrowers in 2007 would have seen an increase in maximum claim amount due to HERA's increase in loan limit, which would have meant more loan funds available for nearly all of these borrowers. Borrowers also may be affected by other consequences of the HERA provisions, such as margin rate increases and changes to funding of HECM counseling.
<3.1. HERA Provisions Will Change Up-front Costs for Many Borrowers>
The net effect of the HERA provisions on an individual borrower's total up-front costs depends on house value, the local loan limit prior to HERA, and the new loan limit. HECM up-front costs consist primarily of the up-front mortgage insurance premium and the origination fee, both of which are calculated as a proportion of the maximum claim amount. Most borrowers are likely to see changes in origination fees due to HERA. Generally, those with house values greater than the prior HECM loan limit in their area will see changes in the up-front mortgage insurance premium. Borrowers fall into two categories, based on whether their maximum claim amount changes:
Maximum claim amount does not change: For borrowers whose houses are valued at or less than the prior HECM loan limit in their area, the maximum claim amount does not change. Therefore, for these borrowers, the mortgage insurance premium (which is calculated based on the maximum claim amount) also does not change. However, the origination fee may change depending on the value of the house. A borrower whose house is valued at less than $125,000 should expect up to a $500 increase in the up-front costs due to the increase in the minimum origination fee from $2,000 to $2,500. A borrower whose house is valued at $125,000 to $200,000 would see no change in the up-front costs because they would pay the same 2 percent of the maximum claim amount (the same as before HERA). A borrower whose house is valued at greater than $200,000 would expect a decrease in up-front costs due to the decreased origination fee for amounts greater than $200,000 and the fee cap of $6,000. For an example, see borrower D, whose house value is $300,000, in table 1.
Maximum claim amount increases: For borrowers whose maximum claim amount increases because their house values are greater than the prior local HECM loan limit, the change to up-front costs is more complex.
All borrowers in this category will pay more in up-front mortgage insurance premiums because premiums are calculated based on the entire maximum claim amount. However, some borrowers may pay more in origination fees, while others will pay less. When combining these two costs, the total up-front costs could increase, decrease or remain the same. For example, borrowers A, B, and C in table 1 each own houses valued at $300,000 that are located in counties in which prior HECM loan limits varied from $200,000 to $290,000. Each borrower would see different effects in up-front costs. See appendix III for a more complete explanation of how up-front costs will change for borrowers with different characteristics. <3.2. Most 2007 HECM Borrowers Would Have Paid the Same or Less in Up-front Costs under the HERA Provisions, and Most Borrowers Would Have Had the Same or More Loan Funds Available> To illustrate the potential effect of the HERA provisions on borrowers, we compared the actual maximum claim amounts, up-front costs (origination fee plus the up-front insurance premium), and loan funds available for HECM borrowers in 2007 to what their maximum claim amounts, up-front costs, and loan funds available would have been had the HERA provisions been in place. Overall, we found that nearly 27 percent of borrowers would have paid more in up-front costs, 46 percent would have paid less, and 27 percent would have paid the same (see fig. 8). The amount and direction of the changes to up-front costs and loan funds available primarily depended on house value and whether a borrower would have benefited from an increase in loan limit (about 28 percent of 2007 HECM borrowers homes were valued at more than the prior loan limit and would have seen their maximum claim amounts increase because of HERA s increase in the loan limit). Our analysis of up-front costs broken down by its two components is as follows: Origination fees: About 24 percent of 2007 borrowers would have paid more in origination fees, 49 percent would have paid less, and 27 percent would have paid the same amount. Increases in origination fees were due either to the $500 increase in the minimum origination fee (about 17 percent of all borrowers) or to the increased loan limits (about 6 percent of all borrowers). Borrowers who would have paid less in origination fees had maximum claim amounts greater than $200,000, which means they would have benefited from the decrease in the origination fee for the portion of the maximum claim amount greater than $200,000, the $6,000 origination fee cap, or both. Up-front mortgage insurance premium: Twenty-eight percent of 2007 HECM borrowers would have paid more in up-front mortgage insurance premiums due to increases in the loan limit, while 72 percent of borrowers would have paid the same amount, generally because the size of their loans was limited by the value of their homes and not the HECM loan limit. Changes in the loan limits and up-front fees would have affected the loan funds available to most 2007 borrowers. Borrowers whose maximum claim amount would have increased because of an increase in loan limit would have paid a higher up-front mortgage insurance premium, regardless of how much of their available loan funds they chose to access. Because this analysis assumed that HECM borrowers financed the up-front costs in the loan, any increase or decrease in the up-front costs affects the amount of loan funds that are available to them. 
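A simplified version of this comparison is sketched below for several hypothetical borrowers, in the spirit of borrowers A through D in table 1. The home values and prior local loan limits shown are illustrative assumptions; the actual analysis applied the HERA rules to HUD loan-level data and each loan's pre-HERA local limit.

```python
# Sketch of the pre- versus post-HERA comparison of up-front costs (origination
# fee plus up-front mortgage insurance premium). Home values and prior local
# loan limits below are hypothetical examples.

def origination_fee_pre_hera(mca: float) -> float:
    """Pre-HERA: 2% of the maximum claim amount, with a $2,000 minimum."""
    return max(0.02 * mca, 2_000)

def origination_fee_post_hera(mca: float) -> float:
    """Post-HERA: 2% of the first $200,000 plus 1% above, $2,500 min, $6,000 cap."""
    fee = 0.02 * min(mca, 200_000) + 0.01 * max(mca - 200_000, 0)
    return min(max(fee, 2_500), 6_000)

def upfront_costs(home_value: float, loan_limit: float, post_hera: bool) -> float:
    mca = min(home_value, loan_limit)                 # maximum claim amount
    fee = origination_fee_post_hera(mca) if post_hera else origination_fee_pre_hera(mca)
    return fee + 0.02 * mca                           # plus 2% up-front insurance premium

# (home value, hypothetical pre-HERA local loan limit)
borrowers = [(100_000, 200_625), (150_000, 200_625), (300_000, 200_625),
             (300_000, 290_000), (300_000, 362_790)]

for home_value, old_limit in borrowers:
    before = upfront_costs(home_value, old_limit, post_hera=False)
    after = upfront_costs(home_value, 417_000, post_hera=True)
    print(f"Home ${home_value:,}, prior limit ${old_limit:,}: "
          f"up-front costs ${before:,.0f} -> ${after:,.0f} ({after - before:+,.0f})")
```

These hypothetical cases reproduce the pattern described above: a small increase for low-value homes (the higher minimum fee), no change in the middle range, and either an increase or a decrease for higher-value homes depending on whether the maximum claim amount rises under the new limit.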
Our analysis, which assumes that borrowers financed their up-front costs, shows that had the HERA provisions been in place at origination for 2007 HECMs, approximately 56 percent of borrowers would have had more loan funds available, 17 percent would have had less loan funds available, and 27 percent would have had the same amount available (see fig. 8). Specifically, about 28 percent of borrowers would have had more loan funds available, primarily due to the increase in loan limit; about 28 percent of borrowers would have had more loan funds available due solely to a decrease in their up-front fees; 17 percent of borrowers would have had a smaller amount of loan funds available due solely to an increase in their up-front fees; and 27 percent of borrowers would have experienced no change in the amount of loan funds available because their up-front fees and loan limits remained the same. Additionally, figure 8 shows the number of 2007 borrowers within the various categories, and figure 9 shows the average changes in up-front costs and loan funds available for each category of borrower. Borrowers with the largest increases in their maximum claim amounts on average would have the largest percent increases in up-front costs (see fig. 9). Borrowers with no increase in their maximum claim amount who have a change to up-front costs will have a corresponding change in loan funds available that is equal in size but opposite in direction. For example, a borrower with a $200 decrease in up-front costs will have a $200 increase in loan funds available, and a borrower with a $300 increase in up-front costs will have a $300 decrease in loan funds available.
<3.3. Borrowers May Be Affected by Other Factors, Such as Lender Margin Rates and Counseling Fees>
Increased lender margin rates stemming from HERA's change to the origination fee calculation could reduce loan funds available to borrowers. At loan origination, the expected interest rate HUD uses to determine the portion of the maximum claim amount that will be made available to the borrower includes the 10-year Treasury rate plus the fixed lender margin rate. Our survey of HECM lenders indicates that some lenders have raised their margin rates modestly to compensate for HERA's limitations on the origination fee; however, we did not receive a sufficient number of responses to reliably estimate the median increase in margin rate for the population. To illustrate the impact of a modest increase in margin rate on borrowers, we applied a 0.25 percentage point increase to borrowers who took out HECMs in 2007. We found that these borrowers would have seen a 3 percent average decrease in loan funds available as a result of the higher margin rate. A comparison of HUD data on HECMs originated within the first 3 months of HERA's implementation with data from the same 3 months of the prior year indicates that average margin rates were higher after HERA but that the overall average HECM expected interest rates were essentially the same. This outcome resulted from declines in 10-year Treasury rates offsetting increases in lender margin rates. In addition, more borrowers, as well as prospective borrowers who ultimately do not obtain a HECM, may need to pay counseling fees. Provisions in HERA prohibit lenders from paying for this counseling but allow HUD to use a portion of HECM mortgage insurance premiums for this purpose.
HUD officials said that they have not exercised this authority because the resulting reduction in premium income would affect the subsidy rate of the program adversely and potentially require appropriations. Because HUD did not implement this provision, more borrowers and prospective borrowers may need to pay counseling fees themselves. For borrowers who do eventually obtain a HECM, the fee can be financed in the loan. Prospective borrowers who do not qualify for a HECM or who choose not to proceed with the loan after counseling may have to pay for counseling out of pocket. HUD s recent announcement that it will provide approximately $8 million in grant funds for HECM counseling in 2009 may mitigate any negative impact the HERA changes may have on seniors ability to obtain HECM counseling. <4. HUD Has Enhanced Its Analysis of HECM Program Costs but Changes in House Price Trends and Higher Loan Limits Have Increased HUD s Risk of Losses> HUD has taken or planned steps to enhance its analysis of the HECM program s financial performance. However, HUD s recent estimates of program costs indicate weaker performance than previously estimated, primarily due to more pessimistic assumptions about long-term house price trends. Additionally, higher loan limits enacted under HERA and the American Recovery and Reinvestment Act of 2009 (ARRA) could increase HUD s financial risk. <4.1. HUD Is Taking Steps to Improve its Analysis of the HECM Program s Financial Performance> To estimate the cost of the HECM program, HUD uses a model to project the cash inflows (such as insurance premiums paid by borrowers) and cash outflows (such as claim payments to lenders) for all loans over their expected duration. HUD s model is a computer-based spreadsheet that incorporates assumptions based on historical and projected data to estimate the amount and timing of insurance claims, subsequent recoveries from these claims, and premiums and fees paid by borrowers. These assumptions include estimates of house price appreciation, interest rates, average loan size, and the growth of unpaid loan balances. HUD inputs its estimated cash flows into OMB s credit subsidy calculator, which calculates the present value of the cash flows and produces the official credit subsidy rate for a particular loan cohort. A positive credit subsidy rate means that the present value of the cohort s expected cash outflows is greater than the inflows, and a negative credit subsidy rate means that the present value of the cohort s expected cash inflows is greater than the outflows. To budget for a positive subsidy an agency must receive an appropriation. HUD also uses the cash flow model to annually estimate the liability for loan guarantees (LLG), which represents the net present value of future cash flows for active loans, taking into account the prior performance of those loans. HUD estimates the LLG for individual cohorts as well as for all cohorts combined. The LLG is a useful statistic because unusual fluctuations in the LLG can alert managers to financial risks that require further attention. HUD in recent years has enhanced its cash flow model for the HECM program. In 2007, the HUD Office of Inspector General s (OIG) annual audit of FHA s financial statements cited a material weakness in the cash flow model FHA used to generate credit subsidy estimates for the HECM program. 
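A highly simplified version of the kind of calculation this cash flow model supports is sketched below: hypothetical premium inflows and claim outflows for a cohort are discounted to a net present value and expressed as a credit subsidy rate. The cash flows and the single discount rate are illustrative assumptions only; HUD's model is far more detailed, and OMB's credit subsidy calculator applies its own discounting conventions under FCRA.

```python
# Illustrative credit subsidy calculation for a cohort of loan guarantees.
# All cash flows and the discount rate are hypothetical placeholders.

GUARANTEED_AMOUNT = 1_000_000.0   # loans the agency commits to guarantee in the cohort
DISCOUNT_RATE = 0.04              # hypothetical single discount rate

# Hypothetical cash flows by year after commitment: premiums collected (inflows)
# and claim payments net of recoveries (outflows), excluding administrative costs.
inflows = [20_000, 6_000, 6_000, 6_000, 6_000]
outflows = [0, 5_000, 15_000, 20_000, 15_000]

def present_value(flows, rate):
    """Discount a list of annual cash flows back to the commitment date."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

net_cost = present_value(outflows, DISCOUNT_RATE) - present_value(inflows, DISCOUNT_RATE)
subsidy_rate = net_cost / GUARANTEED_AMOUNT

print(f"Net present-value cost: ${net_cost:,.0f}")
print(f"Credit subsidy rate: {subsidy_rate:.2%} "
      f"({'positive: appropriation needed' if subsidy_rate > 0 else 'negative'})")
```

The same discounted cash flows, restricted to loans still active and updated for actual performance, are the basic ingredients of the liability for loan guarantees described above.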
Among other things, the audit noted technical errors in the model, significant discrepancies between projected and actual cash flows, and a lack of supporting documentation for certain modeling decisions. Partly in response to the OIG audit, HUD made a number of improvements to both the model and its supporting documentation, and in 2008 the HUD OIG eliminated the material weakness. For example, HUD improved the methodology it uses for its cash flow model. In the past, HUD used historical averages for termination and recovery rates for projecting cash flows. In 2008, HUD began to incorporate forecasts of national house price appreciation and interest rates from IHS Global Insight, an independent source for economic and financial forecasts, into its modeling. Additionally, HUD improved the way it estimates the growth of unpaid principal balances, which HUD uses to calculate the LLG. In the past, HUD used both active and terminated loans to generate this estimate. Since 2008, HUD has included only active loans to generate this estimate, which is more appropriate because the LLG represents the expected future cash flows of currently active loans. HUD also developed a master database of loan-level information to support the HECM cash flow model. Previously, HUD staff had to draw on data from multiple sources, which increased the chance of analytical errors. Finally, HUD made a number of enhancements to its documentation of estimation processes, including how macroeconomic projections are incorporated into the cash flow model. HUD plans to subject the HECM program to an annual actuarial review, which should provide additional insight into the program s financial condition. Such a review would likely assess if program reserves and funding were sufficient to cover estimated future losses, as well as the sensitivity of this analysis to different economic and policy assumptions. Historically, the HECM program has not had a routine actuarial review because it was supported by the General Insurance and Special Risk Insurance Fund (GI/SRI) Fund, which does not have such a review requirement. However, as of fiscal year 2009, the HECM program is in the Mutual Mortgage Insurance (MMI) Fund, which is statutorily required to receive an independent actuarial review each year and includes FHA s largest mortgage insurance program. HUD officials told us that future actuarial reviews of the MMI Fund will include a separate assessment of the HECM program. HUD also is considering producing credit subsidy re-estimates for the HECM program. As discussed later in this report, HUD has generated credit subsidy estimates for individual HECM cohorts for several years. However, HUD officials told us that, until recently, they did not have the data necessary to produce subsidy re-estimates for HECMs. Specifically, the officials noted that for HECM cohorts prior to 2009, assets for HECMs were aggregated with assets from other programs in the GI/SRI Fund and not accounted for separately. HUD officials said that they are now accounting for HECM assets separately, which will enable them to produce re-estimates for the HECM program. Re-estimates can highlight cohorts that are not expected to meet original budget estimates. This information could help inform future actions to manage HUD s insurance risk and control program costs. <4.2. 
Prior Cost Estimates Indicated That the HECM Program Was Profitable but Current Estimates Forecast Losses, Primarily Due to Revised House Price Assumptions> HUD s most recent estimates of two important financial indicators for the HECM program the credit subsidy rate and the LLG suggest weaker financial performance than previously estimated, largely due to more pessimistic house price assumptions. All other things being equal, lower house price appreciation can increase HUD s insurance losses because it makes it less likely that the value of the home will cover the loan balance. Analyses by HUD have found that the financial performance of the HECM program is sensitive to long-term trends in house prices. HUD officials told us that HECM program performance is less sensitive to short-term price declines because borrowers with HECMs, unlike those with traditional forward mortgages, do not have an incentive to terminate (or default on) their loans when prices fall. HUD has made credit subsidy estimates for HECM cohorts from 2006 forward. Because the HECM program was relatively small prior to 2006, HUD did not produce separate subsidy estimates for the HECM program but included HECMs in its estimates of subsidy costs for the GI/SRI Fund as a whole. For the 2006 through 2009 HECM cohorts, HUD estimated negative subsidy rates ranging from - 2.82 percent in 2007 to -1.37 percent in 2009 (see fig. 10). However, for the 2010 cohort, HUD estimated a positive subsidy rate of 2.66 percent. Because HUD is expecting to insure about $30 billion in HECMs in 2010, this rate corresponds to a subsidy cost of $798 million. As required by the Federal Credit Reform Act, the President s budget for fiscal year 2010 includes a request for this amount. HUD officials told us that the positive subsidy rate for fiscal year 2010 largely was due to incorporating more conservative assumptions about long-term house price trends than had been used for prior cohorts. For budgeting purposes, the Administration decided to use more modest appreciation rates than the private sector forecasts HUD typically uses. Specifically, the house price appreciation rates used were 0.5 percent greater than the forecasted inflation rates. HUD officials told us that if they had used IHS Global Insight projections to develop the fiscal year 2010 credit subsidy estimate, there would be no need for an appropriation because the credit subsidy rate would be negative. HUD also has estimated the LLG for the HECM program since 2006. As shown in figure 11, HUD s original LLG estimates grew substantially from 2007 to 2008, increasing from $326 million to $1.52 billion. According to FHA s financial statements for fiscal years 2007 and 2008, the increase was primarily due to the lower house price appreciation projections used in the 2008 analysis. The report noted that lower appreciation rates result in lower recoveries on mortgages assigned to HUD, which in turn increases HUD s liability. In September 2008, HUD analyzed the sensitivity of the 2008 LLG estimate for the HECM program as a whole to different assumptions, including alternative house price scenarios. HUD examined the impact of house price appreciation that was 10 percent higher and 10 percent lower than the baseline assumptions from IHS Global Insight for fiscal years 2009 through 2013. (For example, for a baseline assumption of 4 percent house price appreciation, the lower and higher scenarios would have been 3.6 percent and 4.4 percent, respectively.) 
HUD estimated that the more pessimistic assumption increased the LLG from $1.52 billion to $1.78 billion, while the more optimistic assumption reduced the LLG to $1.27 billion. <4.3. HUD Uses a Conservative Approach in Estimating Program Costs, but Higher Loan Limits May Increase the Potential for Losses> When estimating future costs for all HECMS, HUD assumes that the property value at loan origination is equal to the maximum claim amount. For loans in which the property value is more than the HECM loan limit, this approach results in a conservative assumption about the amount of home equity available at the end of the loan to cover the loan balance. In these cases, the actual home value at the end of the loan is likely to be more than what HUD assumes and therefore more likely to exceed the loan balance at the end of the loan. According to HUD, because of this conservative approach to estimating costs, the HECM program does not rely on loans with property values that exceed the maximum claim amount to operate on a break-even basis over the long-run. Higher loan limits enacted under HERA and ARRA may make HUD s approach less conservative by reducing the proportion of loans for which the property value exceeds the maximum claim amount. This scenario is especially likely in locations that previously had relatively low local loan limits (reflecting their lower home values) but are now subject to the higher national limit. To illustrate, consider a 65-year-old HECM borrower with a $400,000 home whose loan limit prior to HERA was $250,000 (see fig. 12). In this scenario, the maximum claim amount would be the same as the loan limit because the maximum claim amount is defined as the lesser of the loan limit or the home value. However, if the loan limit for the same borrower is increased to the HERA-authorized level of $417,000, the maximum claim amount is the same as the home value ($400,000). As figure 12 shows, when a borrower s maximum claim amount is capped by the loan limit, the maximum claim amount can be substantially lower than the value of the home. All other things being equal, the potential for losses is low in this scenario because the projected loan balance is likely to remain less than the projected home value after the lender assigns the loan to HUD. In contrast, when the maximum claim amount is capped by the home s value, the difference between the projected loan balance and the projected home value is smaller. The potential for losses is higher with such a loan because the projected loan balance is more likely to exceed the projected home value. As also shown in figure 12, when this effect is combined with declining home prices, the potential for losses increases. Studies by HUD and others have noted that HECM loans for which the home value exceeds the maximum claim amount have a positive impact on the program s financial performance but also have noted the potential negative impact of raising the loan limit. When the HECM program started in 1990, HUD developed a statistical model to estimate borrower payments and insurance risk. HUD s technical explanation of the model acknowledges that future expected losses are smaller for HECMs with a maximum claim amount capped by the loan limit, as compared with HECMs with a maximum claim amount equal to the home value. 
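The dynamic illustrated in figure 12 can be sketched with a simple projection of the loan balance against the home value. The interest, premium, appreciation, and principal limit factor values below are hypothetical assumptions, not HUD's modeling inputs; the sketch is meant only to show why a loan whose maximum claim amount equals the home value leaves less of a cushion than one capped by a lower loan limit.

```python
# Sketch of the figure 12 dynamic: the projected loan balance grows with interest
# and the 0.5% annual insurance premium, while the projected home value grows
# (or falls) with house prices. All rates and the principal limit factor are
# hypothetical placeholders.

def crossover_year(home_value, max_claim_amount, principal_limit_factor=0.55,
                   note_rate=0.055, mip_rate=0.005, appreciation=0.02, years=30):
    """Return the first year the projected loan balance exceeds the projected
    home value (a rough proxy for the potential for an insurance loss)."""
    balance = principal_limit_factor * max_claim_amount   # assume a full draw at origination
    value = home_value
    for year in range(1, years + 1):
        balance *= 1 + note_rate + mip_rate                # interest plus annual MIP accrue
        value *= 1 + appreciation
        if balance > value:
            return year
    return None

home = 400_000
for limit in (250_000, 417_000):
    mca = min(home, limit)                                 # lesser of loan limit and home value
    year = crossover_year(home, mca)
    print(f"Loan limit ${limit:,}: maximum claim amount ${mca:,}; "
          f"balance first exceeds home value in year {year}")
```

Under these illustrative assumptions, the loan capped by the lower limit does not overtake the home value until much later than the loan whose maximum claim amount equals the home value, which is the cross-subsidy effect the studies cited here describe.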
Similarly, actuarial reviews of the HECM program conducted in 1995, 2000, and 2003 concluded that the negative net liability of the HECM program resulted from homes valued at more than the HECM loan limit cross-subsidizing those valued at less than the limit. The 2003 actuarial review also examined how the financial condition of the HECM program would have been affected had a higher, national loan limit been in place when existing HECMs were originated. The analysis found that the higher loan limits would have reduced the expected net liability of the HECM program from -$54.0 million to -$11.4 million. This finding is consistent with a Congressional Budget Office (CBO) analysis of a 2007 legislative proposal to increase the HECM loan limit to $417,000 nationwide. CBO concluded that the increase would reduce HUD s credit subsidy rate for the 2008 cohort of loans from -1.9 percent to -1.35 percent. The percentage of HECMs with maximum claim amounts capped by the loan limit has declined in recent years (see fig. 13). Since the inception of the program, this percentage has ranged from 24 percent to 47 percent. However, this proportion has declined in recent years, dropping from 42 percent in fiscal year 2006 to 25 percent in fiscal year 2008. Furthermore, HUD data show that this proportion dropped to 18 percent for the first 4 months of fiscal year 2009, likely due in part to the higher loan limit. HUD officials acknowledged that a reduction in the proportion of loans with maximum claim amounts capped by the loan limit could have a negative effect on the program s financial performance. However, they also indicated that their conservative approach to estimating program costs mitigates the associated risks. <5. Agency Comments and Our Evaluation> We provided a draft of this report to HUD for its review and comment. In comments provided to us in an e-mail, HUD concurred with our report and provided a technical comment, which we incorporated into the report. We are sending copies of this report to interested congressional parties, the Secretary of the Department of Housing and Urban Development, and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff has any questions about this report, please contact me at (202) 512-8678 or [email protected]. GAO contact information and staff acknowledgments are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) how the Housing and Economic Recovery Act of 2008 (HERA) changes to the Home Equity Conversion Mortgage (HECM) program and other factors have affected HECM lenders planned participation in the reverse mortgage market, (2) the extent to which HERA s changes to HECM origination fees and loan limits will affect costs to borrowers and the loan amounts available to them, and (3) Department of Housing and Urban Development s (HUD) actions to evaluate the financial performance of the HECM program, including the potential impact of loan limit and house price changes. To address these objectives, we reviewed laws, regulations and guidance relevant to the HECM program, including provisions in HERA, the American Recovery and Reinvestment Act of 2009 (ARRA), and HUD handbooks and mortgagee letters. 
We also spoke with agency, industry, and nonprofit officials, including those at HUD, Ginnie Mae, Fannie Mae, the National Reverse Mortgage Lenders Association (NRMLA), the Mortgage Bankers Association (MBA), and AARP. To determine how HERA s provisions have affected lenders planned participation in the reverse mortgage market, we spoke with industry and nonprofit officials including those at Ginnie Mae, Fannie Mae, AARP, NRMLA, and MBA to understand how recent legislative and economic changes were affecting the industry. To more specifically identify the influence of legislation and economic factors on HECM lenders, we conducted a Web-based survey of a random probability sample of the 2,779 lenders that originated HECMs on a retail basis in fiscal year 2008. We used HUD records of HECM-certified lenders making at least one such loan in fiscal year 2008, and supplemented HUD s loan company officer contact information with names and e-mail addresses of officers at those lenders in our sample who also had memberships in NRMLA. For the remaining sampled lenders for which we lacked contact information, we made telephone calls to identify the most appropriate recipient for our survey invitation. We drew a stratified sample, allocating our selections across three groups defined by the number of HECMs made in fiscal year 2008, sampling from the groups with larger lenders at a higher rate than from the groups with smaller lenders (see table 2). We sampled all 51 members of the stratum with the largest lenders (300 or more loans). We sampled so few (30) and received so few usable responses (8) from the stratum with the smallest lenders (1 to 9 loans), that we considered this a nongeneralizable sample and excluded it from our quantitative analysis. In addition, lenders in the smallest lender stratum account for less than 5 percent of all loans, and thus would not influence overall estimates very much. Responses from the smallest lenders stratum were used only as case study examples in our analysis. To help develop our questionnaire, we consulted with an expert at NRMLA. We pretested our draft questionnaire to officials at three HECM lenders in our population and made revisions to it before finalization. Legal and survey research specialists in GAO also reviewed the questionnaire. Before the survey, in early March 2009, NRMLA sent letters to those lenders in our sample who were also members in that organization, endorsing our survey and encouraging response. In March 2009, we sent e- mails with links to our Web questionnaire and unique login information to each member of our sample with valid e-mail addresses. For sampled companies for which we were unable to obtain working e-mail addresses, we mailed paper versions of the questionnaires. Nonresponding lenders were sent additional e-mails or copies of questionnaires from March through May. We also made telephone calls in April to nonrespondents encouraging them to respond. Our survey closed in early May 2009. We received a total of 180 usable responses, for an overall response rate of 57 percent. The weighted response rate for the survey, which takes into account the relative numbers of lenders in the population that sampled lenders in each of our three size strata had to represent, was 53 percent. The most common reason for ineligibility among our sample firms was closure, merger, or other discontinuation of business in the reverse mortgage industry. 
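The weighted response-rate calculation described above can be sketched as follows. The per-stratum population, sample, and response counts are hypothetical placeholders (table 2 is not reproduced here); only the weighting approach, in which each sampled lender represents the number of population lenders divided by the number sampled in its stratum, follows the description in the text.

```python
# Sketch of an unweighted and a weighted response-rate calculation for a
# stratified random sample. Counts below are hypothetical placeholders.

strata = [
    # (stratum label, population size, lenders sampled, usable responses)
    ("300+ loans in FY2008", 51, 51, 35),
    ("10-299 loans", 1_200, 180, 100),
    ("1-9 loans", 1_500, 30, 8),
]

total_sampled = sum(sampled for _, _, sampled, _ in strata)
total_responses = sum(resp for _, _, _, resp in strata)
print(f"Unweighted response rate: {total_responses / total_sampled:.0%}")

# Weighted rate: each sampled lender represents population / sampled lenders
# in its stratum, so responses are scaled up before dividing by the population.
weighted_responses = sum(pop / sampled * resp for _, pop, sampled, resp in strata)
population_total = sum(pop for _, pop, _, _ in strata)
print(f"Weighted response rate: {weighted_responses / population_total:.0%}")
```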
Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Unless otherwise noted, our estimates have margins of error of plus or minus 10 percentage points or less at the 95 percent confidence interval. In addition to sampling error, the practical difficulties of conducting any survey may introduce other errors: 1. Nonresponse bias from failing to get reports from lenders whose answers would have differed significantly from those who did participate. 2. Coverage failure to include all eligible HECM lenders in the list from which we sampled, or including ineligible firms. 3. Measurement errors in response. 4. Data processing. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such errors. For example, our pretesting and expert reviews of the questionnaire resulted in question changes that reduced the possibility of measurement error, and all data processing and analysis programming was verified by independent analysts. In addition, we followed up on some unlikely answers by recontacting sampled lenders or conducting followup research on them to edit erroneous answers and declare some firms ineligible for our survey, thereby reducing measurement and coverage error. To assess the risk of nonresponse bias, we compared the response rates of lenders across categories of two characteristics that might be related to our key variables the effect of HERA changes and other factors on the likelihood of continuation of HECM lending in the future. The two characteristics known for both respondents and nonrespondents were the number of years the lender had been offering HECMs and the state in which the lender s home office is located, from which we could develop a measure of size of loan activity in each state by summing the number of loans made by lenders whose home offices were in a given state. We found no statistically significant association between these two characteristics and the likelihood of response. Although this does not eliminate the possibility of nonresponse bias, we found no evidence of bias based on our analysis of this available data. To determine the effect of the HERA provisions on HECM borrowers, we examined changes in the up-front mortgage insurance premium, origination fee, and loan funds available to borrowers. The up-front mortgage insurance premium is 2 percent of the maximum claim amount. HERA did not change this rate, but because of HERA s change to the HECM loan limit, some borrowers may be eligible for larger loans and therefore have higher maximum claim amounts. Since the premium is calculated based on the maximum claim amount, these borrowers will pay a higher up-front mortgage insurance premium than they would have prior to HERA. Before HERA, the origination fee was calculated as 2 percent of the maximum claim amount with a minimum fee of $2,000. 
HERA changed the calculation of the origination fee to 2 percent of the first $200,000 of the maximum claim amount plus 1 percent of the maximum claim amount over $200,000, with a maximum fee of $6,000. In implementing HERA, HUD also increased the minimum origination fee by $500 to $2,500. We used two different approaches to assess the impact of the HERA changes. First, we performed a mathematical analysis showing the difference between the up-front costs before and after HERA. Specifically, we derived equations for calculating pre-HERA and post-HERA up-front costs for borrowers with maximum claim amounts in different ranges ($0 to $100,000; $100,000 to $125,000; $125,000 to $200,000; $200,000 to $400,000; and $400,000 to $625,500). For each range, we subtracted the pre-HERA equation from the post-HERA equation to derive an equation for calculating the change in up-front costs due to the HERA provisions. We then used these equations to calculate the potential change in up-front costs in dollar terms. We did this analysis separately for cases in which the maximum claim amount would increase under HERA and cases in which the maximum claim amount would remain the same. Appendix III shows the details of this analysis. Second, we applied the HERA changes to HUD loan-level data for HECMs that borrowers obtained in calendar year 2007. We compared the results to the actual up-front costs and loan funds available for these borrowers. To perform this analysis, we obtained data from HUD's Single-family Data Warehouse. We assessed the reliability of these data by (1) reviewing existing information about the data and the system that produced them, (2) interviewing HUD officials knowledgeable about the data, and (3) performing electronic testing of required data elements. We determined that the data we used were sufficiently reliable for the purposes of this report. As shown in table 3, the universe of 2007 HECMs used in our analysis included 101,480 loans. We applied the $417,000 national loan limit and HERA's changes to the origination fee calculation to the 2007 HECMs. For each borrower, we calculated the new maximum claim amount, origination fee, up-front mortgage insurance premium, and loan funds available under the HERA rules and compared our results to the actual 2007 values. We summarized our results by calculating the average changes in these amounts. To illustrate the potential effect of modest margin rate increases stemming from HERA's change to the origination fee calculation, we applied a 0.25 percentage point increase to the margin rate for the 2007 HECMs adjusted to reflect the HERA provisions. We determined the resulting changes in the loan funds available to borrowers using HUD's table of principal limit factors. To provide perspective on the HERA-related margin rate changes, we compared margin rates from a 3-month period 1 year prior to the implementation of HERA (November 2007 through January 2008) to the margin rates from the 3-month period after the implementation of HERA (November 2008 through January 2009). To examine HUD's actions to evaluate the financial performance of the HECM program, we reviewed HUD's budget estimates for the HECM program for fiscal years 2005 through 2010. We also compiled and analyzed financial performance information about the HECM program, including the liability for loan guarantee (LLG) and credit subsidy estimates.
For example, we examined the Federal Housing Administration s (FHA) Annual Management Reports (2005, 2006, 2007, and 2008), which include FHA s annual financial statements; HUD Office of the Inspector General (OIG) audits of FHA s financial statements (2005, 2006, 2007, and 2008); actuarial reviews of the HECM program (1995, 2000, and 2003); and Congressional Budget Office cost estimates relevant to the HECM program. We also reviewed other analyses HUD has conducted of program costs, such as the sensitivity of estimated cash flows to alternative economic assumptions. We interviewed FHA officials about their budget estimates and program analyses. Additionally, we reviewed information about HUD s HECM cash flow model, including a technical explanation of the model published in 1990 and recent changes to the model. We also reviewed historical house price appreciation rates from the Federal Housing Finance Agency and projected house price appreciation rates from IHS Global Insight. To examine the percentage of HECMs with maximum claim amounts capped by the loan limit, we analyzed loan-level data on HECMs from HUD s Single-family Data Warehouse. As noted earlier, we determined that the data we used were sufficiently reliable for this analysis. In addition, we reviewed federal agency standards for managing credit programs, such as those contained in the Federal Credit Reform Act (FCRA), related Office of Management and Budget requirements and instructions, and Federal Accounting Standards Advisory Board guidance. Finally, we interviewed HUD OIG officials, industry participants, and mortgage market analysts. We conducted this performance audit from September 2008 through July 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Impact of Loan Limit Increase in the American Recovery and Reinvestment Act of 2009 on HECM Lenders The American Recovery and Reinvestment Act (ARRA) raised the national loan limit for Home Equity Conversion Mortgages (HECM) to $625,500 through December 31, 2009. In our survey of HECM lenders, we asked lenders about the influence the increased loan limit would have on their likelihood to offer HECMs and non-HECM reverse mortgages (see fig. 14). Additionally, we asked how they expected consumer demand for HECMs to increase as a result of the ARRA loan increase (see fig. 15). See figures 14 and 15 for survey questions and estimates based on our survey results. Appendix III: Effect of the Housing and Economic Recovery Act of 2008 on Up-front Costs for HECM Borrowers Home Equity Conversion Mortgage (HECM) borrowers may experience changes in up-front costs due to the Housing and Economic Recovery Act of 2008 s (HERA) change to the calculation of the origination fee, the loan limit, or both. Generally, borrowers with house values greater than the prior HECM loan limit will be able to borrow more under HERA s higher loan limit, while borrowers with a wide range of house values may be affected by the changes in origination fees. There are two up-front costs. The first the up-front mortgage insurance premium is 2 percent of the maximum claim amount. 
The second, the origination fee, was calculated before HERA as 2 percent of the maximum claim amount, with a minimum fee of $2,000. HERA changed the calculation of the origination fee to 2 percent of the first $200,000 of the maximum claim amount plus 1 percent of the maximum claim amount over $200,000, with a maximum fee of $6,000. In implementing HERA, HUD also increased the minimum origination fee by $500 to $2,500. To determine how borrowers would be affected by these changes, we developed mathematical equations for calculating the up-front costs under both the HERA and pre-HERA rules. We subtracted the equation for the pre-HERA rules from the equation for the HERA rules to derive an equation for the change in up-front costs resulting from HERA. A positive value indicates that a borrower would pay more under HERA, and a negative value indicates that a borrower would pay less. Figures 16 and 17 illustrate how these changes affect different categories of borrowers. Figure 16 shows the results for borrowers who have home values lower than the previous loan limit. The maximum claim amount is not affected by HERA's change in loan limit. Therefore, for these borrowers, changes in up-front costs derive only from changes in the origination fee. Figure 17 shows the results of the calculation for borrowers who were affected by HERA's increase in loan limit. These borrowers would pay up-front mortgage insurance premiums and origination fees based on a higher maximum claim amount. However, depending on the maximum claim amount, the origination fee may have decreased rather than increased. The net change in up-front costs for this group therefore cannot be determined without knowing the old and new maximum claim amounts.
Appendix IV: GAO Contact and Staff Acknowledgments
<6. Staff Acknowledgments>
In addition to the individual named above, Steve Westley, Assistant Director; Anne Akin, Kathleen Boggs, Joanna Chan, Rudy Chatlos, Karen Jarzynka, John McGrail, Marc Molino, Mark Ramage, Carl Ramirez, Barbara Roesmann, and Jennifer Schwartz made key contributions to this report.
Why GAO Did This Study Reverse mortgages--a type of loan against home equity available to seniors--are growing in popularity. A large majority of reverse mortgages are insured by the Department of Housing and Urban Development (HUD) under its Home Equity Conversion Mortgage (HECM) program. The Housing and Economic Recovery Act of 2008 (HERA) made several modifications to the HECM program, including changes in how origination fees are calculated and an increase in the loan limit. The Act directed GAO to examine (1) how these changes have affected lenders' plans to offer reverse mortgages, (2) how the changes will affect borrowers, and (3) actions HUD has taken to evaluate the financial performance of the HECM program. To address these objectives, GAO surveyed a representative sample of HECM lenders, analyzed loan-level HECM data, and reviewed HUD estimates and analysis of HECM program costs. What GAO Found On the basis of a survey of HECM lenders, GAO estimates that, taken together, HERA's changes to the HECM loan limit and origination fee calculation have had a positive to neutral influence on most lenders' plans to offer HECMs. Other factors, such as economic and secondary market conditions, have had a mixed influence. Although economic conditions have had a positive influence on about half of lenders' plans to offer HECMs, secondary market conditions have negatively influenced about one-third of lenders. GAO also estimates that the HERA changes have had little to no influence on most lenders' plans to offer non-HECM reverse mortgages. HERA's provisions will affect borrowers in varying ways depending on home value and other factors. The changes to HECM origination fees and loan limits are likely to change the up-front costs and the loan funds available for most new borrowers. GAO's analysis of data on HECM borrowers from 2007 shows that if the HERA changes had been in place at the time, most would have paid less or the same amount in up-front costs, and most would have had more or the same amount of loan funds available. For example, about 46 percent of borrowers would have seen a decrease in up-front costs and an increase in available loan funds. However, 17 percent of borrowers would have seen an increase in up-front costs and a decrease in available loan funds. HUD has enhanced its analysis of HECM program costs, but less favorable house price trends and loan limit increases have increased HUD's risk of losses. HUD has updated its cash flow model for the program and plans to conduct annual actuarial reviews. Although the program historically has not required a subsidy, HUD has estimated that HECMs made in 2010 will require a subsidy of $798 million, largely due to more pessimistic assumptions about long-run home prices. In addition, the higher loan limit enacted by HERA may increase the potential for losses. To calculate the amount of funds available to a borrower, lenders start with a limiting factor of either the home value or, if the home value is greater than the HECM loan limit, the loan limit. For loans that are limited by the home value, the loan amount and the home value are closer together at the point of origination, which makes it more likely that the loan balance could exceed the home value at the end of the loan. In contrast, for loans that are limited by the HECM loan limit, there is initially a greater difference between the home value and the loan amount, making it less likely that the loan balance will exceed the home value at the end of the loan. 
The increase in the HECM loan limit may increase HUD's risk of losses by reducing the proportion of loans that are limited by the HECM loan limit.
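The fee rules described in this report and derived in appendix III reduce to a short calculation. The sketch below is a minimal illustration of those rules, not code drawn from HUD or GAO systems: the function names and the example borrower are hypothetical, and the only program figures used are the fee percentages, minimums, and maximums stated above and the $417,000 national loan limit applied in the 2007 loan-level analysis. It assumes a borrower whose maximum claim amount is the same before and after HERA, so any change in up-front costs comes entirely from the origination fee.

```python
# Minimal sketch of the HECM up-front cost rules described in this report.
# Function names and the example borrower are illustrative only; this is not
# HUD's implementation.

HERA_NATIONAL_LOAN_LIMIT = 417_000  # limit applied in the report's 2007 loan-level analysis

def maximum_claim_amount(home_value: float, loan_limit: float) -> float:
    """Lesser of the home's value and the applicable HECM loan limit."""
    return min(home_value, loan_limit)

def origination_fee_pre_hera(mca: float) -> float:
    """Pre-HERA rule: 2 percent of the maximum claim amount, $2,000 minimum."""
    return max(2_000.0, 0.02 * mca)

def origination_fee_post_hera(mca: float) -> float:
    """HERA rule: 2 percent of the first $200,000 plus 1 percent of the amount
    over $200,000, with HUD's $2,500 minimum and a $6,000 maximum."""
    fee = 0.02 * min(mca, 200_000) + 0.01 * max(mca - 200_000, 0)
    return min(6_000.0, max(2_500.0, fee))

def up_front_costs(mca: float, post_hera: bool) -> float:
    """Origination fee plus the 2 percent up-front mortgage insurance premium."""
    fee = origination_fee_post_hera(mca) if post_hera else origination_fee_pre_hera(mca)
    return fee + 0.02 * mca

# Hypothetical borrower whose $300,000 home value is assumed to be below both
# the prior and the HERA loan limits, so the maximum claim amount is unchanged.
mca = maximum_claim_amount(300_000, HERA_NATIONAL_LOAN_LIMIT)    # 300,000
change = up_front_costs(mca, post_hera=True) - up_front_costs(mca, post_hera=False)
print(f"Change in up-front costs under HERA: {change:+,.0f}")    # -1,000 (borrower pays less)
```

For this hypothetical borrower, the HERA formula lowers the origination fee from $6,000 to $5,000, so total up-front costs fall by $1,000, illustrating how a borrower with a maximum claim amount above $200,000 can pay less under HERA even though the minimum fee rose from $2,000 to $2,500.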
<1. ICE Lacks Key Internal Controls for Implementation of the 287(g) Program> ICE has designed some management controls to govern 287(g) program implementation, such as MOAs with participating agencies that identify the roles and responsibilities of each party, background checks of officers applying to participate in the program, and a 4-week training course with mandatory course examinations for participating officers. However, the program lacks several other key controls. For example Program Objectives: While ICE officials have stated that the main objective of the 287(g) program is to enhance the safety and security of communities by addressing serious criminal activity committed by removable aliens, they have not documented this objective in program- related materials consistent with internal control standards. As a result, some participating agencies are using their 287(g) authority to process for removal aliens who have committed minor offenses, such as speeding, carrying an open container of alcohol, and urinating in public. None of these crimes fall into the category of serious criminal activity that ICE officials described to us as the type of crime the 287(g) program is expected to pursue. While participating agencies are not prohibited from seeking the assistance of ICE for aliens arrested for minor offenses, if all the participating agencies sought assistance to remove aliens for such minor offenses, ICE would not have detention space to detain all of the aliens referred to them. ICE s Office of Detention and Removal strategic plan calls for using the limited detention bed space available for those aliens that pose the greatest threat to the public until more alternative detention methods are available. Use of Program Authority: ICE has not consistently articulated in program-related documents how participating agencies are to use their 287(g) authority. For example, according to ICE officials and other ICE documentation, 287(g) authority is to be used in connection with an arrest for a state offense; however, the signed agreement that lays out the 287(g) authority for participating agencies does not address when the authority is to be used. While all 29 MOAs we reviewed contained language that authorizes a state or local officer to interrogate any person believed to be an alien as to his right to be or remain in the United States, none of them mentioned that an arrest should precede use of 287(g) program authority. Furthermore, the processing of individuals for possible removal is to be in connection with a conviction of a state or federal felony offense. However, this circumstance is not mentioned in 7 of the 29 MOAs we reviewed, resulting in implementation guidance that is not consistent across the 29 participating agencies. A potential consequence of not having documented program objectives is misuse of authority. Internal control standards state that government programs should ensure that significant events are authorized and executed only by persons acting within the scope of their authority. Defining and consistently communicating how this authority is to be used would help ICE ensure that immigration enforcement activities undertaken by participating agencies are in accordance with ICE policies and program objectives. 
Supervision of Participating Agencies: Although the law requires that state and local officials use 287(g) authority under the supervision of ICE officials, ICE has not described in internal or external guidance the nature and extent of supervision it is to exercise over participating agencies implementation of the program. This has led to wide variation in the perception of the nature and extent of supervisory responsibility among ICE field officials and officials from 23 of the 29 participating agencies that had implemented the program and provided information to us on ICE supervision. For example, one ICE official said ICE provides no direct supervision over the local law enforcement officers in the 287(g) program in their area of responsibility. Conversely, another ICE official characterized ICE supervisors as providing frontline support for the 287(g) program. ICE officials at two additional offices described their supervisory activities as overseeing training and ensuring that computer systems are working properly. ICE officials at another field office described their supervisory activities as reviewing files for completeness and accuracy. Officials from 14 of the 23 agencies that had implemented the program were pleased with ICE s supervision of the 287(g) trained officers. Officials from another four law enforcement agencies characterized ICE s supervision as fair, adequate, or provided on an as-needed basis. Officials from three agencies said they did not receive direct ICE supervision or that supervision was not provided daily, which an official from one of these agencies felt was necessary to assist with the constant changes in requirements for processing of paperwork. Officials from two law enforcement agencies said ICE supervisors were either unresponsive or not available. ICE officials in headquarters noted that the level of ICE supervision provided to participating agencies has varied due to a shortage of supervisory resources. Internal control standards require an agency s organizational structure to define key areas of authority and responsibility. Given the rapid growth of the program, defining the nature and extent of ICE s supervision would strengthen ICE s assurance that management s directives are being carried out. Tracking and Reporting Data: MOAs that were signed before 2007 did not contain a requirement to track and report data on program implementation. For the MOAs signed in 2007 and after, ICE included a provision stating that participating agencies are responsible for tracking and reporting data to ICE. However, in these MOAs, ICE did not define what data should be tracked or how it should be collected and reported. Of the 29 jurisdictions we reviewed, 9 MOAs were signed prior to 2007 and 20 were signed in 2007 or later. Regardless of when the MOAs were signed, our interviews with officials from the 29 participating jurisdictions indicated confusion regarding whether they had a data tracking and reporting requirement, what type of data should be tracked and reported, and what format they should use in reporting data to ICE. Internal control standards call for pertinent information to be recorded and communicated to management in a form and within a time frame that enables management to carry out internal control and other responsibilities. 
Communicating to participating agencies what data is to be collected and how it should be gathered and reported would help ensure that ICE management has the information needed to determine whether the program is achieving its objectives. Performance Measures: ICE has not developed performance measures for the 287(g) program to track and evaluate the progress toward attaining the program s objectives. GPRA requires that agencies clearly define their missions, measure their performance against the goals they have set, and report on how well they are doing in attaining those goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. ICE officials stated that they are in the process of developing performance measures, but have not provided any documentation or a time frame for when they expect to complete the development of these measures. ICE officials also stated that developing measures for the program will be difficult because each state and local partnership agreement is unique, making it challenging to develop measures that would be applicable for all participating agencies. Nonetheless, standard practices for program and project management call for specific desired outcomes or results to be conceptualized and defined in the planning process as part of a road map, along with the appropriate projects needed to achieve those results and milestones. Without a plan for the development of performance measures, including milestones for their completion, ICE lacks a roadmap for how this project will be achieved. <2. Program Resources Are Used for Training, Supervision, and Equipment; Benefits and Concerns Are Reported by ICE and Participating Agencies> ICE and participating agencies used program resources mainly for personnel, training, and equipment, and participating agencies reported activities, benefits, and concerns stemming from the program. For fiscal years 2006 through 2008, ICE received about $60 million to provide training, supervision, computers, and other equipment for participating agencies. State and local participants provided officers, office space, and other expenses not reimbursed by ICE, such as office supplies and vehicles. ICE and state and local participating agencies cite a range of benefits associated with the 287(g) partnership. For example, as of February 2009, ICE reported enrolling 67 agencies and training 951 state and local law enforcement officers. At that time, ICE had 42 additional requests for participation in the 287(g) program, and 6 of the 42 have been approved pending approval of an MOA. According to data provided by ICE for 25 of the 29 program participants we reviewed, during fiscal year 2008, about 43,000 aliens had been arrested pursuant to the program. Based on the data provided, individual agency participant results ranged from about 13,000 arrests in one location, to no arrests in two locations. Of those 43,000 aliens arrested pursuant to the 287(g) authority, ICE detained about 34,000, placed about 14,000 of those detained (41 percent) in removal proceedings, and arranged for about 15,000 of those detained (44 percent) to be voluntarily removed. 
The remaining 5,000 (15 percent) arrested aliens detained by ICE were either given a humanitarian release, sent to a federal or state prison to serve a sentence for a felony offense, or not taken into ICE custody given the minor nature of the underlying offense and limited availability of the federal government s detention space. Participating agencies cited benefits of the program including a reduction in crime and the removal of repeat offenders. However, more than half of the 29 state and local law enforcement agencies we reviewed reported concerns community members expressed about the 287(g) program, including concerns that law enforcement officers in the 287(g) program would be deporting removable aliens pursuant to minor traffic violations (e.g., speeding) and concerns about racial profiling. We made several recommendations to strengthen internal controls for the 287(g) program to help ensure the program operates as intended. Specifically, we recommended that ICE (1) document the objective of the 287(g) program for participants, (2) clarify when the 287(g) authority is authorized for use by state and local law enforcement officers, (3) document the nature and extent of supervisory activities ICE officers are expected to carry out as part of their responsibilities in overseeing the implementation of the 287(g) program, (4) specify the program information or data that each agency is expected to collect regarding their implementation of the 287(g) program and how this information is to be reported, and (5) establish a plan, including a time frame, for the development of performance measures for the 287(g) program. DHS concurred with each of our recommendations and reported plans and steps taken to address them. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. <3. GAO Contacts and Staff Acknowledgments> For questions about this statement, please contact Richard Stana at 202- 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Bill Crocker, Lori Kmetz, Susanna Kuebler, and Adam Vogt. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study This testimony discusses the Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement's (ICE) management of the 287(g) program. Recent reports indicate that the total population of unauthorized aliens residing in the United States is about 12 million. Some of these aliens have committed one or more crimes, although the exact number of aliens that have committed crimes is unknown. Some crimes are serious and pose a threat to the security and safety of communities. ICE does not have the agents or the detention space that would be required to address all criminal activity committed by unauthorized aliens. Thus, state and local law enforcement officers play a critical role in protecting our homeland because, during the course of their daily duties, they may encounter foreign-national criminals and immigration violators who pose a threat to national security or public safety. On September 30, 1996, the Illegal Immigration Reform and Immigrant Responsibility Act was enacted and added section 287(g) to the Immigration and Nationality Act. This section authorizes the federal government to enter into agreements with state and local law enforcement agencies, and to train selected state and local officers to perform certain functions of an immigration officer--under the supervision of ICE officers--including searching selected federal databases and conducting interviews to assist in the identification of those individuals in the country illegally. The first such agreement under the statute was signed in 2002, and as of February 2009, 67 state and local agencies were participating in this program. The testimony today is based on our January 30, 2009, report regarding the program including selected updates made in February 2009. Like the report, this statement addresses (1) the extent to which Immigration and Customs Enforcement has designed controls to govern 287(g) program implementation and (2) how program resources are being used and the activities, benefits, and concerns reported by participating agencies. To do this work, we interviewed officials from both ICE and participating agencies regarding program implementation, resources, and results. We also reviewed memorandums of agreement (MOA) between ICE and the 29 law enforcement agencies participating in the program as of September 1, 2007, that are intended to outline the activities, resources, authorities, and reports expected of each agency. We also compared the controls ICE designed to govern implementation of the 287(g) program with criteria in GAO's Standards for Internal Control in the Federal Government, the Government Performance and Results Act (GPRA), and the Project Management Institute's Standard for Program Management. More detailed information on our scope and methodology appears in the January 30, 2009 report. In February 2009, we also obtained updated information from ICE regarding the number of law enforcement agencies participating in the 287(g) program as well as the number of additional law enforcement agencies being considered for participation in the program. We conducted our work in accordance with generally accepted government auditing standards. What GAO Found In summary, ICE has designed some management controls, such as MOAs with participating agencies and background checks of officers applying to participate in the program, to govern 287(g) program implementation. However, the program lacks other key internal controls. 
Specifically, program objectives have not been documented in any program-related materials, guidance on how and when to use program authority is inconsistent, guidance on how ICE officials are to supervise officers from participating agencies has not been developed, data that participating agencies are to track and report to ICE has not been defined, and performance measures to track and evaluate progress toward meeting program objectives have not been developed. Taken together, the lack of internal controls makes it difficult for ICE to ensure that the program is operating as intended. ICE and participating agencies used program resources mainly for personnel, training, and equipment, and participating agencies reported activities and benefits, such as a reduction in crime and the removal of repeat offenders. However, officials from more than half of the 29 state and local law enforcement agencies we reviewed reported concerns members of their communities expressed about the use of 287(g) authority for minor violations and/or about racial profiling. We made several recommendations to strengthen internal controls for the 287(g) program to help ensure that the program operates as intended. DHS concurred with our recommendations and reported plans and steps taken to address them.
<1. Scope and Methodology> As part of our audit of the fiscal years 2016 and 2015 CFS, we considered the federal government s financial reporting procedures and related internal control. Also, we determined the status of corrective actions Treasury and OMB have taken to address open recommendations relating to their processes to prepare the CFS, detailed in our previous reports, that remained open at the beginning of our fiscal year 2016 audit. A full discussion of our scope and methodology is included in our January 2017 report on our audit of the fiscal years 2016 and 2015 CFS. We have communicated each of the control deficiencies discussed in this report to your staff. We performed our audit in accordance with U.S. generally accepted government auditing standards. We believe that our audit provides a reasonable basis for our findings and recommendations in this report. <2. Control Deficiencies Identified during Our Fiscal Year 2016 Audit> During our audit of the fiscal year 2016 CFS, we identified three new internal control deficiencies in Treasury s processes used to prepare the CFS. Specifically, we found that (1) Treasury did not have sufficient procedures and metrics for monitoring the federal government s year-to- year progress in resolving intragovernmental differences at the federal entity level, (2) Treasury did not have a sufficient process for working with federal entities to reduce or resolve the need for significant adjustments to federal entity data submitted for the CFS, and (3) three of Treasury and OMB s corrective action plans did not include sufficient information to effectively address related control deficiencies involving processes used to prepare the CFS. <2.1. Monitoring Intragovernmental Differences> During our fiscal year 2016 CFS audit, we found that the federal government continued to be unable to adequately account for and reconcile intragovernmental activity and balances between federal entities. Treasury has taken significant action over the past few years to address control deficiencies in this area, including actions to improve reporting of intragovernmental differences to federal entities and to work actively with federal entities to encourage resolution of reported differences. However, Treasury did not have sufficient procedures and metrics for monitoring the federal government s year-to-year progress in resolving intragovernmental differences at the federal entity level. When preparing the CFS, intragovernmental activity and balances between federal entities should be in agreement and must be subtracted out, or eliminated. If the two federal entities engaged in an intragovernmental transaction do not both record the same intragovernmental transaction in the same year and for the same amount, the intragovernmental transactions will not be in agreement, resulting in errors in the CFS. Federal entities are responsible for properly accounting for and reporting their intragovernmental activity and balances in their entity financial statements and for effectively implementing related internal controls. This includes reconciling and resolving intragovernmental differences at the transaction level with their trading partners. To support this process, Treasury has established procedures for identifying whether intragovernmental activity and balances reported to Treasury by federal entities are properly reconciled and balanced. For example, Treasury calculates intragovernmental differences by reciprocal category and trading partner for each federal entity. 
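To illustrate the kind of calculation involved, the sketch below nets each entity pair's reported amounts within a reciprocal category and flags any nonzero remainder as an unreconciled difference. The entities, categories, amounts, and netting convention are invented for illustration; this is a simplified sketch, not Treasury's methodology, systems, or data.

```python
# Simplified illustration of computing intragovernmental differences by
# reciprocal category and trading partner. Entities, categories, amounts, and
# the sign convention are invented; this is not Treasury's system or data.
from collections import defaultdict

# Each record: (reporting entity, trading partner, reciprocal category, amount reported)
reported = [
    ("Entity A", "Entity B", "buy/sell activity", 120.0),  # A reports 120 with B
    ("Entity B", "Entity A", "buy/sell activity", 95.0),   # B reports only 95 with A
    ("Entity A", "Entity C", "investments", 40.0),
    ("Entity C", "Entity A", "investments", 40.0),         # amounts agree, no difference
]

# Net the two sides of each entity pair within a reciprocal category; a nonzero
# net is an unreconciled difference that leads to elimination errors in the CFS.
differences = defaultdict(float)
for entity, partner, category, amount in reported:
    pair = tuple(sorted((entity, partner)))  # same key regardless of which side reports
    differences[(pair, category)] += amount if entity == pair[0] else -amount

for (pair, category), diff in differences.items():
    if abs(diff) > 0.005:
        print(f"{pair[0]} / {pair[1]} | {category}: unreconciled difference of {diff:+.1f}")
# Only the buy/sell pair prints, with a +25.0 difference to be researched and resolved.
```

Year-to-year metrics of the kind recommended in this report could then be built on such per-entity results, for example by tracking each pair's remaining difference by reciprocal category from quarter to quarter.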
Through these calculations, Treasury has identified certain recurring issues, such as significant differences related to specific entities, reciprocal categories, and trading partners. Treasury provides quarterly scorecards to the individual federal entities that are significant to the CFS to highlight intragovernmental differences requiring these entities attention. Treasury also prepares a quarterly government-wide scorecard to communicate the total differences on a government-wide basis. The government-wide scorecard also identifies the 10 largest federal entity contributors to the total government-wide difference. While Treasury s scorecard process and other initiatives focus on identifying and communicating differences to federal entities, they do not include procedures for monitoring the federal government s year-to-year progress in resolving intragovernmental differences at the federal entity level. For example, the entity-level scorecards do not include metrics that could be used to gauge the federal government s year-to-year progress in resolving intragovernmental differences at the entity level by reciprocal category and trading partner. Although Treasury produces a government- wide scorecard, the chart included on the scorecard shows changes in the total intragovernmental differences for recent quarters but does not identify increases or decreases at the individual entity level by reciprocal category and trading partner. While the total of intragovernmental differences has declined in recent years as a result of the scorecard process and other Treasury initiatives, we continued to note that amounts reported by federal entities were not in agreement by hundreds of billions of dollars for fiscal year 2016. Standards for Internal Control in the Federal Government states that management should (1) design control activities to achieve objectives and respond to risks, such as establishing and reviewing performance measures and indicators, and (2) implement control activities, such as documenting responsibilities through policies and procedures. The standard also states that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results and should remediate any identified internal control deficiencies on a timely basis. Without adequate procedures and metrics for effectively monitoring federal government progress in resolving intragovernmental differences at the entity level, Treasury cannot effectively identify areas where specific federal entities need further improvement and attention from year to year to resolve intragovernmental differences that result in errors in the CFS. <2.1.1. Recommendation for Executive Action> We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement procedures and metrics for monitoring the federal government s year-to-year progress in resolving intragovernmental differences for significant federal entities at the reciprocal category and trading partner levels. <2.2. Adjustments to Federal Entity Financial Information Submitted for the CFS> During our fiscal year 2016 CFS audit, we found that Treasury continued to record significant adjustments to data reported by federal entities for inclusion in the CFS. Treasury collects financial statement information from federal entities through its Governmentwide Treasury Account Symbol Adjusted Trial Balance System and Governmentwide Financial Report System. 
Auditors for entities significant to the CFS are responsible for providing opinions on these entities closing package submissions to Treasury. Once federal entities have submitted data for inclusion in the CFS, Treasury performs procedures to determine the consistency of the submitted data to (1) federal entity audited financial statements and (2) government-wide financial reporting standards. Treasury also performs procedures to determine if adjustments are needed to resolve certain unreconciled differences in intragovernmental activity and balances. Through these processes, Treasury identified the need for tens of billions of dollars of adjustments to federal entity- submitted data and recorded these adjustments to the CFS. Treasury identified many of the adjustments needed as recurring because they related to the same line items and federal entities as in prior years. The adjustments were necessary often because of inaccurate or incomplete information that federal entities submitted for the CFS. Though Treasury had procedures for identifying adjustments needed to data that federal entities submitted at fiscal year-end as well as procedures for reviewing recurring intragovernmental adjustments, Treasury did not have a sufficient process for reviewing recurring non-intragovernmental adjustments. Specifically, Treasury did not have a process to work with federal entities to correctly report non-intragovernmental information in federal entities closing packages prior to submission to Treasury, thereby reducing or resolving the need for Treasury to make significant adjustments to federal entity data. For adjustments related to intragovernmental differences, we found that Treasury s procedures did include steps for reviewing recurring intragovernmental adjustments and for working with federal entities to reduce or resolve the need for these intragovernmental adjustments. Statement of Federal Financial Accounting Concepts No. 4, Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government, states that the consolidated financial report should be a general purpose report that is aggregated from federal entity reports. The Treasury Financial Manual (TFM) provides guidance on how federal entities are to provide their financial data to Treasury for consolidation. In accordance with the TFM, significant component entities are required to submit their financial data to Treasury using a closing package. A significant component entity s chief financial officer must certify the accuracy of the data in the closing package and have it audited. Because the closing package process requires that significant component entities verify and validate the information in their closing packages compared with their audited department-level financial statements and receive audit opinions, Treasury is provided a level of assurance that it is compiling the CFS with reliable financial information. In addition, OMB Bulletin 15-02, Audit Requirements for Federal Financial Statements, establishes requirements for audits of federal financial statements, including audits of the closing packages. Also, Standards for Internal Control in the Federal Government states that management should design and implement control activities, such as procedures to help ensure that financial information is completely and accurately reported. 
Without a sufficient process aimed at reducing or resolving the need for significant adjustments to federal entity data submitted for the CFS, Treasury is unable to reasonably assure that it has reliable financial information for all federal entities, which is needed to achieve auditability of the CFS. <2.2.1. Recommendation for Executive Action> We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement a sufficient process for working with federal entities to reduce or resolve the need for significant adjustments to federal entity data submitted for the CFS. <2.3. Corrective Action Plans for Certain Areas> Three of Treasury and OMB s corrective action plans did not include sufficient information to effectively address related control deficiencies involving processes used to prepare the CFS. Corrective action plans are the mechanism whereby management presents the actions the entity will take to resolve identified internal control deficiencies. Treasury, in coordination with OMB, compiled a collection of corrective action plans in a remediation plan focused on resolving material weaknesses related to the processes used to prepare the CFS. The corrective action plans contained in the remediation plan which are intended to address control deficiencies related to (1) treaties and international agreements, (2) additional audit procedures for intragovernmental activity and balances, and (3) the Reconciliations of Net Operating Cost and Unified Budget Deficit and Statements of Changes in Cash Balance from Unified Budget and Other Activities (Reconciliation Statements) did not include sufficient information to demonstrate that the plans, if properly implemented, will effectively resolve such deficiencies. <2.3.1. Treaties and International Agreements> Treasury and OMB did not include sufficient information in their corrective action plan to help ensure that major treaty and international agreement information is properly identified and reported in the CFS. We found that the corrective actions included steps and milestones for meeting with the Department of State, a key entity with respect to treaties and international agreements, but did not include specific actions and outcomes planned to analyze all treaties and international agreements to obtain reasonable assurance whether they are appropriately recognized and disclosed in the CFS. As a result of not having specific actions to analyze all treaties and international agreements, any treaties and international agreements that had been omitted from entity reporting would not be identified. Not having procedures for reasonably assuring that information on major treaties and other international agreements is reported in the CFS could result in incomplete recognition and disclosure of probable and reasonably possible losses of the U.S. government. <2.3.2. Additional Audit Procedures for Intragovernmental Activity and Balances> Treasury and OMB s corrective action plan to make intragovernmental scorecards available directly to federal entity auditors was not sufficient to address the control deficiency related to not having a formalized process to require the performance of additional audit procedures focused on intragovernmental activity and balances. Billions of dollars of unreconciled intragovernmental differences continued to be reported in the fiscal year 2016 CFS based on the financial data submitted in federal entities audited closing packages. 
Although making the scorecard information available to auditors is helpful, that action in and of itself does not establish a process requiring federal entity auditors to perform additional audit procedures specifically focused on intragovernmental activity and balances. A formalized process to require the performance of additional audit procedures would provide increased audit assurance over the reliability of the intragovernmental information and help address the significant unreconciled transactions at the government-wide level. <2.3.3. Reconciliation Statements> Treasury and OMB s corrective action plans related to the Reconciliation Statements did not clearly demonstrate how, once implemented, the corrective actions will remediate the related control deficiencies. For example, the corrective actions did not include sufficient information to explain how they would achieve Treasury s objectives to (1) identify and report all necessary items in the Reconciliation Statements and (2) reasonably assure that the amounts are consistent with underlying audited financial data. Also, some outcome measures did not describe what and how progress related to specific actions taken would be measured. Not including sufficient information on actions and outcomes in the corrective action plan impairs management s ability to assess the progress made toward resolution. <2.3.4. Guidance for Corrective Action Plans> The Chief Financial Officers Council s Implementation Guide for OMB Circular A-123, Management s Responsibility for Internal Control Appendix A, Internal Control over Financial Reporting (Implementation Guide) includes guidance for preparing well-defined corrective action plans. According to the Implementation Guide, key elements necessary for well-defined corrective action plans include 1. descriptions of the deficiency and the planned corrective actions in sufficient detail to facilitate a common understanding of the deficiency and the steps that must be performed to resolve it; 2. interim targeted milestones and completion dates, including subordinate indicators, statistics, or metrics used to gauge resolution progress; and 3. planned validation activities and outcome measures used for assessing the effectiveness of the corrective actions taken. Also, Standards for Internal Control in the Federal Government states that management should (1) remediate identified internal control deficiencies on a timely basis and (2) design control activities to achieve objectives and respond to risks. In addition, OMB Circular No. A-123, Management s Responsibility for Enterprise Risk Management and Internal Control, requires management to develop corrective action plans for material weaknesses and periodically assess and report on the progress of those plans. The Implementation Guide is widely viewed as a best practices methodology for executing the requirements of Appendix A of OMB Circular No. A-123. Corrective actions need to be designed and implemented effectively to allow timely remediation of the deficiencies. An effective corrective action plan facilitates accountability, monitoring, and communication and helps ensure that entity personnel responsible for completing the planned corrective actions and monitoring progress toward resolution have the information and resources they need to do so. 
Without well-defined, sufficiently descriptive corrective action plans in these three areas, it will be difficult for Treasury and OMB to reasonably assure that corrective action plans will effectively remediate the internal control deficiencies and monitor progress toward resolution. <2.3.5. Recommendation for Executive Action> We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to improve corrective action plans for (1) treaties and international agreements, (2) additional audit procedures for intragovernmental activity and balances, and (3) the Reconciliation Statements so that they include sufficient information to address the control deficiencies in these areas effectively. <3. Status of Recommendations from Prior Reports> At the beginning of our fiscal year 2016 audit, 24 recommendations from our prior reports regarding control deficiencies in the processes used to prepare the CFS were open. Treasury implemented corrective actions during fiscal year 2016 that resolved certain of the control deficiencies addressed by our recommendations. For 7 recommendations, the corrective actions resolved the related control deficiencies, and we closed the recommendations. We also closed 1 additional recommendation, related to corrective action plans, by making a new recommendation that is better aligned with the remaining internal control deficiency in this area. While progress was made, 16 recommendations from our prior reports remained open as of January 4, 2017, the date of our report on the audit of the fiscal year 2016 CFS. Consequently, a total of 19 recommendations need to be addressed 16 remaining from prior reports and the 3 new recommendations we are making in this report. Appendix I summarizes the status as of January 4, 2017, of the 24 open recommendations from our prior years reports according to Treasury and OMB as well as our own assessment and additional comments, where appropriate. Various efforts are under way to address these recommendations. We will continue to monitor Treasury s and OMB s progress in addressing our recommendations as part of our fiscal year 2017 CFS audit. <4. Agency Comments and Our Evaluation> <4.1. Treasury Comments> In written comments, reprinted in appendix II, Treasury stated that it appreciates our perspective and will continue to focus its efforts on cost- beneficial solutions to sufficiently resolve the material conditions that preclude having an opinion rendered on the CFS. Although in its comments Treasury neither agreed nor disagreed with our recommendations, Treasury provided information on actions that it plans to take to address two of the recommendations and stated with regard to the third recommendation that its current corrective action plans were effective. For our first two recommendations related to monitoring intragovernmental differences and reducing significant adjustments to federal entity data submitted for the CFS, Treasury stated that it will continue to (1) evolve its processes as necessary to ensure that appropriate and effective metrics are deployed to measure and monitor agency performance and (2) work with agencies to facilitate improvement of processes, minimizing the need for Treasury adjustments to agency reporting. 
For our third recommendation aimed at improving corrective action plans for (1) treaties and international agreements, (2) additional audit procedures for intragovernmental activity and balances, and (3) the Reconciliation Statements, Treasury stated that its current remediation plan, including its various corrective action plans, is comprehensive, appropriate, and effective, with robust ongoing monitoring processes in place. Treasury also stated that corrective actions aimed at increasing the quality of intragovernmental data are proving effective and that it does not support encumbering agencies with the cost and burden associated with requiring additional audit procedures. In addition, Treasury stated that it will continue to collaborate with OMB and federal entities on existing corrective actions. However, we continue to believe that the corrective action plans in these three areas do not include sufficient information to effectively address related control deficiencies involving processes used to prepare the CFS. For example, as discussed in our report, Treasury and OMB did not have specific actions in their corrective action plan to analyze all treaties and international agreements to help ensure that major treaty and international agreement information is properly identified and reported in the CFS. Further, we believe that a formalized process for Treasury to require the performance of additional audit procedures focused on intragovernmental activity and balances would provide increased audit assurance over the reliability of intragovernmental information and help address the hundreds of billions of dollars of unreconciled intragovernmental differences at the government-wide level. Treasury also described various actions taken and planned to address long-standing material weaknesses, including improvements in accounting for and reporting on the General Fund of the U.S. Government activity and balances, strengthening internal controls in the preparation of the CFS, and validating material completeness of budgetary information included in the Financial Report of the United States Government. Treasury also indicated that it plans to work with GAO as it fulfills its commitment to improving federal financial reporting. <4.2. OMB Comments> OMB staff in the Office of Federal Financial Management stated in an e-mail that OMB generally agreed with the findings in the report and with Treasury s written response to the draft. The e-mail noted that the current administration is committed to continuing to work with Treasury and federal agencies to achieve sound financial management across the federal government. We are sending copies of this report to interested congressional committees, the Fiscal Assistant Secretary of the Treasury, and the Controller of the Office of Management and Budget s Office of Federal Financial Management. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report include Carolyn M. Voltz (Assistant Director), Latasha L. Freeman, Maria M. Morton, Sean R. Willey, and J. Mark Yoder. 
Appendix I: Status of Treasury s and OMB s Progress in Addressing GAO s Prior Year Recommendations for Preparing the CFS Table 1 shows the status of GAO s prior year recommendations for preparing the CFS. The abbreviations used are defined in the legend at the end of the table. Appendix II: Comments from the Department of the Treasury
Why GAO Did This Study Treasury, in coordination with OMB, prepares the Financial Report of the United States Government , which contains the CFS. Since GAO's first audit of the fiscal year 1997 CFS, certain material weaknesses and other limitations on the scope of its work have prevented GAO from expressing an opinion on the accrual-based consolidated financial statements. As part of the fiscal year 2016 CFS audit, GAO identified material weaknesses and other control deficiencies in the processes used to prepare the CFS. The purpose of this report is to provide (1) details on new control deficiencies GAO identified related to the processes used to prepare the CFS, along with related recommendations, and (2) the status of corrective actions Treasury and OMB have taken to address GAO's prior recommendations relating to the processes used to prepare the CFS that remained open at the beginning of the fiscal year 2016 audit. What GAO Found During its audit of the fiscal year 2016 consolidated financial statements of the U.S. government (CFS), GAO identified control deficiencies in the Department of the Treasury's (Treasury) and the Office of Management and Budget's (OMB) processes used to prepare the CFS. These control deficiencies contributed to material weaknesses in internal control that involve the federal government's inability to adequately account for and reconcile intragovernmental activity and balances between federal entities; reasonably assure that the consolidated financial statements are (1) consistent with the underlying audited entities' financial statements, (2) properly balanced, and (3) in accordance with U.S. generally accepted accounting principles; and reasonably assure that the information in the (1) Reconciliations of Net Operating Cost and Unified Budget Deficit and (2) Statements of Changes in Cash Balance from Unified Budget and Other Activities is complete and consistent with the underlying information in the audited entities' financial statements and other financial data. During its audit of the fiscal year 2016 CFS, GAO identified three new internal control deficiencies. Treasury did not have sufficient procedures and metrics for monitoring the federal government's year-to-year progress in resolving intragovernmental differences at the federal entity level. Treasury did not have a sufficient process for working with federal entities to reduce or resolve the need for significant adjustments to federal entity data submitted for the CFS. Three of Treasury and OMB's corrective action plans did not include sufficient information to effectively address related control deficiencies involving processes used to prepare the CFS. In addition, GAO found that various other control deficiencies identified in previous years' audits with respect to the processes used to prepare the CFS were resolved or continued to exist. For 7 of the 24 recommendations from GAO's prior reports regarding control deficiencies in the processes used to prepare the CFS, Treasury implemented corrective actions during fiscal year 2016 that resolved the related control deficiencies, and as a result, these recommendations were closed. GAO closed 1 additional recommendation that related to corrective action plans, by making a new recommendation that is better aligned with the remaining internal control deficiency in this area. While progress was made, 16 of the 24 recommendations remained open as of January 4, 2017, the date of GAO's report on its audit of the fiscal year 2016 CFS. 
GAO will continue to monitor the status of corrective actions taken to address the 3 new recommendations made in this report as well as the 16 open recommendations from prior years as part of its fiscal year 2017 CFS audit. What GAO Recommends GAO is making three new recommendations—two to Treasury and one to both Treasury and OMB—to address the control deficiencies identified during the fiscal year 2016 CFS audit. In commenting on GAO's draft report, although Treasury neither agreed nor disagreed with GAO's recommendations, Treasury provided information on actions that it plans to take to address two recommendations, but stated that its current corrective action plans were effective for the third recommendation. GAO continues to believe that actions for this recommendation are needed as discussed in the report. OMB generally agreed with the findings in the report.
<1. Some Instances of Noncompliance with Medical Care Standards Occurred> At the time of our visits, we observed instances of noncompliance with ICE's medical care standards at 3 of the 23 facilities we visited. However, these instances did not show a pervasive or persistent pattern of noncompliance across the facilities like those we identified with the telephone system. Detention facilities that we visited ranged from those with small clinics with contract staff to facilities with on-site medical staff, diagnostic equipment such as X-ray machines, and dental equipment. Medical service providers include general medical, dental, and mental health care providers that are licensed by state and local authorities. Some medical services are provided by the U.S. Public Health Service (PHS), while other medical service providers may work on a contractual basis. At the San Diego Correctional Facility in California, an adult detention facility, ICE reviewers that we accompanied cited PHS staff for failing to administer the mandatory 14-day physical exam to approximately 260 detainees. PHS staff said the problem at San Diego was due to inadequate training on the medical records system and technical errors in the records system. At the Casa de San Juan Family Shelter in California, we found that the facility staff did not administer medical screenings immediately upon admission, as required in ICE medical care standards. At the Cowlitz County Juvenile Detention Center in Washington state, we found that no medical screening was performed at admission and first aid kits were not available, as required. Officials at some facilities told us that meeting the specialized medical and mental health needs of detainees can be challenging. Some also cited difficulties they had experienced in obtaining ICE approval for outside nonroutine medical and mental health care as presenting problems in caring for detainees. On the other hand, we observed instances where detainees were receiving specialized medical care at the facilities we visited. For example, at the Krome facility in Florida we observed one detainee sleeping with the assistance of special breathing equipment (C-PAP machine) to address what we were told was a sleep apnea condition. At the Hampton Roads Regional jail in Virginia we observed a detainee receiving treatment from a kidney dialysis machine. Again, assessing the quality of care and ICE's decision-making process for approval of nonroutine medical procedures was outside the scope of our review. 
Subsequent to each annual inspection, a compliance rating report is to be prepared and sent to the Director of the Office of Detention and Removal or his representative within 14 days. The Director of the Office of Detention and Removal has 21 days to transmit the report to the field office directors and affected suboffices. Facilities receive one of five final ratings in their compliance report superior, good, acceptable, deficient, or at risk. ICE officials reported that as of June 1, 2007, 16 facilities were rated superior, 60 facilities were rated good, 190 facilities were rated acceptable, 4 facilities were rated deficient, and no facilities were rated at risk. ICE officials stated that this information reflects completed reviews, and some reviews are currently in process and pending completion. Therefore, ICE could not provide information on the most current ratings for some facilities. Four inspection reports disclosed instances of noncompliance with medical care standards. The Wakulla County Sheriffs Office in Florida had sick call request forms that were available only in English whereas the population was largely Spanish speaking. The Cowlitz County Juvenile Detention Facility in Washington state did not maintain the alien juvenile medical records on-site. The San Diego Correctional facility staff, in addition to the deficiencies noted earlier in this statement, failed to obtain informed consent from the detainee when prescribing psychiatric medication. Finally, the Broward Transitional Center in Florida did not have medical staff on-site to screen detainees arriving after 5 p.m. and did not have a properly locked medical cabinet. We did not determine whether these deficiencies were subsequently addressed as required. <3. Alien Detainee Complaints Included Concerns About Medical Care> Our review of available grievance data obtained from facilities and discussions with facility management showed that the types of grievances at the facilities we visited typically included the lack of timely response to requests for medical treatment, missing property, high commissary prices, poor quality or insufficient quantity of food, high telephone costs, problems with telephones, and questions concerning detention case management issues. ICE s detainee grievance standard states that facilities shall establish and implement procedures for informal and formal resolution of detainee grievances. Four of the 23 facilities we visited did not comply with all aspects of ICE s detainee grievance standards. Specifically, Casa de San Juan Family Shelter in San Diego did not provide a handbook to those aliens in its facility, the Cowlitz County Juvenile Detention Center in Washington state did not include grievance procedures in its handbook, Wakulla County Sheriff s Office in Florida did not have a log, and the Elizabeth Detention Center in New Jersey did not record all grievances that we observed in their facility files. The primary mechanism for detainees to file external complaints is directly with the OIG, either in writing or by phone using the DHS OIG complaint hotline. Detainees may also file complaints with the DHS Office for Civil Rights and Civil Liberties (CRCL), which has statutory responsibility for investigating complaints alleging violations of civil rights and civil liberties. In addition, detainees may file complaints through the Joint Intake Center (JIC), which is operated continuously by both ICE and U.S. 
Customs and Border Protection (CBP) personnel, and is responsible for receiving, classifying, and routing all misconduct allegations involving ICE and CBP employees, including those pertaining to detainee treatment. ICE officials told us that if the JIC were to receive an allegation from a detainee, it would be referred to the OIG. OIG may investigate the complaint or refer it to CRCL or to DHS components such as the ICE Office of Professional Responsibility (OPR) for review and possible action. In turn, CRCL or OPR may retain the complaint or refer it to other DHS offices, including the ICE Office of Detention and Removal (DRO), for possible action. Further, detainees may also file complaints with nongovernmental organizations such as the ABA and UNHCR. These external organizations said they generally forward detainee complaints to DHS components for review and possible action. The following discussion highlights the detainee complaints related to medical care issues where such information is available. We did not independently assess the merits of detainee complaints. Of the approximately 1,700 detainee complaints in the OIG database that were filed in fiscal years 2003 through 2006, OIG investigated 173 and referred the others to other DHS components. Our review of approximately 750 detainee complaints in the OIG database from fiscal years 2005 through 2006 showed that about 11 percent involved issues relating to medical treatment, such as detainees alleging that they were denied access to specialized medical care. OPR stated that in fiscal years 2003 through 2006 it had received 409 allegations concerning the treatment of detainees. Seven of these allegations were found to be substantiated, 26 unfounded, and 65 unsubstantiated. Four of the seven substantiated cases involved employee misconduct, resulting in four terminations. According to OPR officials, three cases were still being adjudicated and the nature of the allegations was not provided. Additionally, 200 of the allegations were classified by OPR as information only to facility management, requiring no further action, or were referred to facility management for action, requiring a response. CRCL also receives complaints referred from the OIG, nongovernmental organizations, and members of the public. Officials stated that from March 2003 to August 2006 they received 46 complaints related to the treatment of detainees, although the nature of the complaints was not identified. Of these 46 complaints, 14 were closed, 11 were referred to ICE OPR, 12 were retained for investigation, and 9 were pending a decision about disposition. We could not determine the number of cases referred to DRO or their disposition. On the basis of a limited review of DRO's complaint database and discussions with ICE officials knowledgeable about the database, we concluded that DRO's complaint database was not sufficiently reliable for audit purposes. We recommended that ICE develop a formal tracking system to ensure that all detainee complaints referred to DRO are reviewed and the disposition, including any corrective action, is recorded for later examination. We reviewed 37 detention monitoring reports compiled by UNHCR from 1993 to 2006. These reports were based on UNHCR's site visits, its discussions with ICE officials and facility staff, and its interviews with detainees, especially asylum seekers. 
Eighteen of the 37 UNHCR reports cited concerns related to medical care, such as detainee allegations that jail staff were unresponsive to requests for medical assistance and UNHCR s concern about the shortage of mental health staff. While American Bar Association officials informed us that they do not keep statistics regarding complaints, they compiled a list for us of common detainee complaints received through correspondence. This list indicated that of the 1,032 complaints it received from January 2003 to February 2007, 39 involved medical access issues such as a detainee alleging denial of necessary medication and regular visits with a psychiatrist, allegations of delays in processing sick call requests, and allegations of a facility not providing prescribed medications. Madam Chairman, this concludes my prepared remarks. I would be happy to answer any questions you or the members of the subcommittee have. <4. Contacts and Acknowledgments> For further information on this testimony, please contact Richard M. Stana at (202) 512-8777 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, William Crocker III, Assistant Director; Minty Abraham; Frances Cook; Robert Lowthian; and Vickie Miller made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study In fiscal year 2007, Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement (ICE) detained over 311,000 aliens, with an average daily population of over 30,000 and an average length of stay of about 37 days in one of approximately 300 facilities. The care and treatment of aliens while in detention is a significant challenge to ICE, as concerns continue to be raised by members of Congress and advocacy groups about the treatment of the growing number of aliens while in ICE's custody. This testimony focuses on (1) the extent to which 23 facilities complied with medical care standards, (2) deficiencies found during ICE's annual compliance inspection reviews, and (3) the types of complaints filed by alien detainees about detention conditions. This testimony is based on GAO's July 2007 report evaluating, among other things, the extent to which 23 facilities complied with aspects of eight of ICE's 38 National Detention Standards. This report did not address quality of care issues. What GAO Found At the time of its visits, GAO observed instances of noncompliance with ICE's medical care standards at 3 of the 23 facilities visited. These instances related to staff not administering a mandatory 14-day physical exam to approximately 260 detainees, not administering medical screenings immediately upon admission, and first aid kits not being available as required. However, these instances did not show a pervasive or persistent pattern of noncompliance across all 23 facilities. Officials at some facilities told GAO that meeting the specialized medical and mental health needs of detainees had been challenging, citing difficulties they had experienced in obtaining ICE approval for outside nonroutine medical and mental health care. On the other hand, GAO observed instances where detainees were receiving specialized care at the facilities visited. At the time of its study, GAO reviewed the most recently available ICE annual inspection reports for 20 of the 23 detention facilities that it visited; these reports showed that ICE reviewers had identified a total of 59 instances of noncompliance with National Detention Standards, 4 of which involved medical care. One facility had sick call request forms that were available only in English whereas the population was largely Spanish speaking. Another did not maintain alien medical records on-site. One facility's staff failed to obtain informed consent from the detainee when prescribing psychiatric medication. Finally, another facility did not have medical staff on-site to screen detainees arriving after 5 p.m. and did not have a properly locked medical cabinet. GAO did not determine whether these instances of noncompliance were subsequently corrected as required. The types of grievances at the facilities GAO visited typically included the lack of timely response to requests for medical treatment, missing property, high commissary prices, poor food quality and insufficient food quantity, high telephone costs, problems with telephones, and questions concerning detention case management issues. ICE's detainee grievance standard states that facilities shall establish and implement procedures for informal and formal resolution of detainee grievances. Four of the 23 facilities GAO visited did not comply with all aspects of ICE's detainee grievance standards. For example, one facility did not properly log all grievances that GAO found in their facility files. 
Detainee complaints may also be filed with several governmental and nongovernmental organizations. The primary way for detainees to file complaints is to contact the DHS Office of Inspector General (OIG). About 11 percent of detainee complaints to the OIG between 2005 and 2006 involved medical treatment issues. However, we found that the OIG complaint hotline 1-800 number was blocked or otherwise restricted at 12 of the facilities we tested. OIG investigates the most serious complaints and refers the remainder to other DHS components. GAO could not determine the number of cases referred to ICE's Detention Removal Office and concluded that ICE's detainee complaint database was not sufficiently reliable.
<1. Background> <1.1. O&S Costs Constitute a Significant Portion of a System s Life-Cycle Costs> A system s life-cycle costs include the costs for research and development, procurement, sustainment, and disposal. O&S costs include the direct and indirect costs of sustaining a fielded system, such as costs for spare parts, fuel, maintenance, personnel, support facilities, and training equipment. According to DOD, the O&S costs incurred after a system has been acquired account for at least 70 percent of a system s life-cycle costs and depend on how long a system remains in the inventory. Many of the key decisions affecting O&S costs are made during the acquisition process, and a weapon system s O&S costs depend to a great extent on its expected readiness level and overall reliability. In general, readiness can be achieved either by building a highly reliable weapon system or supporting it with a more extensive logistics system that can ensure spare parts and other support are available when needed. If a weapon system has a very high expected readiness rate but its design is not reliable, O&S costs may be high and more difficult to predict. Conversely, if the weapon system design has been thoroughly tested for reliability and is robust, O&S costs may be more predictable. <1.2. O&S Costs Are Required to Be Estimated at Various Points during the Acquisition Process> DOD s acquisition process includes a series of decision milestones as the systems enter different stages of development and production. As part of the process, the DOD component or joint program office responsible for the acquisition program is required to prepare life-cycle cost estimates, which include O&S costs, to support these decision milestones and other reviews. Under the current acquisition process, decision makers at milestone A determine whether to approve a program to enter into technology development. Although very little may be known about the system design, performance, physical characteristics, or operational and support concepts, DOD guidance states that rough O&S cost estimates are expected to primarily support plans that guide refinement of the weapon system concept. At milestone B, a decision is made whether to approve the program to enter into engineering and manufacturing development. At this point, according to the guidance, O&S cost estimates and comparisons should show increased fidelity, consistent with more fully developed design and support concepts. At this stage, O&S costs are important because the long-term affordability of the program is assessed, program alternatives are compared, and O&S cost objectives are established. The program must pass through milestone C before entering production and deployment. DOD s guidance states that at milestone C and at the full-rate production decision review, O&S cost estimates should be updated and refined, based on the system s current design characteristics, the latest deployment schedule, and current logistics and training support plans. Further, the guidance states that O&S experience obtained from system test and evaluation should be used to verify progress in meeting supportability goals or to identify problem areas. Lastly, O&S cost objectives should be validated, and any O&S-associated funding issues should be resolved, according to the guidance. 
OSD s Cost Assessment and Program Evaluation office has established guidance regarding life-cycle O&S cost estimates that are developed at acquisition milestone reviews and has defined standards for preparing and presenting these estimates. Current guidance issued in October 2007 identifies O&S cost elements and groups them into several major areas. This 2007 guidance which went into effect after the systems selected for our review passed through the production milestone updated and refined the guidance issued in May 1992. The cost element structure in the 2007 guidance is similar to that of the 1992 guidance, with some key differences. For example, separate cost elements for intermediate-level and depot-level maintenance under the 1992 structure were combined into one maintenance cost element area in 2007. Cost elements for continuing system improvements were included under sustaining support in 1992 but separately identified in the 2007 structure. Also, cost elements for contractor support are no longer separately identified as a single cost area in the 2007 structure but are spread over other areas. Table 1 summarizes the 2007 and 1992 cost element structure for O&S cost estimating and provides a description of DOD s cost elements. <1.3. The Services Have Developed Systems for Providing Visibility of Actual O&S Costs> Each of the services has developed a system for collecting, maintaining, and providing visibility over historical information on actual weapon system O&S costs. Collectively referred to as VAMOSC systems, the Army s system is the Operating and Support Management Information System; the Navy s system is the Naval Visibility and Management of Operating and Support Cost system; and the Air Force s system is the Air Force Total Ownership Cost system. These systems were developed in response to long-standing concerns that the services lacked sufficient data on the actual costs of operating and supporting their weapon systems. For example, according to a Naval Audit Service report, in 1975 the Deputy Secretary of Defense directed the military departments to collect actual O&S costs of defense weapon systems. In 1987, the Senate Committee on Appropriations requested that each service establish a capability within 4 years to report accurate and verifiable O&S costs for major weapon systems. In 1992, DOD required that the O&S costs incurred by each defense program be maintained in a historical O&S data-collection system. Related guidance tasked the services with establishing historical O&S data-collection systems and maintaining a record of data that facilitates the development of a well-defined, standard presentation of O&S costs by major defense acquisition program. According to DOD s guidance, the services VAMOSC systems are supposed to be the authoritative source for the collection of reliable and consistent historical O&S cost data about major defense programs, and it is incumbent upon the services to make the data as accurate as possible. DOD s stated objectives for the systems include the provision of visibility of O&S costs so they may be managed to reduce and control program life- cycle costs and the improvement of the validity and credibility of O&S cost estimates by establishing a widely accepted database. According to the guidance, the O&S cost element structure provides a well-defined standard presentation format for the services VAMOSC systems. <1.4. 
Prior GAO Reviews Identified Factors Negatively Affecting DOD s Ability to Manage O&S Costs> Our work in the late 1990s and in 2003 identified several factors negatively affecting DOD s ability to manage O&S costs. First, DOD used immature technologies and components in designing its weapon systems, which contributed to reliability problems and acted as a barrier to using manufacturing techniques that typically help reduce a system s maintenance costs. In contrast, commercial companies ensure they understand their operating costs by analyzing data they have collected on equipment they are currently using. Second, DOD s acquisition processes did not consider O&S costs and readiness as key performance requirements for new weapon systems and placed higher priority on technical performance features. Further, DOD continued to place higher priority on enhanced safety, readiness, or combat capability than on O&S cost management after system fielding. Third, DOD s division of responsibility among its requirements-setting, acquisition, and maintenance communities made it difficult to control O&S costs, since no one individual or office had responsibility and authority to manage all O&S cost elements throughout a weapon system s life cycle. Fourth, the services VAMOSC systems for accumulating data to analyze operations and support actions on fielded systems did not provide adequate or reliable information, thus making it difficult for DOD to understand total O&S costs. We have also reported on the effect of DOD weapon system sustainment strategies on O&S costs. For example, we reported in 2008 that although DOD expected that the use of performance-based logistics arrangements would reduce O&S costs, it was unclear whether these arrangements were meeting this goal. The services were not consistent in their use of business case analyses to support decisions to enter into performance-based logistics arrangements. Also, DOD program offices that implemented these arrangements had not obtained detailed cost data from contractors and could not provide evidence of cost reductions attributable to the use of a performance-based logistics arrangement. Finally, we have reported on O&S cost issues associated with individual weapon systems, including the Marine Corps V-22 aircraft in 2009 and the Navy s Littoral Combat Ship in 2010. <2. Better Information and Guidance Could Help DOD to More Effectively Manage and Reduce O&S Costs of Major Weapons Systems> <2.1. Life-Cycle O&S Cost Estimates for the Production Milestone Were Not Available for Five of the Seven Systems Reviewed> The services did not have the life-cycle O&S cost estimates that were prepared at the production milestone for most of the aviation weapon systems in our sample. Specifically, production milestone O&S cost estimates were available for two of the seven systems we reviewed the Air Force s F-22A and the Navy s F/A-18E/F. We requested cost estimates from a variety of sources, including service and OSD offices that were identified as likely repositories of these estimates. However, service acquisition, program management, and cost analysis officials we contacted could not provide these estimates for the Army s CH-47D, AH-64D, and UH-60L or the Air Force s F-15E or B-1B. OSD offices we contacted, including the OSD Deputy Director for Cost Assessment and offices within the Under Secretary of Defense for Acquisition, Technology and Logistics, also could not provide the cost estimates for these five systems. 
Without the production milestone cost estimates, DOD officials do not have important information necessary for analyzing the rate of O&S cost growth, identifying cost drivers, and developing plans for managing and controlling these costs. In addition, at a time when the nation faces fiscal challenges and defense budgets may become tighter, the lack of this key information hinders sound weapon-system program management and decision making in an area of high costs to the federal government. In a recent speech, the Secretary of Defense stated that given the nation s difficult economic circumstances and parlous fiscal condition, DOD will need to reduce overhead costs and transfer those savings to force structure and modernization within the programmed budget. DOD officials we interviewed noted that the department has not placed emphasis on assessing and managing weapon system O&S costs compared with other priorities. Moreover, our prior work has shown that rather than limit the number and size of weapon system programs or adjust requirements, DOD s funding process attempts to accommodate programs. This creates an unhealthy competition for funds that encourages sponsors of weapon system programs to pursue overambitious capabilities and to underestimate costs. DOD acquisition guidance has required the development of life-cycle cost estimates for acquisition milestone reviews since at least 1980. Based on the historical acquisition milestones for the five systems with missing estimates, the approximate dates that the production milestone life-cycle O&S cost estimates should have been prepared were 1980 for the Army s CH-47D, 1985 for the Air Force s F-15E, 1989 for the Army s UH-60L and the Air Force s B-1B, and 1995 for the Army s AH-64D. Additionally, DOD has been required to obtain independent cost assessments since the 1980s. We requested any independent estimates that had been prepared for the systems we reviewed from the OSD Cost Assessment and Program Evaluation office, but the office could not provide them. The service estimates were prepared in 2000 for the F/A-18E/F and in 2005 for the F- 22A. While DOD officials could not explain why life-cycle O&S cost estimates for the other five systems were not available, they said that likely reasons were loss due to office moves, computer failures, and purging of older files. Further, prior DOD and service guidance may not have addressed the retention of cost estimates. The two systems for which cost estimates were available had the most recent production milestones of the systems in our sample. Under GAO s guidance for cost-estimating best practices, issued in 2009, thorough documentation and retention of cost estimates are essential in order to analyze changes that can aid preparation of future cost estimates. However, with the exception of the Army, current DOD and service acquisition and cost estimation guidance do not specifically address requirements for retaining O&S cost estimates and the support documentation used to develop the estimates. For example, although DOD s cost-estimation guidance emphasizes the need for formal, complete documentation of source data, methods, and results, neither it nor DOD s acquisition policy specifically addresses retention of cost estimate documentation. Naval Air Systems Command officials said they retained the production milestone O&S cost estimates for the F/A-18E/F because this was a good practice; however, they were not aware of any Navy guidance that required such retention. 
While the Navy s current acquisition and cost analysis instructions state that records created under the instructions should be retained in accordance with the Navy s records management guidance, the records management manual does not clearly identify any requirements for retaining acquisition cost estimates for aircraft. In addition, we found that although the estimate for the F/A-18E/F was retained, some of the supporting documentation was incorrect or incomplete. The Air Force s acquisition and cost estimation guidance is also unclear with regard to retention of cost estimates. An Air Force acquisition instruction states that the program manager is responsible for developing appropriate program documentation and for maintaining this documentation throughout the life cycle of the system, as well as maintaining a realistic cost estimate and ensuring it is well documented to firmly support budget requests. However, we did not find any references to retaining cost estimates specifically related to acquisition milestones in either this instruction or other Air Force acquisition and cost estimation guidance. Only the Army s current acquisition regulation states that all documentation required by the milestone decision authority for each milestone review must be retained on file in the program office for the life of the program, although the regulation does not make specific reference to retaining the O&S cost estimate. The production milestones for the three Army systems we reviewed predate the Army s current regulation, which was issued in 2003. <2.2. Complete Data on Actual O&S Costs Were Not Collected in the Services VAMOSC Systems> The services VAMOSC systems did not collect complete data on actual O&S costs. The Air Force s and Navy s systems did not collect actual cost data for some cost elements that DOD guidance recommends be collected, and the Army s system was the most limited. Additionally, we found that data for some cost elements were not accurate. DOD guidance recommends but does not require that the cost element structure used for life-cycle O&S cost estimating also be used by the services to collect and present actual cost data. Such guidance, if followed, could enable comparisons between estimated and actual costs. Some O&S cost data that are not collected in the VAMOSC systems may be found in other of the services information systems or from other sources. However, these data may not be readily available for the purpose of analyzing weapon system O&S costs. Without complete data on actual O&S costs, DOD officials do not have important information necessary for analyzing the rate of O&S cost growth, identifying cost drivers, and developing plans for managing and controlling these costs. <2.2.1. Air Force s VAMOSC System> While the Air Force s VAMOSC system collected actual cost data on many of DOD s recommended cost elements, it did not collect data on some cost elements for the weapon systems we reviewed. For example, the Air Force s VAMOSC system did not collect actual O&S costs for support equipment replacement, modifications, or interim contractor support. According to service officials, the F-22A, the F-15E, and the B-1B incurred support equipment replacement and O&S modification costs, and the F- 22A incurred interim contractor support costs. 
Air Force officials responsible for the VAMOSC system told us that actual cost data on these three cost elements are contained in another information system, the Air Force General Accounting and Finance System Reengineered, but the data are not identifiable because procurement officials often do not apply the established accounting and budgeting structure when they entered into procurement contracts. Further, the Air Force lacks a standard structure for capturing contractor logistics support costs that could provide additional visibility over both procurement and O&S costs. For example, although program officials said the F-22A was supported under interim contractor support in 2006 and 2007, no F-22A interim support costs were included in the VAMOSC system. Further, according to officials, a recent change in the way the Air Force funds repair parts also introduced inaccuracies into that service s VAMOSC system. Starting in fiscal year 2008, the Air Force centralized the funding of its flying operations at higher-level commands that support a number of aircraft and bases. For example, the Air Force Material Command now funds flying operations for most active units. Prior to that time, the Air Force provided funding for repair parts directly to lower-level organizational units that paid for each part when ordered. Under the new process, the higher-level commands provide funding for repair parts to the Air Force Working Capital Fund based on the anticipated number of flying hours and an estimated rate necessary to purchase repair parts per hour of use. Since repair parts funding is now based on such estimates, there have been differences between the amounts provided and the actual costs incurred. For example, officials indicated that in fiscal year 2008 overpayments of $430 million were provided for repair parts, and in fiscal year 2009 the overpayment amount was $188 million. Although the total overpayment amount can be identified, the Air Force cannot identify which specific programs overpaid, so the entire overpayment amount was recorded against the B-1B s O&S costs in the Air Force accounting system. VAMOSC system officials were aware of this inaccuracy and removed the amount from the B-1B s O&S costs within the VAMOSC system. However, because these officials said they do not have the information necessary to apply the appropriate amount of the refund to the appropriate programs, they placed the funds into an account not associated with a particular weapon system. Therefore, the actual O&S costs for repair parts reported by VAMOSC system could be inaccurate for one or more weapon systems for at least the past 2 years. <2.2.2. Navy s VAMOSC System> For the F/A-18E/F, the Navy s VAMOSC system collected data on many of DOD s recommended cost elements but did not collect actual O&S costs for interim contractor support costs, civilian personnel, and indirect infrastructure costs by weapon system. Navy officials responsible for the VAMOSC system told us it did not collect interim contactor support costs because the Navy considers these to be procurement rather than O&S costs. According to Navy officials, the F/A-18E/F incurred interim contractor support costs prior to fiscal year 2003. Navy officials are currently attempting to add direct civilian personnel costs from the Navy s Standard Accounting and Reporting System. 
However, since it is difficult to identify these costs by weapon system, aggregated civilian personnel costs are currently captured within a separate section of the VAMOSC system. In addition, Navy officials said indirect infrastructure costs are captured in the aggregate within a separate section of the VAMOSC system and are not reported within the O&S costs of each weapon system. According to Navy officials, these indirect infrastructure costs are not available by weapon system because of the time and resources that would be necessary to match real property records indicating the use of the facility to command installation records that contain the costs to operate the facility. Further, we found that some of the cost elements in the Navy s VAMOSC systems were not accurate. For example, the Navy s VAMOSC system did not separately report F/A-18E/F costs for intermediate-level repair parts and materials and supplies. According to Navy officials, intermediate-level costs were included as unit-level repair parts and materials and supplies due to the way the Navy s accounting system captures these costs. Also, officials noted that support equipment maintenance costs were inaccurate because some of these costs were subsumed under other cost elements. Further, Navy officials said that the VAMOSC system reported costs for all F/A-18E/F modifications, including those that added capabilities and those that improved safety, reliability, maintainability, or the performance characteristics necessary to meet basic operational requirements. According to OSD guidance, modifications to add capabilities are considered a procurement cost and therefore should not be reported as an O&S cost in the VAMOSC system. According to Navy officials, they are unable to separate the different types of modification costs in order to provide visibility for the O&S modification costs. <2.2.3. Army s VAMOSC System> Compared with the Navy s and Air Force s systems, the Army s VAMOSC system is the most limited in terms of actual O&S cost data collected. For the three types of Army aircraft we reviewed, the VAMOSC system consistently collected data for unit-level consumption cost elements: fuel, materials and supplies, repair parts, and training munitions. Costs for depot maintenance, while collected in the system, are not presented in the OSD-recommended cost element structure. The system does not include personnel cost data and instead provides a link to another database. In addition, Army officials said the VAMOSC system generally collected costs for only government-provided logistics support and currently contained costs for two weapon systems supported under contractor logistics support arrangements (the Stryker armored combat vehicle and UH-72A Light Utility Helicopter). Further, Army officials said that the costs for materials and supplies and for repair parts were added to the VAMOSC system when the items were transferred to the unit instead of when they were actually used. Also, many of the costs were allocated based on demand, quantity, and price assumptions. That is, if more than one weapon system used a repair part, the costs for this part were allocated to each weapon system based on the number of aircraft. While this may be a reasonable allocation method, the VAMOSC system may not reflect the actual O&S costs for the weapon systems that used the part. We reported on deficiencies of the Army s VAMOSC system in 2000. 
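To illustrate the allocation method described above, the sketch below splits a notional shared repair part cost across weapon systems in proportion to fleet size. The function name, fleet counts, and dollar amount are hypothetical and are not drawn from the Army's VAMOSC system; the example only shows why allocated costs may not reflect what each system actually consumed.

```python
# Hypothetical illustration of allocating a shared repair part's cost
# across weapon systems in proportion to fleet size (number of aircraft).
# The fleet counts and cost below are invented for the example; this is
# not the Army VAMOSC system's actual algorithm.

def allocate_by_fleet_size(total_cost, fleets):
    """Split total_cost across systems in proportion to aircraft counts."""
    total_aircraft = sum(fleets.values())
    return {system: total_cost * count / total_aircraft
            for system, count in fleets.items()}

fleets = {"System A": 400, "System B": 300, "System C": 100}  # notional
shared_part_cost = 8_000_000  # notional annual cost of a shared repair part

for system, share in allocate_by_fleet_size(shared_part_cost, fleets).items():
    print(f"{system}: ${share:,.0f}")

# If System C actually consumed most of the parts, its reported O&S cost
# ($1.0 million here) would understate what it really cost to support,
# which is the limitation noted in the report.
```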
Our prior work found that the Army did not have complete and reliable data on actual O&S costs of weapon systems. Specifically, the Army s VAMOSC system did not collect data on O&S cost elements such as contractor logistics support, supply depot support, and software support. Further, we reported that the VAMOSC system did not contain cost data on individual maintenance events, such as removal and assessment of failed parts. We concluded that without complete O&S cost data, Army program managers could not assess cost drivers and trends in order to identify cost-reduction initiatives. Although we recommended that the Army improve its VAMOSC system by collecting data on additional O&S cost elements, the Army has not made significant improvements. According to Army officials responsible for the VAMOSC system, it was designed to collect information from other information systems. Therefore, it cannot collect data on other cost elements unless another information system captures these costs. According to Army officials, two information systems that the Army is developing the General Fund Enterprise Business System and the Global Combat Support System may enable the service to collect additional O&S cost data in the future. Even with these planned information systems, it is unclear what additional O&S cost data will be collected, how quickly the Army will be able to incorporate the data into its VAMOSC system, what resources may be needed, or what additional limitations the service may face in improving its VAMOSC system. Army officials, for example, do not expect the General Fund Enterprise Business System to become fully operational until the end of fiscal year 2012, and full operation of the Global Combat Support System will occur later, in fiscal year 2015. Army officials also said while they have requested that additional O&S cost data be collected by weapon system, it is too early to tell whether these data will be collected. <3. The Services Generally Do Not Use Updated Life-Cycle Estimates to Assess O&S Cost Growth for Fielded Weapon Systems> <3.1. Life-Cycle O&S Cost Estimates Were Not Periodically Updated after Fielding for Six of the Seven Systems Reviewed> For six of the seven systems selected for our review, the services did not periodically update life-cycle O&S cost estimates after the systems were fielded, even though most of the systems have been in DOD s inventory for over a decade. Only the program office for the F-22A had updated its production milestone cost estimate. According to Office of Management and Budget guidance on benefit-cost analysis, agencies should have a plan for periodic, results-oriented evaluation of the effectiveness of federal programs. The guidance also notes that retrospective studies can be valuable in determining if any corrections need to be made to existing programs and to improve future estimates of other federal programs. In addition, cost-estimating best practices call for such estimates to be regularly updated. The purpose of updating the cost estimates is to determine whether the preliminary information and assumptions remain relevant and accurate, record reasons for variances so that the accuracy of the estimate can be tracked, and archive cost and technical data for use in future estimates. 
Despite the benefit-cost analysis guidance and cost- estimating best practices, service officials for six of the seven aviation weapon systems we reviewed could not provide current, updated O&S cost estimates that incorporated actual historical costs or analysis of actual costs compared to the estimate prepared at the production milestone. While cost estimates were prepared for major modifications to some of the systems in our review, these estimates were limited in scope and did not incorporate actual cost data. The Air Force s updated life-cycle O&S cost estimate for the F-22A illustrates the potential magnitude of changes in O&S costs that a weapon system may experience over its life cycle. When the F-22A program office updated the 2005 cost estimate in 2009, it found a 47-percent increase in life-cycle O&S costs. The 2009 estimate of about $59 billion to operate and support the F-22A is $19 billion more than was estimated in 2005. The increase in life-cycle O&S costs occurred despite a 34-percent reduction in fleet size from 277 aircraft projected in the 2005 estimate to 184 aircraft projected in the 2009 estimate. The program office also compared the two estimates to identify areas of cost growth. According to the program office, the projected O&S cost growth was due to rising aircraft repair costs, unrealized savings from using a performance-based logistics arrangement to support the aircraft, an increased number of maintenance personnel needed to maintain the F-22A s specialized stealth exterior, military pay raises that were greater than forecast, and personnel costs of Air National Guard and Air Force Reserve units assigned to F-22A units that were not included in the production milestone estimate. A 2007 independent review by the Air Force Cost Analysis Agency also projected future O&S cost growth for the F-22A. According to Air Force Cost Analysis Agency officials, the review was initiated because cost data showed the F-22A s cost per flying hour was higher than projected in the 2007 President s Budget, prompting concerns that the future O&S costs of the aircraft were underestimated. Specifically, the fiscal year 2007 actual cost per flight hour was $55,783, about 65 percent higher than the $33,762 projected in the 2007 President s Budget. The Air Force Cost Analysis Agency estimated that in 2015 (when the system would be fully mature), the F-22A s projected cost per flight hour would be $48,236, or 113 percent higher than the $22,665 cost per flight hour in 2015 that had been estimated in 2005. 
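The percentage comparisons above, and the per flight hour increases itemized below, follow directly from the dollar figures cited in the report. The following sketch simply reproduces that arithmetic; it uses only amounts quoted in the report and introduces no new data.

```python
# Arithmetic check of the F-22A cost-per-flight-hour figures cited in the
# report, using only the dollar amounts quoted there.

def pct_increase(new, old):
    return (new / old - 1) * 100

# Fiscal year 2007: actual cost per flight hour vs. 2007 President's Budget
print(f"FY2007: {pct_increase(55_783, 33_762):.0f}% higher")   # about 65%

# 2015 projection: Air Force Cost Analysis Agency review vs. 2005 estimate
print(f"2015:   {pct_increase(48_236, 22_665):.0f}% higher")   # about 113%

# The per-flight-hour increases itemized next in the report
components = [8_174, 4_005, 2_414, 2_118, 2_008, 1_670, 1_641, 3_542]
print(f"Sum of itemized components: ${sum(components):,}")     # $25,572
print(f"Total projected increase:   ${48_236 - 22_665:,}")     # $25,571 (rounding)
```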
The estimated cost per flight hour increased $8,174 because fixed O&S costs did not decrease in proportion to the reductions in the number of planned aircraft (277 to 183) and annual flight hours per aircraft (366 to 277); $4,005 because the formula used in the 2005 estimate to calculate the cost to refurbish broken repair parts understated the potential costs; $2,414 for engine depot maintenance costs due to higher-than-previously-projected engine cycles per flying hour; $2,118 for higher costs for purchasing repair parts not in production or with diminishing manufacturing sources; $2,008 because of unrealized economies of scale for repair parts due to smaller quantity purchases (based on reduced aircraft and flying hours); $1,670 for additional costs for munitions maintainers, training munitions, and fuel consumption associated with a new capability, an air-to-ground mission; and $1,641 for additional maintenance due to lower levels of weapon system reliability than projected in the production milestone O&S cost estimate. The remaining $3,542 cost per flight hour increase identified by the Air Force Cost Analysis Agency's review was due to changes in personnel requirements, a new requirement to remove and replace the stealth coating midway through the aircraft's life, labor rate increases, immature engine repair procedures, and the administrative cost of Air National Guard units assigned to active duty units or active duty units assigned to Air Force Reserve or Air National Guard units. <3.2. Program Changes over Time Complicate Comparisons of Estimated to Actual O&S Costs for Two Systems> For the two aviation systems where both estimated and actual O&S cost data were available, we found that although there were some areas of cost growth, direct comparisons between estimated and actual costs were complicated in part because of program changes that occurred after the estimates were developed at the production milestone. For example, the Air Force and Navy had fewer F-22A and F/A-18E/F aircraft, respectively, in their inventories and flew fewer hours than planned when the estimates were developed. In addition, a recent, OSD-sponsored study of the Air Force's C-17 aircraft identified various changes that can occur over a weapon system's life cycle and lead to O&S cost growth. For the C-17, these changes included factors internal to the program, factors external to the program, and changes in accounting methods. (The findings from that study are summarized in app. II.) <3.2.1. Analysis of Estimated and Actual O&S Costs for the F-22A> Our analysis showed that actual O&S costs for the Air Force's F-22A totaled $3.6 billion from fiscal years 2005 to 2009, excluding amounts for interim contractor support. This amount compared to $3.8 billion projected for these years in the 2005 production milestone O&S cost estimate. (Fig. 1 shows estimated and actual costs for each year.) However, the Air Force had 125 aircraft in its inventory in fiscal year 2009 rather than the 143 aircraft projected in the 2005 cost estimate. Also, the aircraft fleet actually flew 68,261 hours over this time period rather than the 134,618 hours projected in the 2005 cost estimate. On a per flight hour basis, the fiscal year 2009 actual O&S costs were $51,829, or 88 percent higher than the $27,559 forecast in 2005 after accounting for inflation. The use of contractor logistics support for the F-22A further complicated comparisons of actual costs to the estimated costs developed in 2005. 
Although the F-22A has been supported under contractor logistics support arrangements since before 2005, the estimates included the costs for government-provided logistics support of the aircraft. For example, for fiscal years 2005 through 2009, the O&S cost estimate projected that contractor logistics support would cost $736 million. However, actual contractor logistics support costs for the F-22A were $2.1 billion. For fiscal years 2005 through 2009, F-22A contractor logistics support costs were 60 percent of the total actual O&S costs reported in the Air Force s VAMOSC system. Due to the use of this support arrangement, however, the Air Force s VAMOSC system reports all of the amounts paid to the F-22A contractor under a single cost element instead of under multiple individual cost elements. In contrast, program officials confirmed that various contractor-provided cost elements such as repair parts, materials and supplies, depot maintenance, and sustaining support were included in the production milestone O&S cost estimate as separate items. Further, according to officials, prior to 2008 the program office did not obtain from the contractor cost reports that provide details of how the amounts paid to the contractor were spent in terms of DOD s recommended O&S cost elements by fiscal year. Therefore, it is not possible to compare a significant amount of the actual O&S costs for the F-22A to the production milestone estimate at the cost element level. Of the remaining F-22A O&S costs not covered by contractor logistics support, mission personnel costs constituted the largest proportion approximately 22 percent of the total actual O&S costs reported for fiscal years 2005 through 2009. Compared with the estimates developed in 2005, actual mission personnel costs were $34 million (20 percent) higher for fiscal year 2008 and $113 million (62 percent) higher for fiscal year 2009. The 2005 estimate provided for 1,335 maintenance personnel for each F-22A wing (which was projected to number 72 aircraft), but according to Air Force officials the current authorized personnel for an F- 22A wing (now numbering 36 aircraft) is 1,051 maintenance personnel. While the number of aircraft per wing was reduced by half, the number of personnel was reduced by about 21 percent. According to officials, although the change in wing composition from three squadrons of 24 aircraft to two squadrons of 18 aircraft reduced personnel requirements, additional personnel who were not included in the 2005 estimate are now required to support the aircraft s added air-to-ground mission, an increased maintenance requirement for the aircraft s stealth exterior, and other maintenance requirements that were determined through a 2007 staffing study. In addition, Air National Guard and Air Force Reserve units were not included in the 2005 estimate, so the personnel costs of these units resulted in higher actual costs. Finally, as noted in the F-22A program office s 2009 update to the life-cycle O&S cost estimate, military pay raises given to service members were greater than forecast in the production milestone estimate. <3.2.2. Analysis of Estimated and Actual O&S Costs for the F/A- 18E/F> Our analysis for the Navy s F/A-18E/F showed that total actual O&S costs for fiscal years 1999 through 2009 were about $8.7 billion. This amount compares to the $8.8 billion projected for these years in the 1999 production milestone O&S cost estimate. 
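Comparisons of actual costs with a production milestone estimate, such as the one above, are generally made in constant (inflation-adjusted) dollars. The sketch below illustrates that general approach with notional deflator values and cost figures; it does not use OSD's actual inflation indices or the Navy's estimating methodology.

```python
# Illustrative constant-dollar comparison of actual O&S costs to an estimate
# stated in base-year dollars. The deflators and cost figures are notional;
# DOD estimates rely on OSD-published inflation indices, which are not shown.

# Notional deflators relative to a 1999 base year (1999 = 1.00)
deflator = {2005: 1.15, 2006: 1.19, 2007: 1.22}

# Notional then-year actual costs and base-year (1999$) estimates, $ millions
actual_then_year = {2005: 820, 2006: 860, 2007: 910}
estimate_base_year = {2005: 690, 2006: 700, 2007: 715}

for year in sorted(actual_then_year):
    actual_constant = actual_then_year[year] / deflator[year]  # convert to 1999$
    variance = (actual_constant / estimate_base_year[year] - 1) * 100
    print(f"FY{year}: actual {actual_constant:.0f} vs. estimate "
          f"{estimate_base_year[year]} (1999$), {variance:+.0f}%")
```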
However, program changes complicate direct comparisons between estimated and actual costs, as they do for the F-22A. For example, the Navy estimated that it would have 428 aircraft in fiscal year 2009, but the actual number of aircraft was 358, about 16 percent less. Similarly, the Navy estimated that the aircraft fleet as a whole would fly 780,628 hours from fiscal year 1999 through 2009, but the aircraft fleet actually flew 625,067 hours, or 20 percent less. On a per flight hour basis, the fiscal year 2009 O&S costs were $15,346, 40 percent higher than the $10,979 forecast in 1999. Although total actual costs were less than estimated for the 11-year period, actual annual costs for fiscal years 2005 through 2009 have exceeded the annual estimates by an average of 10 percent after accounting for inflation (see fig. 2). With regard to individual cost elements, our comparison of actual O&S costs for fiscal years 1999 through 2009 to those projected in the 1999 estimate found that actual costs for fuel, modifications, depot maintenance, and intermediate maintenance were higher than originally estimated while training costs were much lower. (App. III presents a more detailed comparison of actual and estimated O&S costs for the F/A-18E/F.) In discussing findings from this comparison with cost analysts at the Naval Air Systems Command, they provided the following explanations for key changes we identified: Fuel costs were higher than estimated because the price of fuel has increased overall since the estimate was developed in 1999. Further, when the estimate was developed, it was assumed the F/A-18E/F aircraft s fuel consumption would be similar to that of the F/A-18C/D. However, this did not prove to be an accurate analogy, and the F/A- 18E/F s fuel consumption has been higher than that of the earlier model aircraft. The analysts also said that some of the increased fuel costs for fiscal year 2005 through 2009 may also be attributed to increased refueling activity of the F/A-18E/F after the retirement of the S-3B aircraft. Depot maintenance costs were higher than estimated, in part because the engine was repaired by a contractor under a performance-based logistics arrangement, but the estimate projected costs for government-provided support. The government repair estimate included a large initial investment of procurement funds which are not considered O&S costs for spare parts. The Navy subsequently changed the engine repair concept to a performance-based logistics arrangement with less expensive spare parts and reduced the initial investment by about 15 percent. However, as a result of the new arrangement, depot maintenance costs increased. Further, the 1999 estimate purposefully excluded some engine depot-maintenance costs in order to keep a consistent comparison with similar costs for the F/A-18A-D models. (These costs were instead included in the estimate as costs for repair parts.) However, after adjusting for these issues, actual engine depot maintenance costs in fiscal years 2007 and 2009 were higher by a total of approximately $64 million, and Navy officials could not explain this variance. Additionally, the production milestone estimate developed in 1999 included costs for support equipment replacement, which are not captured in the Navy s VAMOSC system. Actual costs for aviation repair parts were higher than estimated after removing the costs that should have been included as engine depot- maintenance costs from the estimate. 
Intermediate-level maintenance costs were higher than projected because the estimate did not include personnel costs for shore-based, intermediate-level maintenance. Modification costs were higher than projected because the Navy's VAMOSC system collected costs for all procurement-funded modifications, including those that added capabilities, while the estimate only projected costs for flight-safety modifications. Training costs were lower than estimated because the Navy's VAMOSC system did not include actual nonmaintenance training costs such as initial pilot and naval flight officer training and installation support costs. These costs were included in the cost estimate. <3.3. Actual O&S Costs Increased for Five Systems, but Extent of Planned Cost Growth Is Uncertain> Although we did not have production milestone estimates of life-cycle O&S costs for the Air Force's F-15E and B-1B or for the Army's AH-64D, CH-47D, and UH-60L, we reviewed changes in actual O&S costs for each system and found that costs increased over time for a variety of reasons. As noted earlier, some cost elements are not maintained in the services' VAMOSC systems or are not accurate, and our analysis was subject to these limitations. Furthermore, we could not determine the extent to which the cost growth was planned since the services could not provide us with the O&S cost estimates developed for the production milestone. According to service cost analysis officials, actual O&S costs for these systems were likely higher than estimated because such estimates are typically based on peacetime rather than wartime usage assumptions. Further, service cost analysts said that since the late 1990s actual costs for repair parts have grown faster than the OSD inflation rates that are used to develop O&S cost estimates. <3.3.1. Actual O&S Costs for the F-15E for Fiscal Years 1996 through 2009> Total actual O&S costs for the Air Force's F-15E increased 82 percent from $944 million in fiscal year 1996 to $1.7 billion in fiscal year 2009 (see fig. 3). The number of F-15E aircraft increased 8 percent from 200 to 215 during this time period, and the number of flight hours increased 7 percent from 60,726 to 65,054. Per aircraft, O&S costs increased 69 percent from $5 million to $8 million over this period, and the cost per flight hour increased 70 percent from $15,538 to $26,408. Our analysis found that personnel, fuel, repair parts, and depot maintenance accounted for about 95 percent of the overall increase in F-15E O&S costs from fiscal years 1996 to 2009. For example, actual personnel costs grew by $73 million (19 percent) over the period. Most of the growth was due to wage increases rather than increases in the number of personnel. Also, fuel costs increased $142 million (18 percent) during these years. According to program officials, this increase was mainly due to higher fuel prices rather than increased consumption. Costs for repair parts grew $398 million (51 percent), and program officials attributed some of this growth to higher costs for materials used during depot repair, as well as higher prices paid for labor, storage, and handling. Depot maintenance costs increased $124 million (16 percent), and program officials said this increase was due to increasing rates for depot work, noting that the Air Logistics Centers increased their rates because of higher material costs. 
Also, officials said that as aircraft age the number of subsystems that require repair increases, which results in additional tasks being required during planned depot maintenance. For example, the F-15C/Ds that are expected to fly until 2025 will be completely rewired in planned depot maintenance because the original wiring is deteriorating. A similar program is planned in the future for the F-15Es and is expected to significantly increase the cost of planned depot maintenance for that aircraft. <3.3.2. Actual O&S Costs for the B-1B for Fiscal Years 1999 through 2009> Annual actual O&S costs for the Air Force s B-1B increased 21 percent from $1.1 billion in fiscal year 1999 to $1.3 billion in fiscal year 2009 (see fig. 4). This cost growth occurred despite a 29 percent reduction in the aircraft fleet from 93 to 66 during the same period. Per aircraft, O&S costs increased 71 percent from $11 million to $19 million, and the cost per flight hour increased 23 percent from $46,224 to $56,982. Our analysis showed that fuel, repair parts, and depot maintenance accounted for 97 percent of the overall increase in B-1B O&S costs from fiscal years 1999 through 2009. Fuel costs increased $89.4 million (40 percent), which program officials attributed mainly to higher fuel costs and increased utilization of the aircraft in recent years. Program officials reported that in each of the last 3 full fiscal years (2007, 2008, and 2009), the hourly utilization rate per aircraft was 46 percent, 51 percent, and 54 percent higher, respectively, than in fiscal year 1999. According to the program office, the increased cost for repair parts, which grew $51.9 million (23 percent), was due to the increased cost of materials consumed in the refurbishment of repair parts. Depot maintenance costs increased $77.1 million (34 percent), and program officials said this increase was due to higher utilization of aircraft, increased labor and material costs, and changes in cost accounting. <3.3.3. Actual O&S Costs for the AH- 64D, CH-47D, and UH-60L for Fiscal Years 1998 through 2007> The Army s O&S data on unit-level consumption costs for the AH-64D, CH- 47D, and UH-60L showed that all three experienced significant cost growth from fiscal years 1998 through 2007. However, as table 2 shows, the size of the fleets and numbers of flying hours also increased, with the AH-64D experiencing the greatest growth. According to Army officials, fiscal year 1998 costs reflected peacetime training only, whereas data for fiscal year 2007 also includes costs for units deployed in Afghanistan and Iraq. O&S costs for deployed units constituted more than half of the total O&S dollars for these aircraft in fiscal year 2007. Measured by flight hour, Army unit-level consumption costs increased 51 percent per flight hour for the CH-47D and 111 percent per flight hour for the UH-60L, and decreased 3 percent per flight hour for the AH-64D, from fiscal year 1998 to 2007. As discussed earlier in the report, unit-level consumption costs reported in the Army s VAMOSC system include fuel, materials and supplies, repair parts, and training munitions. As shown in table 3, fuel costs increased by more than 140 percent for all three systems, the costs of materials and supplies and repair parts also increased for each system, and the cost of training munitions decreased. 
The decreased cost of training munitions drove the overall decrease in unit-level consumption costs for the AH-64D, and a program official stated this was likely due to the significant amount of training conducted during the initial fielding of the AH-64D in 1998. <3.4. Updated Estimates of Life- Cycle O&S Costs and Documentation of Program Changes Are Generally Not Required after Weapon System Production Decisions> Even though periodic updates to life-cycle O&S cost estimates could quantify any cost growth in major weapon systems and help identify cost drivers, DOD acquisition and cost-estimating guidance do not require that O&S cost estimates be updated after a program has completed production. Service guidance that we reviewed does not consistently and clearly require the updating of O&S cost estimates after a program has completed production. Additionally, although our review showed program changes can have a large effect on actual O&S costs after cost estimates are developed at the production milestone, DOD and service acquisition guidance do not require program offices to maintain documentation of such changes for use in cost analysis. Federal law requires that a full life-cycle cost analysis for each major defense acquisition program be included in the programs annual Selected Acquisition Reports to Congress. Requirements related to Selected Acquisition Reports, however, end when a weapon system has reached 90 percent of production. In addition, we found that for the systems we reviewed, the estimated O&S costs included in the Selected Acquisition Reports were sometimes not updated. For our sample, the estimated O&S costs included in the annual reports for the F-22A remained unchanged from 2005 through 2007, and the services did not have current updated life- cycle O&S cost estimates for the other six weapon systems we reviewed. Further, while life-cycle costs are required to be reported in the Selected Acquisition Reports, OSD officials noted that the calculation of the estimate may be inconsistent. For example, cost analysts at the Naval Air Systems Command maintain a cost-estimating model for the F/A-18E/F that is regularly updated and used to develop O&S cost estimates for the Selected Acquisition Reports and other analyses to improve cost effectiveness. However, the methodology used to develop the Navy s cost estimates for the Selected Acquisition Reports differs from the methodology used to develop life-cycle cost estimates for acquisition milestone decisions and includes significantly more infrastructure costs. According to the Naval Air System Command guidance, the estimates for the Selected Acquisition Reports are not comparable to the acquisition milestone life-cycle cost estimates without adjusting for the different ground rules and assumptions used. The estimates for the Selected Acquisition Reports also are not comparable to the costs reported in the Navy s VAMOSC system. DOD acquisition policy requires the services to provide life-cycle O&S cost estimates for decisions made during specific points in the acquisition process, including the production decision, but neither this policy nor DOD s cost-estimating guidance require O&S cost estimates for systems that have been fielded. In a December 2008 memorandum, DOD also required that several metrics, including an ownership cost metric, be reported quarterly for all major weapon defense acquisition programs. 
However, this quarterly reporting policy does not currently apply to weapon systems that have completed production and are no longer reporting information in the Selected Acquisition Reports to Congress. Of the weapon systems we reviewed, program offices for the AH-64D, F-22A, and F/A-18E/F currently provide Selected Acquisition Reports to Congress. The Army regulation and Navy instructions we reviewed do not address updating life-cycle O&S cost estimates for systems that have been fielded. Although the Air Force has a directive requiring annual updates to program cost estimates, it does not specifically mention life-cycle O&S cost estimates. An Air Force directive issued in August 2008 includes the requirement that major acquisition program cost estimates be updated annually and used for acquisition purposes, such as milestone decisions, and other planning, programming, budgeting, and execution decisions. The directive also states that it is applicable to organizations that manage both acquisition and sustainment programs. However, as mentioned earlier, service and OSD officials were unable to locate O&S cost estimates for the F-15E and the B-1B aircraft. According to Air Force cost analysis and policy officials, the requirement for annual cost estimate updates is applicable to programs no longer in acquisition, but they are still developing the Air Force instruction that will contain more specific guidance for implementing the 2008 directive. The officials expect that, once issued, the Air Force instruction will clarify the requirement to update O&S cost estimates annually. In addition, changes in weapon system programs affected the assumptions used in production-milestone life-cycle O&S cost estimates, but DOD and service acquisition guidance that we reviewed do not explicitly require the services to maintain documentation of program changes affecting O&S costs. According to federal standards for internal control, information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Also, managers need to compare actual performance to planned or expected results and analyze significant differences. <4. DOD Has Departmentwide and Service-Specific Initiatives to Address Weapon System O&S Costs> <4.1. Several Departmentwide Initiatives Address Weapon System O&S Costs> DOD has several departmentwide initiatives to address weapon system O&S costs. The DOD-wide Reduction in Total Ownership Costs Special Interest Program, initiated in 2005, is aimed at reducing weapon system O&S costs by improving reliability and maintainability and reducing total ownership costs in weapon systems that are already fielded. Program funding totaled about $25 million in fiscal year 2009. For its 15 funded projects, DOD forecasts total ownership cost savings for fiscal years 2006 through 2011 to be $9.5 billion, with an average 60 to 1 return on investment. For example, according to officials, the program is funding an effort to develop trend analysis software to diagnose and resolve problems with the F/A-18 aircraft. Other departmentwide initiatives seek to better manage O&S costs of major weapon systems during the acquisition process. Some of these initiatives address factors we previously identified as negatively affecting DOD s ability to manage O&S costs. 
In 2003, we reported that DOD did not consider O&S costs and readiness as key performance requirements for new weapon systems and placed higher priority on technical performance features. In 2007, DOD began requiring the services to establish an ownership cost metric during the requirements determination and acquisition processes for weapon systems in order to ensure that O&S costs are considered early in decision making. According to current Joint Staff guidance, the ownership cost metric and reliability metric are key system attributes of the sustainment (or materiel availability) key performance parameter. While the ownership cost metric includes many of OSD's recommended O&S cost elements, such as energy (fuel, oil, petroleum, electricity, etc.), maintenance, sustaining support, and continuous system improvements, it does not include personnel and system-specific training costs. In 2008, OSD expanded the use of the ownership cost and materiel reliability metrics, along with the materiel availability key performance parameter, to all major defense acquisition programs that provide information to Congress in Selected Acquisition Reports. In a July 2008 memorandum intended to reinforce the use of the life-cycle metrics, OSD requested that these programs develop target goals for each metric within 60 days. In a December 2008 memorandum, OSD asked the services to begin reporting against the target goals on a quarterly basis. According to OSD officials, they are working with the services to improve the accuracy and submission of the reported cost information. We also previously noted that DOD used immature technologies in designing its weapon systems, which contributed to reliability problems and acted as a barrier to using manufacturing techniques that typically help reduce a system's maintenance costs. DOD has identified insufficient reliability designed into systems during acquisition as one of the key reasons for increases in O&S costs. Based on the recommendation of the DOD Reliability Improvement Working Group, DOD's primary acquisition instruction was updated in 2008 to include guidance directing program managers to develop reliability, availability, and maintainability strategies that include reliability growth as an integral part of design and development. Further, the instruction states that reliability, availability, and maintainability shall be integrated within systems engineering processes; documented in system plans; and assessed during programmatic reviews. DOD has also taken steps to improve the information available for cost estimating and monitoring of actual O&S costs. In 2008, we reported that for the performance-based logistics arrangements we reviewed, program offices often did not have detailed cost data that would provide insights regarding what the program office was spending for various aspects of the support program. That same year, DOD's primary acquisition instruction was updated to include a requirement that sustainment contracts provide for detailed contractor cost reporting for certain major programs to improve future cost estimating and price analysis. However, the instruction does not provide details as to the timing or content of such cost reporting. Officials in OSD Cost Assessment and Program Evaluation are currently drafting additional guidance to clarify the cost-reporting requirement.
Additionally, OSD Cost Assessment and Program Evaluation initiated an effort in 2008 to collect actual operational testing and evaluation information and make it available to cost analysts for use in developing weapon system cost estimates. According to OSD officials, actual test data could improve these estimates by providing cost analysts more accurate information. In support of the initiative, the services have collected over 150 test data reports from their operational testing agencies. Although cost analysis officials indicated that they have not yet used the test data in preparing cost estimates, there is a high level of interest in the information contained in the test reports as evidenced by the number of times the data have been accessed. Officials noted that research is ongoing, particularly within the Army, to develop quantitative tools that link operational test results with O&S cost estimates. <4.2. The Services Have Initiatives to Help Them Better Manage Aviation System O&S Costs> The services also identified initiatives to help them better manage aviation system O&S costs. Although one Army command had an O&S cost- reduction program, none of the services had cost-reduction programs implemented servicewide. According to Army officials, the most direct aviation O&S cost-reduction initiative within that service is the Aviation and Missile Life Cycle Management Command s O&S Cost Reduction program. Under the program, the command investigates fielded aviation systems with high failure rates and high costs and attempts to reduce costs by funding projects aimed at reliability improvements, life-cycle extensions, and acquisition cost reductions. According to Army officials, the annual budget for this program is $10 million to $12 million per year, and most projects predict at least a 2.5 to 1 return on investment. Examples of funded projects include developing a fuel additive and reducing corrosion in CH-47 aircraft blades. Officials also noted that other Army initiatives during the last several years include a renewed emphasis on the importance of estimating total life-cycle costs during the weapon system acquisition process and the establishment of draft guidance for the inclusion of Operations and Maintenance funding projections within acquisition program affordability charts used during certain weapon system acquisition reviews. In addition, the Army conducts annual weapon systems reviews at which program managers present current and emerging life-cycle weapon system funding requirements based on the latest Army or program office cost estimate developed for the system. Army officials said these initiatives can help the Army in better managing O&S costs. While the Navy could not identify initiatives designed specifically to reduce O&S costs for its aviation systems, Navy officials said the Naval Aviation Enterprise, a working group of naval aviation stakeholders, was established in 2004 to meet multiple goals, including exchanging information to reduce O&S costs. Through cross-functional teams, subject- matter experts collaborate to resolve problems and improve operations. The Navy stated that, as a result of this initiative, it achieved O&S cost savings of $50 million from its flying-hour program in fiscal year 2005. Additionally, Navy officials cited the establishment of Fleet Readiness Centers as an initiative that could lead to O&S cost reduction in aviation systems. 
Created as part of the Base Realignment and Closure process in 2005, the Fleet Readiness Centers aim to improve maintenance efficiency and reduce costs by combining intermediate- and depot-level maintenance personnel. As a result, the Navy expects avoidance of unwarranted maintenance procedures, reduced turnaround times, an increase in completed repairs, and reduced maintenance costs. Although the Navy is expected to achieve cost savings from the Fleet Readiness Centers, we reported in 2007 that the projected savings are likely to be overstated. The Air Force also lacks initiatives specifically designed to reduce O&S costs of aviation systems. Air Force officials noted, however, that improved management of O&S costs could result from its Expeditionary Logistics for the 21st Century program. The program is a logistics process-improvement effort that was started in 2005 under a larger program called Air Force Smart Operations for the 21st Century, which is the guiding program for all transformation efforts within the Air Force. Although one goal of the program is to reduce O&S costs by 10 percent, Air Force officials said program initiatives to date do not focus on specific weapon systems. <4.3. DOD's Recent Assessment Identified Problems and Recommended Actions to Improve Weapon System Product Support> A DOD Product Support Assessment Team led by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics recently concluded a year-long study of weapon system product support, and in November 2009 issued a report with recommendations to improve weapon system life-cycle sustainment. With regard to O&S costs, the report cited inadequate visibility of O&S costs as one of several problems that hinder weapon system life-cycle support management. According to the report, DOD does not have adequate visibility of O&S costs; lacks a process to systematically track and assess O&S costs; and lacks valid, measurable sustainment metrics to accurately assess how programmatic decisions will affect life-cycle costs. Further, the report states that DOD cannot identify, manage, and mitigate major weapon system cost drivers. To address identified deficiencies in O&S cost management, the Product Support Assessment Team recommended (1) establishing an O&S affordability requirement, including linking O&S budgets to readiness, (2) developing and implementing an affordability process with all DOD stakeholders (such as the financial and program management communities), and (3) increasing the visibility of O&S costs and their drivers across the supply chain. In addition to the deficiencies identified with regard to O&S cost management, the Product Support Assessment Team also found deficiencies in DOD's sustainment governance. Governance is defined by the Product Support Assessment Team as the consistent and cohesive oversight across the management, policies, processes, and decision making for sustainment to ensure that sustainment information is a critical component of weapon system acquisition and throughout the life cycle. The report noted that every programmatic decision made during the life cycle of a weapon system should be made with the knowledge of how that decision will affect the life-cycle support of that system. However, the report stated that this has been difficult within DOD due to the lack of perceived relative importance of long-term costs and lack of valid, measurable support metrics, especially cost projections.
To address identified deficiencies in sustainment governance, the Product Support Assessment Team recommended (1) strengthening guidance so that sustainment factors are sufficiently addressed and governed at key life- cycle decision points, (2) issuing DOD policy to require the services to conduct independent logistics assessments prior to acquisition milestones, and (3) creating a post-initial-operating-capability review that includes an assessment of known support issues and potential solutions. OSD has formed three Integrated Product Teams to further develop and lead the implementation of the Product Support Assessment Team recommendations over a 3-year period. While the report highlighted some of the limitations on assessing and managing O&S costs, the current recommendations do not identify specific actions or enforcement measures. One of the first changes resulting from the Product Support Assessment Team recommendations was a new DOD effort in April 2010 to begin reviews of sustainment costs for all acquisition category ID weapon system programs and address sustainment factors at milestone decision and other review points during the acquisition process. Under new DOD guidance, program managers for these programs are to use a sustainment chart to facilitate the reviews and provide information on support strategy, metrics, and costs in a standardized format. Specifically, the chart should include the original O&S cost baseline, as reported in the initial Selected Acquisition Report for the system, as well as current program costs according to the most recent projections. Further, the current estimated total O&S costs for the life cycle of the system should also be included, along with the antecedent system s cost for comparison. <4.4. No Single Individual or Entity Is Empowered to Control O&S Costs> A related factor that has historically challenged DOD s ability to reduce weapon system O&S costs is that no single individual or entity within the department is empowered to control these costs. A variety of offices within the services and DOD are involved in the decision making that affects sustainment. Though DOD has designated the program manager as responsible for many aspects of weapon system life-cycle sustainment planning, many decisions and processes are outside of the program manager s control. Using aviation systems as an example, these decisions and processes include budget determination, funding processes, the number and pay of personnel assigned to support aircraft, the number of aircraft procured, the number of hours flown, the aircraft basing locations, and the rates charged by depot maintenance facilities. After the aircraft are produced, program managers have only a limited ability to directly affect O&S costs. Army aviation officials, for example, indicated that during the sustainment phase, program managers control only the budgets for program-related logistics and engineering support, retrofit modifications, and technical manuals, which account for only a small percentage of total O&S costs. In addition, it is likely that multiple individuals will serve as the weapon system s program manager over its life-cycle. For example, the average tenure for a program manager is roughly 17 months, whereas the average life of a major weapon system often exceeds 20 years. This turnover results in program managers bearing responsibility for the decisions of their predecessors, making it difficult to hold the program manager accountable for growth in the system s O&S costs. 
Finally, a weapon system s long life-cycle also affects cost-reduction initiatives, as it may take many years for some of the initiatives to produce returns on investment. <5. Conclusions> In the absence of key information on O&S costs for its major weapon systems, DOD may not be well-equipped to analyze, manage, and reduce these costs. While the military services are required to develop life-cycle O&S cost estimates to support production decisions, DOD cannot fully benefit from these estimates if they are not retained. If cost-estimating best practices are followed, the estimates, among other things, can provide a benchmark for subsequent cost analysis of that system, enable the identification of major cost drivers, and aid in improving cost estimating for future systems. Similarly, in the absence of more complete historical data on a weapon system s actual O&S costs in their VAMOSC systems, the services are not in a good position to track cost trends over time, compare these actual costs with previous estimates, and determine whether and why cost growth is occurring. While all the services VAMOSC systems have deficiencies, the Army s system has the greatest limitations. We reported on these limitations 10 years ago and recommended improvements, but the Army has not made significant improvements since then. Moreover, without periodically updating life-cycle O&S cost estimates and documenting program changes affecting O&S costs after a system is fielded, DOD managers lack information necessary to compare actual performance to planned or expected results, as stated in federal standards for internal control. DOD has begun to recognize that greater management emphasis should be placed on better managing weapon system O&S costs, as indicated by several current and planned initiatives. The department furthermore has acknowledged deficiencies in O&S cost visibility and noted that every programmatic decision made during the entire life cycle of a DOD weapon system should be made with the knowledge of how that decision will affect the life-cycle sustainment of that system. Finally, citing the economic and fiscal challenges the nation faces along with the prospects for greatly reduced defense budgets, the Secretary of Defense highlighted the need for DOD to take a more aggressive approach to reducing its spending and finding efficiencies where possible in order to better afford its force structure and weapon system modernization priorities. These competing budget priorities provide additional impetus for DOD to manage and reduce weapon system O&S costs. <6. Recommendations for Executive Action> To improve DOD s ability to manage and reduce O&S costs of weapon systems over their life cycle, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics and the Director of OSD Cost Assessment and Program Evaluation to take the following five actions: Revise DOD guidance to require the services to retain life-cycle O&S cost estimates and support documentation used to develop the cost estimates for major weapon systems. This requirement should apply to cost estimates developed by weapon system program offices and other service offices, including cost analysis organizations. Furthermore, this requirement should include cost estimates prepared during the acquisition process as well as those prepared after a system is fielded. 
Identify the cost elements needed to track and assess major weapon systems actual O&S costs for effective cost analysis and program management, and require the services to collect and maintain these elements in their VAMOSC systems. To the extent possible, data collected on actual O&S costs should be comparable to data presented in life-cycle cost estimates. To oversee compliance with this new requirement, DOD should require the services to identify any gaps where actual cost data are not being collected and maintained and to identify efforts, along with timelines and resources, for filling these gaps. Direct the Army to develop and implement a strategy for improving its VAMOSC system. This strategy should include plans for incorporating additional cost elements from other information systems, time frames for expanding on existing cost elements, and resources required to improve the VAMOSC system. Require the services to periodically update their life-cycle O&S cost estimates for major weapon systems throughout their life cycle. These updates should provide an assessment of cost growth since the prior estimate was developed and account for any significant cost and program changes. Develop guidance for documenting and retaining historical information on weapon system program changes to aid in effective analysis of O&S costs. DOD should determine, in conjunction with service acquisition and cost analysis officials, the types of information needed and the level of detail that should be retained. We also recommend that the Secretary of Defense require that the Director of OSD Cost Assessment and Program Evaluation retain any independent life-cycle O&S cost estimates prepared by that office along with support documentation used to develop these cost estimates for major weapon systems. <7. Agency Comments and Our Evaluation> In its written comments on a draft of this report, DOD generally concurred with our recommendations, noting that the department is committed to strengthening its O&S data availability as well as its use of O&S estimates in the governance process for major defense acquisition programs. DOD also stated that it will take steps to update its policy to ensure that O&S cost estimates are retained, along with supporting documentation. Specifically, the department fully concurred with four recommendations and partially concurred with two. The department s written comments are reprinted in appendix IV. DOD also provided technical comments that we have incorporated into this report where applicable. DOD concurred with our four recommendations to revise guidance to require the services to retain life-cycle O&S cost estimates and support documentation used to develop the cost estimates; develop guidance for documenting and retaining historical information on weapon system program changes to aid in effective analysis of O&S costs; require that the Director of the Cost Assessment and Program Evaluation retain any independent life-cycle O&S cost estimates prepared by that office, along with support documentation used to develop these cost estimates for major weapon systems; and revise DOD guidance to require the services to periodically update life-cycle O&S cost estimates for major weapon systems throughout their life cycle and assess program changes and cost growth. 
While DOD concurred with our recommendation to periodically update life-cycle O&S cost estimates for major weapon systems, the department noted that the Navy is concerned about the additional cost and personnel related to this requirement. We maintain that periodic estimates that quantify and assess changes in weapon systems' O&S costs will assist with the identification of prospective areas for cost reduction and improve DOD's ability to estimate O&S costs in the future. Therefore, the resulting benefits from periodic analysis of O&S costs will likely be greater than the incremental costs associated with the additional resources. DOD partially concurred with our recommendation to identify the cost elements needed to track and assess major weapon systems' actual O&S costs for effective cost analysis and program management, require the services to collect and maintain these elements in their VAMOSC systems, and require the services to identify elements where actual cost data are not being collected and maintained, along with efforts for filling these data gaps. However, the department noted that while DOD will coordinate internally to address this issue, the Director of the Cost Assessment and Program Evaluation office should be directed to take this action in lieu of the Under Secretary of Defense for Acquisition, Technology and Logistics. DOD's comments further noted that these two OSD offices would coordinate with one another to implement other recommendations we made. We have modified our recommendations to reflect that both the Under Secretary of Defense for Acquisition, Technology and Logistics and the Director of the Cost Assessment and Program Evaluation office will need to play key roles in implementing these recommendations. DOD also partially concurred with our recommendation that the Army develop and implement a strategy for improving its VAMOSC system. DOD stated that while the Army will develop such a strategy, the Army maintains that its military personnel costs are collected by a separate database, the Army Military-Civilian Cost System, and although the costs are not captured by weapon system fleet, the data are sufficient for O&S cost-estimating purposes. The Army also pointed out that it has made progress in collecting contractor logistics support cost data. Specifically, the Army stated that guidance issued in 2008 has led to cost-reporting requirements (that is, requirements that the contractor provide details regarding support costs by cost element) being included in new support contracts. Further, the Army noted that a future information system should be able to capture contractor support cost data. As we stated in our report, new Army systems may improve the availability of actual O&S cost data. However, these systems are still being developed. Even with these planned information systems, it is unclear what additional O&S cost data will be collected, how quickly the Army will be able to incorporate the data into its VAMOSC system, what resources may be needed, or what additional limitations the service may face in improving its VAMOSC system. We based our recommendation on DOD guidance regarding the VAMOSC systems. As we state in our report, DOD required that the O&S costs incurred by each defense program be maintained in a historical O&S data-collection system and designated the services' VAMOSC systems as the authoritative source for these cost data.
Therefore, we continue to believe the Army needs a strategy for improving the cost data available in its VAMOSC system. While generally concurring with our recommendations, DOD's response noted that there are over 150 major defense acquisition programs across the departments and agencies, ranging from missile defense systems to combat vehicles, with each program having unique challenges in data reporting. Although DOD agreed that our report was reasonable in its analysis of the seven programs reviewed, it emphasized that the problems encountered with our sample may not be found across the entire department. While we solicited DOD's and the services' inputs to try to avoid selecting weapon systems with known data limitations, we agree with DOD and our report clearly states that we selected a nonprobability sample for our review and, therefore, the results cannot be used to make inferences about all major weapon systems. DOD's response also noted that while our report recognizes the recent initiatives the department has established to track and prevent future O&S cost growth, the effects of these initiatives are generally not reflected in the systems we analyzed. According to DOD's comments, a review of at least one pre-major defense acquisition program would have allowed us to assess the potential long-term effect of these initiatives with respect to controlling O&S cost growth. While we agree that a review of the effectiveness of recent initiatives would be beneficial in the future, many of the initiatives were only implemented in the last several years and are likely too new to demonstrate improvements. Further, the scope of our work was limited to a comparison of the original O&S cost estimates developed for selected major weapon systems to the actual O&S costs incurred in order to assess the rate of cost growth. Therefore, we selected systems that had previously passed through DOD's acquisition process, achieved initial operating capability, and been fielded for at least several years. These systems were not affected by DOD's recent initiatives. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Under Secretary of Defense for Acquisition, Technology and Logistics; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions concerning this report, please contact me on (202) 512-8246 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix V. Appendix I: Scope and Methodology To conduct our review of growth in operating and support (O&S) costs for major weapon systems, we collected and analyzed data on seven major aviation systems: the Navy's F/A-18E/F; the Air Force's F-22A, B-1B, and F-15E; and the Army's AH-64D, CH-47D, and UH-60L. We focused on aviation systems to enable comparisons of cost growth, where possible, across the selected systems. For example, some factors driving cost growth in an aviation system may be more applicable to other types of aircraft than to maritime or land systems. We selected aviation systems that had reached initial operating capability after 1980 and had incurred several years of actual O&S costs, indicating a level of maturity in the program.
The newest system in our sample, the F-22A, has been fielded for about 4 years, and the oldest system, the CH-47D, has been fielded for about 17 years. We limited our selection to aviation systems that had relatively large fleets, avoiding low-density systems for which cost data may have been anomalous. We also selected the systems to reflect varied characteristics in terms of military service, mission, and support strategy. However, we did not include a Marine Corps aviation system in our sample because the Naval Air Systems Command manages and supports all Marine Corps aircraft. We also did not select systems with known limitations of available data on actual O&S costs. For example, we have previously reported that some systems supported under performance-based logistics arrangements may not have detailed cost data available because the Department of Defense (DOD) has not required the contractor to provide these data. In considering which systems to select for our review, we also obtained input from DOD and service officials. The results from this nonprobability sample cannot be used to make inferences about all aviation systems or about all major weapon systems because the sample may not reflect all characteristics of the population. The following is an overview of each system selected for our review: The F/A-18E/F Super Hornet is an all-weather attack aircraft as well as a fighter. It performs a variety of missions, including air superiority, fighter escort, reconnaissance, aerial refueling, close air support, air defense suppression, and day/night precision strike. The F/A-18E/F entered full rate production in January 2000 and established initial operational capability in September 2001. As of the end of fiscal year 2009, the Navy had 358 F/A-18E/F aircraft. The F-22A Raptor is the Air Force's newest fighter aircraft and performs both air-to-air and air-to-ground missions. Officials stated that the program received approval to enter into full rate production in April 2005 and established initial operating capability in December 2005. Currently, the Air Force plans to buy 187 F-22A aircraft. The F-15E Strike Eagle is a dual-role fighter designed to perform air-to-air and air-to-ground missions. Officials indicated that the program received approval to enter into full rate production in early 1986 and established initial operating capability in September 1989. As of the end of fiscal year 2009, the Air Force had 223 F-15E aircraft. The B-1B Lancer is a multimission long-range bomber designed to deliver massive quantities (74,000 pounds) of precision and nonprecision weapons. The Air Force received the first B-1B in April 1985 and established initial operating capability in September 1986. As of the end of fiscal year 2009, the Air Force had 66 B-1B aircraft. The AH-64D Apache Longbow is the Army's heavy division/corps attack helicopter. It is designed to conduct rear, close, and shaping missions, as well as distributed operations and precision strikes. In addition, the AH-64D is designed to provide armed reconnaissance during day or night, in obscured battlefields, and in adverse weather conditions. The original Apache entered Army service in 1984, and the AH-64D followed in 1998. As of the end of fiscal year 2009, the Army had 535 AH-64D aircraft. The UH-60L Black Hawk is a twin-engine helicopter that is used in the performance of the air assault, air cavalry, and aeromedical evacuation missions. The UH-60L is an update to the original UH-60A, which entered Army service in 1979.
As of the end of fiscal year 2009, the Army had 564 UH-60L aircraft. The CH-47D Chinook is a twin-engine, tandem-rotor transport helicopter that carries troops, supplies, ammunition, and other battle-related cargo. Between 1982 and 1994, the Army upgraded all early models (the CH-47A, B, and C) to the CH-47D, which features composite rotor blades, an improved electrical system, modularized hydraulics, triple cargo hooks, and more powerful engines. As of the end of fiscal year 2009, the Army had 325 CH-47D aircraft. To determine the extent to which (1) life-cycle O&S cost estimates developed during acquisition and data on actual O&S costs are available for program management and decision making and (2) DOD uses life-cycle O&S cost estimates for major weapon systems after they are fielded to quantify cost growth and identify its causes, we identified available cost estimates, compared the estimates with actual cost data, and obtained additional information on how O&S costs are tracked, assessed, managed, and controlled. We requested documentation from the services and the Office of the Secretary of Defense (OSD) on life-cycle O&S cost estimates that the services prepared during acquisition to support the decision to proceed with production of the aircraft in our sample. We also requested documentation of O&S cost estimates that OSD may have independently prepared for this milestone decision. We focused on the production milestone because, while life-cycle cost estimates may be developed during earlier stages of the acquisition process, DOD cost-estimating guidance states that cost estimates for the production milestone should be based on the current design characteristics of the weapon system, the latest deployment schedule, and the latest operation and maintenance concept. In addition, we requested documentation from the services for any current updates to life-cycle O&S cost estimates that may have been developed after the systems were fielded. We also obtained information from weapon system program offices on their practices for retaining information regarding program changes affecting O&S costs. To identify requirements for conducting, updating, and retaining cost estimates, we reviewed Office of Management and Budget guidance, DOD and service acquisition and cost estimation guidance, and federal guidance on cost-estimating best practices. For actual historical data on weapon system O&S costs, we obtained access to the services' Visibility and Management of Operating and Support Costs (VAMOSC) systems that have been designated as the authoritative sources of these data. We worked with service cost analysis officials to understand how data in these systems are organized and how to query them for data on our selected aviation systems. To assess the reliability of the data, we surveyed cost analysis officials. For example, we obtained information on specific cost elements that were collected, data sources, and efforts to improve the completeness and accuracy of collected data. We also reviewed DOD and service guidance on the VAMOSC systems and cost element structure, and we reviewed prior GAO and DOD assessments of the availability of actual O&S cost data for DOD weapon systems. We identified limitations in the data and discuss these in our report.
Taking these limitations into account, we determined that the available data were sufficiently reliable to compare estimated to actual costs for the F-22A and F/A-18E/F, the two systems in our sample for which we were able to obtain the production milestone life-cycle O&S cost estimate, and also to present an analysis of changes in actual costs over time for the other five systems. In comparing estimated to actual costs for the F-22A and the F/A-18E/F, we analyzed differences that occurred each year, determined which cost elements experienced the greatest changes over time, and reviewed how actual program conditions compared to the assumptions used to develop the production milestone cost estimate. In addition, we met with cost analysis experts from the Center for Naval Analyses and the Institute for Defense Analyses and obtained the results of an Institute for Defense Analyses study on O&S costs for the Air Force's C-17 aircraft that had been prepared at the request of the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. For the five weapon systems in our sample where production milestone life-cycle O&S cost estimates were unavailable, we obtained and analyzed data on actual O&S costs from the services' VAMOSC systems. This analysis was subject to the limitations in the data that we identified for each of the services' VAMOSC systems, as discussed in the report. We met with officials responsible for each selected weapon system to discuss issues related to the management of the program and cost trends. In our analysis of O&S costs, we have adjusted DOD data to reflect constant fiscal year 2010 dollars, unless otherwise noted. Throughout this report, all percentage calculations are based on unrounded numbers. To identify efforts taken by DOD to reduce O&S costs, we interviewed cognizant OSD and service officials involved in weapon system acquisition, logistics, and program management. For specific initiatives, we obtained documents that described their objectives, time frames, and other information. In addition, we obtained and reviewed pertinent guidance on performance management and internal control practices in the federal government. We also reviewed a report issued in November 2009 by the DOD Product Support Assessment Team. Finally, we consulted prior O&S studies performed by DOD, the services' audit entities, and GAO. During our review, we conducted work at the DOD and service offices as shown in table 4 (located in the Washington, D.C., area unless indicated otherwise). We conducted this performance audit from June 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Results of OSD-Sponsored Analysis of C-17 Aircraft This appendix provides further information on an Office of the Secretary of Defense-sponsored study of operating and support (O&S) cost growth for the Air Force's C-17 aircraft. The Institute for Defense Analyses (IDA) conducted the study for the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. According to an IDA analyst, the study began in 2007 and was completed in April 2009.
We did not evaluate the study's methodology, results, or conclusions. The intent of the study was to demonstrate various analytic methods for monitoring major weapon system reliability, maintainability, availability, and O&S costs against baseline targets throughout the life cycle. IDA obtained O&S cost estimates developed by the Air Force during the acquisition of the C-17, compared them to actual fiscal year 2009 O&S costs (estimated using DOD's recommended cost element structure), and developed an updated life-cycle cost estimate using actual O&S cost data. In its report, IDA showed that the C-17's estimated life-cycle O&S costs increased from $91.6 billion to $118.1 billion (29 percent) from 1985 through 2009. The estimated cost growth occurred despite a decrease in the total aircraft inventory from a projected 210 down to an actual total of 190. Further, the study reported that the C-17's cost per flight hour increased 43 percent from an estimated $13,989 in 1985 to an estimated $19,995 in 2009. According to the study, major cost drivers were fuel consumption, materials and supplies, repair parts, airframe overhaul, engine overhaul, and sustaining engineering/program management. According to IDA's report, the C-17 program experienced changes during and after acquisition that affected the comparison of the updated O&S cost estimates developed using actual O&S costs to the originally estimated O&S costs. The report grouped the factors that caused O&S cost growth into three categories: internal program factors, external program factors, and accounting factors. According to an IDA analyst involved with the study, variances due to internal program factors are defined as those that were influenced by the aircraft's program managers. Such factors identified in the study included system design, reliability, and maintenance support concepts. For example, the report noted that the C-17 transitioned from planned government-provided support to contractor logistics support, and this change greatly complicated the analysis and became a major aspect of the study. IDA attributed cost increases for sustaining engineering/program management, contractor field service representatives, contractor training support, and engine depot-maintenance costs to this change in support concept. Further, the C-17's airframe weight increased during development, which led to increased fuel consumption and higher fuel costs. Finally, system modifications increased in scope, which led to additional cost increases. Changes in costs due to external program factors are defined as those that were generally beyond the control of program managers, according to the IDA analyst. These factors included changes to system quantities or delivery schedules, basing and deployment plan changes, and higher system-operating tempos due to contingencies. For example, the change from 210 to 190 aircraft reduced total costs; a change to the mix of active and reserve units from 73 percent active to 90 percent active increased costs; and personnel costs increased due to growth in incentive pay, housing, and medical care costs. Finally, according to the IDA analyst, variances from accounting factors are defined as those that resulted from differences in the way costs were categorized over time. Accounting factor changes that affected C-17 O&S costs included a change in the scope of DOD's indirect costs; changes in personnel accounting; and changes to the timing of the weapon system's phase-in, steady state, and phase-out periods.
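As a quick consistency check on the C-17 growth percentages cited above (a reader's recalculation from the reported dollar values, not part of the IDA analysis), both figures follow directly from the estimates:

% Recalculation of the two C-17 growth percentages from the reported values
\[
\frac{118.1 - 91.6}{91.6} \approx 0.29 \qquad \text{(29 percent growth in estimated life-cycle O\&S costs)}
\]
\[
\frac{19{,}995 - 13{,}989}{13{,}989} \approx 0.43 \qquad \text{(43 percent growth in cost per flight hour)}
\]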
On the basis of its C-17 analysis, IDA concluded that any mechanism to track and assess weapon system O&S costs against baseline estimates would require a systematic and institutional methodology that does not currently exist within DOD. According to the report, the methodological approach that was used in the study was ad hoc, labor intensive, and dependent on analyst judgment. The study suggested that, in the absence of a more systematic, institutional methodology, DOD could instead track major O&S cost drivers such as reliability, fuel consumption, maintenance manning per aircraft, and dollars per airframe overhaul. However, the exact metrics DOD would use depend on how the department plans to use the data in managing the O&S costs of its weapon systems and how the data would be used in decision making. Appendix III: Analysis of Changes between Estimated and Actual O&S Costs for the Navy's F/A-18E/F This appendix provides a detailed breakdown, by cost element, of total estimated and actual operating and support (O&S) costs for the Navy's F/A-18E/F for the period of fiscal years 1999 through 2009 (see table 5). The estimated costs were obtained from the Navy's O&S life-cycle cost estimates prepared for the 1999 production milestone. Data on actual O&S costs were obtained from the Navy's Visibility and Management of Operating and Support Costs (VAMOSC) system. Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments <8. Staff Acknowledgments> In addition to the contact name above, the following staff members made key contributions to this report: Tom Gosling, Assistant Director; Tracy Burney; Sandra Enser; Kevin Keith; James Lackey; Charles Perdue; Richard Powelson; Janine Prybyla; Jennifer Spence; and Alyssa Weir. Related GAO Products Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Littoral Combat Ship: Actions Needed to Improve Operating Cost Estimates and Mitigate Risks in Implementing New Concepts. GAO-10-257. Washington, D.C.: February 2, 2010. Defense Acquisitions: Army Aviation Modernization Has Benefited from Increased Funding but Several Challenges Need to be Addressed. GAO-09-978R. Washington, D.C.: September 28, 2009. Defense Acquisitions: Assessments Needed to Address V-22 Aircraft Operational and Cost Concerns to Define Future Investments. GAO-09-482. Washington, D.C.: May 11, 2009. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009. Defense Logistics: Improved Analysis and Cost Data Needed to Evaluate the Cost-effectiveness of Performance Based Logistics. GAO-09-41. Washington, D.C.: December 19, 2008. Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008. Defense Acquisitions: Fundamental Changes Are Needed to Improve Weapon Program Outcomes. GAO-08-1159T. Washington, D.C.: September 25, 2008. Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007. Air Force Depot Maintenance: Improved Pricing and Cost Reduction Practices Needed. GAO-04-498.
Washington, D.C.: June 17, 2004. Military Personnel: Navy Actions Needed to Optimize Ship Crew Size and Reduce Total Ownership Costs. GAO-03-520. Washington, D.C.: June 9, 2003. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Defense Logistics: Opportunities to Improve the Army s and the Navy s Decision-making Process for Weapons System Support. GAO-02-306. Washington, D.C.: February 28, 2002. Defense Acquisitions: Navy and Marine Corps Pilot Program Initiatives to Reduce Total Ownership Costs. GAO-01-675R. Washington, D.C.: May 22, 2001. Defense Acquisitions: Higher Priority Needed for Army Operating and Support Cost Reduction Efforts. GAO/NSIAD-00-197. Washington, D.C.: September 29, 2000. Defense Acquisitions: Air Force Operating and Support Cost Reductions Need Higher Priority. GAO/NSIAD-00-165. Washington, D.C.: August 29, 2000. Financial Systems: Weaknesses Impede Initiatives to Reduce Air Force Operations and Support Costs. GAO/NSIAD-93-70. Washington, D.C.: December 1, 1992. Navy Fielded Systems: Operating and Support Costs Not Tracked. GAO/NSIAD-90-246. Washington, D.C.: September 28, 1990.
Why GAO Did This Study The Department of Defense (DOD) spends billions of dollars each year to sustain its weapon systems. These operating and support (O&S) costs can account for a significant portion of a system's total life-cycle costs and include costs for repair parts, maintenance, and personnel. The Weapon Systems Acquisition Reform Act of 2009 directs GAO to review the growth in O&S costs of major systems. GAO's report addresses (1) the extent to which life-cycle O&S cost estimates developed during acquisition and actual O&S costs are available for program management and decision making; (2) the extent to which DOD uses life-cycle O&S cost estimates after systems are fielded to quantify cost growth and identify its causes; and (3) the efforts taken by DOD to reduce O&S costs for major systems. GAO selected seven aviation systems that reflected varied characteristics and have been fielded for at least several years. These systems were the F/A-18E/F, F-22A, B-1B, F-15E, AH-64D, CH-47D, and UH-60L. What GAO Found DOD lacks key information needed to effectively manage and reduce O&S costs for most of the weapon systems GAO reviewed--including life-cycle O&S cost estimates and complete historical data on actual O&S costs. The services did not have life-cycle O&S cost estimates developed at the production milestone for five of the seven aviation systems GAO reviewed, and current DOD acquisition and cost-estimating guidance does not specifically address retaining these estimates. Also, the services' information systems designated for collecting data on actual O&S costs were incomplete, with the Army's system having the greatest limitations on available cost data. Without historical cost estimates and complete data on actual O&S costs, DOD officials do not have important information necessary for analyzing the rate of O&S cost growth for major systems, identifying cost drivers, and developing plans for managing and controlling these costs. At a time when the nation faces fiscal challenges and defense budgets may become tighter, the lack of this key information hinders sound weapon system program management and decision making in an area of high costs to the federal government. DOD generally does not use updated life-cycle O&S cost estimates to quantify cost growth and identify cost drivers for the systems GAO reviewed. The services did not periodically update life-cycle O&S cost estimates after production was completed for six of the seven systems. The F-22A program office had developed an updated life-cycle O&S cost estimate in 2009 and found a 47-percent ($19 billion) increase in life-cycle O&S costs from what had been previously estimated in 2005. GAO's comparisons of estimated to actual O&S costs for two of the seven systems found some areas of cost growth. However, notable changes such as decreases in the numbers of aircraft and flying hours occurred in both programs after the production milestone estimates were developed, complicating direct comparisons of estimated to actual costs. According to federal guidance, agencies should have a plan to periodically evaluate program results as these may be used to determine whether corrections need to be made and to improve future cost estimates. However, DOD acquisition and cost estimation guidance does not require that O&S cost estimates be updated throughout a system's life cycle or that information on program changes affecting the system's life-cycle O&S costs be retained.
The services' acquisition and cost-estimation guidance that GAO reviewed does not consistently and clearly require the updating of O&S cost estimates after a program has ended production. DOD has several departmentwide and service-specific initiatives to address O&S costs of major systems. One DOD program funds projects aimed at improving reliability and reducing O&S costs for existing systems. Other initiatives are aimed at focusing attention on O&S cost requirements and reliability during the acquisition process. In a recent assessment, DOD identified weaknesses in O&S cost management, found deficiencies in sustainment governance, and recommended a number of corrective actions. Many of DOD's initiatives are recent or are not yet implemented. What GAO Recommends GAO recommends that DOD take steps to retain life-cycle O&S cost estimates for major systems, collect additional O&S cost elements in its visibility systems, update life-cycle O&S cost estimates periodically after systems are fielded, and retain documentation of program changes affecting O&S costs for use in cost analysis. DOD concurred with GAO's recommendations.
<1. Background> In general, federal housing assistance is available only to people or households that have low incomes. Consequently, income, not age, is the single biggest factor in deciding on an elderly person s need and eligibility for federal housing assistance. HUD also identifies problems that, regardless of age, exacerbate a person s need for assisted housing. These problems include housing that costs more than 30 percent of a person s income or is inadequate or substandard. Figure 1 shows the magnitude of the housing needs among low-income elderly households in each state. According to HUD, the need for housing assistance, for the elderly as for the general population, far outstrips the federal resources available to address that need. As a result, federal housing assistance, which is provided through a variety of programs, reaches just over one-third of the elderly households that need assistance. Furthermore, most of the programs are maintaining, rather than increasing, the level of assistance they provide. Only two of these programs Section 202 and HOME are under HUD s jurisdiction and are receiving annual appropriations for the sole purpose of increasing housing assistance for elderly and other households. Under the Section 202 program, HUD provides funding to private nonprofit organizations to expand the supply of housing for the elderly by constructing or rehabilitating buildings or by acquiring existing structures from the Federal Deposit Insurance Corporation. Since it was first created in 1959, the Section 202 program has provided over $10 billion to the sponsors of 4,854 projects containing 266,270 housing units. At the same time that HUD awards Section 202 funds, it enters into contracts with these nonprofit organizations to provide them with project-based rental assistance. This assistance subsidizes the rents that elderly residents with very low incomes will pay when they move into the building. In addition to having a very low income, each household in a Section 202 project must have at least one resident who is at least 62 years old. Finally, sponsoring organizations must identify how they will ensure that their residents have access to appropriate supportive services, such as subsidized meals programs or transportation to health care facilities. When HUD evaluates sponsors applications, it awards more points to, and is thus more likely to fund, applicants who have experience providing such services or have shown that they will readily be able to do so. The purpose of the HOME program is to address the affordable housing needs of individual communities. As a result, the day-to-day responsibility for implementing the program rests not with HUD, but with over 570 participating jurisdictions. These participating jurisdictions can be states, metropolitan cities, urban counties, or consortia made up of contiguous units of general local government. HUD requires these jurisdictions to develop consolidated plans in which they identify their communities most pressing housing needs and describe how they plan to address these needs. Each year, HUD allocates HOME program funds to these jurisdictions and expects them to use the funds according to the needs they have identified in their consolidated plans. The legislation that created the HOME program allows but does not require those receiving its funds to construct multifamily rental housing for the elderly. 
Although the legislation authorizing the HOME program directs that its funds address the housing needs of low-income people, it allows local communities to choose from a variety of ways of doing so. These include the acquisition, construction, and rehabilitation of rental housing; the rehabilitation of owner-occupied homes; the provision of homeownership assistance; and the provision of rental assistance to lower-income tenants who rent their homes from private landlords. Finally, the legislation requires that communities target the rental assistance they choose to provide. Specifically, jurisdictions must ensure that for each multifamily rental project with at least five HOME-assisted units, at least 20 percent of the residents in the HOME-assisted units have incomes at or below 50 percent of the area s median income; the remaining residents may have incomes up to 80 per cent of the area s median. <2. Housing Assistance for the Elderly Reflects the Programs Intent> The Section 202 program, far more often than the HOME program, is the source of funds for increasing the supply of multifamily rental housing for low-income elderly people. In comparison, through fiscal year 1996, participating jurisdictions have seldom chosen to use HOME funds to produce multifamily housing almost exclusively for the low-income elderly. This result is linked to differences in the purposes for which each program was created and the persons each was intended to serve. The Congress designed the Section 202 program to serve only low-income elderly households. In creating the HOME program, however, the Congress sought to give states and local communities the means and the flexibility to identify their most pressing low-income housing needs and to decide which needs to address through the HOME program. As is consistent with each program s intent, the Section 202 program focuses its benefits on the elderly, while the HOME program benefits those whom local communities choose to serve regardless of age through various kinds of housing assistance. From fiscal year 1992 through fiscal year 1996, over 1,400 Section 202 and HOME program multifamily rental housing projects for the elderly opened nationwide. These projects included 1,400 Section 202 projects with 51,838 rental units, providing homes for at least 47,823 elderly individuals, and 30 comparable HOME projects with 681 rental units, providing homes for at least 675 elderly individuals. On average, the Section 202 projects had 37 units, while the HOME projects had 23 units. Figure 2 illustrates the proportion of the total number of projects attributable to each program. Although only a small portion of the HOME projects were comparable to Section 202 projects, participating jurisdictions used HOME funds to assist low-income elderly people in other ways. Most of the elderly households that obtained assistance from the HOME program over 70 percent used that assistance to rehabilitate the homes they already owned and in which they still lived. The remaining HOME assistance benefiting the elderly did so by providing tenant-based rental assistance; helping new homebuyers make down payments and pay the closing costs associated with purchasing homes; and acquiring, constructing, or rehabilitating single-family and multifamily rental housing. In total, the HOME program assisted 21,457 elderly households, approximately 40 percent as many as the Section 202 program assisted during the same 5-year period. 
Figure 3 illustrates how the HOME program assisted the elderly during fiscal years 1992 through 1996. <3. Section 202 Projects Rely on HUD Funding, but Most HOME Projects Leverage Private Financing and Other Subsidies> In nearly all cases, Section 202 projects rely solely on HUD to pay the costs of construction and subsidize the rents of the low-income elderly tenants who occupy the buildings. In contrast, HOME-assisted multifamily rental housing projects rely on multiple sources of funding, including private financing, such as bank mortgages and equity from developers. At the HOME-funded projects we visited, the use of HOME funds reduced the amount that the projects sponsors had to borrow for construction or made borrowing unnecessary. Reducing or eliminating the need to go into debt to build HOME projects enables the projects to be affordable to households with lower incomes than would be the case otherwise. <3.1. Section 202 Funds Generally Cover Projects Costs, but Some Need Supplemental Funding> For the Section 202 projects that became occupied during fiscal years 1992 through 1996, HUD provided over $2.9 billion in capital advances and direct loans. The average cost of these projects was about $2.1 million. HUD expects this assistance to be the only significant source of funds for the development of Section 202 projects. Furthermore, when HUD awards Section 202 funds, it also enters into contracts with the sponsoring organizations to provide project-based rental assistance to the tenants who will occupy the buildings once they open. As a result, HUD expects that successful sponsors will be able to develop and build multifamily housing projects that will be affordable to low-income elderly households. The nonprofit sponsors of two of the eight Section 202 projects we visited said that the Section 202 funds were not sufficient to cover all of the costs associated with building their projects. HUD officials told us that this is usually the case when a sponsor (1) includes amenities in a project, such as balconies, for which HUD does not allow the use of Section 202 funds; (2) incurs costs not associated with the site on which the project is being built, such as costs to make the site more accessible to public transportation; or (3) incurs costs that exceed the amount HUD will allow, which can happen when a sponsor pays more for land than HUD subsequently determines the land is worth. Consequently, in some cases, sponsors of the projects we visited sought funding from other sources to make up for the shortfall. Those that found HUD s funding insufficient primarily cited the high cost of land in their area or factors unique to the site on which they planned to build as the reason for the higher costs. For example, one sponsor in California said that the Section 202 funding was not sufficient to cover the high cost of land and of designing a project that was compatible with local design preferences. Several of the Section 202 projects we visited received additional financial support from their nonprofit sponsors or in-kind contributions from local governments (such as zoning waivers or infrastructure improvements). However, this added support was typically a very small portion of a project s total costs. For example, the Section 202 funding for the construction of a project in Cleveland was nearly $3 million. However, Cleveland used $150,000 of its Community Development Block Grant (CDBG) funds to help the sponsor defray costs incurred in acquiring the land on which the project was built. 
Another nonprofit sponsor in California estimated that the development fee waivers and other concessions the city government made for its project were worth over $160,000. The total cost for this project was over $4 million. However, attempts to use other funds have not always been successful. For example, one of the Section 202 projects we visited obtained HOME and CDBG funds from the local county government, but officials from the HUD regional office subsequently reduced the final amount of the project s capital advance to offset most of these funds. The project s nonprofit sponsor had sought additional funding because the costs of land exceeded the appraised value that HUD had determined (and would thus agree to pay) and because the sponsor incurred additional costs to extend utility service onto the property where the project was being built. According to the sponsor, HUD reduced the project s Section 202 capital advance because the sponsor was using other federal funds to meet expenses for which HUD had granted the Section 202 funding. <4. HOME Multifamily Rental Projects Usually Have Multiple Funding Sources> The HOME program is not meant to be a participating jurisdiction s sole source of funds for the development of affordable housing. By statute, the local or state government must contribute funds to match at least 25 percent of the HOME funds the jurisdiction uses to provide affordable housing each year. Additionally, one of the purposes of the HOME program is to encourage public-private partnerships by providing incentives for state and local governments to work with private and nonprofit developers to produce affordable housing. As a result, HOME projects typically attract significant levels of additional public and private funding from sources such as other federal programs, state or local housing initiatives, low-income housing tax credit proceeds, and donations or equity contributions from nonprofit groups. While a participating jurisdiction could conceivably develop new multifamily rental housing using only its allocation of HOME funds, HUD officials questioned why any jurisdiction might choose to do so. Multifamily rental housing is costly to build, and one such project could easily consume a community s entire allocation of HOME funds in a given year if no other funding were used. Furthermore, using HOME funds to leverage other funds can not only significantly increase the total funding available for housing assistance but also allow communities to offer more types of housing assistance than if they devoted their entire HOME allocation to a single multifamily rental project. Overall, with its current funding of $1.4 billion (for fiscal year 1997), the HOME program is a significant source of federal housing assistance. However, it has not been a major source of funds for new multifamily rental housing designed primarily or exclusively to serve the low-income elderly. From fiscal year 1992 through fiscal year 1996, such projects received a small percentage of the total HOME funds allocated to participating jurisdictions. During these 5 years, the jurisdictions built or provided financial support for 30 multifamily rental projects with 681 units, of which the elderly occupied at least 90 percent. These projects were financed with over $12 million in HOME funds. According to HUD s data, these funds leveraged an additional $65 million in other public and private financing. Figure 4 illustrates the multiple funding sources used for these HOME projects. 
Six of the eight HOME projects we visited had received funding from multiple public and private financing sources, reflecting the national pattern at the local level. These projects developers and/or sponsors told us that using HOME funds in conjunction with other funding sources enabled them to reduce the amount of debt service on their projects (or eliminate the need for borrowing altogether) so that they could charge lower rents and be affordable to more people with lower incomes. Two of the projects we visited were quite unlike the other projects we visited because they did not use the federal Low-Income Housing Tax Credit program and did not have a conventional mortgage or other bank financing. The same participating jurisdiction developed both projects using only public resources, including HOME and CDBG funds, donations of city-owned land, and interior and exterior labor provided by the city s work force. <5. Availability of Supportive Services at Section 202 and HOME Projects> HUD does not pay for supportive services through the HOME program but does, under limited circumstances, do so through the Section 202 program. Information on the provision of services is generally not available because neither program collects nationwide data on the availability of such services at the projects each has funded. For most of the Section 202 and HOME projects we visited, some supportive services, such as group social activities or subsidized meals programs, were available to the residents on-site, but usually only to the extent that the projects could generate operating income to pay for them. Rather than provide such services themselves, the projects tapped into and availed themselves of various supportive, educational, social, or recreational services in their communities. Furthermore, most of the projects that we visited included common areas and activity rooms that gave the residents places to socialize and provided space for hosting community-based and other services. <5.1. Availability of Supportive Services at Most Projects Depended on Having Sufficient Rent Revenue> All eight of the Section 202 and seven of the HOME projects we visited ensured that their residents had access to supportive services. The range and nature of the services depended on the amount of operating income that was available to pay for the services and/or the proximity of community-based services to the projects. In addition, one of the Section 202 projects had a grant from HUD to hire a part-time service coordinator;the remaining Section 202 projects paid for a service coordinator from the project s operating revenues, expected their on-site resident managers to serve as service coordinators, or provided services at nearby facilities. None of the HOME projects received outside support through grants from HUD and/or project-based rental assistance to pay for supportive services. Six of the eight HOME projects and all but one of the Section 202 projects that we visited expected an on-site manager to coordinate the provision of supportive services to elderly residents or relied on rent revenue to pay for a service coordinator. The costs of having on-site managers, like the costs of providing most of the service coordinators, were covered by the projects operating incomes. 
One of the Section 202 projects that relied on rent revenue provided few services on-site, but its residents had access to a wide variety of services, including a subsidized meals program, at another nearby Section 202 project developed by the same sponsor. In another case, the nonprofit sponsor of the Section 202 project consulted a nonprofit affiliate that has developed services for various housing projects developed by the sponsor. In addition to keeping up to date with the needs of their residents, the sponsors or management companies of the Section 202 projects we visited expected their service coordinators or resident managers to refer residents to community-based services as needed or to bring community-based services to their facilities on a regular or occasional basis. One of the Section 202 projects we visited had hired a part-time service coordinator using a grant from HUD s Service Coordinator Program. According to HUD, resident managers cannot always provide supportive services because they may lack the resources to do so and/or the experience needed to provide such services. As a result, the Congress began funding the Service Coordinator Program in 1992 to help meet the increasing needs of elderly and disabled residents in HUD-assisted housing and to bridge the gap between these needs and resident managers resources and experience. The program awarded 5-year grants to selected housing projects to pay for the salaries of their service coordinators and related expenses. The managers of this Section 202 project doubted that their operating revenues would be sufficient to continue paying for the coordinator when their HUD grant expires. One Section 202 project that we visited was unique in that it did not have a service coordinator, but the project s management company had structured the duties of the resident manager to include activities that a service coordinator performs. The project s management company could do so because it manages over 40 Section 202 projects nationwide and handles nearly all financial, administrative, and recordkeeping duties in one central location so that its resident managers have time to become more involved with their residents. The two HOME projects we visited that had neither a service coordinator nor an expectation that a resident manager would fill this role were the two projects that housed both the low-income elderly and families. At one of these projects, a nearby city adult center offered numerous opportunities for supportive services similar to those other projects provided on-site. At the second project, a social worker from the city visited the project on a part-time basis to provide information about and referrals to community-based services. <5.2. Projects for the Elderly Usually Included Congregate Areas> All of the Section 202 projects we visited had common or congregate areas for group activities, socializing, and supportive services. Six of the eight HOME projects we visited had similar common areas. At both the Section 202 and the HOME projects, these common areas were often the places in which residents could take advantage of the supportive services the project s manager or service coordinator had provided directly or, in the case of community-based services, had arranged to come to the project on a regular or occasional basis. The only projects that did not have common or congregate areas were the two HOME projects that housed a mixture of low-income families and elderly residents. 
One was a traditional multifamily apartment building in which 19 of the 29 units were set aside for the elderly. Although this project had no congregate space, it was near one of the city s adult centers that provides adult education, recreational classes, and other services for seniors and others from the community. The second was a single-room-occupancy project in which about 20 percent of the tenants were elderly, although the project did not set aside a specific number or percentage of the units for the elderly. This project had more limited common areas, parts of which were devoted to kitchen facilities on each floor because single-room-occupancy units do not have full kitchens themselves. <6. Agency Comments> We provided a draft of this report to HUD for its review and comment. HUD generally agreed with the information presented in this report but said that the report (1) understates the contributions of the HOME program in providing assistance to the elderly and (2) assumes that the Section 202 model is the preferred way of providing housing for the elderly, without giving sufficient recognition to the other kinds of assistance the elderly receive from the HOME program. In discussing the relative contributions of the HOME and the Section 202 programs, HUD said that comparable production of multifamily rental projects for the elderly could not have occurred in the first few years of the HOME program (which was first funded in fiscal year 1992) because of the lead time necessary for planning, selecting, and constructing projects. HUD also questioned whether our data included all HOME projects that might be comparable to Section 202 projects by taking into account the (1) projects developed through the substantial rehabilitation of existing buildings (as opposed to new construction), (2) projects in which vacant units might later be occupied by the elderly in sufficient numbers to achieve comparability with Section 202 projects, (3) projects in which 50 percent or more of the residents were elderly, and (4) projects that were under way but had not been completed at the close of fiscal year 1996. We agree that our review probably would have identified more comparable HOME projects if the program had been funded before fiscal year 1992, and we have added language to this effect in the report. Our analysis and the data we present include projects from the Section 202 and HOME programs that were substantial rehabilitations of existing buildings. We agree that filling vacant units with elderly residents could increase the number of comparable HOME projects in the future, but any such units in our analysis were vacant as of the close of fiscal year 1996, and our report discusses each program s activity only through that date. Data on the HOME projects in which 50 percent or more of the residents were elderly are reflected in figure 3 of this report, which illustrates the different types of HOME assistance the elderly received. We did not compare these data with Section 202 data because, as we note, comparable HOME projects are those in which 90 percent or more of the households have one elderly resident. We agree that some HOME projects that were under way but had not been completed at the close of fiscal year 1996 might in the future be comparable to Section 202 projects, but we note that the number of comparable Section 202 projects would also be greater because projects funded by the Section 202 program were also under way but had not opened as of this date. 
In stating its belief that this report assumes the Section 202 model is the preferred way of providing housing for the elderly, HUD expressed concern that we did not give sufficient recognition to the assistance the HOME program provides the elderly by other means. HUD noted, for example, that the HOME program provides a viable alternative to multifamily rental housing by offering assistance to the elderly to rehabilitate the homes they own with special features that allow them to continue to live independently. HUD also noted that smaller rental projects than those we compared with the Section 202 program (projects with 1-4 units) also present a viable alternative to multifamily rental housing, provided adequate supportive services are available if needed. We disagree with HUD s comment that this report assumes the Section 202 model is the preferred way of providing housing assistance for the elderly. In this report, we have described the operations of the two programs and presented data on the assistance each has provided nationally and at selected projects. We have not evaluated the manner in which either program provides assistance, and we have not expressed a preference for either approach to delivering housing assistance to elderly households. We have added statements to this effect to the report to address HUD s concern. We acknowledge that the HOME program provides housing assistance to the elderly in several ways other than through the production of new multifamily rental housing that is set aside almost exclusively for the elderly. However, because this report describes comparable Section 202 and HOME-funded housing assistance and because the Section 202 program provides only one kind of housing assistance, we focused on the multifamily rental projects funded by the HOME program that are comparable to those funded by the Section 202 program. To address HUD s concerns and to provide further recognition of the HOME program s other types of housing assistance, we have revised the sections of the report cited by HUD to more prominently reflect the complete range of HOME-funded activities benefiting the elderly. HUD also provided several technical and editorial corrections to the report, which we have incorporated as appropriate. HUD s comments are reproduced in appendix II of this report. <7. Scope and Methodology> The information we present in this report describes the need for assisted housing, discusses the operations of the Section 202 and HOME programs, and presents data on the assistance each program has provided. We did not evaluate the manner in which either program provides assistance, and we did not express a preference in the report for either one of the approaches to delivering assistance to elderly households. To determine the amount and types of new assisted housing that the Section 202 and HOME programs have provided for the elderly, we obtained and analyzed data from HUD headquarters on the Section 202 and HOME projects completed from fiscal year 1992 through fiscal year 1996. Fiscal year 1992 was the first year in which the HOME program received funding, and fiscal year 1996 was the most recently completed fiscal year for which data from the programs were available when we began our review. Our analysis of the HOME data also provided information on the amount and sources of funding for multifamily projects developed under the HOME program. 
The Section 202 data did not include information on any other federal or nonfederal funding these projects may have received because a Section 202 allocation is intended to cover 100 percent of a project s development costs. In addition to using these data, we analyzed special HUD tabulations of Census data to identify the level of need among the elderly for housing assistance in each state. We examined HUD s data on the HOME program to identify all types of housing assistance that the program has provided for elderly households, but we also analyzed these data by the type of assistance in order to obtain information on the HOME projects that are comparable to Section 202 projects. To do so, we focused our analysis on the HOME multifamily projects in which 90 percent or more of the residents are elderly because, at a minimum, 90 percent of the residents of Section 202 projects must be elderly (before 1991, 10 percent could be persons at least 18 years old with a handicap). Throughout our review, we also discussed housing assistance for the elderly with officials from HUD s Section 202 and HOME programs, HUD s Office of Policy Development and Research, and the Bureau of the Census. In addition, we reviewed relevant documents from each program and prior HUD and Census reports on housing needs of the elderly. We supplemented this national information on each program by visiting a total of 16 projects to obtain more detailed data than HUD collects centrally on the use of other federal and nonfederal funding and the presence or availability of supportive services for elderly residents. Using Section 202 and HOME program data, we judgmentally selected two Section 202 and two HOME projects in each of four states California, Florida, North Carolina, and Ohio. We selected these states because they have relatively high concentrations of low-income elderly residents and numbers of Section 202 and HOME-funded projects. In each state, we selected individual Section 202 and HOME projects that were in the same vicinity and were roughly comparable in size. Nearly all of these projects were reserved exclusively for the elderly or had a portion of their units set aside for the elderly. In one case, about 20 percent of a HOME-funded project s residents were elderly, although neither the project nor any portion of its units was explicitly reserved for elderly residents. At each project we visited, we discussed the project s history and financing and the availability of supportive services with the sponsor or developer and relevant local and HUD officials. The observations we make about the individual projects we visited are not generalizable to all Section 202 or HOME-funded projects because we judgmentally selected these projects and did not visit a sufficient number from each program to draw conclusions about the universe of such projects. We did not assess the reliability of the data we obtained and analyzed from HUD s Section 202 and HOME program databases. However, throughout our review we consulted with the appropriate HUD officials to ensure we were analyzing the relevant data elements for the purposes of this report. Furthermore, the information we obtained from these databases was generally consistent with our observations during our site visits to the projects we selected using these databases. We conducted our work from April through October 1997 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Housing and Urban Development, and the Director of the Office of Management and Budget. We will make copies available to others on request. Please call me at (202) 512-7631 if you or your staff have any questions about the material in this report. Major contributors to this report are listed in appendix III. Selected Section 202 and HOME Investment Partnerships Projects As part of our review, we visited 16 low-income, multifamily rental projects, 4 each in California, Florida, North Carolina, and Ohio, to obtain information that the Department of Housing and Urban Development (HUD) does not collect centrally and to discuss with program participants their experience in applying for, developing, and operating these projects. In each state, two of the projects we visited were funded by the Section 202 program and two received funds from the HOME Investment Partnership (HOME) program. As we noted in the Scope and Methodology section of this report, we judgmentally selected these states because, compared with other states, they had relatively high concentrations of low-income elderly residents and numbers of Section 202 and HOME-funded projects. We selected individual Section 202 and HOME projects that were in the same vicinity and were roughly comparable in size. During each site visit, we discussed the history, financing, and availability of supportive services with the sponsor or developer of the project. We also discussed these issues with on-site management agents, local officials administering the HOME program, and HUD Section 202 and HOME field office officials. At each project, we walked through the grounds, selected residential units, and any common areas available to the residents for group activities. Typically, the Section 202 projects we visited were high- or mid-rise apartment buildings with elevators, laundry facilities, and one or more community rooms in which residents participated in group activities and, in some cases, meals programs. In one project, which consisted of more traditional garden apartments on a single level, each apartment had its own outdoor entrance and front porch. Ranging in size from 42 to 155 units, most of the projects (5 of 8) had a resident manager. Current Section 202 regulations require that all residents of these projects have very low incomes; that is, they must earn less than 50 percent of the median income for their area. The HOME projects we visited, ranging in size from 20 to 120 units, were more varied than the Section 202 projects. Several were high- or mid-rise buildings, although one of these was a single-room-occupancy hotel. In the single-room-occupancy hotel, the units were smaller than in a typical apartment building and much of the common space consisted of kitchen facilities, which were not included in the units themselves. At another project, the ground floor of the building housed a city-operated adult center offering a variety of educational and recreational programs. Other HOME projects we visited were multi-unit cottages or detached structures, each of whose units had its own outdoor entrance; one such project consisted of buildings scattered over three different sites. Unlike the Section 202 projects, two of the HOME projects housed both families and the elderly. 
As we noted earlier in this report, at a minimum, in each multifamily rental project with at least five HOME-assisted units, at least 20 percent of the residents in the HOME-assisted units must have very low incomes (at or below 50 percent of the area s median income); the remaining units may be occupied by households with low incomes (up to 80 percent of the area s median income). At the HOME projects we visited, half designated all of their units as HOME-assisted, meaning that the HOME program s regulations about tenants incomes applied to those units; the other half designated some but not all of their units as HOME-assisted, meaning that the remaining units in these projects were subject either to the rules associated with other sources of funding or to those established by the local jurisdiction. Comments From the Department of Housing and Urban Development Major Contributors to This Report <8. Resources, Community, and Economic Development Division, Washington, D.C.> <9. Chicago/Detroit Field Office> Gwenetta Blackwell
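The HOME income-targeting rule restated above reduces to a simple arithmetic check. The sketch below is purely illustrative: it is not HUD guidance or part of GAO's methodology, the function name and unit counts are hypothetical, and it assumes for simplicity that every unit in the example project is HOME-assisted.

```python
# Illustrative sketch only: checks a hypothetical project against the HOME
# income-targeting rule described above. All unit counts refer to HOME-assisted units.

def meets_home_targeting(home_assisted_units, units_at_or_below_50_pct_ami,
                         units_between_50_and_80_pct_ami):
    """Return True if the project satisfies the targeting rule: in a project with
    at least five HOME-assisted units, at least 20 percent of those units must
    house households at or below 50 percent of area median income, and every
    remaining HOME-assisted unit must house a household at or below 80 percent."""
    income_eligible = units_at_or_below_50_pct_ami + units_between_50_and_80_pct_ami
    if income_eligible < home_assisted_units:
        return False  # some HOME-assisted units exceed the 80-percent ceiling
    if home_assisted_units < 5:
        return True   # the 20-percent set-aside applies only to projects with 5 or more HOME-assisted units
    required_very_low_income = -(-home_assisted_units * 20 // 100)  # ceiling of 20 percent
    return units_at_or_below_50_pct_ami >= required_very_low_income

# Hypothetical 40-unit project in which every unit is HOME-assisted:
# at least 8 units (20 percent of 40) must go to households at or below 50 percent of area median income.
print(meets_home_targeting(40, 8, 32))  # True
print(meets_home_targeting(40, 6, 34))  # False: only 6 of the required 8 very-low-income units
```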
Why GAO Did This Study Pursuant to a congressional request, GAO reviewed the similarities and differences between the Department of Housing and Urban Development's (HUD) Section 202 Supportive Housing for the Elderly Program and HOME Investment Partnership Program, focusing on: (1) the amount and types of new multifamily rental housing that each program has provided for the elderly; (2) the sources of each program's funding for multifamily rental projects; and (3) the availability of supportive services for elderly residents. What GAO Found GAO noted that: (1) during fiscal year (FY) 1992 through FY 1996, the Section 202 program substantially exceeded the HOME program in providing multifamily rental housing that was set aside for elderly households; (2) over 1,400 Section 202 projects opened during this time, providing homes for nearly 48,000 elderly residents; (3) at the same time, the HOME program provided housing assistance to 21,457 elderly households, including 675 elderly residents in 30 multifamily rental projects comparable to those developed under the Section 202 program; (4) the Section 202 program produced new multifamily rental housing for low-income elderly households through new construction, rehabilitation of existing buildings, and acquisition of existing properties that the Federal Deposit Insurance Corporation obtained through foreclosure; (5) the HOME program provided housing assistance to address the most pressing housing needs that local communities and states identified among low-income people of all ages; (6) for the elderly, HOME assistance helped rehabilitate the homes they already owned and in which they still lived, provided tenant-based rental assistance, helped new homebuyers make down payments and pay closing costs, and made funds available to acquire, construct, or rehabilitate single-family and multifamily rental housing; (8) in the Section 202 program, the capital advance, which HUD provides to a project's sponsor, is the only significant source of funds for developing the project; (9) in general, a HOME project typically attracts significant levels of additional public and private funding; (10) HOME multifamily housing that is similar to Section 202 projects is usually financed with a combination of HOME funds and other federal and nonfederal funds; (11) HUD does not pay for supportive services, such as transportation or subsidized meals programs, through the HOME program but does do so under limited circumstances through the Section 202 program; (12) the extent to which the Section 202 and HOME projects provided these services on-site for their residents usually depended on each project's ability to generate the operating income needed to pay for the services; (13) these projects often depended on and referred their residents to community-based supportive services; (14) five of the eight Section 202 projects that GAO visited employed a staff person or expected their on-site resident manager to coordinate services; and (15) both projects in many cases had common areas or activity rooms that service providers or residents could use for community-based services, group social or educational activities, and dining.
<1. Background> According to the Congressional Research Service, Title III nutrition funds provide almost 3 million older persons with about 240 million meals each year. Forty-eight percent of the meals are provided in congregate settings, such as senior centers, and 52 percent are provided to frail older persons in their home. In fiscal year 1999, about $785 million in Title III nutrition and support services funds was distributed to 56 states. A total of about $486 million was allotted for congregate and home-delivered meals (Title III-C). Table 1 shows how these federal funds were distributed to the states. Fiscal year 2000 funding for the Older Americans Act increased about 3.5 percent above the level for fiscal 1999. Funds for the home-delivered meals program increased by $35 million 31 percent over the level for fiscal 1999. <2. The Nationwide Level of Title III-C Carryover Funds Is Low, but Some States Have Relatively High Levels> Nationwide, the funds carried over into fiscal year 1999 reported by the states represented a small percentage of the $486 million Title III-C allotment about 5 percent, or $24.6 million. However, the level of carryover funds reported by the states varied considerably. Twenty-two states reported that they had no carryover at the beginning of fiscal year 1999. The remaining 34 states reported a carryover that ranged, as a percentage of their fiscal year 1999 nutrition allotment, from less than 1 percent in 6 states (Colorado, Kentucky, Maryland, Massachusetts, New Mexico, and Puerto Rico) to about 50 percent in Arizona. Seven states (Arizona, Delaware, Hawaii, Missouri, New York, Oregon, and South Dakota) had carryover funds that exceeded their nutrition allotment for fiscal year 1999 by at least 15 percent. Additionally, two-thirds of the carryover funds $16.3 million were reported by seven states (Alabama, Arizona, California, Missouri, New York, Ohio, and Texas) that had at least $1 million in carryover funds. Table 2 shows the distribution of these carryover funds and their respective percentage of the nutrition allotment for each of the latter seven states above at the beginning of fiscal year 1999. (See app. II for information on the amount of carryover funds available to each of the 56 states at the beginning of fiscal year 1999.) States may have substantial amounts of carryover funds for a variety of reasons. For example, a state official said that the annual allotment of Title III funds may not be received by the beginning of a state s fiscal year because of differences between federal and state fiscal year periods (41 states begin their fiscal year 3 months earlier than the federal government) or delays in the federal appropriations process. States may then need to budget their spending on the basis of funding projections. According to the official, some states may develop more conservative spending estimates than others. As a result, some of these states may have substantial funds that cannot be fully spent by the end of the fiscal year. Because of this, funds may be carried over into the next federal fiscal year. The accumulation of carryover funds can occur at the state, area-agency, and/or local-service-provider level. In fiscal year 1999, about 25 percent of the nationwide carryover funds reported by the states were held at the state level and 75 percent were held at the area-agency and/or local- provider level. 
The states reported that 341, or about 52 percent, of all area agencies had some carryover funds available for their nutrition programs at the beginning of fiscal year 1999. The level of carryover at the area-agency level can vary dramatically. For example, of the 208 area agencies that responded to our survey and reported some carryover at the beginning of fiscal year 1999, the carryover ranged from less than 1 percent of the fiscal year 1999 Title III-C allotment at 20 area agencies to more than 50 percent at 3 area agencies. Most area agencies (132) reported a carryover of from 1 to 10 percent of their annual allotment. <3. Half of the States Do Not Restrict Title III-C Carryover Funds, and Those That Do Use a Variety of Limits> Half of the 56 states reported that they do not restrict the amount of Title III-C funds that their area agencies and/or local service providers may carry over from one year to another. Of the remaining 28 states, 15 reported that neither area agencies nor local service providers are allowed to carry over any funds, and 13 reported having limits on the amount that their area agencies and/or local service providers may carry over into the succeeding fiscal year. Eleven of the 13 states with carryover limits reported that their limits were based on a percentage of the area agencies and/or local service providers annual grant allotment. The percentage of annual grant limit varied from 2 to 10 percent. The average reported percentage limit was about 8 percent. The remaining two states did not specify how they limited the amount of area-agency and/or local-service-provider carryover. Information on each state s policy regarding carryover by area agencies or directly funded local service providers is shown in appendix II. We also examined the types of limits, if any, that area agencies located in the 28 states with no carryover limits placed on their local providers. Of the 563 area agencies responding to our survey, 178 were located in states that did not have carryover limits and did provide elderly meal services primarily through local service providers. The carryover limits that the agencies placed on their providers varied; most (97) did not allow their providers to carry over any funds. Information on the number and percentage of these 178 area agencies is presented in table 3 by type of area agency carryover restriction, if any, placed on local providers. <4. Carryover Funds Are Used to Expand Meal Services, but Few Major Impacts From Declines in Carryover Funds Were Identified> While some area agencies have used carryover funds to expand their meal services, state and area agencies identified relatively few instances of major cutbacks in meal services that occurred in fiscal year 1999 because carryover funds were less than they were in prior years. Additionally, from our analysis of the state and area-agency survey data, we estimate that, nationwide, a very small percentage of area agencies and local providers would have to make major cutbacks in meal services in fiscal years 2000 or 2001 because of reductions in carryover funds. Our state survey information indicated that 37 states allowed their area agencies to carry over unspent Title III-C funds into fiscal year 1999. Seventeen of these states reported that 133 of their 234 area agencies had used carryover funds to expand the number of meals served that year. Only 9, or about 7 percent, of these agencies had to reduce the number of meals served by 10 percent or more in fiscal year 1999. 
We estimated from the states survey data that 23, or about 4 percent, of all area agencies nationwide may have to reduce their meal services by 10 percent or more in fiscal years 2000 or 2001. Of the 5 states that directly funded local service providers, 2 reported that 2 of their 8 providers used carryover funds to expand the number of meals served (neither of these providers had to reduce meals served by 10 percent or more), 2 reported that none of their 54 providers used funds to expand the meals served, and 1 state with 37 providers reported that comparable data on its providers were not available. We did not estimate how many directly funded local providers may have to reduce meal services by 10 percent or more in fiscal years 2000 or 2001. The results from our area-agency survey were similar. Of the 152 area agencies reporting that they allow local service providers to carry over funds, about one-third (47) did not provide information about their local service providers use of carryover funds to expand meal services. The 105 area agencies that reported such information identified a total of 287 local service providers that had used carryover funds to expand meal services in fiscal year 1999. These area agencies identified only 20 local providers that had reduced the number of meals served by 10 percent or more in fiscal year 1999 because of declines in carryover funds. Again, from our analysis of the area agencies survey data, we estimated that about 3 percent of the approximately 4,000 local service providers nationwide may need to reduce meals by that amount in fiscal years 2000 or 2001 because of declines in carryover funds. <5. Most States Have Transferred Title III Funds and Allow Area Agencies Similar Flexibility> Forty-seven of the states reported that they transferred a total of about $76 million in Title III nutrition and support services funds during fiscal year 1999. Although funds were transferred among the two nutrition allotments and the support services allotment, the bulk of the funds came out of the congregate meal allotment. The flexibility that area agencies and local providers have to transfer these funds varied. <5.1. Most Transfers of Title III Funds Are Made Out of Congregate Meal Allotments> As shown in table 4, the bulk of the Title III funds transferred $71 million came from congregate meal allotments and were reallocated to either the home-delivered meal or support services allotments. These transfers resulted in a decrease of about 19 percent from the level of funding originally allotted to the states for congregate meal services. According to the Congressional Research Service, states have increasingly transferred funds from the congregate meal allotment to the home- delivered meal allotment because of various factors. For example, the growth in the number of persons in the oldest age categories has created a greater demand for the delivery of home care services, including home- delivered meals. According to federal population projections, the number of persons who are 60 years and older will increase by 21 million, or 46 percent, over the next 16 years, while the number who are 85 years and older will increase by 2.2 million, or 51 percent, during the same time frame. In addition, many states, including Connecticut, have devoted resources to the creation of a home- and community-based long-term care system for older persons. Home-delivered meals represent a key component in these systems. 
As with carryover funds, states reported widely varying amounts of funds transferred. For example, in 43 states that reported transferring funds from their initial congregate meal allotment to their home-delivered meal allotment, the percentage of funds transferred ranged from about 1 percent (Wisconsin) to about 34 percent (West Virginia) the average transfer being 12 percent. Twelve states reported no transfers from their congregate meal allotment to their home-delivered meal allotment, and one state did not provide transfer information. <5.2. Area Agencies Flexibility to Transfer Funds Varies> Nine states reported that they do not allow the transfer of Title III funds by their area agencies and/or local service providers. Other states have adopted policies that limit the transfer of funds by area agencies and/or local service providers. Table 5 shows the number of states that reported a limit on the transfer of Title III funds. <6. Conclusion> At the present time, the buildup and use of Title III-C carryover funds to support elderly nutrition services does not appear to be a widespread problem. However, AoA does not monitor the states buildup of carryover funds. As a result, the agency has little assurance that it could identify meal service problems that could emerge in the future. <7. Recommendation for Executive Action> Although the use of carryover funds to support nutrition services for the elderly does not currently appear to be creating a serious meal service problem nationwide, we recommend that the Secretary, Department of Health and Human Services, direct the Assistant Secretary for Aging, Administration on Aging, to monitor the levels of unspent Title III-C funds that states carry over to the succeeding fiscal year and work with the states that build up substantial amounts of carryover funds to develop a strategy to spend down such funds in a manner that minimizes the potential disruption of meal services for the elderly. Such monitoring could be performed with available resources if it is done as a part of the administration s routine program-monitoring activities. <8. Agency Comments> We provided the U.S. Department of Health and Human Services with a draft of this report for review and comment. Department officials agreed with our recommendation. More specifically, the Assistant Secretary for Aging, stated that AoA will monitor those states having a history of difficulty in controlling carryover and provide enhanced technical assistance to ensure that these practices do not jeopardize the program s goals. In addition, the Assistant Secretary noted that the Department will consider the promulgation of regulations to reinforce the grantees understanding of their responsibility in controlling and monitoring such funds. The Department made no other comments on the information contained in the draft report. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; interested Members of Congress; the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Jeanette C. Takamura, Assistant Secretary for Aging, Department of Health and Human Services; the Honorable Jacob J. Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. 
If you have any questions about this report, please contact me or Thomas E. Slomba, Assistant Director, at (202) 512-5138. Key contributors to this report were Carolyn M. Boyce, Senior Social Science Analyst; and Peter M. Bramble, Jr., Senior Food Assistance Analyst. Scope and Methodology To address the objectives of our review, we developed separate written mail-out surveys for state and area agencies that received Title III nutrition and support funds, respectively, in fiscal year 1999. We pretested the draft state survey at three states that manage senior services (Colorado, Louisiana, and Pennsylvania), and the draft area-agency survey at four area agencies in four states (Colorado, Louisiana, Virginia, and West Virginia). We visited these states and area agencies to conduct each pretest. During these visits, we attempted to simulate the actual survey experience by asking the state or area agency official to fill out the survey. We subsequently interviewed the officials to ensure that the (1) questions were readable and clear, (2) terms were precise, (3) survey did not place an undue burden on the survey recipients, and (4) survey appeared to be independent and unbiased. Administration on Aging (AoA) officials also reviewed and provided comments on each draft survey. In order to maximize the response to our surveys, we mailed a prenotification letter to all of the 56 states and 652 area agencies about 1 week before we mailed the surveys. We also sent a reminder letter to nonrespondents about 4 weeks after the initial survey mailing and a replacement survey for those who had not responded after about 8 weeks. After reviewing all of the survey responses, we contacted several states by telephone and e-mail to clarify their responses to various survey questions. Our survey data represent the responses from all of the 56 states and 563 of the 652 area agencies (an 86-percent response rate). We also collected Title III administrative and program information from AoA. We performed our work from March through December 2000 in accordance with generally accepted government auditing standards. Nationwide Title III-C Carryover Funds Available to the States at the Beginning of Fiscal Year 1999
Why GAO Did This Study Under Title III of the Older Americans Act, the Administration on Aging (AoA) distributes grants to states on the basis of their proportional share of the total elderly population in the United States. These grants are then disbursed to more than 600 area agencies nationwide, and are used to fund group and in-home meals, as well as support services, including transportation and housekeeping. The grants are further subdivided by these agencies to more than 4,000 local service providers. AoA requires that states obligate these funds by September 30 of the fiscal year in which they are awarded. Also, states must spend this money within two years after the fiscal year in which it is awarded. During this time AoA does not limit or monitor the amount of unspent funds that states may carry over to the succeeding fiscal year. GAO examined whether states were using Title III carryover funds to expand their meal service programs for the elderly beyond a level sustainable by their annual allotments alone. What GAO Found GAO found that the buildup and use of Title III carryover funds to support elderly nutrition services does not appear to be a widespread problem. However, AoA does not monitor the states' buildup of carryover funds. As a result, the agency has little assurance that it could identify meal service problems that could emerge in the future.
<1. Background> <1.1. Laws Governing the Opportunity Scholarship Program> Congress passed the D.C. School Choice Incentive Act of 2003 in January 2004, which directed the Secretary of Education to award a grant on a competitive basis for up to 5 years to an entity or entities to use to make scholarship payments to parents of eligible students to be used for private school tuition, fees, and transportation expenses. This Act created the program known as the District of Columbia Opportunity Scholarship Program (OSP), the first private kindergarten-through-grade-12 school choice program supported by federal funds. The purpose of the Act was to provide low-income parents of students in the District, particularly parents of students who attend public schools identified as in need of improvement under the Elementary and Secondary Education Act of 1965, as amended, with expanded opportunities for enrolling their children in higher performing schools in the District by providing annual scholarships to attend the private elementary or secondary schools of their choice. In April 2011, Congress reauthorized OSP for 5 years under the Scholarships for Opportunity and Results Act (SOAR Act). The Act sets forth requirements for participating private schools. It provides that none of the funds may be used by an eligible student to enroll in a participating private school unless the participating school 1. has and maintains a valid certificate of occupancy issued by the District of Columbia; 2. makes readily available to all prospective students information on its school accreditation; 3. if operating for 5 years or less, submits to the eligible entity administering the program proof of adequate financial resources and the ability to maintain operations throughout the school year; 4. agrees to submit to site visits as determined to be necessary; 5. has financial systems, controls, policies, and procedures to ensure that funds are used according to the statute; and 6. ensures that each teacher of core subject matter in the school has a baccalaureate degree or its equivalent, regardless of whether the degree was awarded in or outside of the United States. <1.2. Administration and Oversight of OSP> Education s Office of Innovation and Improvement, which is charged with the management of the program and oversight of the program administrator, awards scholarship funds to a grantee and provides programmatic guidance and technical assistance. Soon after the D.C. School Choice Incentive Act passed in 2004, Education awarded the grant to operate the OSP to the Washington Scholarship Fund, a nonprofit organization in the District of Columbia that had experience providing privately funded scholarships to low-income students. In May 2010, Washington Scholarship Fund withdrew as the grantee and Education transferred the grant for administration of OSP to the DC Children and Youth Investment Trust Corporation (the Trust), a nonprofit organization. As the program administrator, the Trust recruits new students and schools, receives and processes applications, and awards scholarship funds to program participants. The Trust is also responsible for providing oversight to participating private schools. 
The SOAR Act, which was enacted about 11 months after the Trust assumed responsibility for administering the OSP grant, states that the grantee (the Trust) must ensure that scholarships be awarded through a random selection process if more eligible applicants seek admission to the program than the program can accommodate, or if more eligible applicants seek admission to a participating school than the school can accommodate, and the Trust has implemented this requirement through a lottery process. The groups of applicants that have a priority status as defined in the SOAR Act are (1) applicants who are currently attending a school that has been identified for improvement, corrective action, or restructuring under the Elementary and Secondary Education Act of 1965, as amended; (2) applicants who have a sibling currently participating in OSP; and (3) applicants who received an OSP scholarship award in a previous year but did not use it. At the time of our review, the lottery process was carried out by Westat, a contractor.

In addition, the SOAR Act updates the responsibilities for implementing OSP for the District and Education. It directs the Secretary of Education and the Mayor of the District to revise the MOU regarding the implementation of OSP that was required pursuant to the D.C. School Choice Incentive Act. The MOU governs the program for fiscal year 2011 and all subsequent years. It is intended to help ensure the efficient and effective continued implementation of the OSP in a manner that incorporates and is consistent with the roles and responsibilities of both Education and the District. According to the MOU, Education is responsible for overseeing the Trust as a recipient of federal funds and working with the Trust to ensure that it makes improvements to various aspects of program administration. The District also has a role in the administration of OSP. The MOU states that District agencies are directly responsible for conducting required building, health, and safety inspections of participating schools when notified by the Trust.

<1.3. Federal Funding for OSP and Use of Funds>

According to Education, from fiscal years 2004-2010, Congress appropriated between $13.2 and $14.8 million per year for OSP, as shown in figure 1. For fiscal year 2011, the appropriated amount increased to about $15.5 million, and in fiscal year 2012, the appropriated amount increased to $20 million. Most recently, Congress appropriated about $19 million for the program for fiscal year 2013. Since OSP's creation, about $152 million has been appropriated in total. The SOAR Act permits 11 percent of the grant funds each year to be used for certain administrative and other purposes as follows, with the remainder of the funds going towards student scholarships:

- 5 percent for the program's evaluation;
- for the grantee (the Trust):
  - 3 percent for administrative costs,
  - 2 percent for parental assistance, and
  - 1 percent for student academic support services.

The law provides for an increase in the amount of the scholarship for eligible students. Specifically, it provides up to $8,000 for grades K-8 and $12,000 for grades 9-12 starting in school year 2011-2012. While tuition is paid first, any leftover scholarship funds can be used to pay for school fees including uniforms, field trips, before and after care, and transportation expenses.

<1.4. Student Eligibility and Enrollment in OSP>

To be eligible for a scholarship, a student must come from a household with an income that does not exceed the program's income threshold or that receives Supplemental Nutrition Assistance Program benefits (known as SNAP benefits) issued within the District.
The student must also be at least 5 years old by September 30th of the school year. In the 2011-2012 school year, there were 53 participating schools in OSP (see fig. 2). These schools represent a wide range of grades, sizes, sectors, and tuition levels. These schools are located throughout all eight wards of the District, with the greatest number of schools located in Ward 3. In accordance with District regulations for all private schools operating in the District, these schools must be accredited by, or be in the process of being accredited by, at least one of seven recognized accrediting organizations or any other approved accrediting body. Schools that are unaccredited must submit documentation to demonstrate satisfactory evidence of instruction.

<1.5. Internal Controls Relevant to Implementation and Oversight of OSP>

Internal control is broadly defined as a process, affected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance that the following objectives are being achieved: (1) effectiveness and efficiency of operations, (2) reliability of financial reporting, and (3) compliance with applicable laws and regulations. Internal controls include management and program policies, procedures, and guidance that help ensure effective and efficient use of resources; compliance with laws and regulations; prevention and detection of fraud, waste, and abuse; and the reliability of financial reporting. Effective internal control is a major part of managing any organization to achieve desired outcomes and manage risk. Standards for Internal Control in the Federal Government presents five standards that define the minimum level of quality acceptable for internal control in government and provide the basis against which internal control is to be evaluated. These standards (control environment, risk assessment, control activities, information and communication, and monitoring) apply to all aspects of an agency's operations and should be an integral part of a system that managers use to regulate and guide an agency's operations. The private sector also recognizes the importance of internal controls for executives to better manage their business enterprises. The Committee of Sponsoring Organizations of the Treadway Commission's (COSO) internal control framework includes the same five key elements stated previously, which are intended to promote efficiency, reduce risk of asset loss, and help ensure the reliability of financial reports and compliance with laws and regulations.

<2. The Trust Provides Untimely Information and Scholarship Awards>

<2.1. The Trust Provides Information to Families through Many Channels, but Its Participating School Directory Provides Incomplete and Untimely Information>

The Trust provides program information to prospective and current OSP families through a variety of outreach activities. To reach prospective OSP families, the Trust advertises through print, radio, and bus ads, as well as in newspapers and flyers posted in neighborhood libraries, recreation centers, and local government service centers. For example, one OSP parent told us she found out about OSP through a local Spanish-language newspaper. The Trust conducts several application events throughout the District where interested families can apply for the program. The Trust also holds a participating school fair in which OSP families can obtain additional information about participating schools.
In addition, the Trust works with families directly, offering frequent personal contact to assist families through the scholarship application and renewal processes, according to Trust officials. The Trust provides this assistance via phone, e-mail, and through one-on-one interactions, and employs Spanish and Amharic speakers to assist non-English speaking families.Lastly, the Trust holds events to assist current OSP families, such as a workshop for students transitioning from 8th grade to high school. Most families we spoke with were generally happy with their children s participation in the program, citing increased safety and security at their children s OSP schools and improved quality of education. The SOAR Act requires all OSP grantee applicants to describe how parents of eligible students will be notified of the expanded choice opportunities each year, and the Directory is a key source of information about schools in the program. The Trust publishes the Directory for each school year in both English and Spanish, and it is designed to provide families with basic information on each participating school. The Directory also includes guidance on how to apply to private schools and worksheets to help families in their school search. In addition, the Directory provides information on tuition, accreditation, admissions, and facilities for each school participating in OSP for a given school year. The 2011-2012 Directory was used to assist families with selecting schools for 2012-2013. We found that this Directory lacked key information about schools tuition, fees, and accreditation that could help families make more informed school choices: Tuition: The 2011-2012 Directory listed one tuition amount for each school. According to Trust officials, the published tuition amount is the tuition applicable to each student unless the student is in a category to receive a discount on tuition. Whether a school offered different tuition levels, however, was not noted in the Directory. We found that many schools have different tuition levels, varying by grade level and religious affiliation. Grade level: Nine of the 53 participating schools in school year 2011-2012 offered different tuition levels for students in grades K- 8 versus students in grades 9-12. In addition, 10 of the 53 participating schools had tuition levels that differed within grades K-8. Religious affiliation: Some religious schools also offered lowered or discounted tuition rates for their members. For example, in school year 2011-2012, 6 Catholic schools offered lowered tuition rates for Catholic families, but the Directory only listed the higher, non-Catholic tuition rate. In addition, a Seventh-day Adventist school offered three different tuition levels, with the lowest rates offered to families belonging to their church and the highest rate offered to families who were not affiliated with this denomination. Only the highest tuition rate, however, was published in the Directory. Fees: The Directory offers parents limited information about fees. For example, the Directory excluded the cost of commonly required fees at schools, such as registration and book fees. In addition, many OSP families may need services such as before and after care, and hot lunches. The Directory listed the availability of such services but did not include their cost. The Directory also excluded information about parental involvement or fundraising fees. 
Some schools can reimburse at least part of these fees if parents volunteer a certain number of hours, according to four school officials. If a family cannot fulfill these volunteer hours, then it must pay these fees out of pocket because these fees are not covered under the OSP scholarship. Most OSP students attend schools where the maximum OSP scholarship amount would cover the estimated total cost of attendance (see sidebar). But without having this information, it is difficult for families to know the full cost of attendance, and they may be responsible for paying fees that they were unaware of. This may be difficult, given this program's target population. Accreditation: The Directory also offers families incomplete information about schools' accreditation status. According to representatives of accreditation organizations, school accreditation is important because it serves as a means of accountability and oversight. The Directory indicates which accrediting organizations are recognized by the District. However, the Directory lists some schools' affiliated memberships as if they were accrediting organizations. For example, one school was listed as having the National Association of Episcopal Schools as an accrediting organization, but this organization does not accredit schools. Three schools were listed as being accredited by Nativity Miguel, which is a network of schools serving low-income communities, not an accrediting organization.

Some families we spoke with told us that they did not receive the Directory or tuition information, and therefore, they were not aware of which costs would not be covered by OSP or which schools were participating in the program. Instead, they communicated directly with schools about whether they were participating in the program. Although the Directory is publicly available on the Trust's website, not all program participants may have computer or Internet access. In addition, even if families have Internet access through smart phones, this does not necessarily mean that program participants can use them to access everything they need, according to one school official. Trust officials told us they provide direct assistance to families to help them select schools and discern their total cost of attendance, but not all families may be receiving this assistance.

In addition to having incomplete information, the Directory has also been published too late to truly assist families in selecting a school. Families need to select schools for their students before the school year starts in August or September. However, the Trust published the Directory for the 2012-2013 school year in May 2013, about 9 months after the start of that school year. According to Trust officials, the Directory was published late because the school site visits during which Trust officials confirm and update key school data for the Directory occurred later in the year than in previous years due to substantial staff changes. Trust officials also told us that these directories are primarily meant to inform families selecting schools for the upcoming school year. While the 2012-2013 Directory was issued too late to assist families for that same school year, it can assist families for the upcoming 2013-2014 school year. Trust officials stated that in the future they plan to publish the Directory by December of the current school year.
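For illustration, the sketch below models the kind of information a more complete directory entry could carry for a single school, including multiple tuition levels and common fees. The structure, names, and dollar amounts are hypothetical assumptions for the example, not data drawn from the actual Directory.

```python
from dataclasses import dataclass, field

@dataclass
class DirectoryEntry:
    """Hypothetical directory record noting multiple tuition levels and fees."""
    school_name: str
    accrediting_body: str  # a recognized accrediting organization, not a membership network
    tuition_by_group: dict = field(default_factory=dict)       # grade band or affiliation -> annual tuition
    required_fees: dict = field(default_factory=dict)          # e.g., registration, books
    optional_service_fees: dict = field(default_factory=dict)  # e.g., before/after care, hot lunch

    def tuition_range(self):
        amounts = self.tuition_by_group.values()
        return (min(amounts), max(amounts))

# Hypothetical example entry.
entry = DirectoryEntry(
    school_name="Example Parish School",
    accrediting_body="Example Accrediting Commission",
    tuition_by_group={"K-8, parish member": 6500, "K-8, non-member": 8200, "9-12": 11500},
    required_fees={"registration": 300, "books": 250},
    optional_service_fees={"before/after care": 1800, "hot lunch": 600},
)
low, high = entry.tuition_range()
print(f"Tuition ranges from ${low:,} to ${high:,} depending on grade and affiliation.")
```

An entry structured this way could be summarized in print as a tuition range with a general notation about discounts and required fees, rather than a single published amount.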
The Trust plans for future Directories to include more accurate notation of the school accrediting bodies and a general notation about applicable discounts and/or additional fees. According to Trust officials, they issued a list of participating schools in the fall for the 2012-2013 school year to assist families with school selection. This list of participating schools was intended to be a companion piece to the 2011-2012 Directory and included schools addresses and grades served, but it did not include key information such as updated tuition information and admissions requirements. According to Trust officials, the participating school list is updated as needed to reflect any school changes that may have occurred between the publication of the previous year s Directory and the start of the school year. For example, the 2012-2013 list of participating schools excluded a school that had closed, even though this school was listed in the 2011-2012 Directory. While some families may have been able to fill any gaps between these two sources of information through the guidance and hands-on assistance provided by the Trust, the extent to which all families had access to this direct assistance is unclear. <2.2. The Timeframes for Awarding Scholarships Do Not Align with Many Schools Admissions and Enrollment Schedules> For both the 2012-2013 and the 2013-2014 school years, the Trust awarded scholarships several months after many schools application deadlines had passed. For the 2012-2013 school year, the Trust conducted its scholarship lottery in July 2012, about 1 month before classes started for many schools. In addition, the Trust did not hold its school placement and welcome fair, which it uses to assist families with school selection, until August 2012, the same month that classes began for 6 of the 10 schools we visited, as shown in figure 3. By that time, many of the schools in OSP had already completed their admissions and enrollment processes. We found that 9 of the 10 schools we visited did not have an application deadline or were still accepting applications after students had been notified that they had received a scholarship. For the 2013-2014 school year, the Trust moved up its timeframes, giving families more time to select schools than in the previous year. First, the Trust awarded scholarships in late May 2013 and held its school placement and welcome fair in early June rather than August, as was done in 2012, according to Trust officials. Still, as in school year 2012- 2013, many schools had already completed their admissions and enrollment processes by that time. In addition, according to some schools we visited, awarding scholarships so close to the start of the school year can affect their ability to adequately plan for the coming school year. For example, officials at two schools we visited told us that as a result of the scholarship lottery and school fair being held in July and August 2012, respectively, they did not know how many students would be enrolled at their schools until just a few weeks before the school year started in August. This made it difficult to discern class sizes and plan for the number of teachers needed. Officials at schools we visited told us that they have found ways to work around the scholarship time frame and accommodate OSP students. For example, one school held an additional open house and another school accepted applications as late as the first week of school, according to two school officials. 
However, several parents we spoke with mentioned that they only had about a month to find a school and enroll their students, at which time many schools were no longer accepting students, and three school officials told us that the scholarship timeframes do not give families enough time to research school options. Since the application and financial aid deadlines for some of the program s most costly schools occur much earlier than the OSP scholarship timeframes, by the time a student is awarded a scholarship, there are neither financial aid dollars from the school nor space available for that student. According to Trust officials, Education s requirement to verify family income using the most recent complete year of income information delays scholarship awards. To verify income information, the Trust also told us they have to wait until after the April 15th tax filing deadline. Trust officials noted that some schools use prior year income information to determine provisional financial aid awards, reconcile these awards with the current year s tax information once that is available, and rescind financial aid awards if the incomes exceed a certain level. Trust officials expressed concerns about having to potentially rescind any scholarships, even though the chance that incomes will change so substantially among program participants is low. Trust officials are currently exploring ways to use prior year income information to determine eligibility earlier in the year and enable them to award scholarships earlier. Education officials told us if the Trust wants to consider using a different year s income for the purpose of making preliminary determinations regarding students eligibility, it would have to submit its proposal to Education for evaluation. <3. The Trust Has Not Developed Effective Internal Controls to Safeguard Program Funds> <3.1. The Trust Has Not Developed Effective Policies and Procedures for Implementing and Overseeing the Program> The Trust s policies and procedures lack detail in several areas related to school compliance and financial accounting, which may result in little overall accountability for program funds. The absence of detailed policies and procedures also reflect weak internal control in the areas of risk assessment, control activities, information and communication, and control environment. Internal control is broadly defined as a process designed to provide reasonable assurance that an organization can achieve its objectives with effective, efficient operations, reliable financial reporting, and compliance with laws and regulations. The Committee of Sponsoring Organizations of the Treadway Commission s (COSO) Internal Control Integrated Framework, includes five key elements or components, as shown in table 1. COSO is applicable to the Trust because it is a non-governmental entity. Policies and procedures are a central part of control activities and help ensure necessary actions are taken to address risks to achievement of the entity s objectives. In August 2013, the Trust made amendments to its policies and procedures to address the financial review performed on schools, administrative expenses, and bank reconciliations. However, these amendments do not address all weaknesses identified in this report, and they have not yet been fully implemented. The Trust, as administrator of OSP, is responsible for ensuring that participating schools comply with the reporting and program requirements that are outlined in the SOAR Act. 
The Trust, however, does not have a process for independently verifying the information that schools submit as evidence of compliance, reflecting weaknesses in the Trust s risk assessment process and internal control activities. The SOAR Act requires a potential OSP grantee to demonstrate how it will ensure that participating schools meet certain reporting and program requirements specified in the Act. In addition, the SOAR Act requires the grantee applicants to ensure that participating schools report to parents at least once during the school year on students academic achievement, as well as the safety and accreditation status of the school. The Act also requires schools to meet certain standards, such as building occupancy and teacher credentialing requirements. The Trust s policies and procedures state that participating schools must self-certify whether they have met several of these requirements. Participating schools complete a School Participation Verification Form that is submitted to the Trust each year to attest to student academic performance, school safety, and all of the applicable requirements in the SOAR Act, including status of accreditation. According to District regulation, if a private school in the District is not accredited, the school can provide evidence of acceptable instruction to the District s Office of the State Superintendent for Education (OSSE) by submitting information regarding teacher credentials and curriculum documentation. However, during our interview, officials from OSSE stated that they do not regularly ensure that schools have an acceptable curriculum. Officials we spoke with said that they are revisiting this policy, but they did not provide a timeline for whether and when any changes will occur. According to OSSE officials, the last time the Trust requested an accreditation review of participating OSP schools was before the start of the 2010-2011 school year. The SOAR Act states that participating schools must maintain a valid certificate of occupancy. The Trust collects and reviews copies of the certificates of occupancy. However, if the validity of the certificate of occupancy changes, the Trust may not know of the change. We obtained and reviewed the certificates of occupancy for the 10 schools we selected for site visits. In 5 of the 10 schools, we could not discern the validity of their certificates of occupancy. For example, one school s certificate of occupancy did not list school or private school for the use, but rather child development center and infant preschool. Enrollment at three other schools, as shown in the Directory, appeared to exceed the listed capacity, and one additional school s certificate did not list a capacity so it is not known if the school has exceeded its current capacity. Additional follow-up is needed to determine if these certificates of occupancy are, in fact, valid, but the Trust does not make any inquiries with the District agency responsible for issuing certificates of occupancy, nor does the Trust follow up with the schools. In addition, the Trust conducts site visits at participating schools but does not verify the documents or activities to which the schools attest. Participating schools complete a School Review Form to document each site visit. This form is also used by schools to attest to meeting reporting and program requirements and is not independently verified by the Trust. As the program administrator, the Trust is responsible for mitigating potential program risks. 
Verifying the information reported by participating schools is important to ensuring that scholarships are awarded only to students to attend participating schools that are in compliance with the SOAR Act's requirements and that the program is administered effectively. Without a mechanism or procedures for verifying the accuracy of the information provided by participating schools, the Trust cannot ensure that schools are eligible to participate in OSP and, therefore, risks providing federal dollars to students to attend schools that do not meet the educational and health and safety standards required by the District.

The Trust's policies and procedures lack sufficient detail to ensure each participating school in OSP has the financial systems, controls, policies, and procedures in place to ensure federal funds are used according to the law, a requirement of the SOAR Act. The Trust's policies and procedures require the financial Controller of the Trust to review documentation that demonstrates the adequacy of each participating school's financial resources. Based on this financial review, the Trust has identified two schools as high risk, one for issues that occurred during the 2010-2011 school year, and the other for financial struggles in 2011. However, the policies and procedures for this financial stability review do not identify the specific risk factors that should be considered when assessing schools' financial sustainability information. In addition, it is unclear what risk factors were considered during the prior Controller's review of the schools' documentation demonstrating adequate financial resources since the policies and procedures did not specify factors to consider and, according to current Trust officials, there was no documentation about the review. According to the risk assessment and control activities components of COSO's internal control framework, it is important that management carefully considers factors that contribute to or increase risk and that management creates policies and procedures that help ensure that necessary actions are taken to address these risks. For OSP, factors that contribute to or increase risk include whether a school can continue to reasonably meet its financial obligations as they become due, and how dependent a school is on OSP funds. If a school cannot reasonably meet its financial obligations, it could be monitored more closely before and after being accepted into the program. If a school is overly reliant on OSP funds, which would be determined by the Trust, further review or increased monitoring could be warranted. As a result of certain risk factors not being considered in assessing schools' financial sustainability, schools that are not financially sustainable may be participating in the program.

Based on documents provided by the Trust, several schools we visited that participated in OSP during the 2010-2011 and 2011-2012 school years did not provide detailed financial statements necessary to assess their financial stability, and during this time, the Trust did not have a practice of documenting its financial review of schools. The financial information submitted by 6 of the 10 schools we visited did not include detailed financial information required by the Trust's policies and procedures. There was not any documentation or guidance completed by the previous Controller on these financial reviews.
Officials from participating OSP schools we visited also confirmed that the Trust did not ask for additional information or support for the financial condition of the schools. Despite the policies and procedures that exist for OSP financial review of schools, there is little documentation detailing the analyses performed or the conclusions reached by the Trust. In addition, exceptions to the policy, such as where schools submitted information that did not adhere to the policies and procedures, was not documented. As stated earlier, the Trust amended their policies and procedures in August 2013 to include more detail regarding the financial review of participating schools; however, these amendments have not yet been fully implemented. The Trust does not have detailed policies and procedures for dealing with schools that are not in compliance with program rules. For example, if the Trust discovers that a school no longer possesses a valid certificate of occupancy or is not accredited and did not meet the District s educational standards, the Trust lacks policies and procedures dictating what is to be done at what time in the school year to ensure all schools participating in the program meet program rules and all students in OSP are attending schools that comply with these standards of safety, educational quality, and financial stability. The Trust s policies and procedures provide only one example of a policy directed at a school not meeting standards required by the SOAR Act: if a school is unable to provide supporting documentation for financial sustainability, the Trust s policy is that scholarship payments to the school will be distributed on a month-to- month basis, as opposed to three times per year. It is a weakness in the Trust s control activities that there is not a clearly defined, comprehensive written policy addressing schools out of compliance with program rules. Trust officials told us that they were not clear on what actions they could take to address non-compliant schools. However, Education officials told us that under certain circumstances the Trust can remove a school from participating in OSP. The Trust s policies and procedures for fiscal years 2010, 2011, and 2012 did not specify how to track administrative expenses, including what expenses should be included, and the Trust has little documentation to support administrative expenses incurred during the fiscal years 2010, 2011, and 2012. In addition, the Trust does not have detailed policies and procedures outlining the monthly bank reconciliation process, where staff review accounting system records and compare them to bank records. As stated earlier, it is important that an entity s management have policies and procedures in place that address risks to achieving an entity s objectives. Control activities include a range of activities such as verifications, reconciliations, as well as reviews of operating performance and security of assets. The SOAR Act limits the Trust s administrative expenses to 3 percent of the annual grant amount. The Trust s policies and procedures state that they may only draw down federal funds from Education on a reimbursement basis after such expenses have been incurred and paid. 
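To make the statutory percentages concrete, the short sketch below applies the SOAR Act set-asides described in the Background (5 percent for evaluation and 3, 2, and 1 percent for the Trust's administration, parental assistance, and student academic support) to the fiscal year 2012 appropriation of $20 million cited earlier. This is illustrative arithmetic only, not reported program spending.

```python
def soar_act_allocations(appropriation: float) -> dict:
    """Split an annual OSP appropriation per the SOAR Act's percentage set-asides."""
    shares = {
        "program evaluation (5%)": 0.05,
        "Trust administrative costs (3%)": 0.03,
        "parental assistance (2%)": 0.02,
        "student academic support services (1%)": 0.01,
    }
    allocations = {purpose: appropriation * pct for purpose, pct in shares.items()}
    # The remaining 89 percent goes toward student scholarships.
    allocations["student scholarships (remainder)"] = appropriation - sum(allocations.values())
    return allocations

# Illustration using the fiscal year 2012 appropriation of $20 million.
for purpose, amount in soar_act_allocations(20_000_000).items():
    print(f"{purpose}: ${amount:,.0f}")
# The 3 percent administrative cap on a $20 million grant works out to $600,000.
```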
While the Trust's policy manual does provide guidance for tracking employee time that should be billed to the program, it does not provide guidance for other expenses such as rent, telephone, printing, or office supplies, which could possibly be counted as administrative expenses for purposes of seeking reimbursement. The cost of administering OSP could be higher or lower than the 3 percent designated for administrative costs. Because these expenses were not tracked prior to October 2012, the true cost of administering the OSP program is unknown. In part due to the outcome of its fiscal year 2010 financial audit, the Trust has begun to track administrative expenses more closely. The Trust did not draw down federal reimbursement from Education for administrative expenses for fiscal years 2010, 2011, and 2012. In addition, the Trust did not document the decision or approval as to why it did not follow its policies and procedures or did not request reimbursement from Education for its administrative expenses. According to Education officials, grantees do not typically document why they have not drawn down funds. In addition, they explained that Education's grants management system automatically flags funds that have not been drawn down within a particular time frame, and the system has not flagged the Trust since funds had been drawn for the scholarships. Education officials also stated that the Trust classifies its obligations and expenditures in reports such as its monthly expense reports, which is sufficient for the department's purposes.

In addition, as part of the finance department's monthly bank reconciliation process, the Trust's policies and procedures also refer to scholarship payment reconciliation as well as administrative expense drawdown reconciliation. However, the Trust's policies and procedures do not specify certain aspects of the reconciliation, such as when the reconciliations should be completed, how exceptions should be identified and resolved, how the process should be documented, and when the reconciliation should be reviewed and by whom. Without specific guidance in the policies and procedures, it will be difficult for the Trust to ensure that items are being appropriately and consistently tracked and recorded, as administrative expenses may be included that should not be (or not included that should be). In addition, without specificity in the policies and procedures regarding bank reconciliations, these reconciliations may not be completed in a consistent or timely manner, which increases the potential for errors and the likelihood that errors will not be identified and corrected in a timely manner.

<3.2. The Trust's Database Is Not Efficiently Structured>

The Online Reports and Invoicing System (the database), which is used by the Trust to manage the OSP program, is not structured well enough for effective program implementation and oversight. Information contained in the database includes current and past application information for students and guardians, school placement information for each student who received a scholarship award, payment information, and information on participating schools. According to COSO's Integrated Framework, information should be communicated to management and within the entity in a form and time frame that enables officials to carry out their responsibilities and determine whether they are meeting their stated objectives.
For example, in OSP, it is important that Trust officials have access to accurate, up-to- date student application information in order to meet program objectives, such as determining eligibility and awarding OSP scholarships in an efficient and timely manner. Similarly, families of OSP scholarship award recipients, as consumers, need complete and timely information about participating schools to make informed decisions about what school is best for the student. The Trust s database has several deficiencies including a lack of documentation and automated checks, and a deficient structure, which leaves the database open to errors and slows the Trust s ability to manage the program on a day-to-day basis, and communicate information about the program to families and Education. The Trust (including the contractor that created and maintains the database) does not have any documentation for this database. For example, there is no user guide or data dictionary, and potential users of the database must rely on the institutional memory of select users who have had more experience with the database to understand how to properly use the database or the definitions of key data fields. When we attempted to follow various verbal or informal written instructions for using the database, we found flaws in these instructions that necessitated several rounds of follow-up to successfully perform key functions and basic data manipulation. It is not known what specifically caused this condition, but Trust officials have noted financial constraints in requesting more support from the contractor. In addition to lacking documentation, the database does not have key automated checks built into the system to ensure accuracy and efficient data entry. Automated checks help maintain integrity in a database reducing the risk that a significant mistake could occur and remain undetected and uncorrected. For example, the Trust s database does not have checks to determine if a student s application information has already been entered. When a student submits an application, before his or her information is entered into the system, a Trust staff person must run searches in the database for his or her name, the guardian s name, both persons Social Security numbers and address to ensure the applicant is not already in the system. These checks are necessary because the system allows duplicate entry of the same student information, potentially rendering the data unreliable, even with the searches described above. To ensure data are accurate given the lack of automated checks, the Trust stated that one of their officials runs several queries and creates tables in Microsoft Excel to check for errors by hand on a weekly or more frequent basis during key periods for the program. The database s current structure hampers an administrator s ability to look at historical trends and use them as an effective management tool. When the Trust inherited the database from the Washington Scholarship Fund, the older data in the system were not cleaned, and thus there are many records in the database with missing fields, or applications that are only partially entered, suggesting they are not valid records. The Trust has stated they cannot attest to the accuracy of these older data. Yet, the Trust is using these older data in reports without an explanation of the potential issues of inaccuracy. 
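One common way to provide the kind of automated check described above is to let the database itself reject duplicate applications rather than relying on staff to run manual searches. The sketch below is a minimal illustration using assumed table and column names; it is not the Trust's actual Online Reports and Invoicing System.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE applications (
        id INTEGER PRIMARY KEY,
        student_ssn TEXT NOT NULL,
        school_year TEXT NOT NULL,
        student_name TEXT NOT NULL,
        guardian_name TEXT NOT NULL,
        -- Reject a second application for the same student in the same school year.
        UNIQUE (student_ssn, school_year)
    )
""")

def add_application(ssn, year, student, guardian):
    """Insert an application; the database itself blocks duplicate entries."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO applications (student_ssn, school_year, student_name, guardian_name) "
                "VALUES (?, ?, ?, ?)",
                (ssn, year, student, guardian),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate detected automatically, no manual search needed

print(add_application("000-00-0001", "2012-2013", "Jane Doe", "John Doe"))  # True
print(add_application("000-00-0001", "2012-2013", "Jane Doe", "John Doe"))  # False: duplicate
```

A uniqueness constraint of this kind flags a repeat application for the same student and school year at the moment of data entry, rather than depending on weekly hand checks.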
The weaknesses in the database's structure also affect key activities for the Trust, such as determining the priority groups of applications for the OSP lottery. As stated earlier in this report, the SOAR Act states that if more students apply to OSP than the program can accommodate, the grantee (the Trust) must ensure that applicants are selected for awards through a random selection process that gives weight to three priority groups (which the Trust has implemented through a lottery). The groups of applicants that have a priority status as defined in the SOAR Act are: (1) applicants who are currently attending a school that has been identified for improvement, corrective action, or restructuring under the Elementary and Secondary Education Act of 1965, as amended; (2) applicants who have a sibling currently participating in OSP; and (3) applicants who received an OSP scholarship award in a previous year but did not use it. Using the Trust's database, we employed our own methodology to replicate the process to determine the number of students in each applicant priority group for the 2011-2012 school year because no documentation exists for these queries. However, we were unable to determine which students belonged in the third priority category of students for the 2011-2012 school year because a key variable necessary for this calculation appears to be unreliably populated. In other words, it does not appear possible to use the Trust's database to derive this priority category, which puts in question the Trust's ability to provide accurate priority categories for the OSP lottery.

Lastly, data entry of application information is also problematic. In addition to basic eligibility information, OSP applications also include important information needed for the third-party evaluation of the OSP program. According to Education officials, the evaluation timelines have been negatively affected for the 2011-2012 and 2012-2013 school years because data entry was not completed in a timely manner. In one instance, the Trust had to retrieve hard copies of the applications from storage and send relevant application data to the officials involved in the evaluation because they were so behind in data entry. Education officials stated that there may have been a lack of clarity regarding the specific information the Trust was supposed to enter into the database and that delays resulting from untimely data entry should not be an issue going forward.

<3.3. The Trust Has Not Filed Its Mandatory Financial Reports on Time>

The Single Audit Act requires that recipients submit their Single Audit reports to the federal government no later than 9 months after the end of the period being audited, which would be June 30th for an entity with a September 30th fiscal year end, such as the Trust. The Trust's Financial Statements, Schedule of Expenditures of Federal Awards, and Independent Auditors' Reports Required by Government Auditing Standards and OMB Circular A-133 for the Year Ended September 30, 2010, were issued on January 31, 2013, more than 2 years after the end of its 2010 fiscal year. Until this report was issued, Education did not have the financial reports required to properly account for the federal funds expended for OSP under the Trust's administration. According to Trust officials, the 2010 audit was delayed because of a need to change audit firms and ongoing investigations regarding its finances related to other grants. The issue of delayed reports has not been resolved.
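As a simple illustration of the reporting deadline just described, the sketch below computes the Single Audit due date (9 months after the fiscal year end) and the approximate delay for the Trust's fiscal year 2010 report, which was issued on January 31, 2013. Only the simplified date arithmetic is an assumption; the dates come from the facts stated above.

```python
from datetime import date

def single_audit_due_date(fiscal_year_end: date) -> date:
    """Due no later than 9 months after the audited period ends (simplified date math)."""
    month = fiscal_year_end.month + 9
    year = fiscal_year_end.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Simplification: assumes the due month has at least 30 days (true for a Sept. 30 year end).
    return date(year, month, 30)

fy2010_end = date(2010, 9, 30)
due = single_audit_due_date(fy2010_end)   # June 30, 2011
issued = date(2013, 1, 31)                # actual issuance of the fiscal year 2010 report
print(f"Due {due}, issued {issued}: roughly {(issued - due).days} days late.")
```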
As of August 2013, the Trust's audited financial statements for fiscal years ending September 30, 2011, and September 30, 2012, have not yet been issued (see fig. 4). On August 9, 2013, Trust officials told us that the final fiscal year 2011 audit is expected to be completed by the end of August 2013 and the fiscal year 2012 audit will not be completed before September 2013. Because of the delay in submitting mandatory financial reports, there was no opportunity for formal oversight of federal dollars spent on OSP for almost 2 years under the Trust's administration. In addition, the continued delay in the audit reports means the Trust has not taken actions on deficiencies found in each audit in a timely manner to mitigate further issues.

<4. Agencies Have Not Fully Executed Responsibilities Outlined in Governing Documents>

The SOAR Act requires that the Secretary of Education and the Mayor of the District of Columbia enter into a memorandum of understanding (MOU) that addresses how OSP will be implemented. The MOU, which was agreed upon by Education and the District in June 2012, is intended to help ensure the efficient and effective implementation of OSP in a manner that incorporates and is consistent with the roles and responsibilities of Education and the District. It describes, for example, offices and officials within Education and the District that have lead responsibility for implementing OSP, issues and areas on which Education and the District will collaborate, and specific activities that Education and District agencies, such as the Department of Consumer and Regulatory Affairs (DCRA), Fire and Emergency Medical Services (EMS), and the Department of Health, will carry out. According to the MOU, Education is responsible for working with the Trust to improve certain aspects of the administration of OSP. The District is responsible for conducting regulatory inspections of participating schools upon notification by the Trust and providing the Trust with the results of those inspections (see fig. 5). In addition, Education entered into a cooperative agreement with the Trust to set forth the responsibilities of each entity for implementing OSP as required by the SOAR Act and the Education Department General Administrative Regulations. Education and Trust officials signed the cooperative agreement in June 2013, 3 years after the grant for OSP had been transferred to the Trust. Education officials stated that, among other reasons, the cooperative agreement had not been developed any earlier because of substantial turnover in staff at the Trust. Education officials stated they wanted to wait to implement the agreement until the Trust's staff had stabilized. Trust officials stated that the cooperative agreement was an opportunity to codify the various ways in which Education and the Trust would assist each other in implementing the program. Through the cooperative agreement, Education also agreed to assist the Trust in the same four areas stated in the MOU.

<4.1. Education Has Provided Limited Assistance to the Trust>

Education has provided limited assistance to the Trust as agreed upon in the MOU and cooperative agreement governing OSP. Trust officials acknowledged that they have not proactively sought Education's assistance in these areas. They told us that although they have had several discussions with Education regarding the general administration of the program and operational issues, they have not had opportunities to discuss lessons learned or making improvements in certain areas of program administration.
Through the MOU and cooperative agreement, Education is responsible for helping the Trust ensure that it implements appropriate improvements to its financial system. However, according to Trust officials, Education has not undertaken any activity to help improve its financial systems and the Trust has not had any conversations with Education about doing so. Education officials told us that the Trust s financial systems were an improvement over the previous administrator s financial system and that they have not had to closely manage the Trust in this area. However, as previously discussed, the Trust s policies and procedures do not provide specific guidance on allocating certain administrative expenses and the Trust does not have detailed policies and procedures outlining the bank reconciliation process. Further, as a result of the Trust s tardiness in submitting mandatory financial reports, the Trust was unable to account for federal dollars spent on OSP for about 2 years after the end of fiscal year 2010. Education has not assisted the Trust in developing, implementing, or updating its policies and procedures for conducting site visits, as specified in the MOU and cooperative agreement. According to Education officials, as long as the Trust maintained the procedures used by the previous administrator, there was no need to revise its site visit procedures. When we spoke to Trust officials, however, we were told that although Education has not assisted with its site visit policies and procedures, the Trust updates its procedures every year and provides them to Education for review. Additionally, Education told us that it did not have any concerns with the timing of the Trust s site visits, even though the Trust did not conduct site visits for the 2012-2013 school year until February 2013 the second semester of the school year. According to the MOU and cooperative agreement, Education agreed to assist the Trust in developing procedures to improve the accuracy of information provided to families before they choose a participating school and encourage schools to make such information available on an annual basis to families of enrolled students. Education officials stated that they meet with the Trust at the start of each school year to discuss marketing and recruitment, the participating school fair, and how to engage with families, and to provide feedback on these aspects of OSP as needed. For example, Education worked with the Trust to publish the school Directory for the 2012-2013 school year. Education did not have any concerns with the accuracy of the information provided to families for the purpose of selecting a school to attend. According to the Trust, Education relies on it to verify that schools are providing parents with such information. However, as discussed earlier, the school Directory does not include complete information about schools tuition, fees, and accreditation that families need to make informed school choices. In addition to the responsibilities described above, Education is also responsible for providing oversight of the Trust as a recipient of federal funds. According to Education officials, its role is to identify a grantee to administer OSP and ensure that the grantee adheres to applicable federal law and regulations. Education monitors the Trust with regard to (1) progress in implementing OSP; (2) financial records; and (3) data records, including records on student placements and numbers of scholarship awards. 
The activities that Education conducts as part of its oversight of the Trust are documented in the cooperative agreement between Education and the Trust and it conducts ongoing reviews of the Trust s activities to ensure satisfactory performance under the cooperative agreement. Education officials stated that the cooperative agreement was developed, in part, to increase its oversight of the Trust. For example, Education used the cooperative agreement to institute reporting requirements and clarify program rules. Officials said they also recognized the need for a closer partnership with the Trust and stated that the cooperative agreement was an opportunity to clearly describe the Trust s expectations. Education officials also told us that they provide guidance and technical assistance to the Trust. For example, Education officials told us they participate in conference calls with Trust officials, review the Trust s annual performance report and monitoring plans, and provide ongoing technical assistance. Education officials also stated that they conduct regular, ongoing desk monitoring of the Trust, and will also conduct biannual site visits with the Trust beginning in the fall of 2013. If there were any problems with safeguarding OSP funds, Education officials stated they would work with the Trust to correct the problem or, if necessary, they could penalize the Trust through actions such as freezing and withholding funds, recovering funds, and ensuring that the grant cannot be renewed. <4.2. Requirements Under the MOU Are Not Being Met> The MOU requires that the District conduct inspections necessary for schools to participate in OSP upon notification by the grantee (the Trust), but required inspections are often not being conducted. For example, the MOU requires the District Government, upon notification by the Trust, to conduct regulatory inspections of all program schools to determine whether they have a current certificate of occupancy. The District s DCRA issues certificates of occupancy and conducts inspections based on the types of building or trade permits that schools have. A DCRA official, however, stated that it relies on the permit holder to make requests for all required inspections. Also, the District s Fire and EMS Department is responsible for conducting biennial inspections or otherwise determining whether participating schools comply with applicable District health and safety requirements, as required by the MOU. However, a Fire and EMS official stated that private schools are not subject to regular inspections. Trust officials told us they did not know whether participating schools had been inspected by the District agencies. Lastly, officials at schools we visited told us that certain District agencies had conducted some inspections but not specifically for OSP. Rather, they occurred for other reasons, such as general inspections for schools that offer special education or early childhood services. The MOU also states that the District should provide information to the Trust regarding whether participating schools are in conformance with District requirements. The Trust s understanding of this requirement is that the District should be providing the inspection results on a regular basis without being specifically asked by the Trust. However, Trust officials told us they do not receive any information from the District as a result of any inspections that may have been conducted at participating schools. 
And according to the Trust, it does not follow up with District agencies to inquire about the results of any of the inspections. Although the MOU is a written agreement between Education and the District, it includes a responsibility for the Trust. Specifically, the MOU states that the Trust as the grantee is responsible for notifying District agencies to conduct these required inspections of participating private schools. Given that the Trust is responsible for ensuring that participating schools continue to be eligible to receive federal dollars through OSP, notifying the District agencies can be important in ensuring appropriate oversight of participating schools. This activity can be especially important for agencies like the District s Fire and EMS department that do not normally inspect private schools. However, because the Trust is not a signatory to the MOU, Trust officials were not acutely aware that they were responsible for notifying the District agencies. Furthermore, the cooperative agreement between Education and the Trust does not include this requirement for the Trust to notify District agencies. Officials from the Trust stated that they assumed that District agencies conducted inspections of private schools as a matter of process, without the Trust prompting them to do so. As a result, activities that are crucial to the successful implementation of the program such as building, zoning, health, and safety inspections may not be occurring for all participating schools. <5. Conclusions> The OSP is intended to afford District families the opportunity to attend higher-performing schools of their choice and have all or part of the cost of attendance paid. In order to make well-informed decisions, however, families need complete and timely information on schools tuition, fees, and accreditation, as well as timely scholarship awards. Since taking over OSP, the Trust has not provided such information to families in a timely manner, requiring families to make less-informed decisions near the beginning of the school year, when many schools no longer have places for admission. The Trust needs to improve program management and operations to assure efficiency and effectiveness. Currently, it does not effectively oversee participating schools, has not implemented effective policies and procedures, and is unable to efficiently manage day-to-day program operations. As a result, families may believe that participating schools fully meet program requirements, although the Trust cannot ensure that these schools meet them. In August 2013, the Trust made amendments to its policies and procedures manual regarding the financial review performed on schools, administrative expenses, and bank reconciliations. However, these amendments do not address all weaknesses identified in this report, and they have not yet been fully implemented. Without sufficiently detailed policies and procedures for all aspects of the Trust s operation, the Trust cannot sufficiently monitor its own operation of the program and may not be able to account for all federal dollars spent on OSP. Additionally, because of the weaknesses in its program database, the Trust cannot efficiently manage day-to-day operations of the program. The weaknesses that we have identified highlight the need for sustained oversight and management attention to ensure efficient and effective program implementation and accountability over federal funds. 
However, Education has provided the Trust with limited assistance in specific areas stated in the MOU and the cooperative agreement governing the program. Furthermore, the Trust is unsure whether key activities crucial to the successful implementation of the program inspections for compliance with the District s building, zoning, health, and safety requirements are being conducted. Under the MOU, the inspections are to be done upon the Trust s notification to District agencies, but the Trust is not a signatory to the MOU. Moreover, the cooperative agreement between Education and the Trust is silent regarding the responsibilities of the Trust on this issue. The Trust s limited ability to effectively oversee participating schools coupled with the weaknesses in the Trust s internal control environment underscore the need for Education to conduct more rigorous oversight and monitoring of the Trust. Not only would increased oversight help ensure that the Trust is administering the program effectively, but it would also help ensure that OSP achieves its intended purpose as outlined in the SOAR Act. <6. Recommendations for Executive Action> To ensure families receive expanded opportunities for school choice and OSP is implemented and overseen effectively, we recommend that the Secretary of Education take the following actions: Revise the June 2013 cooperative agreement between Education and the Trust to include Education s expectations for the Trust regarding collaboration with District agencies to ensure they conduct required building, zoning, health, and safety inspections. Work with the Mayor of the District of Columbia to revise the memorandum of understanding that governs OSP implementation to include processes that will help ensure that the results of OSP school inspections, when they are conducted, are communicated to the Trust. Explore ways to improve Education s monitoring and oversight of the Trust. For example, Education could require the Trust to develop and implement a plan for how it will address its timeliness in mandatory financial reporting. Conduct activities to ensure that the Trust: publishes the school Directory prior to participating schools application deadlines with more complete information. more closely aligns its scholarship timeframes with schools admissions and enrollment schedules. Education could find ways to assist the Trust in executing the OSP lottery earlier in the school year, for example, by allowing the Trust to use the prior year s income information to make preliminary determinations regarding eligibility. improves its OSP database by creating and maintaining documentation for the use of the database, adding automated checks to the system, creating flags and/or variables for the key priority categories called for in the SOAR Act, and streamlining the data entry process for new applications in order to ensure there is sufficiently reliable data regarding the operation of the program. Require the Trust to update its policies and procedures in several key respects, and monitor the Trust s activities to ensure that these updates are made. These updates should include: identifying the steps the Trust will take to verify reported information on participating schools compliance with reporting and program requirements specified in the SOAR Act. 
- identifying the specific risks that should be included in the evaluation performed when assessing participating schools' financial sustainability information, including a record of the analyses performed, and ensuring that the conclusions reached with regard to schools' financial sustainability are documented;
- specifying a process for addressing schools that are noncompliant with program requirements; and
- providing more detailed procedures in key areas, including calculating yearly administrative expenses and performing monthly and year-end reconciliations of the OSP bank account, and ensuring that the document reflects current practices and that key decisions associated with administrative expenses are documented.

<7. Agency Comments and Our Evaluation>

We provided a draft of this report to Education, the Office of the Mayor of the District of Columbia (the District), and the DC Children and Youth Investment Trust Corporation (the Trust) for comment. Education officials provided technical comments, which were incorporated into the final report as appropriate. Written comments from Education and the Trust are reproduced in appendices V and VI, respectively, and are also summarized below. The District did not provide written comments on the draft report. In its comments, Education did not indicate agreement or disagreement with our recommendations. The Trust generally agreed with our recommendations but disagreed with some of the findings. Both Education and the Trust provided additional information on issues raised in the report.

In its letter, Education stated that it took the issues raised in the GAO report seriously and would continue to consider them carefully, but noted that the report did not fully reflect the Trust's efforts to provide complete and timely information about participating schools to OSP families. However, in our final report, we discuss a number of different ways in which the Trust provides information and assistance to current and prospective OSP families, and we believe we adequately reflected the Trust's efforts. Education also noted that the report does not fully recognize the significant turnover in leadership at the Trust, which affected several aspects of the Trust's implementation of the program in school year 2012-2013. We recognize that the Trust faced a number of challenges as it assumed responsibility for the program. However, several issues that we identified in the report are not new, but rather reflect long-standing weaknesses in internal controls and oversight of the program. Therefore, we believe that if Education had provided more effective oversight of the program and more timely technical assistance to the Trust, many of the challenges identified in the report could have been addressed sooner. In addition, more effective internal controls, including robust policies and procedures for program administration and school oversight, could have helped to smooth the difficulties associated with the transition in leadership and program administration.

The Trust disagreed with our finding that its Participating School Directory provided incomplete and untimely information. More specifically, in its comments, the Trust emphasized that this condition does not exist under the current management and administration of OSP and that the Trust provides information through a comprehensive array of tools, events, and individualized assistance.
We acknowledge the substantial change in staff that occurred at the Trust and discuss in our report a number of ways in which the Trust provides assistance to families through channels other than the Directory. However, the most recent Directory, a key source of school information for families, was still published too late and contained incomplete information about tuition, fees, and accreditation. Education's written comments focus on the Trust's deliberate decision to maintain a reasonably sized, user-friendly directory with straightforward information on average tuition and fees, and on the efforts of the Trust to provide individualized ongoing assistance to families and assistance through school placement fairs. We agree that the Directory must be prepared for a non-technical audience, but continue to believe that for a program predicated on choice, it is imperative that the Trust provide families with the complete and timely information needed to make informed decisions, especially with respect to tuition and fees. The Trust could accomplish this without overburdening families with excessive information by using a general notation for schools that have discounted or different tuition rates, as the Trust noted in its written comments, or by providing a range of the potential total cost of attendance for families. As the Trust indicated in its letter, it could also include more accurate notation of school-accrediting bodies. We support the Trust's efforts to update the Directory for future school years by including more complete information for OSP families.

In addition, in its comments, Education noted that substantial changes in the Trust's staff and the leadership transition affected the timing of the Trust's site visits at participating schools. As we discuss in our report, these circumstances resulted in site visits occurring later in the year, which in turn affected the timeliness of the Directory's publication. We acknowledge that this timing is not the Trust's routine practice and encourage the Trust to continue to work toward conducting these site visits earlier in the school year and publishing the Directory earlier. We also support Education's plans to work with the Trust to provide the Directory prior to participating schools' application deadlines. The plan to publish the Directory by December 2013 for the 2013-2014 school year, and by November in future school years, represents a marked improvement in the timeliness of the school directories. However, we maintain that families need to have information about schools prior to participating schools' application deadlines. We encourage Education and the Trust to consider publishing the Directory earlier than November or December to help families start making school decisions as early as possible in the scholarship application process, ensuring that families have the full range of school options available to them.

In response to GAO's finding regarding the misalignment of scholarship awards with participating schools' admissions and enrollment schedules, Education expressed concern that an earlier scholarship lottery date could, among other things, result in the Trust conducting less outreach to potential applicants.
We acknowledge the steps that, according to Education, the Trust needs to take prior to the lottery, such as screening applications for eligibility, conducting multiple rounds of baseline testing, and ensuring that the lottery pool of eligible applicants is sufficient so that the evaluation can proceed with the necessary sample size. Education stated that the Trust would need to submit a proposal to Education if it wanted to use a different approach for screening student eligibility, such as using a different year's income information, that could help expedite the steps leading up to the lottery. We support Education's consideration of any proposal to use a different approach that would ultimately enable the Trust to award scholarships earlier and ensure that OSP families have the widest range of choice possible, and we would encourage the Trust to fully consider the issues and, if it deems it appropriate, develop such a proposal for Education's consideration.

The Trust also disagreed with our finding regarding its lack of policies and procedures for implementing and overseeing OSP, stating that its policies and procedures manual clearly and accurately reflects the policies and procedures governing the program. Further, the Trust stated that although amendments to the policies and procedures regarding allocation of costs and reconciliation were codified in August 2013, these policies were in practice prior to that date. GAO applauds the Trust's efforts to increase oversight of participating schools and to update its policies and procedures accordingly. Despite the Trust's claims that these policies and procedures were already in practice, GAO does not consider these policies and procedures fully implemented until these activities take place on a regular basis. In addition, the Trust's revised policies and procedures do not address all of the weaknesses for which GAO recommended action, such as developing a process for addressing schools that are noncompliant with program requirements. Education commented that the Trust's proposed revisions to the letter of agreement with participating schools reserve the Trust's right to limit or suspend a school's participation in OSP and that any OSP schools that require further discussion about compliance would be handled on a case-by-case basis. However, we continue to believe that the Trust needs to develop a more clearly defined, comprehensive written policy to address schools that are noncompliant.

With regard to the weaknesses GAO identified in the structure of the Trust's database, the Trust stated that it would take GAO's comments under advisement and consider key changes and improvements. The Trust stated that the database is proficient in areas such as determining student eligibility, student invoicing, school oversight documentation tracking, and notation of communication with individual families and schools. The Trust also noted that it must use outside spreadsheets to complete any reporting on historical information about the program and that automated checks do not exist, but that the database does not impede data collection or identification of student subgroups for inclusion in the scholarship lottery (referred to in this report as "priority categories").
Based on our review, we continue to believe that the Trust's ability to determine the third priority category described in the SOAR Act (applicants who received an OSP scholarship award in a previous year but did not use it) is questionable because, according to GAO's analysis, a key variable in the database used to determine this priority category is not reliable.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees, the Secretary of Education, the Mayor of the District of Columbia, the Executive Director of the District of Columbia Children and Youth Investment Trust Corporation, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Scope and Methodology

We reviewed relevant federal and District laws and regulations and analyzed key documents from the DC Children and Youth Investment Trust Corporation (Trust), the federal Department of Education (Education), and participating schools, as well as generally accepted guiding documents for internal controls, including those published by GAO and the Committee of Sponsoring Organizations of the Treadway Commission (COSO). We also conducted site visits at 10 of the 53 participating private schools in the program and interviewed relevant school officials. In addition, we conducted two discussion groups, one in English and one in Spanish, with parents and guardians of students who received and used an Opportunity Scholarship Program (OSP) scholarship in the 2012-2013 school year. We also interviewed key officials at the Trust, Education, and select agencies in the District, as well as representatives of accreditation organizations and nonprofit tuition assistance organizations. In addition, we analyzed the structure of the Trust's database and the Trust's program data. We conducted this performance audit from May 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives.

<8. Review of Program Documents>

To determine the types of information available to families and the types of schools participating in OSP, we reviewed the Trust's Participating School Directory (the Directory) for school years 2010-2011, 2011-2012, and 2012-2013. We also reviewed the list of participating schools available to families at the beginning of the 2012-2013 school year and compared it to the Directory when it was issued in May 2013 to determine if there were any discrepancies. We also examined OSP application materials as well as documents given to OSP families at the 10 participating schools we visited.
These documents included application materials, tuition and fees information, and notices sent to families, such as school newsletters and notices about safety incidents occurring in and around the school. To determine the types of information participating schools submit to the Trust, we reviewed schools' Key School Data Forms. These forms include information on a school's enrollment, facilities, number of days of instruction, curriculum, and services offered, and are used by the Trust to compile the Directory. We also reviewed schools' Tuition and Fees forms to estimate a student's total cost of attendance. Given the variation in fees among participating schools, we estimated the total cost of attendance using several assumptions, such as full-time attendance for one school year for a new OSP student. We also used information provided by schools about their admissions requirements and deadlines to determine how their application deadlines aligned with the scholarship award time frames implemented by the Trust.

To determine the extent to which the Trust implemented internal controls over financial transactions and program administration, we reviewed the Trust's financial management policies and procedures. We also reviewed the Trust's financial statements, schedule of expenditures of federal awards, and independent auditors' reports required by government auditing standards and OMB Circular A-133 for the year ending September 30, 2010. In addition, we reviewed the Trust's administrative expense report and copies of monthly and annual bank reconciliations, and we reviewed relevant federal laws and regulations. To determine the roles and responsibilities of the Trust, Education, and the District, we examined the grant transfer agreement between the Trust, the Washington Scholarship Fund, and Education; the memorandum of understanding between Education and the District of Columbia Government; and the cooperative agreement between Education and the Trust. To determine the Trust's assessment of participating schools' financial stability, we reviewed fiscal year-end 2010-2012 financial statements and other documentation submitted by schools, such as management letters and reports. We also reviewed forms submitted by schools, such as the School Participation Verification Forms, to determine the mechanism by which schools attest to the veracity of the information they submit. In addition, we reviewed COSO's standards for internal controls for non-federal entities.

<9. Analysis of Program Data from the OSP Database>

To determine several aspects of the Trust's administration of OSP, we requested program data from the Trust, including participating school characteristics, OSP student enrollment at participating schools, student eligibility determination, scholarship payments, and student characteristics, including the priority categories for awarding scholarships as stated in the SOAR Act. We received program data from the 2004-2005 school year, the first year in which the program was administered by the Washington Scholarship Fund, to the 2011-2012 school year. We assessed the reliability of the data by performing electronic testing for obvious errors in accuracy and completeness and by interviewing agency officials knowledgeable about the data. We found substantial problems with the data, including missing fields or partially entered applications. As we previously stated in this report, the lack of documentation for the database made it difficult to understand which fields were current or still in use.
As a result, we limited the scope of our analyses to data from the 2011-2012 school year, the most recent and complete year of data available at the time of our review. We determined that these data were sufficiently reliable for the purposes of our report.

The Trust did not have a documented query or process to determine the number of OSP students enrolled at each participating school. Following the instructions from the Trust's data contractor, we initially determined the number of OSP students at each school by using variables representing the year, student and school identifiers, student school placement, and the number of days the student was enrolled. We found, however, that there was a large number of missing values for several of the key variables needed to complete this analysis. As a result, we determined that we could not use this method of calculating OSP enrollment. Instead, we calculated OSP enrollment by using a count of student identifiers by school, together with payment information. We also analyzed OSP student enrollment by race and ethnicity to determine the demographic characteristics of OSP students enrolled at each participating school.

Under the SOAR Act, Education is required to give priority to grantee applicants that will most effectively give priority to:
1. Students who, in the preceding year, attended a public school in the District that was identified as a school in need of improvement, corrective action, or restructuring under the Elementary and Secondary Education Act of 1965, as amended.
2. Students who previously were awarded a scholarship in a preceding year but have not used the scholarship, specifically emphasizing students who were awarded a scholarship for the first time in 2009-2010, when the 216 new scholarships awarded that year were rescinded by the Department of Education.
3. Students who have a sibling participating in OSP.

To assess the Trust's application of these priority groups for the 2011-2012 school year, we attempted to replicate the Trust's process for determining the number of students in each priority group. Since the Trust has not documented these processes, we determined the number of students in two of these priority groups (those attending a school identified for improvement and those with a sibling in the program) using detailed queries and clarifications from the Trust's contractor. Despite employing several different approaches, our results did not corroborate the Trust's results for these two categories. We were unable to determine a reliable number of students in the third priority group (those who were previously awarded a scholarship but had not used it). We found that one of the key variables Trust officials use to determine this category was unreliable. As a result, we could not use other variables in its place to reliably determine the number of students belonging in this group.

<10. Site Visits to Participating Schools>

To review aspects of participating schools' compliance with program rules and regulations, as well as their admissions and enrollment processes, we conducted site visits to 10 participating schools (see table 2). During these visits, we interviewed key school officials; reviewed documents, including certificates of occupancy, health certificates, and teacher credentials; and toured school facilities. We also interviewed finance officers at these schools to determine the schools' financial stability and the types of financial information schools submit to the Trust.
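Because the site-visit sample described in the paragraph that follows was drawn from the enrollment analysis described above, a minimal sketch may help make that calculation concrete. The sketch below, in Python with pandas, is illustrative only: the file and column names (such as student_id, school_id, payment_amount, sibling_in_osp, and prior_school_in_improvement) are hypothetical, since the Trust's database schema is not documented, and this is not the actual query used by GAO or the Trust.

```python
# Rough illustration only; hypothetical file and column names.
import pandas as pd

records = pd.read_csv("osp_payments_2011_2012.csv")

# Completeness check of the kind described above: count missing values
# in the fields the analysis depends on.
key_fields = ["student_id", "school_id", "payment_amount"]
print(records[key_fields].isna().sum())

# Approximate OSP enrollment per school: distinct student identifiers
# among records tied to an actual scholarship payment.
paid = records[records["payment_amount"].fillna(0) > 0]
osp_enrollment = (
    paid.groupby("school_id")["student_id"]
    .nunique()
    .rename("osp_students")
    .sort_values(ascending=False)
)
print(osp_enrollment.head(10))

# If the database carried reliable flags for them, two of the SOAR Act
# priority categories could be tallied from one record per student.
students = paid.drop_duplicates("student_id")
print(students["sibling_in_osp"].eq(True).sum())
print(students["prior_school_in_improvement"].eq(True).sum())
```

A count of this kind, joined with each school's total enrollment from the Directory, would also support the two site-visit selection criteria described next (the five schools with the highest concentration of OSP students and the five with the largest number of OSP students).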
We selected a nongeneralizable sample of 10 schools to visit based on two criteria: (1) the concentration of OSP students relative to total school enrollment and (2) the number of OSP students enrolled. Using this approach allowed us to visit schools that potentially were almost entirely funded by OSP scholarships as well as schools that were receiving the most absolute OSP dollars. We used student enrollment data obtained from the Trust for the most recent school year available (2011-2012) as well as information from the 2011-2012 Directory. Using these criteria, we determined the sample of 10 schools to visit by selecting 5 schools with the highest concentration of OSP students relative to the total school enrollment and 5 schools with the highest number of OSP students, as shown in table 2. To develop these criteria, we made multiple requests for information from the Trust and made several clarifying inquiries to ensure the accuracy of our approach. After we completed this analysis, through the course of additional interviews with the Trust, we learned that there were some additional types of records in the Trust's database that we needed to exclude to execute a more exact query for OSP enrollment. However, we received this information several months after nearly all the site visits were conducted. To provide balance, we also spoke with National Presbyterian School and Aidan Montessori School, two schools that historically had few or no OSP students enrolled, to determine what factors affect their interest in and ability to enroll OSP students. We selected these schools based on our analysis of the Trust's 2011-2012 data on the number of OSP students enrolled at participating schools.

<11. Discussion Groups with Parents and Guardians of OSP Students>

To better understand families' experiences with OSP, we conducted two semi-structured discussion groups with parents and guardians of students who received and used an OSP scholarship in the 2012-2013 school year. To ensure we had a diverse group of parents, we conducted one group in English and one group in Spanish. The Trust recruited parents and guardians to participate in these groups by randomly selecting 75 to 100 English-speaking and Spanish-speaking parents and guardians. Of these randomly selected parents and guardians, 14 participated in the discussion groups. Although asking the program administrator to recruit participants may result in a biased group of participants, the Trust staff knew this population best and could most effectively reach out to this group. As a result, the extent to which these participants represent all OSP families is unclear, and the results of these discussion groups are not generalizable to the entire population of OSP families. The Trust e-mailed letters written by GAO in both English and Spanish to parents explaining the purpose of the discussion groups and encouraging their participation. The two discussion groups were held in December 2012 and were about 1 to 2 hours in length. To ensure participants' privacy, parents and guardians were asked to use alternate first names during the discussion. In each of the discussion groups, we covered the following topics: reasons why parents were drawn to OSP; how they found out about the program; the school selection and application process; and their children's experience at their OSP schools. Following the discussion groups, we asked participants to provide us with some basic background information in a short voluntary questionnaire.
Thirteen of the 14 participants completed the questionnaire, which we provided in both English and Spanish. After we conducted the groups, we obtained transcripts of both groups to ensure the accuracy of the groups' discussions. For the discussion group conducted in Spanish, we obtained a transcript in Spanish as well as a translation in English to ensure the accuracy of the translation.

<12. Interviews>

We interviewed officials at the Trust to determine how the Trust administered OSP, its relationship with Education and the District, and its internal control activities. During one of these interviews with Trust officials, we conducted a walk-through in which Trust staff demonstrated how they enter and maintain their database. To better understand how the Trust manages its database, we interviewed Trust officials and the Trust's contractor, who is responsible for maintaining the database. To determine Education's role in OSP and its relationship with the Trust, we interviewed officials at Education's Office of Innovation and Improvement and Institute of Education Sciences. To understand the scholarship lottery process and how OSP is evaluated, we interviewed officials from Education and Westat, the contractor responsible for conducting the lottery. To understand the relationship between the District, Education, and the Trust, and to specifically understand the execution of the MOU, we interviewed officials at key agencies in the District, such as the Office of the Deputy Mayor for Education and the Office of the State Superintendent for Education. We also requested and received written responses to key questions from certain District agencies, including the Department of Health, the Department of Consumer and Regulatory Affairs, and the Fire and Emergency Medical Services Department. To determine the significance and role of accreditation, we spoke with representatives from the Middle States Association of Colleges and Schools Commissions on Elementary and Secondary Schools, the Association of Independent Maryland & DC Schools, and AdvancED. We also spoke with Independent Education, an association of private independent District-area schools that requires its member schools to be accredited. To better understand the role played by nonprofit organizations that provide additional private school tuition assistance, we interviewed representatives from the Archdiocese of Washington, the Latino Student Fund, Capital Partners for Education, and the Jack Kent Cooke Foundation's Young Scholars Program.

Appendix II: Schools That Agreed to Accept Students in the District's Opportunity Scholarship Program in School Years 2010-2011 and 2011-2012 (Participating Schools)

Academia De La Recta Porta International Christian Day School
Beauvoir National Cathedral Elementary School
Bishop John T. Walker School for Boys
Howard University Early Learning Program
Kuumba Learning Center (MLK Campus)
National Cathedral School
Our Lady of Victory School
St. Ann's Academy
St. Anselm's Abbey School
St. John's College High School
Washington Middle School for Girls (THEARC & Washington View Campuses)

Appendix III: Total School and OSP Enrollment, School Year 2011-2012

Academia De La Recta Porta Intl Christian Day School
Beauvoir-National Cathedral Elem. School
Bishop John T. Walker School for Boys
Howard University Early Learning Program
Kuumba Learning Center (MLK Campus)
Preparatory School of DC
St. Ann's Academy
St. Anselm's Abbey School
St. John's College High School
Washington Middle School for Girls

Dupont Park Adventist School has two campuses: Alabama Ave (grades PK-5) and Massachusetts Ave (grades 6-10). Washington International School has two campuses: Primary School Campus (grades PK-5) and Tregaron Campus (grades 6-12). Washington Middle School for Girls has two campuses: THEARC Campus (grades 6-8) and Washington View Campus (grades 4-5).

Appendix IV: Internal Control Frameworks

Internal control is broadly defined as a process, effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of objectives in the following categories: effectiveness and efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. According to the Committee of Sponsoring Organizations of the Treadway Commission (COSO), the five components of internal control for businesses and other entities are:

Control Environment: The control environment sets the tone of an organization, influencing the control consciousness of its people. It is the foundation for all other components of internal control, providing discipline and structure. Control environment factors include the integrity, ethical values, and competence of the entity's people; management's philosophy and operating style; the way management assigns authority and responsibility, and organizes and develops its people; and the attention and direction provided by the board of directors. The control environment has a pervasive influence on the way business activities are structured, objectives established, and risks assessed. It also influences control activities, information and communication systems, and monitoring activities. This is true not only of their design, but also the way they work day to day. The control environment is influenced by the entity's history and culture. It influences the control consciousness of its people. Effectively controlled entities strive to have competent people, instill an enterprise-wide attitude of integrity and control consciousness, and set a positive tone at the top. They establish appropriate policies and procedures, often including a written code of conduct, which foster shared values and teamwork in pursuit of the entity's objectives.

Risk Assessment: Every entity faces a variety of risks from external and internal sources that must be assessed. A precondition to risk assessment is establishment of objectives, linked at different levels and internally consistent. Risk assessment is the identification and analysis of relevant risks to achievement of the objectives, forming a basis for determining how the risks should be managed. Because economic, industry, regulatory, and operating conditions will continue to change, mechanisms are needed to identify and deal with the special risks associated with change. All entities, regardless of size, structure, nature, or industry, encounter risks at all levels within their organizations. Risks affect each entity's ability to survive; successfully compete within its industry; maintain its financial strength and positive public image; and maintain the overall quality of its products, services, and people. There is no practical way to reduce risk to zero.
Indeed, the decision to be in business creates risk. Management must determine how much risk is to be prudently accepted, and strive to maintain risk within these levels. Objective setting is a precondition to risk assessment. There must first be objectives before management can identify risks to their achievement and take necessary actions to manage the risks. Objective setting, then, is a key part of the management process. While not an internal control component, it is a prerequisite to and enabler of internal control.

Control Activities: Control activities are policies and procedures, and the actions people take to implement them, that help ensure that management directives identified as necessary to address risks are carried out. They help ensure that necessary actions are taken to address risks to achievement of the entity's objectives. Control activities occur throughout the organization, at all levels and in all functions. They include a range of activities, including approvals, authorizations, verifications, reconciliations, reviews of operating performance, security of assets, and segregation of duties. Control activities can be divided into three categories, based on the nature of the entity's objectives to which they relate: operations, financial reporting, or compliance. Although some controls relate solely to one area, there is often overlap. Depending on circumstances, a particular control activity could help satisfy entity objectives in more than one of the three categories. For example, operations controls also can help ensure reliable financial reporting, and financial reporting controls can serve to effect compliance. Although these categories are helpful in discussing internal control, the particular category in which a control happens to be placed is not as important as the role it plays in achieving a particular activity's objectives.

Information and Communication: Pertinent information must be identified, captured, and communicated in a form and time frame that enables people to carry out their responsibilities. Information systems produce reports, containing operational, financial, and compliance-related information, that make it possible to run and control the business. They deal not only with internally generated data, but also with information about external events, activities, and conditions necessary to informed business decision-making and external reporting. Effective communication also must occur in a broader sense, flowing down, across, and up the organization. All personnel must receive a clear message from top management that control responsibilities must be taken seriously. They must understand their own role in the internal control system, as well as how individual activities relate to the work of others. They must have a means of communicating significant information upstream. There also needs to be effective communication with external parties, such as customers, suppliers, regulators, and shareholders. Every enterprise must capture pertinent information, financial and nonfinancial, relating to external as well as internal events and activities. The information must be identified by management as relevant to managing the business. It must be delivered to people who need it in a form and time frame that enables them to carry out their control and other responsibilities.
Monitoring: Internal control systems need to be monitored, a process that assesses the quality of the system's performance over time. This is accomplished through ongoing monitoring activities, separate evaluations, or a combination of the two. Ongoing monitoring occurs in the course of operations. It includes regular management and supervisory activities, and other actions personnel take in performing their duties. The scope and frequency of separate evaluations will depend primarily on an assessment of risks and the effectiveness of ongoing monitoring procedures. Internal control deficiencies should be reported upstream, with serious matters reported to top management and the board. Internal control systems change over time. The way controls are applied may evolve. Once-effective procedures can become less effective, or perhaps are no longer performed. This can be due to the arrival of new personnel, the varying effectiveness of training and supervision, time and resource constraints, or additional pressures. Furthermore, circumstances for which the internal control system originally was designed also may change, causing it to be less able to warn of the risks brought by new conditions. Accordingly, management needs to determine whether the internal control system continues to be relevant and able to address new risks. Monitoring ensures that internal control continues to operate effectively. This process involves assessment by appropriate personnel of the design and operation of controls on a suitably timely basis, and the taking of necessary actions. It applies to all activities within an organization, and sometimes to outside contractors as well. For example, where health claims processing is outsourced to a third-party administrator and such processing directly affects benefits costs, the entity will want to monitor the functioning of the administrator's activities and controls.

GAO's Standards for Internal Control for federal agencies also comprise five standards of internal control:

Control Environment: Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal controls and conscientious management. A positive control environment is the foundation for all other standards. Several key factors affect the control environment, including the integrity and ethical values maintained and demonstrated by management and staff, management's commitment to competence, and management's philosophy and operating style. In addition, the agency's organizational structure and the manner in which the agency delegates authority and responsibility throughout the organization affect the control environment. Good human capital policies and practices are another critical environmental factor. A final factor affecting the control environment is the agency's relationship with Congress and central oversight agencies such as the Office of Management and Budget. Congress mandates the programs that agencies undertake and monitors their progress, and central agencies provide policy and guidance on many different matters. In addition, Inspectors General and internal senior management councils can contribute to a good overall control environment.

Risk Assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. A precondition to risk assessment is the establishment of clear, consistent agency objectives.
Risk assessment is the identification and analysis of relevant risks associated with achieving the objectives, such as those defined in strategic and annual performance plans developed under the Government Performance and Results Act, and forming a basis for determining how risks should be managed. Management needs to comprehensively identify risks using methods such as qualitative and quantitative ranking activities, management conferences, forecasting and strategic planning, and consideration of findings from audits and other assessments. Once risks have been identified, they should be analyzed for their possible effect. Because governmental, economic, industry, regulatory, and operating conditions continually change, mechanisms should be provided to identify and deal with any special risks prompted by such changes.

Control Activities: Internal control activities help ensure that management's directives are carried out. The control activities should be effective and efficient in accomplishing the agency's control objectives. Control activities are policies, procedures, techniques, and mechanisms that enforce management's directives, such as the process of adhering to requirements for budget development and execution. They help ensure that actions are taken to address risks. Control activities are an integral part of an entity's planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Control activities occur at all levels and functions of the entity. They include a wide range of diverse activities such as approvals, authorizations, verifications, reconciliations, performance reviews, maintenance of security, and the creation and maintenance of related records, which provide evidence of execution of these activities as well as appropriate documentation. Control activities may be applied in a computerized information system environment or through manual processes. Activities may be classified by specific control objectives, such as ensuring completeness and accuracy of information processing. Examples of control activities include top-level reviews of actual performance, reviews by management at the functional or activity level, management of human capital, controls over information processing, physical control over vulnerable assets, establishment and review of performance measures and indicators, segregation of duties, proper execution of transactions and events, accurate and timely recording of transactions and events, access restrictions to and accountability for resources and records, and appropriate documentation of transactions and internal control.

Information and Communications: Information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enables them to carry out their internal control and other responsibilities. For an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal as well as external events. Information is needed throughout the agency to achieve all of its objectives. Program managers need both operational and financial data to determine whether they are meeting their agencies' strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources. For example, operating information is required for development of financial reports.
This covers a broad range of data, from purchases, subsidies, and other transactions to data on fixed assets, inventories, and receivables. Operating information is also needed to determine whether the agency is achieving its compliance requirements under various laws and regulations. Financial information is needed for both external and internal uses. It is required to develop financial statements for periodic external reporting and, on a day-to-day basis, to make operating decisions, monitor performance, and allocate resources. Pertinent information should be identified, captured, and distributed in a form and time frame that permits people to perform their duties efficiently. Effective communications should occur in a broad sense, with information flowing down, across, and up the organization. In addition to internal communications, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. Moreover, effective information technology management is critical to achieving useful, reliable, and continuous recording and communication of information.

Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. Internal control should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. It is performed continually and is ingrained in the agency's operations. It includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. Separate evaluations of control can also be useful by focusing directly on the controls' effectiveness at a specific time. The scope and frequency of separate evaluations should depend primarily on the assessment of risks and the effectiveness of ongoing monitoring procedures. Separate evaluations may take the form of self-assessments as well as review of control design and direct testing of internal control. Separate evaluations also may be performed by the agency Inspector General or an external auditor. Deficiencies found during ongoing monitoring or through separate evaluations should be communicated to the individual responsible for the function and also to at least one level of management above that individual. Serious matters should be reported to top management. Monitoring of internal control should include policies and procedures for ensuring that the findings of audits and other reviews are promptly resolved. Managers are to (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies' operations, (2) determine proper actions in response to findings and recommendations from audits and reviews, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention. The resolution process begins when audit or other review results are reported to management, and is completed only after action has been taken that (1) corrects identified deficiencies, (2) produces improvements, or (3) demonstrates the findings and recommendations do not warrant management action.
Appendix V: Comments from the Department of Education

Appendix VI: Comments from the District of Columbia Children and Youth Investment Trust Corporation

Appendix VII: Contact and Staff Acknowledgments

<13. GAO Contact>

<14. Staff Acknowledgments>

In addition to the contact named above, Gretta L. Goodwin, Assistant Director; Jamila Jones Kennedy; Grace Cho; and Michelle Loutoo Wilson made significant contributions to this report. In addition, key support was provided by Carl Barden, Hiwotte Amare, Kimberly McGatlin, Carla Craddock, David Chrisinger, Maria C. Belaval, Mimi Nguyen, Melinda Cordero, John Lopez, Helina Wong, Ramon Rodriguez, Alexander Galuten, Jean McSween, James Rebbe, Edward Bodine, Aron Szapiro, and Kristy Kennedy.
Why GAO Did This Study

School vouchers, a form of school choice that provides students with public funds to attend private schools, feature prominently in policy discussions about education reform. The OSP was reauthorized by Congress in 2011 by the Scholarships for Opportunity and Results Act and has garnered national attention as the first federally funded voucher program. Since the program's inception in 2004, Congress has provided almost $152 million for the program, which has benefited almost 5,000 students and currently provides scholarships of about $8,000 for grades K-8 and about $12,000 for grades 9-12.

As requested, GAO examined (1) the extent to which the Trust provides information that enables families to make informed school choices, (2) whether the Trust's internal controls ensure accountability for OSP, and (3) how Education and District agencies responsible for overseeing OSP have performed their stated roles and responsibilities. To conduct this work, GAO visited 10 participating schools; interviewed school officials; conducted discussion groups with 14 parents of scholarship students; analyzed key program documents; reviewed generally accepted guiding documents for internal controls; and interviewed officials at Education, relevant District agencies, and the Trust.

What GAO Found

The DC Children and Youth Investment Trust Corporation (the Trust) provides information to prospective and current families of children participating in the District of Columbia (the District) Opportunity Scholarship Program (OSP) through a variety of outreach activities. To reach prospective OSP families, the Trust advertises through print, radio, and bus ads, as well as in newspapers and flyers posted in neighborhood libraries, recreation centers, and local government service centers. However, the Trust provides incomplete and untimely information about participating schools to OSP families. The participating school directory, which is published by the Trust, lacks key information about tuition, fees, and accreditation. The Trust published the directory 9 months after the start of the 2012-2013 school year, too late to assist families in selecting a school for that year. Without such information, parents cannot make fully informed school choices. Additionally, the Trust awarded scholarships to students several months after many schools completed their admissions and enrollment processes, limiting families' time and choices in selecting schools. Most families GAO spoke with were generally happy with OSP, but some were concerned about the availability of program information.

The Trust's internal controls do not ensure effective implementation and oversight of OSP. Adequate policies and procedures can provide reasonable assurance of effective, efficient operations, reliable financial reporting, and compliance with applicable laws. However, the Trust's policies and procedures do not include a process for verifying eligibility information that schools self-report. As a result, the Trust cannot ensure that schools are eligible to participate in the program and, therefore, risks providing federal dollars to students to attend schools that do not meet standards required by law. Furthermore, the Trust's database is not well structured and hampers the effectiveness of program implementation.
For example, the Trust lacks written documentation for the database, and staff must rely on institutional memory to ensure that processes such as data entry are conducted properly, which could contribute to errors in the database. As required by law, the Trust groups eligible applicants into three priority categories by which scholarships are then awarded by lottery; however, weaknesses in the database's structure put into question the Trust's ability to provide accurate priority categories. Additionally, the Trust has not submitted its mandatory financial reports on time, despite a legal requirement that these reports be filed within 9 months of the end of the entity's fiscal year. The Trust's fiscal year 2010 financial report was almost 2 years late, and the Trust's fiscal year 2011 and 2012 reports had not yet been submitted as of August 2013. In August 2013, the Trust also made amendments to its policies and procedures in three areas GAO identified. However, these amendments do not address all weaknesses identified in this report and have not yet been fully implemented.

The Department of Education (Education) has provided limited assistance to the Trust in certain areas outlined in the memorandum of understanding (MOU) with the District and in the cooperative agreement with the Trust. Specifically, Education is responsible for helping the Trust make improvements to its financial system, enhance its site visit policies and procedures, and improve the accuracy of information provided to parents. Trust officials acknowledged that Education has provided general assistance regarding administrative and operational functions, but it has not assisted with specific improvements in these areas. Although the MOU is a written agreement between Education and the District, it holds the Trust, as the grantee, responsible for notifying District agencies to conduct required building, zoning, health, and safety inspections of participating schools. This requirement is not detailed in the cooperative agreement signed by Education and the Trust, even though such notification would assist the Trust in providing continued oversight of schools participating in OSP. As a result, Trust officials were not fully aware of this responsibility, and required inspections were not being conducted in the manner described in the MOU between Education and the District.

What GAO Recommends

GAO is making 10 recommendations to Education to improve OSP, such as ensuring that the Trust publishes a more complete school directory and updates key aspects of its policies and procedures. Education did not indicate agreement or disagreement with GAO's recommendations. The Trust disagreed with some findings, and both Education and the Trust provided additional information.
<1. Scope and Methodology>

To determine what is known about the supply of and domestic demand for lithium-7, we analyzed data provided by industry representatives, reviewed agency and industry documents, and interviewed agency officials and industry representatives. Specifically, to understand the supply of and domestic demand for lithium-7, we reviewed data from the three brokers that purchase lithium hydroxide from China and Russia and sell it to utilities and other companies in the United States. To assess the reliability of the data, we interviewed lithium-7 brokers about the data and found the data to be sufficiently reliable for purposes of this report. We also obtained information on China's supply of and demand for lithium-7 from an expert on nuclear reactors at the Massachusetts Institute of Technology who was identified by DOE and Y-12 officials. Additionally, this expert has been working with DOE in its meetings with scientists from the Chinese Academy of Sciences regarding China's research on new reactor designs. We also reviewed documents provided by DOE, Y-12, and two utilities that operate pressurized water reactors, the Tennessee Valley Authority (TVA) and Exelon. We also interviewed representatives of companies that buy, sell, and/or handle lithium hydroxide, including Ceradyne, Inc., Isoflex, Nukem Isotopes, and Sigma Aldrich, as well as officials from DOE, NNSA, and Y-12.

To examine the responsibilities of DOE, NRC, and other entities in assessing risks to the lithium-7 supply, and what, if anything, has been done to mitigate a potential supply disruption of lithium-7, we reviewed documents from DOE, Y-12, and NRC. We also interviewed officials from DOE's Isotope Program and the Office of Nuclear Energy; NNSA's Office of Nuclear Materials Integration, Office of Nuclear Nonproliferation and International Security, and Y-12; and NRC. We also interviewed representatives from Exelon, TVA, EPRI, the North American Electric Reliability Corporation, the Nuclear Energy Institute, the Pressurized Water Reactors Owners Group, Ceradyne, Inc., and Isoflex. In addition, we compared actions DOE is taking to manage and communicate lithium-7 supply risks with federal standards for internal control. To identify additional options, if any, for mitigating a potential lithium-7 shortage, we reviewed technical articles and documents from industry and academia, DOE, Y-12, and NRC. We also interviewed officials from DOE's Isotope Program, Office of Nuclear Energy, and Idaho National Laboratory; Y-12; and representatives from Exelon, TVA, and EPRI.

We conducted this performance audit from June 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

<2. Background>

Lithium-7 was produced in the United States as a by-product of enriching lithium-6 for the United States' nuclear weapons program. Lithium-7 and lithium-6 are derived from natural lithium, which contains about 92.5 percent lithium-7 and about 7.5 percent lithium-6. Lithium-6 was enriched in the United States by separating it from lithium-7 using a column exchange process, called COLEX, that required very large quantities of mercury, which can harm human health and the environment.
Y-12 built a COLEX facility, began operating it in 1955, and used it through 1963 to enrich lithium-6 and lithium-7. Y-12 experienced several problems with the COLEX process, including equipment failures, worker exposure to mercury, and mercury contamination of the environment. Y-12 shut the COLEX facility down in 1963 and has not operated it since then. While the United States still has a stockpile of lithium-6, DOE sold the lithium-7 by-product to commercial companies, though some was enriched and still remains stored at Y-12.

Lithium-7 serves two functions in a pressurized water reactor: it is used to produce lithium hydroxide, which is added to the cooling water to reduce its acidity, and it is added to demineralizers to filter contaminants out of the cooling water. The cooling water becomes acidic due to the addition of boric acid, which contains boron-10, an isotope of boron that is used to manage the nuclear reaction in the core. The use of both boron-10 and lithium hydroxide is based on reactor core design requirements and on water pH requirements for corrosion control. Lithium hydroxide, made with lithium-7 rather than lithium-6, is added to the cooling water to reduce the acidity of the water and boric acid. Lithium-7 is used rather than lithium-6 or natural lithium, which contains lithium-6, because lithium-6 would react with nuclear material in the reactor core to produce tritium, a radioactive isotope of hydrogen. According to industry representatives, lithium hydroxide is added directly to the cooling water, via a chemical feed tank, when a pressurized water reactor is started up after being shut down, such as after refueling. Lithium-7 is also used in special water purifiers called demineralizers that remove radioactive material and impurities from the cooling water. Figure 1 shows the flow of water through a typical pressurized water reactor, though some variations among reactors may exist. As the cooling water circulates in the primary cooling loop, as shown in figure 1, some of the water flows through pipes to the demineralizers and the chemical feed tank where the lithium hydroxide is added.

<3. Little Is Known about Lithium-7 Production, Creating Uncertainty about the Reliability of the Future Supply>

There is no domestic production of lithium-7, and little is known about the lithium-7 production capabilities of China and Russia and whether they will be able to provide future supplies. China and Russia produce lithium-7 as a by-product of enriching lithium-6 for their nuclear weapons programs, according to a DOE official, much like the United States previously did. Because of the secrecy of their weapons programs, China's and Russia's lithium-7 production capabilities are not fully known, according to lithium-7 brokers. According to industry representatives, lithium-7 brokers, and NNSA documents, China and Russia have produced enough lithium-7 to meet the current U.S. demand, which is not expected to increase significantly in the near future, based on DOE's information showing five new pressurized water reactors scheduled to begin operating by 2018. Additionally, during the course of our review, utilities announced that four pressurized water reactors would be decommissioned, eliminating their demand for lithium-7. China's continued supply of lithium-7 may be reduced by its own growing demand, created by the construction of new reactors and the development of new reactor designs.
China's demand is expected to increase because, according to information from DOE, the International Atomic Energy Agency, and an expert on nuclear reactors who has met with Chinese scientists on this topic, China is constructing over 25 pressurized water reactors that are scheduled to begin operating by 2015. Additionally, China is planning to build a new type of nuclear power reactor, a molten salt reactor, that will require dramatically larger amounts of lithium-7 to operate. China is pursuing the development of two different types of molten salt reactors, according to the expert, each of which will result in a reactor that requires thousands of kilograms of lithium-7 to operate, compared with the approximately 300 kilograms (about 660 pounds) needed annually for all 65 U.S. pressurized water reactors combined, according to lithium-7 brokers. China's first molten salt reactor is expected to be finished by 2017, and the second reactor by 2020, according to the reactor expert. Furthermore, molten salt reactors require a purer form of lithium-7 (99.995 percent or higher) than what is currently produced by China and Russia, according to the reactor expert and a lithium-7 broker. To obtain this higher-enriched lithium-7, according to the reactor expert who is familiar with China's research, China built a small facility that will take lower-enriched lithium-7 as feedstock and enrich it to the higher level of purity that is needed. An Isotope Program official suggested to us that China's new facility could increase the available supply of lithium-7 for pressurized water reactors. However, according to the reactor expert, this new facility may reduce the supply of lithium-7 available for export since it uses lithium-7 as feedstock. This expert said that China has obtained lithium-7 from its own supplies and has purchased additional lithium-7 from Russia to enrich in its own facility, possibly making China a net importer of lithium-7. It is unknown, however, whether China has enough lithium-7 for its growing nuclear fleet and molten salt reactors, or whether it will need to import additional quantities, which could reduce the available supply of lithium-7. For example, one lithium-7 broker told us in June 2013 that China had no lithium-7 that it could sell to this broker. Russia's supply of lithium-7, on the other hand, may be largely available for export because Russia is believed to have very little domestic demand for lithium-7. Russia's fleet of pressurized water reactors does not use lithium hydroxide because those reactors were specifically designed to use potassium hydroxide to lower the cooling water's acidity. However, because Russia's production capacity for lithium-7 is not known, U.S. utilities cannot be assured that Russia will continue to meet their demand for lithium-7 as China's demand increases. For example, one lithium-7 broker told us in June 2013 that he is having difficulty getting lithium-7 from Russia, though he is unsure whether it is because Russia is unable to meet demand or for some other reason. The risk of relying on so few producers of lithium-7 leaves the 65 pressurized water reactors in the United States vulnerable to supply disruptions. In 2010, for example, we reported on the challenges faced by the Department of Defense when it experienced supply disruptions in rare earth elements, 17 elements with unique magnetic properties that are produced almost exclusively in China. Specifically, we reported that a Department of Defense program was delayed due to a shortage of rare earth elements.
Because China controls most of the market for rare earth materials production, it caused a shortage when it decreased its exports of rare earth materials. At the time of our report, the Department of Defense and other federal agencies were taking steps to mitigate the shortage and prevent future supply disruptions. In the case of lithium-7, representatives of two utilities told us that a shortage, if not mitigated, could possibly lead to the shutdown of one or more pressurized water reactors. Pressurized water reactors are temporarily shut down to refuel about every 18 months, after which lithium-7, in the form of lithium hydroxide, is added to the cooling water, according to industry representatives. TVA representatives explained that nuclear reactors are scheduled for refueling during times when there is low demand for electricity, such as the spring or fall, when there is less need for heating or air-conditioning of homes and businesses. During peak times of electricity use, such as the summer months, commercial nuclear reactors are critical for maintaining the stability of the electrical grid, according to industry representatives. Without lithium hydroxide or some alternative, industry representatives told us that they would not be able to restart the pressurized water reactors after refueling. According to NRC officials, operating a pressurized water reactor without lithium-7 is possible, but it would significantly increase the corrosion of pipes and other infrastructure. <4. No Entity Has Taken Stewardship Responsibility for Assessing and Managing Risks to the Lithium-7 Supply, but DOE Is Taking Some Actions> No federal entity has taken stewardship responsibility for assessing risks to the lithium-7 supply for the commercial nuclear power industry. However, DOE has taken some steps in this area. Specifically, DOE studied lithium-7 supply and demand and concluded that no further action is needed, but our review found shortcomings in DOE's study. <4.1. No Entity Has Taken Stewardship Responsibility for Lithium-7 Risk Assessment> No federal entity has taken stewardship responsibility for assessing and managing risks to the supply of lithium-7 for commercial use. Federal stakeholders (DOE, NRC, and NNSA) told us they view lithium-7 as a commercial commodity for which industry is responsible. Officials in DOE's Isotope Program told us that because lithium-7 is a material bought and sold through commercial channels and used by industry, industry is responsible for monitoring the supply risks and managing those risks as it would for any other commercial commodity. The Isotope Program produces isotopes that are in short supply, not those that are produced commercially and are not in short supply. Nevertheless, Isotope Program officials told us that the program's mission includes isotopes that have the potential to be in short supply and that they see the Isotope Program as the lead office within DOE on issues related to lithium-7. Additionally, an Isotope Program official told us that the program must be careful not to address lithium-7 risks too aggressively because doing so may signal to industry stakeholders that DOE is taking responsibility for mitigating these risks, which DOE views as the responsibility of industry to manage.
NRC officials also told us that they believe industry is better suited to address any problems with the lithium-7 supply because the utilities are more likely to be aware of and have more information related to supply constraints than NRC or other federal government agencies. Similarly, officials in DOE s Office of Nuclear Energy said that, in their view, industry is responsible for addressing lithium-7 risks, and their office s role is to serve as liaison between DOE and industry. One DOE official said that industry probably would be aware of a shortage before any government agency would be. An official in NNSA s Office of Nuclear Materials Integration noted that NNSA is responsible for ensuring there is a sufficient supply of lithium-7 for federal demand but not for industry s demand. Furthermore, this official said that utilities are in the electricity business and should, therefore, assume the responsibility of assessing and managing risks. This official also stated that, in his view, given the importance of lithium-7 to the nuclear power industry, the commercial market would respond by increasing production to bring supply and demand into balance. However, our review found no other countries with the capability to enrich lithium-7 and, as described above, it is unclear if Russia and China will be able to meet increased demand. We reported in May 2011 on the importance of stewardship responsibility for critical isotopes. Specifically, our review found that a delayed response to the shortage of helium-3 in 2008 occurred because, among other things, there was no agency with stewardship responsibility to monitor the risks to helium-3 supply and demand. The shortage was addressed when an interagency committee took on a stewardship role by researching alternatives and allocating the limited supply, among other things. In that report, we recommended the Secretary of Energy clarify which entity has a stewardship role for 17 isotopes that are sold by the Isotope Program. In its comments on that report, NNSA stated that it could implement our recommendation, but to date, DOE and NNSA have not determined which entity or entities should serve as steward for lithium-7, and no federal entity has assumed such responsibility. The nuclear power industry may not be concerned about lithium-7 supply disruptions because it may not be aware of all the risks. Industry representatives we spoke with said that they have no concerns over the lithium-7 supply because they have not experienced any supply problems. For example, representatives from one utility said they have never had a problem obtaining lithium-7 so they did not see a need to consider actions to mitigate future supply disruptions. Similarly, representatives from EPRI said that they are not doing any work related to lithium-7 because there is no demonstrated need. However, EPRI representatives said they were surprised to recently learn from DOE that China is researching the development of molten salt reactors. These representatives said that such a development is important for EPRI s considerations of the lithium-7 issue. EPRI representatives told us they need to learn about all the factors relating to the current and future supply and demand of lithium-7 so those factors can be incorporated into EPRI s decision-making process and long-term planning. We discussed this point with DOE officials, and they were surprised to hear that industry was previously uninformed about China s development of molten salt reactors. 
An official from DOE's Office of Nuclear Energy told us the risks to the lithium-7 supply had been discussed with industry representatives in October 2012, including China's increased domestic demand for new reactors and for research on molten salt reactors, all of which could affect the lithium-7 supply. In addition to the longer-term supply challenges created by increased Chinese domestic demand for lithium-7, there are also recent examples of brokers facing supply disruptions. As previously discussed, two of the lithium-7 brokers told us they are having difficulty obtaining lithium-7 from China and Russia. The recent nature of this information, the uncertainty over whether these are isolated difficulties or indicative of a trend, and the fact that the impact has not yet been felt by utilities could also contribute to industry's current assessment that the risks of a possible lithium-7 supply disruption are low. Some industry representatives stated that, if there is a shortage, the federal government should be involved to ensure the reliability of the electrical grid. For example, EPRI representatives said that, in the event of a shortage, EPRI's role would be to research options for replacing lithium-7, but they also said that government involvement is needed to ensure the reliability of the electrical grid. In addition, industry may not have access to all the sources of information that are available to DOE. <4.2. DOE Studied Lithium-7 Supply and Demand and Concluded That No Further Action Is Needed, but Its Study Has Shortcomings> DOE studied the supply and demand of lithium-7 and concluded that no further action is needed to mitigate a potential lithium-7 shortage, but our review found shortcomings in its assessment of domestic demand and in the mitigation measures it identifies for industry to consider implementing. In conducting this study, Isotope Program officials collaborated with officials in DOE's Offices of Nuclear Energy and Intelligence and Counterintelligence and NNSA's Office of Nuclear Materials Integration and had discussions with EPRI and other industry representatives. DOE's study, which was completed in May 2013, identifies some risks to the lithium-7 supply, describes several actions that industry could take to help mitigate a shortage, and lists the steps that DOE's Isotope Program is taking or plans to take. According to DOE's study, there are several risks to the lithium-7 supply that could result in a shortage in a matter of years. Specifically, DOE's study points out that increasing demand for lithium-7 from the construction of additional pressurized water reactors and the development of molten salt reactors is a risk to the lithium-7 supply because demand could exceed supply in a matter of years if production does not increase. The study also points out the risks of relying on two foreign suppliers for lithium-7 and notes that a supply shortage is a low-probability but high-consequence risk. DOE's study also describes several actions that industry could take to help mitigate a lithium-7 shortage.
In DOE's discussions with industry representatives, the representatives identified the following four actions that the nuclear power industry could take should a shortage of lithium-7 occur: recycling lithium-7 from the demineralizers; increasing the burnable poisons in the reactor fuel; reducing the acidity of the cooling water, and thus the amount of lithium-7 needed, by using boric acid that is enriched with boron-10, which would reduce the amount of boric acid added to the cooling water; and developing alternative sources of lithium-7, including building a domestic lithium-7 production capability. DOE's study of lithium-7 also lists two steps the Isotope Program is taking and concludes that no further action is needed. First, the study states that the Isotope Program will work with NNSA to prevent its inventory of contaminated lithium-7 at Y-12 from being disposed of or distributed without approval from DOE and will request that NNSA retain 200 kilograms (441 pounds) of this inventory to be purified and then sold to the nuclear power industry in the event of a supply disruption. Second, according to Isotope Program officials, as part of its mission to support isotope production research and development, the program is also funding research on enriching lithium-7 without employing the mercury-intensive COLEX method that was previously used. The study concludes that these steps serve as an acceptable short-term strategy for mitigating the risks of a lithium-7 shortage and that no additional action is needed. Nevertheless, our review found several shortcomings in DOE's study regarding its assessment of domestic demand for lithium-7 and the feasibility of the actions it says industry can take to mitigate the risks of a supply disruption. First, our review found that DOE's Isotope Program, as well as Y-12, underestimated domestic demand for lithium-7. While studying lithium-7 supply and demand, DOE's Isotope Program and Y-12 both estimated annual domestic demand for lithium-7 to be about 200 kilograms per year, whereas the lithium-7 brokers estimated domestic demand to be over 300 kilograms (662 pounds) per year, on average, from 2008 through 2012. Isotope Program and Y-12 officials told us that their estimate of 200 kilograms per year includes lithium-7 used in cooling water but does not include lithium-7 used in demineralizers, which the lithium-7 brokers did account for. Second, DOE's study concludes that there is enough lithium-7 in inventory held on-site at reactors to keep the reactors operating during the approximately 7 months required to purify Y-12's lithium-7. However, DOE officials involved in the study said they did not collect any data from utilities to determine what quantities they held in inventory, and industry representatives told us that they are not aware of any entity that keeps records of the amount of lithium-7 inventory held at utilities across the industry. Some industry representatives also said that there is no standard practice for when to purchase lithium-7 or how much inventory to have on hand and that they believe inventory practices vary from utility to utility. Regarding the measures the study indicates industry can take to mitigate a potential lithium-7 supply shortage, our review found that DOE's study provides more optimistic assessments than industry's view of the challenges involved in implementing these actions.
For example, DOE's study characterizes the process for recycling lithium-7 from demineralizers as straightforward and of low technical risk, and it states that recycling can be implemented within a year. However, according to representatives of a utility with whom we spoke, there is no existing method to retrieve and recycle the lithium-7 from the demineralizers. According to EPRI representatives who provided information for DOE's study, the process is challenging because extracting lithium-7 from the demineralizers may require a special process to separate it from the other materials in the demineralizers, some of which pose radiation risks. In addition, there are application challenges to recovering the lithium-7, such as modifying the plants to implement the process. EPRI representatives estimated it would take more than a year to develop the technology, and potentially many years to address the application challenges, before this process could be implemented. Another mitigation option that DOE's study identifies is increasing burnable poisons, isotopes added to the nuclear fuel to help control the nuclear reaction, which would decrease the amount of boron required in the cooling water and, in turn, the amount of lithium-7 needed to decrease acidity. The study states that doing so should not take a long time to implement, based on the premise that the modified fuel could be changed when plants refuel, which occurs about every 18 months. EPRI representatives, however, said this would be a longer process because any given fuel assembly is typically in the reactor for three operating cycles of 18 months each, which means a fuel assembly would be in the reactor for a total of about 4 years before being replaced. Also, according to NRC officials, a change in the fuel would require extensive modeling, testing, and regulatory reviews, which could take considerably longer than 4 years. Given the shortcomings in DOE's study, combined with the recent supply problems reported by brokers that we previously discussed, it is unclear whether DOE's conclusion that no additional actions are needed is correct. <5. Additional Options Exist to Mitigate a Potential Lithium-7 Shortage> Based on information from government officials and industry representatives, we identified three options for mitigating a potential lithium-7 shortage in the near and long term, which could be implemented by government, industry, or even a committee of federal and industry stakeholders. The three near- and long-term options are: building a domestic reserve of lithium-7, building a domestic capability to produce lithium-7, and reducing pressurized water reactors' reliance on lithium-7. The first option, building a domestic reserve of lithium-7, is a relatively low-cost option that would provide a fixed quantity of lithium-7 that, in the event of a shortage, could be used until a long-term solution is implemented. Establishing a domestic reserve would involve building up a stockpile of lithium-7 by importing an additional quantity above what is needed each year, purifying all or a portion of the existing supply of lithium-7 at Y-12 to make it suitable for use in pressurized water reactors, or a combination of the two. Stockpiling could be accomplished by individual utilities or, for example, by a steward that could maintain the supply for all utilities.
Increasing imports to establish a domestic reserve could be initiated immediately, and the cost would be based on the market price of lithium-7, which is currently less than $10,000 per kilogram (about 2.2 pounds). However, stockpiling lithium-7 would have to be carefully managed to avoid a negative impact on the market; stockpiling lithium-7 too aggressively could cause the price to increase or otherwise disrupt the available supply. A second way to help build up a reserve is to purify all or a portion of the 1,300 kilograms of lithium-7 at Y-12. DOE has plans to set aside 200 kilograms of the 1,300 kilograms of lithium-7 at Y-12, which could be purified and sold to utilities. DOE estimates it would take about 7 months to purify 200 kilograms at a cost of about $3,000 per kilogram, for a total cost of about $600,000; purifying the remainder of the 1,300 kilograms would likely incur additional costs. The second option, building a domestic lithium-7 production capability, is a longer-term solution that would reduce or eliminate the need for importing supplies, but it would take several years to develop the technology and construct a production facility. While lithium separation was done in the United States until 1963 using the COLEX process, DOE and Y-12 officials told us that the COLEX separation method will not be used for a new production facility because of the large quantities of mercury it requires. Officials from DOE and Y-12, as well as industry representatives, identified several other potential separation techniques that do not use mercury, such as solvent extraction, a process in which the components to be separated are preferentially dissolved by a solvent, and electromagnetic separation, a process that uses electric and magnetic fields to separate isotopes by their relative weights. While these techniques have been developed and used to separate other materials (for example, electromagnetic separation was used to separate isotopes of uranium), further development of the techniques specifically for use with lithium-7 would be needed, according to DOE documentation. In particular, DOE's Isotope Program is funding a proposal from scientists at Oak Ridge National Laboratory and Y-12 to conduct research on lithium separation techniques using solvent extraction processes, which have been used in the pharmaceutical industry. If successful, according to Y-12, its proposed research would provide the basis for an industrial process to produce lithium-7. According to Y-12 officials, the entire research and development process, and the construction of a pilot facility capable of producing 200 kilograms of lithium-7 per year, would take about 5 years and cost $10 million to $12 million. The third option, reducing pressurized water reactors' reliance on lithium-7, is also a longer-term option that would generally require changes in how reactors are operated and may produce only modest reductions in the use of lithium-7. Four possible changes that could be made to reactors include the following: Lithium-7 can be recycled from used demineralizers. According to industry representatives, the chemistry required for the recycling process would be challenging, would require plant modifications, and may pose risks to workers due to the presence of radioactive materials. This option would reduce the amount of lithium-7 needed for demineralizers but not the amount needed for the cooling water.
Potassium hydroxide can be used in lieu of lithium hydroxide in the cooling water. According to nuclear power industry representatives, making such a change would require about 10 years of research to test the resulting changes in the rate of corrosion of pipes and other infrastructure in the reactor. Using enriched boric acid in the cooling water in place of natural boric acid would require less boric acid to be used, which would reduce the acidity of the water and result in less lithium-7 being needed. According to industry representatives, however, enriched boric acid is expensive, and this change may require plant modifications and would only modestly reduce the amount of lithium-7 needed. The nuclear fuel used in pressurized water reactors could be modified to reduce the need for boric acid and thus also reduce the amount of lithium-7 needed. According to industry representatives, however, this would be expensive and require long-term planning because utilities typically plan their fuel purchases for refueling 1 to 4 years in advance. According to one utility, changing the fuel could also have widespread impacts on operations and costs that are difficult to quantify. Industry representatives characterized all four possible changes to pressurized water reactors for reducing the demand for lithium-7 as requiring significant modifications to reactor operations at all 65 pressurized water reactors. Furthermore, these possible changes would need to be studied in more detail to determine the associated cost, time, and safety requirements before implementation and, if necessary, approved by NRC, all of which may take several years. <6. Conclusions> DOE studied the lithium-7 supply and demand situation, including identifying some supply risks, and is undertaking some actions to help mitigate a potential shortage, such as setting aside 200 kilograms of lithium-7 as a reserve. However, relying on two foreign producers to supply a chemical that is critical to the safe operation of most of the commercial nuclear power reactors in the United States places those reactors' ability to continue to provide electricity at some risk. Furthermore, the recent problems some brokers reported in obtaining lithium-7 from Russia and China, combined with China's increasing demand for lithium-7, suggest that the potential for a supply problem may be increasing. DOE has not taken on stewardship responsibility, in part because lithium-7 is not yet in short supply, the point at which it could fall under the Isotope Program's mission. However, waiting for a critical isotope with increasing supply risks to fall into short supply before taking action does not appear consistent with the mission of the Isotope Program. Because no entity has assumed stewardship responsibility for lithium-7, supply risks may not have been effectively communicated to industry so that it could weigh the risks and respond appropriately. Furthermore, there is no assurance that the risks have been fully analyzed and mitigated, as outlined in federal standards for internal control. Similarly, a shortage of helium-3 occurred in 2008 because, among other things, there was no agency with stewardship responsibility to monitor the risks to helium-3 supply and demand. The shortage was addressed when an interagency committee took on a stewardship role by researching alternatives and allocating the limited supply, among other things.
Some DOE officials have described lithium-7 as a commercial commodity used by industry and, therefore, they assert that industry is responsible for addressing any supply problems, despite its importance to the electrical grid; NNSA and NRC concur that industry is responsible. Yet, industry is not in a position like DOE to be aware of all the risks. DOE has studied lithium-7 supply and demand to guide its decisions related to lithium-7. However, its study contains shortcomings, including underestimating the domestic demand, and may be underestimating the technological challenges industry will face in trying to adjust to a supply disruption. These shortcomings bring into question DOE s conclusion that no additional actions are needed to mitigate a potential lithium-7 shortage. In the end, without a full awareness of supply risks and an accurate assessment of domestic demand, utilities may not be prepared for a shortage of lithium-7. This leaves the reactors that depend on lithium-7 vulnerable to supply disruptions that, if not addressed, could lead to their shutdown. <7. Recommendation for Executive Action> To ensure a stable future supply of lithium-7, we recommend that the Secretary of Energy direct the Isotope Program, consistent with the program s mission to manage isotopes in short supply, to take on the stewardship role by fully assessing supply risks; communicating risks, as needed, to stakeholders; ensuring risks are appropriately managed; and fully and accurately determining domestic demand. <8. Agency Comments and Our Evaluation> We provided a draft of this report to DOE and NRC for review and comment. In written comments, DOE s Office of Science s Acting Director, responding on behalf of DOE, wrote that DOE concurred with our recommendation. DOE s written comments on our draft report are included in appendix I. In an e-mail received August 15, 2013, NRC s Audit Liaison in the Office of the Executive Director for Operations stated that NRC generally agreed with the report s content and recommendation. DOE and NRC provided technical comments that we incorporated as appropriate. In its comment letter, DOE concurred with our recommendation and stated that, in its view, ongoing efforts by DOE s Isotope Program satisfy the recommendation. Specifically, DOE s letter states that to further address lithium-7 utilization, demand, and inventory management, the Isotope Program has initiated the development of a more in-depth survey coordinated directly with the power industry through the Electric Power Research Institute a new undertaking that we learned about after providing a draft of our report to DOE for comment. We believe that this undertaking is especially important since we found that few people in industry were aware of the lithium-7 supply risks. In its written comments, DOE also states that the report includes several inaccurate descriptions of the federal role with respect to the response to lithium-7 availability and demand. Specifically, DOE does not agree with our characterization that there is a lack of federal stewardship for assessing and managing risks to the lithium-7 supply. DOE states that it has been active in assessing and managing supply risks, including engaging with stakeholders, forming an internal working group, and identifying actions to be taken to mitigate a shortage. We disagree and believe that DOE s comment letter overstates both the department s level of awareness of lithium-7 supply risks and its involvement in mitigating these risks. 
At no time during our review did any DOE official characterize DOE as a steward of lithium-7 or state that the agency will manage supply risks. Notably, during our review, the Director of the Facilities and Project Management Division, who manages the Isotope Program, told us that the Isotope Program is not the steward of lithium-7, nor should it be. Regarding engagement with stakeholders, we found that Isotope Program officials were aware of only two of the three key brokers of lithium-7 until we informed them of the third broker during a meeting in June 2013 over a year after the program became aware of a potential lithium-7 supply problem. Moreover, at this same meeting, program officials were not yet aware of recent lithium-7 supply problems experienced by two of the three lithium-7 brokers. Regarding mitigation actions, while DOE states in its comment letter that industry stakeholders identified actions for consideration should a shortage of lithium-7 occur, industry stakeholders told us that they were not aware that their input was being used for a DOE study and would not characterize the actions as DOE did in its study. We also disagree with DOE s comment letter suggesting that the shortcomings identified in our report regarding the department s demand estimates for lithium-7 were simply due to differences between our estimates and the DOE internal working group s estimates as a result of the demand quantities identified being for specific and different applications. To identify the actions needed to mitigate a lithium-7 shortage, all the uses of lithium-7 must be considered. By not accounting for the lithium-7 used in demineralizers, DOE left out an important use of lithium-7 that may represent about one-third of the total demand for pressurized water reactors. As DOE engages collaboratively with industry for ensuring a stable supply of lithium-7, accurately accounting for lithium- 7 demand will be essential. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretary of Energy, Executive Director for Operations of NRC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact David C. Trimble at (202) 512-3841 or [email protected] or Dr. Timothy M. Persons at (202) 512-6412 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Energy Appendix II: GAO Contacts and Staff Acknowledgments <9. GAO Contacts> <10. Staff Acknowledgments> In addition to the individuals named above, Ned H. Woodward, Assistant Director; R. Scott Fletcher; Wyatt R. Hundrup; and Franklyn Yao made key contributions to this report. Kevin Bray, Cindy Gilbert, Karen Howard, Mehrzad Nadji, and Alison O Neill also made important contributions.
Why GAO Did This Study About 13 percent of our nation’s electricity is produced by pressurized water reactors that rely on lithium-7, an isotope of lithium produced and exported solely by China and Russia, for their safe operation. Lithium-7 is added to the water that cools the reactor core to prevent the cooling water from becoming acidic. Without the lithium-7, the cooling water’s acidity would increase the rate of corrosion of pipes and other infrastructure—possibly causing them to fail. Utilities that operate the pressurized water reactors have experienced little difficulty obtaining lithium-7, but they may not be aware of all the risks of relying on two producers. GAO was asked to review the supply and domestic demand for lithium-7 and how risks are being managed. This report examines (1) what is known about the supply and demand of lithium-7, (2) what federal agencies are responsible for managing supply risks, and (3) alternative options to mitigate a potential shortage. GAO reviewed documents and interviewed officials from DOE, NNSA, and NRC, in addition to industry representatives. This report is an unclassified version of a classified report also issued in September 2013. What GAO Found Little is known about lithium-7 production in China and Russia and whether their supplies can meet future domestic demand. According to industry representatives, China and Russia produce enough lithium-7 to meet demand from U.S. pressurized water reactors, a type of commercial nuclear power reactor that requires lithium-7 for safe operation. However, China's continued supply may be reduced by its own growing demand, according to an expert that is familiar with China's plans. Specifically, China is building several pressurized water reactors and developing a new type of reactor that will require 1,000s of kilograms of lithium-7 to operate, rather than the 300 kilograms needed annually for all 65 U.S. pressurized water reactors. Relying on two producers of lithium-7 leaves U.S. pressurized water reactors vulnerable to lithium-7 supply disruptions. No federal entity has taken stewardship responsibility for assessing and managing risks to the lithium-7 supply, but DOE is taking some steps. Risk assessment is the identification and analysis of relevant risks, communication of risks to stakeholders, and then taking steps to manage the risks, according to federal standards for internal control. Officials at DOE, the National Nuclear Security Administration (NNSA), and the Nuclear Regulatory Commission (NRC) told GAO they view lithium-7 as a commercial commodity for which industry is responsible. Industry representatives told GAO that they had no concerns about the lithium-7 supply, as they have experienced no problems in obtaining it. But GAO learned that industry representatives may not be familiar with all the supply risks. Notwithstanding, DOE plans to set aside 200 kilograms of lithium-7 and is funding research on lithium-7 production methods. DOE also studied lithium-7 supply and demand and concluded that no further action is needed. However, GAO found several shortcomings in its study, including that DOE underestimated the amount of lithium-7 used domestically. Industry estimates show that about 300 kilograms of lithium-7 are used annually in the United States, whereas DOE estimated that 200 kilograms are used annually. This and other shortcomings make it unclear if DOE's conclusion is correct that no additional action is needed. 
Based on information from agency officials and industry representatives, GAO identified three options to mitigate a potential lithium-7 shortage: (1) building a domestic reserve is a low-cost option that could help in the short-term; (2) building a domestic production capability is a longer-term solution that could eliminate lithium-7 imports, but take about 5 years and cost $10-12 million, according to NNSA; and (3) reducing pressurized water reactors' reliance on lithium-7 is another longer-term solution, but may require years of research and changes in how reactors are operated. What GAO Recommends GAO recommends that the Secretary of Energy ensure a stable future supply of lithium-7 by directing the Isotope Program to take on a stewardship role for lithium-7 by taking steps, including fully assessing risks and accurately determining domestic demand. DOE concurred with the recommendation.
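As a rough cross-check of the demand figures cited in the lithium-7 report above, and not part of GAO's methodology, the sketch below compares DOE's and the brokers' annual demand estimates; the per-reactor average is an illustrative figure, since actual usage varies by plant.

```python
# Back-of-the-envelope comparison of the lithium-7 demand estimates cited in the report.
# The per-reactor average is illustrative only; actual usage varies by plant and fuel cycle.

doe_estimate_kg_per_year = 200      # DOE/Y-12 estimate (cooling water only)
broker_estimate_kg_per_year = 300   # brokers' estimate (cooling water plus demineralizers)
us_pressurized_water_reactors = 65

demineralizer_share = 1 - doe_estimate_kg_per_year / broker_estimate_kg_per_year
avg_kg_per_reactor = broker_estimate_kg_per_year / us_pressurized_water_reactors

print(f"Demineralizer share of total demand: {demineralizer_share:.0%}")        # about 33%
print(f"Average lithium-7 per reactor per year: {avg_kg_per_reactor:.1f} kg")   # about 4.6 kg
```

The roughly one-third gap is consistent with the report's statement that demineralizer use, which DOE's estimate omitted, may represent about one-third of total demand.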
<1. Background> CMS has undertaken steps to educate beneficiaries about the Part D benefit using written documents, a toll-free help line, and the Medicare Web site. To explain the Part D benefit to beneficiaries, CMS had produced more than 70 written documents as of December 2005. Medicare & You the beneficiary handbook is the most widely available and was sent directly to beneficiaries in October 2005. Other written documents were targeted to specific groups of beneficiaries, such as dual-eligible beneficiaries and beneficiaries with Medicare Advantage or Medigap policies. Beneficiaries can obtain answers to questions about the Part D benefit by calling the 1-800-MEDICARE help line. This help line, which is administered by CMS, was established in March 1999, to answer beneficiaries questions about the Medicare program. As of December 2005, about 7,500 CSRs were handling calls on the help line, which operates 24 hours a day, 7 days a week, and is run by two CMS contractors. CMS provides CSRs with detailed scripts to use in answering the questions. Call center contractors write the scripts, and CMS checks them for accuracy and completeness. In addition, CMS s Medicare Web site provides information about various aspects of the Medicare program. The Web site contains basic information about the Part D benefit, suggests factors for beneficiaries to consider when choosing plans and provides guidance on enrollment and plan selection. It also lists frequently asked questions, and allows users to view, print, or order publications. In addition, the site contains information on cost and coverage of individual plans. There is also a tool that allows beneficiaries to enroll directly in the plan they have chosen. <2. Clarity of CMS Written Documents Could Be Improved> Although the six sample documents we reviewed informed readers of enrollment steps and factors affecting coverage, they lacked clarity in two ways. First, about 40 percent of seniors read at or below the fifth-grade level, but the reading levels of the documents ranged from seventh grade to postcollege. As a result, these documents are challenging for many seniors. Even after adjusting the text for 26 multisyllabic words, such as Medicare, Medicare Advantage, and Social Security Administration, the estimated reading level ranged from seventh to twelfth grade, a reading level that would remain challenging for at least 40 percent of seniors. Second, on average, the six documents we reviewed did not comply with about half of the 60 commonly recognized guidelines for good communications. For example, although the documents included concise and descriptive headings, they used too much technical jargon and often did not define difficult terms such as formulary. The 11 beneficiaries and 5 advisers we tested reported frustration with the documents lack of clarity as they encountered difficulties in understanding and attempting to complete 18 specified tasks. For example, none of these beneficiaries and only 2 of the advisers were able to complete the task of computing their projected total out-of-pocket costs for a plan that provided Part D standard coverage. Only one of 18 specified tasks was completed by all beneficiaries and advisers. Even those who were able to complete a given task expressed confusion as they worked to comprehend the relevant text. <3. 
Help Line Responses Frequently Complete and Accurate, but Varied By Question> Of the 500 calls we placed to CMS s 1-800-MEDICARE help line regarding the Part D benefit, CSRs answered about 67 percent of the calls accurately and completely. Of the remainder, 18 percent of the calls received inaccurate responses, 8 percent of the responses were inappropriate given the question asked, and about 3 percent received incomplete responses. In addition, about 5 percent of our calls were not answered, primarily because of disconnections. The accuracy and completeness of CSR responses varied significantly across our five questions. (See fig. 1.) For example, while CSRs provided accurate and complete responses to calls about beneficiaries eligibility for financial assistance 90 percent of the time, the accuracy rate for calls concerning the drug plan that would cost the least for a beneficiary with specified prescription drug needs was 41 percent. CSRs inappropriately responded 35 percent of the time that this question could not be answered without personal identifying information such as the beneficiary s Medicare number or date of birth even though the CSRs could have answered our question using CMS s Web-based prescription drug plan finder tool. CSRs failure to read the correct script also contributed to inaccurate responses. The time GAO callers waited to speak with CSRs also varied, ranging from no wait time to over 55 minutes. For 75 percent of the calls 374 of the 500 the wait was less than 5 minutes. <4. Part D Benefit Portion of Medicare Web Site Can Be Challenging to Use> We found that the Part D benefit portion of the Medicare Web site can be difficult to use. In our evaluation of overall usability the ease of finding needed information and performing various tasks we found usability scores of 47 percent for seniors and 53 percent for younger adults, out of a possible 100 percent. While there is no widely accepted benchmark for usability, these scores indicate difficulties in using the site. For example, tools such as the drug plan finder were complicated to use, and forms that collect information on-line from users were difficult to correct if the user made an error. We also evaluated the usability of 137 detailed aspects of the Part D benefit portion of the site, including features of Web design and on-line tools, and found that 70 percent of these aspects could be expected to cause users confusion. For example, key functions of the prescription drug plan finder tool, such as the continue and choose a drug plan buttons, were often not visible on the page without scrolling down. In addition, the drug plan finder tool defaults or is automatically reset to generic drugs, which may complicate users search for drug plans covering brand name drugs. The material in this portion of the Web site is written at the 11th grade level, which can also present challenges to some users. Finally, in our evaluation of the ability of seven participants to collectively complete 34 user tests, we found that on average, participants were only able to proceed slightly more than half way though each test. When asked about their experiences with using the Web site, the seven participants, on average, indicated high levels of frustration and low levels of satisfaction. <5. 
Concluding Observations> Within the past 6 months, millions of Medicare beneficiaries have been making important decisions about their prescription drug coverage and have needed access to information about the new Part D benefit to make appropriate choices. CMS faced a tremendous challenge in responding to this need and, within short time frames, developed a range of outreach and educational materials to inform beneficiaries and their advisers about the Part D benefit. To disseminate these materials, CMS largely added information to existing resources, including written documents, such as Medicare & You; the 1-800-MEDICARE help line; and the Medicare Web site. However, CMS has not ensured that its communications to beneficiaries and their advisers are provided in a manner that is consistently clear, complete, accurate, and usable. Although the initial enrollment period for the Part D benefit will end on May 15, 2006, CMS will continue to play a pivotal role in providing beneficiaries with information about the drug benefit in the future. The recommendations we have made would help CMS to ensure that beneficiaries and their advisers are prepared when deciding whether to enroll in the benefit, and if enrolling, which drug plan to choose. Mr. Chairman, this concludes my prepared remarks. I would be happy to respond to any questions that you or other Members of the subcommittee may have at this time. <6. Contact and Acknowledgments> For further information regarding this statement, please contact Leslie G. Aronovitz at (312) 220-7600. Contact points for our Offices of Congressional Relations and Public Affairs may be found in the last page of this statement. Susan T. Anthony and Geraldine Redican-Bigott, Assistant Directors; Shaunessye D. Curry; Helen T. Desaulniers; Margaret J. Weber; and Craig H. Winslow made key contributions to this statement. Appendix I: Sample of CMS Written Documents Reviewed To assess the clarity, completeness, and accuracy of written documents, we compiled a list of all available CMS-issued Part D benefit publications intended to inform beneficiaries and their advisers and selected a sample of 6 from the 70 CMS documents available, as of December 7, 2005, for in- depth review, as shown in Table 1. The sample documents were chosen to represent a variety of publication types, such as frequently asked questions and fact sheets available to beneficiaries about the Part D benefit. We selected documents that targeted all beneficiaries or those with unique drug coverage concerns, such as dual-eligibles and beneficiaries with Medigap plans. Appendix II: Questions and Criteria Used to Evaluate Accuracy and Completeness of CSR s Help Line Responses To determine the accuracy and completeness of information provided regarding the Part D benefit, we placed a total of 500 calls to the 1-800- MEDICARE help line. We posed one of five questions about the Part D benefit in each call, so that each question was asked 100 times. Table 2 summarizes the questions we asked and the criteria we used to evaluate the accuracy of responses. Appendix III: Comments from the Centers for Medicare & Medicaid s Services Appendix IV: Agency Comments and Our Evaluation We received written comments on a draft of our report from CMS (see app. III). CMS said that it did not believe our findings presented a complete and accurate picture of its Part D communications activities. CMS discussed several concerns regarding our findings on its written documents and the 1-800-MEDICARE help line. 
However, CMS did not disagree with our findings regarding the Medicare Web site or the role of SHIPs. CMS also said that it supports the goals of our recommendations and is already taking steps to implement them, such as continually enhancing and refining its Web-based tools. CMS discussed concerns regarding the completeness and accuracy of our findings in terms of activities we did not examine, as well as those we did. CMS stated that our findings were not complete because our report did not examine all of the agency s efforts to educate Medicare beneficiaries and specifically mentioned that we did not examine the broad array of communication tools it has made available, including the development of its network of grassroots partners throughout the country. We recognize that CMS has taken advantage of many vehicles to communicate with beneficiaries and their advisers. However, we focused our work on the four specific mechanisms that we believed would have the greatest impact on beneficiaries written materials, the 1-800-MEDICARE help line, the Medicare Web site, and the SHIPs. In addition, CMS stated that our report is based on information from January and February 2006, and that it has undertaken a number of activities since then to address the problems we identified. Although we appreciate CMS s efforts to improve its Part D communications to beneficiaries on an ongoing basis, we believe it is unlikely that the problems we identified in our report could have been corrected yet given their nature and scope. CMS raised two concerns with our examination of a sample of written materials. First, it criticized our use of readability tests to assess the clarity of the six sample documents we reviewed. For example, CMS said that common multisyllabic words would inappropriately inflate the reading level. However, we found that reading levels remained high after adjusting for 26 multisyllabic words a Medicare beneficiary would encounter, such as Social Security Administration. CMS also pointed out that some experts find such assessments to be misleading. Because we recognize that there is some controversy surrounding the use of reading levels, we included two additional assessments to supplement this readability analysis the assessment of design and organization of the sample documents based on 60 commonly recognized communications guidelines and an examination of the usability of six sample documents, involving 11 beneficiaries and 5 advisers. Second, CMS expressed concern about our examination of the usability of the six sample documents. The participating beneficiaries and advisers were called on to perform 18 specified tasks, after reading the selected materials, including a section of the Medicare & You handbook. CMS suggested that the task asking beneficiaries and advisers to calculate their out-of-pocket drug costs was inappropriate because there are many other tools that can be used to more effectively compare costs. We do not disagree with CMS that there are a number of ways beneficiaries may complete this calculation; however, we nonetheless believe that it is important that beneficiaries be able to complete this task on the basis of reading Medicare & You, which, as CMS points out, is widely disseminated to beneficiaries, reaching all beneficiary households each year. 
In addition, CMS noted that it was not able to examine our detailed methodology regarding the clarity of written materials including assessments performed by one of our contractors concerning readability and document design and organization. We plan to share this information with CMS. Finally, CMS took issue with one aspect of our evaluation of the 1-800- MEDICARE help line. Specifically, CMS said the 41 percent accuracy rate associated with one of the five questions we asked was misleading, because, according to CMS, we failed to analyze 35 of the 100 responses. However, we disagree. This question addressed which drug plan would cost the least for a beneficiary with certain specified prescription drug needs. We analyzed these 35 responses to this question and found the responses to be inappropriate. The CSRs would not provide us with the information we were seeking because we did not supply personal identifying information, such as the beneficiary s Medicare number or date of birth. We considered such responses inappropriate because the CSRs could have answered this question without personal identifying information by using CMS s Web-based prescription drug plan finder tool. Although CMS said that it has emphasized to CSRs, through training and broadcast messages, that it is permissible to provide the information we requested without requiring information that would personally identify a beneficiary, in these 35 instances, the CSR simply told us that our question could not be answered. CMS also said that the bulk of these inappropriate responses were related to our request that the CSR use only brand-name drugs. This is incorrect none of these 35 responses were considered incorrect or inappropriate because of a request that the CSR use only brand-name drugs as that was not part of our question. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Today's hearing focuses on Medicare Part D, the program's new outpatient prescription drug benefit. On January 1, 2006, Medicare began providing this benefit, and beneficiaries have until May 15, 2006, to enroll without the risk of penalties. The Centers for Medicare & Medicaid Services (CMS), which administers the Part D benefit, has undertaken outreach and education efforts to inform beneficiaries and their advisers. GAO was asked to discuss how CMS can better ensure that Medicare beneficiaries are informed about the Part D benefit. This testimony is based on Medicare: CMS Communications to Beneficiaries on the Prescription Drug Benefit Could Be Improved, GAO-06-654 (May 3, 2006). What GAO Found Information given in the six sample documents that GAO reviewed describing the Part D benefit was largely complete and accurate, although this information lacked clarity. First, about 40 percent of seniors read at or below the fifth-grade level, but the reading levels of these documents ranged from seventh grade to postcollege. Second, on average, the six documents we reviewed did not comply with about half of 60 common guidelines for good communication. For example, the documents used too much technical jargon and often did not define difficult terms. Moreover, 16 beneficiaries and advisers that GAO tested reported frustration with the documents' lack of clarity and had difficulty completing the tasks assigned to them. Customer service representatives (CSRs) answered about two-thirds of the 500 calls GAO placed to CMS's 1-800-MEDICARE help line accurately and completely. Of the remainder, 18 percent of the calls received inaccurate responses, 8 percent of the responses were inappropriate given the question asked, and about 3 percent received incomplete responses. In addition, about 5 percent of GAO's calls were not answered, primarily because of disconnections. The accuracy and completeness of CSRs' responses varied significantly across the five questions. For example, while CSRs provided accurate and complete responses to calls about beneficiaries' eligibility for financial assistance 90 percent of the time, the accuracy rate for calls concerning the drug plan that would cost the least for a beneficiary with specified prescription drug needs was 41 percent. For this question, the CSRs responded inappropriately for 35 percent of the calls by explaining that they could not identify the least costly plan without the beneficiary's personal information--even though CSRs had the information needed to answer the question. The time GAO callers waited to speak with CSRs also varied, ranging from no wait time to over 55 minutes. For 75 percent of the calls--374 of the 500--the wait was less than 5 minutes. The Part D benefit portion of the Medicare Web site can be difficult to use. GAO's test of the site's overall usability--the ease of finding needed information and performing various tasks--resulted in scores of 47 percent for seniors and 53 percent for younger adults, out of a possible 100 percent. While there is no widely accepted benchmark for usability, these scores indicate that using the site can be difficult. For example, the prescription drug plan finder was complicated to use and some of its key functions, such as "continue" and "choose a drug plan," were often not visible on the page without scrolling down.
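The Medicare report above describes estimating the reading level of documents and adjusting for multisyllabic program terms such as Medicare and Social Security Administration. The report does not name the formula GAO's contractor used; the sketch below applies the Flesch-Kincaid grade-level formula with a rough syllable heuristic purely to illustrate the kind of adjustment described. The term list and helper names are hypothetical.

```python
import re

# Illustrative reading-level estimate using the Flesch-Kincaid grade-level formula.
# The report does not specify GAO's actual readability method; this is only a sketch.

ADJUSTED_TERMS = frozenset({"medicare", "medigap"})  # stand-ins for the program terms GAO adjusted for

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str, adjusted_terms: frozenset = frozenset()) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(
        1 if w.lower() in adjusted_terms else count_syllables(w) for w in words
    )
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

sample = "Medicare beneficiaries may compare prescription drug plans before enrolling."
print(round(flesch_kincaid_grade(sample), 1))                  # unadjusted estimate
print(round(flesch_kincaid_grade(sample, ADJUSTED_TERMS), 1))  # adjusted estimate
```

Counting the adjusted terms as single syllables lowers the computed grade, mirroring the kind of adjustment GAO describes; the specific scores printed here have no bearing on the report's findings.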
<1. Background> When investigating complaints about nursing homes, state survey agencies follow state policies and procedures based on CMS instructions. To oversee state survey agencies' complaint investigation processes, CMS uses data from its complaints database and State Performance Standards System. <1.1. Complaint Investigation Policies and Procedures> CMS's State Operations Manual outlines procedures for state survey agencies' investigation of nursing home complaints. This manual is based on requirements in statutes and regulations and includes a detailed protocol for handling complaints and incidents, such as directions for key parts of the complaints process: intake, prioritization, investigation, and reporting of results. Intake. State survey agencies receive complaints via phone calls, e-mails, or letters. At intake, staff review the information provided by the complainant and, because each complaint can have more than one allegation, determine the type(s) of allegations involved, such as resident abuse or poor quality of care. Prioritization. Based on the nature of the allegations, staff assign a priority level to the complaint, which determines whether an onsite investigation is required. Four of the eight priority levels require an onsite investigation. (See table 1.) For example, investigations of complaints that allege immediate jeopardy to a resident's health, safety, or life must be started within 2 working days of receipt, while investigations of complaints that allege a high level of actual harm ("actual harm-high") to a resident must be started within 10 working days of prioritization. (These two time frames are illustrated in the sketch below.) Investigation. During the unannounced investigation, state agency surveyors may conduct a document review and observe nursing home conditions. Additionally, surveyors interview witnesses, including the resident about whose care the complaint was filed and other residents with similar care needs, being careful to protect the anonymity of those involved in the complaint. Surveyors determine whether the allegations are substantiated and whether the nursing home should be cited for any deficiencies (failure to meet federal or state quality standards), which may be related or unrelated to the complaint allegations. Deficiencies are categorized according to scope and severity. Scope refers to the number of residents potentially or actually affected and has three levels: isolated, pattern, or widespread. Severity refers to the degree of relative harm and has four levels: immediate jeopardy (actual or potential for death or serious injury), actual harm, potential for more than minimal harm, or potential for minimal harm. Reporting of Results. After the complaint investigation is completed, the state survey agency notifies the complainant and the nursing home of the outcome of the investigation, following guidelines specified in the State Operations Manual. <1.2. CMS Oversight of State Survey Agencies' Complaint Investigation Processes> CMS oversees state survey agencies' complaint investigation processes using its complaints data and State Performance Standards System. CMS's Complaints Data. As of January 1, 2004, state survey agencies were required to enter data about all complaints and incidents into the Automated Survey Processing Environment (ASPEN) Complaints/Incidents Tracking System (ACTS) database according to guidance provided by CMS. Officials in CMS's headquarters and regional offices can access all information in ACTS, though the information is stored on individual state servers.
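The two investigation time frames noted under Prioritization above (2 working days from receipt for immediate jeopardy and 10 working days from prioritization for actual harm-high) can be expressed as a small deadline calculation. The sketch below is illustrative only: it covers just these two priority levels, treats working days as Monday through Friday, and ignores holidays, which CMS's actual definition of a working day may handle differently.

```python
# A minimal sketch of the two investigation deadlines stated in this report.
# Only the two priority levels whose time frames are given here are encoded;
# the treatment of working days (no holidays) is an assumption.

from datetime import date, timedelta

def add_working_days(start: date, working_days: int) -> date:
    """Advance a date by a number of Monday-Friday working days."""
    current = start
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

def investigation_deadline(priority: str, receipt: date, prioritization: date) -> date:
    """Latest date by which an onsite investigation must be started."""
    if priority == "immediate jeopardy":
        return add_working_days(receipt, 2)           # 2 working days from receipt
    if priority == "actual harm-high":
        return add_working_days(prioritization, 10)   # 10 working days from prioritization
    raise ValueError(f"time frame for priority level {priority!r} not covered in this sketch")

# Example: a complaint received on Friday, March 6, 2009, and prioritized the same day.
received = prioritized = date(2009, 3, 6)
print(investigation_deadline("immediate jeopardy", received, prioritized))  # 2009-03-10
print(investigation_deadline("actual harm-high", received, prioritized))    # 2009-03-20
```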
CMS provides guidance to state survey agencies regarding ACTS database procedures, including what complaint information states are required to enter. The information is then uploaded into CMS's national complaints database, which contains a variety of information about complaints, such as the date of the alleged event, the name of the nursing home involved, and the source of the complaint. (See table 2.) State Performance Standards System. CMS's 10 regional offices are responsible for annually evaluating state survey agencies' nursing home complaint investigations using four performance standards. (See table 3.) CMS developed the State Performance Standards System in fiscal year 2001 to assess whether state survey agencies were meeting the requirements for the survey and certification program and to identify areas for improvement. In fiscal year 2006, CMS reorganized the performance standards system, and in the following years made several revisions to the four nursing home complaint performance standards. None of the standards focus exclusively on nursing home complaints. For some standards, the scope of review includes incidents as well as complaints, facilities other than nursing homes, or standard surveys as well as complaint investigations. For all except the timeliness standard, the review is based on samples rather than the universe of complaints and incidents. Upon completion of the performance evaluation, CMS regional offices share the results with each respective state survey agency and CMS headquarters, which in turn shares each state's scores with all of the other states. State survey agencies that fail performance standards must submit corrective action plans to their CMS regional offices, which the regional offices can accept or reject, depending on whether they believe the state has outlined appropriate steps to address poor performance. The regional offices use these plans to follow up with state survey agencies as part of their monitoring activities. <2. CMS 2009 Data Show that States Received Over 50,000 Nursing Home Complaints and Substantiated Complaints and Cited Federal Deficiencies in 19 Percent of Investigations> CMS's national complaints data show that state survey agencies received over 50,000 complaints about nursing homes in calendar year 2009. The number and types of complaints varied among states. State survey agencies investigated all but 102 of the complaints that required an investigation. Among complaints that were investigated and uploaded to CMS's national database for 2009, 19 percent were substantiated with at least one federal deficiency cited. <2.1. According to CMS's National Data, State Survey Agencies Received 53,313 Nursing Home Complaints in 2009> State survey agencies reported receiving 53,313 complaints about nursing homes in 2009. In 2009, 9 states received fewer than 100 complaints, while 17 states received more than 1,000. Six states (Illinois, Missouri, New York, Ohio, Texas, and Washington) accounted for roughly half of all 2009 complaints in CMS's database. Although the number of nursing home residents has remained relatively stable, the number of complaints received generally increased by about 1,000 complaints a year from 2004 to 2008. In 2009, the number of complaints dropped by about 5,000. Complaint Rate. Nationally, in 2009, CMS's database showed a complaint rate of roughly 38 complaints per 1,000 nursing home residents. The complaint rate ranged from less than 1 (0.77) in South Dakota to about 137 in Washington.
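The per-1,000-resident rate cited above is a simple ratio of complaints to residents. The sketch below shows the arithmetic; the state-level counts in the example are placeholders, since resident denominators are not reported in this excerpt, and the implied national denominator is only a back-of-the-envelope figure derived from the report's own numbers.

```python
# Arithmetic behind the complaints-per-1,000-residents rate discussed above.

def complaints_per_1000_residents(complaints: int, residents: int) -> float:
    return complaints / residents * 1000

# National figures from CMS's 2009 data: 53,313 complaints at a rate of roughly
# 38 per 1,000 residents, implying a denominator of about 1.4 million residents.
implied_residents = 53_313 / 38 * 1000
print(f"implied national resident count: {implied_residents:,.0f}")

# Hypothetical state-level example (counts are illustrative only).
print(complaints_per_1000_residents(complaints=2_500, residents=65_000))  # about 38.5
```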
Additionally, 11 states received 15 or fewer complaints per 1,000 nursing home residents, while 14 states received more than 45. (See fig. 1.) Submission of Complaints and Sources. CMS data show that state survey agencies received three-quarters of complaints in 2009 by phone. Complaints also were submitted through other means, such as in writing, through e-mail, or in person. In 2009, complaints were typically submitted by family members (47 percent), anonymously (19 percent), or by residents (10 percent). Complaints were also submitted by current nursing home staff or other sources. Prioritization of Complaints. In 2009, among the complaints in CMS's national data, state survey agencies prioritized most as either actual harm-high (45 percent) or actual harm-medium (33 percent). Roughly 10 percent of complaints were prioritized as immediate jeopardy, and about 4 percent were prioritized as actual harm-low. Approximately 8 percent of complaints were prioritized at the four lowest levels and did not require an onsite investigation. State survey agencies varied in the percentage of complaints they prioritized at different levels. For example, 23 state survey agencies prioritized more than 50 percent of complaints as immediate jeopardy or actual harm-high, while 7 state survey agencies prioritized fewer than 10 percent of the complaints they received at these two levels. Allegations. Allegations are specific charges within complaints; each complaint can have multiple allegations. In 2009, according to CMS's national data, the average number of allegations per complaint was 2.3. Allegations that focused on quality of care or treatment accounted for about 40 percent of all allegations in 2009. (See table 4.) <2.2. CMS National Data Show States Investigated Nearly All Complaints that Required an Investigation and Cited Deficiencies in 19 Percent of the Investigations> CMS data show that in 2009 about 48,900 of the approximately 53,300 complaints received required an investigation and that state survey agencies investigated all but 102 of those complaints. Among those 102 complaints, 25 percent were prioritized as either immediate jeopardy or actual harm-high (6 and 19 percent, respectively). The remaining 75 percent were complaints prioritized as actual harm-medium or actual harm-low. The percentage of complaints investigated from 2004 through 2009 remained relatively stable even as the number of complaints increased in all years except 2009. In 2009, an investigation was initiated within CMS's required time frames for most complaints prioritized as either immediate jeopardy or actual harm-high. Among immediate jeopardy complaints, an investigation was initiated within 2 working days of receipt of the complaint for 88 percent of complaints. Among complaints prioritized as actual harm-high, an investigation was initiated within 10 working days of prioritization for 72 percent of complaints. Roughly 19 percent of the complaints that were investigated and uploaded into CMS's complaints database for 2009 were substantiated with at least one deficiency cited. However, there was considerable variation across states. In 19 states, more than 30 percent of the complaints investigated were substantiated with at least one deficiency cited, while in 5 states, the proportion was less than 10 percent. Of the approximately 16,000 nursing homes nationwide, about 2,800 had one substantiated complaint where at least one deficiency was cited. In addition, about 1,100 nursing homes had two such complaints.
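The 19 percent figure above, and the breakdowns by priority level that follow, are shares of investigated complaints that were both substantiated and accompanied by at least one cited federal deficiency. The sketch below shows one way such a measure might be computed from complaint-level records; the field names and sample records are hypothetical stand-ins, not the actual columns of CMS's database.

```python
# Hypothetical sketch of the "substantiated with at least one deficiency cited"
# measure. Record fields and sample data are illustrative only.

from dataclasses import dataclass

@dataclass
class ComplaintRecord:
    priority: str            # e.g., "immediate jeopardy", "actual harm-high"
    investigated: bool
    substantiated: bool
    deficiencies_cited: int  # number of federal deficiencies cited

def substantiation_rate(records: list[ComplaintRecord]) -> float:
    """Share of investigated complaints substantiated with >= 1 deficiency cited."""
    investigated = [r for r in records if r.investigated]
    hits = [r for r in investigated if r.substantiated and r.deficiencies_cited >= 1]
    return len(hits) / len(investigated) if investigated else 0.0

# Illustrative data: 3 of 4 complaints investigated, 1 substantiated with a citation.
sample = [
    ComplaintRecord("immediate jeopardy", True, True, 2),
    ComplaintRecord("actual harm-high", True, False, 0),
    ComplaintRecord("actual harm-medium", True, True, 0),  # substantiated, nothing cited
    ComplaintRecord("actual harm-low", False, False, 0),
]
print(f"{substantiation_rate(sample):.0%}")  # 33% in this toy example
```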
The percentage of immediate jeopardy complaints that were substantiated with at least one deficiency cited was higher than for complaints prioritized at lower levels in 2009. According to CMS's complaints database, roughly 26 percent of the immediate jeopardy complaints that were investigated were substantiated with at least one deficiency cited. Among complaints prioritized at lower levels, the percentage was around 21 percent for actual harm-high complaints, 17 percent for actual harm-medium complaints, and 12 percent for actual harm-low complaints. In 2009, among the complaints prioritized as immediate jeopardy or actual harm-high, the percentage substantiated with at least one deficiency was higher if the investigation was initiated within required time frames than if it was not. For example, among actual harm-high complaints that were investigated within 10 working days of prioritization, 22 percent were substantiated with at least one federal deficiency cited. (See table 5.) In contrast, among actual harm-high complaints that were investigated late, the proportion was 17 percent. (App. I contains state-level data on complaints received, investigated, and substantiated by state survey agencies, according to CMS data.) <3. Many State Survey Agencies Had Difficulty Meeting Certain Performance Standards for Nursing Home Complaint Investigations, but Reported Taking Steps Intended to Improve Performance> Many state survey agencies did not meet some of CMS's performance standards for nursing home complaints in fiscal year 2009. In particular, 19 state survey agencies had difficulty investigating complaints and incidents prioritized as actual harm-high within the required time frame. State survey agencies reported that they have taken or plan to take steps in four key areas (staffing, agency restructuring, training and guidance, and monitoring) to meet CMS's nursing home complaint standards. Although the standards do not assess state survey agencies' communication with complainants, CMS does expect agencies to convey investigation findings to complainants in accordance with CMS's State Operations Manual. We found that agencies varied in their interpretations of the manual's instructions, and some provided limited information to complainants. <3.1. Many State Survey Agencies Had Difficulty Meeting Certain Nursing Home Complaint Standards, Particularly for Timely Investigation of Actual Harm-High Complaints> More than half of state survey agencies had difficulty meeting certain CMS performance standards pertaining to nursing home complaints. According to CMS's assessment for fiscal year 2009, 28 state survey agencies failed the timeliness of investigations standard for either immediate jeopardy or actual harm-high complaints, the prioritization of complaints standard, or both. Timeliness of Investigations Standard. CMS's assessment of state survey agencies' performance found that some had difficulty meeting the timeliness of investigations standard, which evaluates (1) whether an investigation was initiated within 10 working days of prioritization for actual harm-high complaints and incidents for nursing homes, and (2) whether an investigation was initiated within 2 working days of receipt for immediate jeopardy complaints and incidents for nursing homes and other facilities. State survey agencies must begin investigating at least 95 percent of complaints and incidents within required time frames.
For actual harm-high complaints and incidents, CMS evaluates performance for nursing homes separately from that of other facilities. For immediate jeopardy complaints and incidents, CMS evaluates performance for both nursing homes and other types of facilities. CMS found that in fiscal year 2009, 19 state survey agencies failed to meet the timeliness of investigations standard for complaints and incidents prioritized as actual harm-high. This marked an improvement from fiscal year 2008, when 25 states failed. States' fiscal year 2009 scores varied widely. For example, among states failing this standard, Louisiana nearly passed, with 94.4 percent of actual harm-high complaints and incidents investigated within the required time frame, while Michigan's score was 17.3 percent. (For information on all state survey agencies' performance on this standard, see app. II.) According to CMS's national data for calendar year 2009, the 19 states that failed this standard in fiscal year 2009 accounted for more than half (52 percent) of all actual harm-high complaints received nationally. In these 19 states, at least 43 percent of actual harm-high complaint investigations were initiated late, and at least 33 percent were initiated more than 11 working days late. Officials from the three state survey agencies in our sample that failed to meet the timeliness standard for actual harm-high complaints cited long-standing workload and staffing issues as reasons. More specifically, officials with the Michigan and Texas survey agencies said they had difficulty because of staffing shortages and because the volume of complaints and incidents increased. Tennessee officials noted that the state has tried to hire the additional staff needed to investigate the state's backlog of complaints but has been hampered by low salaries for surveyor positions as well as a cumbersome state hiring process. Nationwide, state survey agencies generally performed better on CMS's timeliness standard for immediate jeopardy complaints and incidents than they did for actual harm-high complaints and incidents. In CMS's assessment for fiscal year 2009, all but nine state survey agencies passed this standard by initiating investigations within 2 working days of receipt for at least 95 percent of the immediate jeopardy complaints and incidents they received about nursing homes and other facilities. Among the nine state survey agencies that failed this standard, four had scores at or below 50 percent. As with actual harm-high complaints and incidents, the two state survey agencies in our sample that failed the timeliness standard for immediate jeopardy complaints and incidents (Michigan and Tennessee) cited staffing shortages or increases in the number of complaints and incidents as key reasons. Fourteen state survey agencies that met CMS's timeliness standard for immediate jeopardy complaints and incidents did not meet the timeliness standard for actual harm-high complaints and incidents. An official in one CMS regional office noted that immediate jeopardy complaints are the highest priority and therefore rightly received the most attention. Prioritization of Complaints Standard. CMS's assessment of state survey agencies' performance found that most agencies (32) consistently passed this standard for the past four years. State survey agencies must appropriately prioritize at least 90 percent of complaints and incidents. CMS evaluates performance for nursing homes separately from that of other facilities.
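The pass/fail determinations discussed here reduce to comparing a share against a fixed threshold: at least 95 percent of complaints and incidents investigated within the required time frames for the timeliness standard, and at least 90 percent appropriately prioritized for the prioritization standard. A minimal sketch of that arithmetic follows; the counts used in the examples are illustrative, not figures from CMS's reviews.

```python
# Minimal sketch of the pass/fail thresholds for the two standards discussed here.
# Counts in the examples are illustrative.

THRESHOLDS = {
    "timeliness of investigations": 0.95,
    "prioritization of complaints": 0.90,
}

def score(met_requirement: int, reviewed: int) -> float:
    return met_requirement / reviewed if reviewed else 0.0

def passes(standard: str, met_requirement: int, reviewed: int) -> bool:
    return score(met_requirement, reviewed) >= THRESHOLDS[standard]

# Louisiana's reported 94.4 percent on the actual harm-high timeliness standard
# falls just below the 95 percent threshold (expressed here with illustrative counts).
print(passes("timeliness of investigations", met_requirement=944, reviewed=1000))  # False

# A state prioritizing 9 of 10 sampled complaints appropriately meets the
# 90 percent prioritization threshold.
print(passes("prioritization of complaints", met_requirement=9, reviewed=10))      # True
```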
In CMS s assessment for fiscal year 2009, all but nine state survey agencies passed this performance standard. Among the nine state survey agencies that failed this standard in fiscal year 2009, most had scores between 70 percent and 88 percent. (See app. II for information on all state survey agencies performance on this standard.) All but one of the six state survey agencies in our sample passed the prioritization standard in fiscal year 2009. Officials from Tennessee said that the agency had difficulty meeting this standard because of personnel changes and because it took time for new management to fully understand how the agency operates. Officials from the five state survey agencies in our sample that passed this standard generally attributed their agencies performance on the prioritization standard to staff skills and experience, training, and processes for quality control. For example, officials from two state survey agencies Arkansas and Texas attributed their states success, in part, to a supervisor s or quality assurance specialist s review of the priority levels assigned by the staff members who received the complaint. <3.2. State Survey Agencies Reported Taking Steps Intended to Improve or Maintain Performance on CMS s Standards> State survey agencies reported that they have taken or plan to take steps in four key areas staffing, agency restructuring, training and guidance, and monitoring to either improve or maintain performance on CMS s nursing home complaint standards. Staffing. Officials from three of the state survey agencies in our sample indicated that because staff shortages affected their ability to meet CMS standards, they had taken steps to increase staffing. For example, officials of the Michigan survey agency, which repeatedly failed the timeliness of investigations standard between 2006 and 2009, reported that beginning in fiscal year 2009, the agency was able to hire additional surveyors and as of June 1, 2010, had eliminated its backlog of complaints. Tennessee officials indicated that the agency received state legislature approval in February 2009 to hire additional surveyors to fill vacant positions. Texas officials also hired additional surveyors to conduct complaint investigations. Officials of state survey agencies in our sample that met all or most of CMS s nursing home complaint standards credited, among other factors, experienced agency staff. For example, Wisconsin officials indicated that the agency s ability to meet CMS s standards was partly due to the quality of the staff hired by the agency specifically, some staff members experience in the regulatory process, as both health care providers and regulators. Agency Restructuring. Some state survey agencies restructured complaint investigation operations to address performance issues, either consolidating regional offices or creating separate units to investigate complaints. For example, to provide better statewide coverage with available staff, the Tennessee survey agency downsized from three regional offices to two. Arkansas and Texas both established separate complaint investigation units in Arkansas s case, more than 10 years ago in an effort to better manage large volumes of complaints. Officials of state survey agencies that have separate complaint investigation units cited several advantages to dividing complaint investigation functions from standard survey functions, including greater efficiency and flexibility. 
For example, some officials said that staff assigned to the complaints unit are able to build experience and familiarity with the process and thus conduct more efficient investigations and prepare more accurate reports; likewise, staff that focus on standard surveys are able to conduct these inspections more efficiently because they do not have to investigate complaints at the same time. One official also said that a separate complaint investigation unit affords managers more flexibility for example, by allowing them to more easily change staff members assignments from day to day to respond to high priority complaints. Training and Guidance. Officials of some state survey agencies attributed their agencies successful performance on the prioritization of complaints standard partly to staff training. State survey agencies also issued guidance, including policy manuals and standardized forms or templates, to guide staff through the complaint investigation process. For example, Florida provides staff with a 44-page manual, with chapters on intake, prioritization, and investigation of complaints, and created an automated complaint investigation form that captures information about each allegation in a complaint, as well as the evidence collected and findings reached with respect to each. Monitoring. Among the state survey agencies in our sample that failed to meet some of CMS s standards, officials indicated that their agencies had implemented or planned to implement additional monitoring efforts. For example, Texas officials indicated that the agency conducts reviews throughout the complaint process. For example, after a complaint has been prioritized, a quality assurance specialist reviews the information to ensure that the prioritization was appropriate. Similarly, officials from Tennessee s survey agency indicated that the agency planned to increase monitoring. In particular, the officials indicated that each of the state s regional offices would track and report quarterly on the timeliness of investigations for all immediate jeopardy and actual harm-high complaints. Tennessee officials indicated that surveyors in the state s regional offices would be immediately alerted when they are assigned an immediate jeopardy complaint to investigate, something not always done in the past. State survey agencies in our sample that generally passed CMS s performance standards indicated that monitoring programs contributed to the agencies success. For example, a Florida official indicated that a supervisor reviews a sample of complaints received on the previous day to determine whether they were prioritized appropriately. <3.3. Some State Survey Agencies Provide Limited Information to Complainants about Investigation Findings> Although the CMS performance standards do not assess whether state survey agencies are providing sufficient information to complainants about investigation results, CMS s State Operations Manual indicates that state survey agencies should provide a written report to complainants in accordance with certain guidelines specified in the manual. The manual specifies that the state agency should acknowledge the complainant s concerns, identify the agency s regulatory authority to investigate, provide a summary of investigation methods and the date of the investigation, summarize the investigation findings, and identify any follow-up action to be taken. 
The six state survey agencies in our sample varied in their interpretations of the manual, particularly the instruction to provide a summary of the investigation findings. Two of the six agencies consistently provided detailed information that specifically addressed complainants allegations. For example, one sample letter we received from the Wisconsin survey agency lists four specific allegations made by the complainant and then describes the agency s finding with respect to each, including whether a deficiency was cited. (See fig. 2 for an excerpt from this letter.) The other state survey agency that provided detailed information (Michigan) did so by enclosing the investigation report with the letter, along with the statement of deficiencies, if any were cited. A Michigan survey agency official said that staff also make at least one attempt to contact a complainant by telephone to explain the findings. In contrast, four of the state survey agencies sent complainants only boilerplate descriptions of the complaint investigation, typically sending one type of form letter if surveyors cited deficiencies and another if they did not. For example, in the sample letter we received from Florida, the survey agency varied the middle paragraph of its three-paragraph letter depending on whether deficiencies were cited (see fig. 3). An official of this agency said the letter was intended to let complainants know that the point of an investigation is to determine a nursing home s compliance with regulations. Of the four state survey agencies that provided boilerplate descriptions of their investigation findings, two told complainants how to obtain a more detailed report. For example, a sample letter from the Arkansas state survey agency noted that the agency s report on the deficiencies cited and the nursing home s plan of correction should be posted in the nursing home. An Arkansas survey agency official said that complainants could also request a copy of the investigation report, but that it might be heavily redacted to protect medical and identifying information. <4. CMS s Oversight of State Survey Agencies Complaint Investigation Processes Is Hampered by Data Reliability Issues, Due in Part to Inconsistent Interpretation of Performance Standards Among CMS Reviewers> CMS s oversight of state survey agencies complaint investigation processes, through its performance standards system and complaints database, is hampered by data reliability issues. While the four performance standards CMS uses to assess state survey agencies processes for investigating nursing home complaints are consistent with certain key criteria for performance measures identified by GAO and other audit agencies, the standards have weaknesses in areas related to other key criteria, particularly data reliability, due in part to inadequate sample sizes and inconsistent interpretation of some standards by CMS reviewers. In addition, CMS has not made full use of the information it collects about state survey agencies complaint investigation processes. For example, in part because of data reliability concerns, CMS does not routinely use data from the complaints database to calculate certain measures that could enhance its understanding of state survey agencies performance. Although CMS requires state survey agencies that fail performance standards to develop corrective action plans, these plans do not necessarily address the underlying causes of performance issues, such as staffing shortages. <4.1. 
CMS s Performance Standards Are Comprehensive and Limited in Number and Overlap, but Performance Scores Are Not Always Reliable> CMS s four nursing home complaint performance standards (1) prioritization of complaints, (2) timeliness of investigations, (3) quality of investigations, and (4) documentation of deficiencies are consistent with some, but not all, of the key criteria for performance measures identified by GAO and other audit agencies. Specific weaknesses we identified include a lack of comparability over time in the performance scores and thus an inability to assess trends; a lack of balance among some standards; and, most critically, a lack of data reliability, due in part to inadequate sample sizes and varying interpretations of the standards. Consistent with key criteria for performance measures, CMS s performance standards are comprehensive and limited in number and overlap. Officials of all of the state survey agencies and CMS regional offices in our sample indicated that they considered the four nursing home complaint standards comprehensive. Although the performance standards system does not include standards for certain steps in the complaint investigation process, such as intake, officials indicated that the standards cover key steps, which include prioritizing complaints, scheduling and conducting investigations, and documenting any deficiencies identified. The standards are also limited in number and overlap, with each focused on different aspects of the nursing home complaint process than the others. Performance trends cannot be easily assessed because scores are not comparable over time. Because CMS changed the scoring methodologies for three of the four nursing home complaint standards during the past 4 years, it is not readily apparent from scores on these standards whether state survey agencies performance improved or worsened over that time period. CMS officials generally felt that the changes had enhanced the standards in the case of the documentation of deficiencies and quality of investigations standards, by holding state survey agencies accountable for meeting all of the underlying requirements or by highlighting specific areas in need of improvement. Further, they did not identify the lack of trend data as a major concern. Officials noted that CMS judges state survey agencies performance for a given year, not in relation to prior years, and does not count scores on a standard in the first year after a significant change in methodology. However, a lack of consistent trend data makes it more difficult for CMS to assess whether the steps that it and the states are taking to improve performance on the nursing home complaint standards are having the desired effect. The balance among standards may be undermined by how the prioritization standard is scored. In general, the standards are balanced, so that the incentives created by one standard are counterbalanced by the incentives created by other standards. However, because the prioritization standard requires only that complaints be assigned a priority level at or above the level assigned by CMS reviewers, this standard may create an incentive for state survey agencies to assign higher priority levels than are warranted which may jeopardize the timeliness of investigations. 
As one state survey agency official pointed out, the staff members who prioritize complaints may not be responsible for conducting investigations; consequently, these staff may be more focused on the agency s meeting the prioritization standard than the timeliness standard and thus err on the side of caution in prioritizing complaints. According to CMS headquarters officials, the prioritization standard is scored this way because the agency was most concerned about complaints being prioritized at too low a level and did not want to fault state survey agencies for investigating complaints sooner than necessary. However, officials of two CMS regional offices noted that assigning complaints too high a priority level can cause misallocation of resources, as state survey agencies that prioritize complaints at higher levels than are warranted must investigate these complaints within shorter time frames than they otherwise would. Some performance scores are unreliable because of inadequate sample sizes and varying interpretations of standards among CMS reviewers. For three of the four CMS performance standards, the samples specified by CMS are in some cases too small to yield reliable data. Scores on the prioritization of complaints, quality of investigations, and documentation of deficiencies standards were generally based on a sample of 10 to 40 cases (10 percent, up to a maximum of 40). With samples this small, the margin of error around states scores on the prioritization of complaints standard, for example, was as much as 19 percentage points in fiscal year 2009. Accordingly, at least some of the states that received passing marks on this standard may actually have failed, and at least five of the nine states that received failing marks may actually have passed. Although the small sample sizes CMS requires make the reviews involved in certain standards more practical, by reducing the documentation CMS reviewers must examine, the trade-off is a lack of precision in the scores for these standards. Moreover, interpretation of some standards has varied among CMS reviewers in terms of both the materials reviewed to assess performance and how certain requirements were construed by reviewers. Materials reviewed. To assess the quality of investigations, some CMS regional offices reviewed only information surveyors entered into the complaints database, while other CMS regional offices reviewed more extensive hard-copy notes from complaint investigations. CMS headquarters officials indicated that relying solely on the information in the complaints database to assess the quality of investigations was not consistent with federal guidance, stating that regional office officials should follow the guidance for the standard, which calls for reviewers to examine a variety of documents, including surveyor worksheets and investigation notes. They also noted that the investigation notes are not required data elements in the complaints database. Some state survey agency officials said that their scores on this standard have suffered because the investigation notes in the database do not always provide a complete picture of the agency s complaint investigations. How requirements were construed. State survey agency officials we interviewed also noted differences in how CMS reviewers understood certain requirements in the standards, particularly in the documentation of deficiencies standard. 
For example, officials described differences in reviewers interpretations of what it means to quantify the extent of a deficient practice, one of the requirements in that standard. One state survey agency official said that his agency s scores on the standards improved from one half of the year to the next simply because the CMS staff conducting the review changed. Officials in one of the CMS regions where all state survey agencies failed the documentation of deficiencies standard acknowledged the 100 percent failure rate was at least partially due to a change in the regional office s review specifically, regional managers having issued more explicit instructions to staff about how to assess states performance on particular requirements. The clustering of failing scores on this standard within certain CMS regions also suggests regional variation in interpretation; in three regions, all of the state survey agencies failed the documentation of deficiencies standard in fiscal year 2009, while in the other seven regions, half or fewer of the state survey agencies failed. Although some CMS regional offices have tried to ensure consistent interpretation of the standards within their own regions for example, by requiring that multiple reviewers concur on any failing marks given to state survey agencies and encouraging ongoing dialogue about the standards some officials we interviewed believe CMS should do more to ensure consistency across regions. CMS headquarters officials told us that the agency has issued additional guidance when officials became aware of a need for clarification, but some CMS regional office officials said that parts of the guidance need enhancement and that CMS headquarters should have more staff dedicated to developing guidance and answering questions from regional office staff. In addition, some state survey agency officials suggested that CMS regional offices should have less autonomy in the performance review process. One official suggested that CMS headquarters should exert more control over the regional offices with respect to the review process, and others indicated a need for more review of the reviewers for example, by having the performance reviews conducted by each regional office validated by another. Officials of one state survey agency, noting that state survey agencies can appeal their performance scores only to the same regional office that conducted their performance review, suggested that a second regional office should at least be involved in the appeals process. <4.2. CMS Has Not Made Full Use of Performance Information on State Survey Agencies Complaint Investigations> CMS has not made full use of the information it collects about state survey agencies complaint investigation processes through its complaints database and performance standards system. For example, CMS does not routinely use data from its complaints database to calculate certain measures that could enhance its understanding of state survey agencies performance investigating complaints and has not publicly reported state survey agencies scores on the performance standards. CMS has not made full use of data in the complaints database to monitor performance. In part because of data reliability concerns, CMS does not routinely calculate certain measures that could shed additional light on state survey agencies performance such as substantiation rates or additional measures of the timeliness of investigations. 
Substantiation rates, if interpreted by state survey agencies in a consistent manner, could provide insight into the quality of complaint investigations. Given the many factors that influence these rates, including whether the complaints have a basis in fact, it would not be appropriate to require state survey agencies to achieve a particular rate. However, substantial variation in rates, either among states or over time, could signal issues with complaint investigations and prompt further inquiry by CMS. A CMS headquarters official told us that because some state survey agencies may consider a complaint to be substantiated even if no federal deficiencies are cited, CMS headquarters does not systematically monitor substantiation rates, and most CMS regional offices probably do not do so either. The Patient Protection and Affordable Care Act (PPACA), enacted March 23, 2010, requires HHS to post on the Nursing Home Compare Web site summary information on substantiated complaints, including their number, type, severity, and outcome, by March 23, 2011. Accordingly, a CMS official told us that CMS headquarters will issue guidance to ensure that state survey agencies interpret substantiation in a consistent manner. Additional measures of timeliness, such as the number of days by which state survey agencies miss the deadlines for some complaint investigations, could provide CMS with a more comprehensive picture of performance in this area. We found that some state survey agencies with similar scores on CMS's timeliness standard for actual harm-high complaints in fiscal year 2009 had very different backlogs of complaint investigations. For example, looking at two state survey agencies with performance scores of 82 and 85 percent (indicating, respectively, that 18 and 15 percent of their investigations were late), we found that 51 percent of one agency's late investigations were initiated more than 30 days late in calendar year 2009, compared with 4 percent for the other agency. Currently, the reliability of timeliness measures such as this is uncertain because state survey agencies do not necessarily enter all complaints into CMS's database or prioritize complaints in the same way. Responsibility for training to address performance issues has generally been left to CMS regional offices. The CMS regional offices in our sample have used information from the performance standards system to identify performance issues, but training designed to address these issues has generally been undertaken by individual CMS regional offices and, as a result, has varied in content and scope. Complaint investigation training at the national level has been limited and was not designed to address specific performance issues identified during reviews. Officials of most of the state survey agencies in our sample indicated that CMS's training and guidance was sufficient, but officials of two state survey agencies noted that their agencies themselves provide any training above the basic level. One state survey agency official said that CMS should offer more comprehensive training, including more material on complaint investigations, so that states are not "sinking or swimming" on their own and are able to conduct investigations in a more consistent manner. PPACA directed HHS to enter into a contract to establish a National Training Institute to help surveyors develop complaint investigation skills.
However, as of March 2011, funds had not yet been appropriated to implement this provision of the act, and CMS estimates that it would cost about $12 million to establish the institute. As a start, CMS has redirected about $1 million from other projects to initiate a project which will provide instruction on all aspects of complaint surveys for all facility types, including nursing homes. Corrective action plans are not timely and may not address the underlying causes of performance issues. CMS requires state survey agencies that fail performance standards to submit plans to improve their performance, but CMS does not require these plans to be submitted until halfway through the next performance cycle, which allows little time for corrective actions to take effect before the next performance review. (See fig. 4.) Moreover, despite CMS regional office input, the plans do not necessarily address the underlying causes of state survey agencies failure to meet performance standards. For example, all three of the state survey agencies in our sample that failed the timeliness of investigations standard for immediate jeopardy complaints, actual harm-high complaints, or both in all 4 fiscal years from 2006 through 2009 cited staff shortages as a reason, but two of the three submitted at least one corrective action plan during that period that did not propose hiring the additional staff needed. CMS regional office officials indicated that they had accepted such corrective action plans because the steps the state survey agencies did propose such as developing a graphic analysis tool to track performance or implementing additional central oversight of regional offices were likely to improve performance to some extent, and because CMS does not have the authority to require state survey agencies to hire or reallocate staff. Only one of the CMS regional offices in our sample reported ever having rejected a corrective action plan, and officials of one CMS regional office told us they preferred that a corrective action plan provide a realistic account of what a state survey agency was going to try to achieve rather than propose actions that the agency could not carry out. Some CMS officials view the penalties the agency might impose for failure to meet nursing home complaint standards as counterproductive or unrealistic. CMS s regulations provide for penalties to be imposed on a state survey agency for failure to follow procedures specified by CMS for complaint investigations, such as reducing funding or terminating the contract under which the state survey agency conducts standard surveys and complaint investigations. CMS headquarters officials noted that while CMS has reduced funding to state survey agencies for failure to meet requirements for standard surveys, such as statutory time frames, the agency has not done the same for complaint investigations. One official said that CMS has not done so partly because of concerns about the fairness of penalizing states for failure to meet standards that may vary from year to year, as well as concerns that reducing states funding might make it even more difficult for them to meet the standards. Some CMS regional office officials said that reducing state survey agencies funding for failure to complete complaint investigations on time made sense, but others said that taking resources away from the agencies could be counterproductive, further hampering their ability to carry out investigations. 
Although CMS could terminate its contract with a state survey agency, CMS officials we interviewed indicated that this was not a realistic option. CMS has not publicly reported state survey agencies' performance scores. Public reporting of performance information has been advocated by GAO and other auditors as a critical step in performance management because it provides policymakers and the public with information needed to assess progress and may also serve to motivate agency managers and staff. While CMS has shared state survey agencies' scores on the performance standards with all of the other state survey agencies, it has not made the scores available to other stakeholders, such as residents, family members, or advocates. According to a CMS headquarters official, some state survey agencies have made their own scores publicly available, but CMS has not yet issued any guidance to the states on public disclosure of scores. This official told us that CMS plans to issue a policy memo affirming state survey agencies' right to disclose their own scores and is also considering making all of the scores publicly available, possibly on CMS's Web site. Although some CMS regional office officials questioned whether performance reports might too easily be misconstrued by the public and necessarily gloss over details that would provide a more nuanced picture of performance, GAO's prior work on performance management suggests reports can be structured to avoid these potential pitfalls, for example, by explaining the limitations of the data and using clearly defined terms and readily understood tables and graphs to convey information. <5. Conclusions> In the past decade, CMS has made several efforts to improve the intake and investigation of nursing home complaints by state survey agencies, including (1) implementation of a database that not only helps state survey agencies track complaints but also helps CMS monitor the state survey agencies' performance and (2) establishment of and refinements to its performance standards related to nursing home complaints. However, our review indicates that challenges remain. CMS's complaint data have limitations. We found that the lack of consistency in state survey agencies' use of the database, particularly in terms of which complaints are entered and how certain fields are interpreted, undermines the reliability of some of the data and limits the usefulness of the database as a monitoring tool. CMS does not routinely use the data to calculate measures, such as substantiation rates, that could enhance its understanding of complaint investigations, partly because of concerns about the reliability of the data. CMS's performance reviews highlight state workload issues. Although state survey agencies generally prioritized nursing home complaints in accordance with CMS's performance standard, we found that many agencies had difficulty managing a heavy workload of actual harm-high complaints. In 2009, state survey agencies prioritized 45 percent of the more than 53,000 nursing home complaints they received as actual harm-high, which requires initiation of an investigation within 10 working days of prioritization. In fiscal year 2009, 19 state survey agencies failed to meet the CMS timeliness standard for these complaints. Staffing shortages and heavy workloads were cited as key reasons by survey agency officials we interviewed whose states had failed this standard.
CMS s policy for scoring the prioritization standard may contribute to these workload issues by creating an incentive for the agency staff who prioritize complaints to assign higher priority levels than are warranted. While CMS is correct in asserting that prioritizing complaints at too high a level is preferable to the reverse, this practice can have a significant impact on state survey agencies workload and thus on their ability to meet requirements for timely investigations. Additionally, CMS data for 2009 showed that, among investigated complaints prioritized as either immediate jeopardy or actual harm-high, the percentage substantiated with at least one federal deficiency cited was higher if the investigation was initiated within required time frames than if it was not. Though many factors can affect whether complaints are substantiated, including whether there is evidence to support them, considerable variation in substantiation rates, among the states or over time, could indicate potential concerns with state survey agencies complaint investigations. Some performance standards scores are unreliable due to small samples and varying interpretations of requirements. CMS has also made efforts to refine its performance standards for nursing home complaints. However, as with the complaints data, scores on some standards are unreliable, because of inadequate sample sizes and varying interpretations of the standards by the CMS regional office officials who conduct the performance reviews. While we recognize that CMS may have opted for small samples for some standards in order to limit the amount of documentation reviewers must examine each year, sample sizes could be increased without increasing reviewers workloads if performance on certain standards those that require document review were assessed less frequently than once a year. Less frequent reviews could also help address the issue of state survey agencies receiving their final scores and submitting their corrective action plans so far into the next performance cycle that little time remains for them to improve their performance. The credibility of the scores could be further enhanced by ensuring that the standards are consistently interpreted by the CMS regional offices. Clarifying CMS guidance could help in this regard as well as in ensuring that state survey agencies understand their responsibilities with respect to each aspect of the complaint investigation process, including the manner in which investigation results are communicated to complainants. CMS is considering making state survey agencies scores on the performance standards publicly available. While we support such a step, we believe that it is important to consider the reliability of data, as well as its comparability over time, when deciding which scores to publish. For such performance reports to be useful to the public, they should also include meaningful trend data that reflect agencies actual progress over time, as well as a clear explanation of the limitations of the data. <6. Recommendations> To ensure that information entered into CMS s complaints database is reliable and consistent, we recommend that the Administrator of CMS: Identify issues with data quality and clarify guidance to states about how particular fields in the database should be interpreted, such as what it means to substantiate a complaint. 
To strengthen CMS s assessment of state survey agencies performance in the management of nursing home complaints, we recommend that the Administrator of CMS take the following three actions: Conduct additional monitoring of state performance using information from CMS s complaints database, such as additional timeliness measures. Assess state survey agencies performance in certain areas specifically, documentation of deficiencies, prioritization of complaints, and quality of investigations less frequently than once a year. Assure greater consistency in assessments by identifying differences in interpretation of the performance standards and clarifying guidance to state survey agencies and CMS regional offices. To strengthen and increase accountability of state survey agencies management of the nursing home complaints process, we recommend that the Administrator of CMS take the following three actions: Clarify guidance to the state survey agencies about the minimum information that should be conveyed to complainants at the close of an investigation. Provide guidance encouraging state survey agencies to prioritize complaints at the level that is warranted, not above that level. Implement CMS s proposed plans to publish state survey agencies scores but limit publication to those performance standards that CMS considers the most reliable and clear. <7. Agency and Other External Comments> We received written comments on a draft of this report from HHS and from the Association of Health Facility Survey Agencies (AHFSA), the organization that represents state survey agencies. <7.1. HHS Comments> HHS provided written comments, which are reproduced in app. III. HHS generally concurred with all of our recommendations. With respect to our first recommendation, HHS agreed that CMS should take steps to ensure that information entered into the agency s complaints database is reliable and consistent. HHS said that CMS will convene a workgroup including staff from CMS headquarters, CMS regional offices, and state survey agencies to address data quality issues. HHS also agreed that CMS needs to strengthen its assessment of state survey agencies performance in the management of nursing home complaints. HHS said that CMS s planned workgroup will review the three specific actions we recommended and identify ways to strengthen the agency s oversight process. Finally, HHS agreed that CMS needs to strengthen and increase accountability of state survey agencies management of the nursing home complaints process. Regarding the specific actions we recommended, HHS said that CMS will provide increased guidance to states regarding the minimum information that must be conveyed to complainants at the close of an investigation and provide clarification and guidance to ensure that complaints are prioritized at the appropriate level. With respect to our recommendation that CMS publish state survey agencies scores on certain nursing home complaint performance standards, HHS said that CMS will work with state officials and others to identify key information about state survey agencies performance that would be of public value. HHS also provided technical comments, which we incorporated as appropriate. <7.2. AHFSA Comments> AHFSA emphasized the critical importance of enforcing federal and state survey and certification standards and noted that in many states, complaint systems have significant connections to state and local licensing and enforcement activities, which are outside CMS s jurisdiction. 
AHFSA noted that several of the policy and operational issues raised in our report create challenges for states. These include lack of clarity about what it means to substantiate a complaint and lack of timely notification to the states of any changes in CMS's performance standards for nursing home complaints. AHFSA also commented that CMS's guidance on prioritizing complaints could be improved but questioned whether many states were prioritizing complaints at a higher level than is warranted in order to meet CMS's prioritization standard. In addition, AHFSA said that the complaint system is the primary safety net for vulnerable nursing home residents and therefore suggested that states should err on the side of caution when prioritizing complaints in order to better protect residents. AHFSA also provided some state-specific comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: CMS's State-Level Data on Complaints Received, Investigated, and Substantiated by State Survey Agencies, 2009 This appendix provides additional information on the number of complaints received, investigated, and substantiated by all 50 state survey agencies and the survey agency for the District of Columbia for 2009, based on complaints in the Centers for Medicare & Medicaid Services' (CMS) national complaints database. We included only complaints and excluded facility-reported incidents, which nursing homes are required to self-report to state survey agencies. Additionally, we included only complaints that alleged a violation of federal requirements. In the course of our work, we found some limitations to the data we obtained, including that state survey agencies interpret certain variables, such as substantiation, differently from one another and that data are missing for certain variables, such as the date on which the state survey agency acknowledged the complaint. Additionally, we learned that CMS's national database may not include all complaints because the state survey agencies may not have entered all of the complaints they received. Because of the data limitations we found, we included in our analysis only those variables that we found to be reliable, and we consider the number of complaints from CMS's national complaints database to be a conservative estimate of the total number of complaints received by state survey agencies. Appendix II: Performance Scores for Selected Nursing Home Complaint Performance Standards, Fiscal Year 2009 (The table in this appendix lists, for each state and standard, the score in percent and a pass/fail indicator; a check mark denotes that the state passed the performance standard and an X denotes that it failed.)
A blank in the score column indicates that the state received a passing score (at least 90 percent for the prioritization of complaints standard and at least 95 percent for the timeliness of investigation standards for immediate jeopardy and actual harm-high complaints). Pennsylvania officials reported that the state did not pass the prioritization of complaints standard because it required all complaint investigations to be initiated within 48 hours and survey agency staff therefore assigned a priority level of immediate jeopardy to nearly all complaints. Because CMS guidance on this standard was not clear in fiscal year 2009, the CMS regional office that assessed Pennsylvania s performance considered complaints assigned a priority level higher than warranted to be inappropriately prioritized and therefore gave the state a failing score on this standard. Appendix III: Comments from the Department of Health and Human Services Appendix IV: GAO Contact and Staff Acknowledgments <8. Acknowledgments> In addition to the contact name above, Walter Ochinko, Assistant Director; Jennie Apter; Shaunessye Curry; Christie Enders; Nancy Fasciano; Dan Lee; Lisa Motley; Matthew Rae; and Jessica Smith made key contributions to this report. Related GAO Products Nursing Homes: Complexity of Private Investment Purchases Demonstrates Need for CMS to Improve the Usability and Completeness of Ownership Data. GAO-10-710. Washington, D.C.: September 30, 2010. Poorly Performing Nursing Homes: Special Focus Facilities Are Often Improving, but CMS s Program Could Be Strengthened. GAO-10-197. Washington, D.C.: March 19, 2010. Nursing Homes: Addressing the Factors Underlying Understatement of Serious Care Problems Requires Sustained CMS and State Commitment. GAO-10-70. Washington, D.C.: November 24, 2009. Nursing Homes: Opportunities Exist to Facilitate the Use of the Temporary Management Sanction. GAO-10-37R. Washington, D.C.: November 20, 2009. Nursing Homes: CMS s Special Focus Facility Methodology Should Better Target the Most Poorly Performing Homes, Which Tended to Be Chain Affiliated and For-Profit. GAO-09-689. Washington, D.C.: August 28, 2009. Medicare and Medicaid Participating Facilities: CMS Needs to Reexamine Its Approach for Funding State Oversight of Health Care Facilities. GAO-09-64. Washington, D.C.: February 13, 2009. Nursing Homes: Federal Monitoring Surveys Demonstrate Continued Understatement of Serious Care Problems and CMS Oversight Weaknesses. GAO-08-517. Washington, D.C.: May 9, 2008. Nursing Home Reform: Continued Attention Is Needed to Improve Quality of Care in Small but Significant Share of Homes. GAO-07-794T. Washington, D.C.: May 2, 2007. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. 
Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
Why GAO Did This Study CMS, the agency within HHS that manages Medicare and Medicaid, contracts with state survey agencies to investigate complaints about nursing homes from residents, family members, and others. CMS helps assure the adequacy of state complaint processes by issuing guidance, monitoring data that state survey agencies enter into CMS's database, and annually assessing performance against specific standards. Concerns have been raised about the timeliness and adequacy of complaint investigations and CMS's oversight. GAO examined (1) complaints received, investigated, and substantiated by state survey agencies; (2) whether those agencies were meeting CMS performance standards and other requirements; and (3) the effectiveness of CMS's oversight. In addition to analyzing CMS data on complaints and performance reviews, GAO examined CMS guidance and conducted interviews with officials from three high- and three low-performing state survey agencies and their CMS regional offices. GAO addressed data reliability concerns by reporting only data we determined to be reliable. What GAO Found CMS's complaints data showed that state survey agencies received 53,313 complaints about nursing homes in 2009. The number and types of complaints varied among states. For example, 11 states received 15 or fewer complaints per 1,000 nursing home residents while 14 states received more than 45. State survey agencies assess the severity of a complaint and assign a priority level, which dictates if and when an investigation must be initiated. About 10 percent of complaints were prioritized as immediate jeopardy, requiring investigation within 2 working days of receipt, while 45 percent were prioritized as actual harm-high, requiring investigation within 10 working days of prioritization. State survey agencies investigated all but 102 complaints that required an investigation. Among investigated complaints, 19 percent were substantiated and resulted in the citation of at least one federal deficiency. The percentage of immediate jeopardy and actual harm-high complaints that were substantiated with at least one federal deficiency cited was higher if the investigation was initiated on time. In CMS's performance assessment for fiscal year 2009, many state survey agencies had difficulty meeting some of CMS's nursing home complaint standards, most of which also assess performance with regard to incidents--specific care issues that nursing homes are required to report. In particular, 19 state survey agencies had difficulty investigating actual harm-high complaints and incidents within the required time frame. However, most states were able to meet other CMS standards--timely investigation of immediate jeopardy complaints and incidents and appropriate prioritization of complaints and incidents. Although CMS's performance assessment does not review state survey agencies' communication with complainants, CMS does expect the agencies to convey investigation findings according to CMS guidelines. GAO found state survey agencies had varied interpretations of those guidelines, and some provided limited information to complainants. CMS's oversight of state survey agencies' complaint investigation processes, through its performance standards system and complaints database, is hampered by data reliability issues. 
While CMS's performance standards are consistent with certain key criteria for performance measures identified by GAO and other audit agencies, performance scores are not always reliable, due in part to inadequate sample sizes and inconsistent interpretation of some standards by CMS reviewers. In addition, CMS has not made full use of the information it collects. For example, in part because of data reliability concerns, CMS does not routinely use data from the complaints database to calculate certain measures that could enhance its understanding of agencies' performance. Although CMS requires state survey agencies that fail performance standards to develop corrective action plans, states' plans do not necessarily address the underlying causes of performance issues, such as staffing shortages. What GAO Recommends GAO recommends that the CMS Administrator take several steps to strengthen oversight of complaint investigations, such as improving the reliability of its complaints database and clarifying guidance for its state performance standards to assure more consistent interpretation. HHS generally agreed with our recommendations.
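The additional monitoring measures discussed above, such as timeliness of investigation, lend themselves to straightforward calculations against a complaints database. The following sketch is purely illustrative and is not CMS's or GAO's methodology: the field names and sample records are hypothetical, and only the 2- and 10-working-day investigation time frames come from the report.

```python
# Hypothetical illustration of timeliness measures that could be computed from
# a complaints database. Field names and records are invented; the 2- and
# 10-working-day thresholds are the investigation time frames described above.
from datetime import date, timedelta

# (priority, date received/prioritized, date investigation initiated)
complaints = [
    ("immediate jeopardy", date(2009, 3, 2), date(2009, 3, 3)),
    ("immediate jeopardy", date(2009, 3, 9), date(2009, 3, 16)),
    ("actual harm-high",   date(2009, 4, 1), date(2009, 4, 9)),
]

THRESHOLD_DAYS = {"immediate jeopardy": 2, "actual harm-high": 10}

def working_days(start: date, end: date) -> int:
    """Count working days (Monday-Friday) between two dates, exclusive of the start date."""
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Monday-Friday
            days += 1
    return days

for priority, threshold in THRESHOLD_DAYS.items():
    subset = [c for c in complaints if c[0] == priority]
    on_time = sum(working_days(received, initiated) <= threshold
                  for _, received, initiated in subset)
    print(f"{priority}: {on_time}/{len(subset)} investigations initiated on time")
```

The same pattern could be extended to other measures cited in the report, such as complaints received per 1,000 nursing home residents or the share of investigated complaints that were substantiated.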
<1. Background> SBA was established by the Small Business Act of 1953 to fulfill the role of several agencies that previously assisted small businesses affected by the Great Depression and, later, by wartime competition. SBA s stated purpose is to promote small business development and entrepreneurship through business financing, government contracting, and technical assistance programs. In addition, SBA serves as a small business advocate, working with other federal agencies to, among other things, reduce regulatory burdens on small businesses. SBA also provides low-interest, long-term loans to individuals and businesses to assist them with disaster recovery through its Disaster Loan Program the only form of SBA assistance not limited to small businesses. Homeowners, renters, businesses of all sizes, and nonprofit organizations can apply for physical disaster loans for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. Small businesses can also apply for economic injury disaster loans to obtain working capital funds until normal operations resume after a disaster declaration. SBA s Disaster Loan Program differs from the Federal Emergency Management Agency s (FEMA) Individuals and Households Program (IHP). For example, a key element of SBA s Disaster Loan Program is that the disaster victim must have repayment ability before a loan can be approved whereas FEMA makes grants under the IHP that do not have to be repaid. Further, FEMA grants are generally for minimal repairs and, unlike SBA disaster loans, are not designed to help restore the home to its predisaster condition. In January 2005, SBA began using DCMS to process all new disaster loan applications. SBA intended for DCMS to help it move toward a paperless processing environment by automating many of the functions staff members had performed manually under its previous system. These functions include both obtaining referral data from FEMA and credit bureau reports, as well as completing and submitting loss verification reports from remote locations. <2. DCMS s Limited Capacity and Difficulties in Other Logistical Areas Impeded SBA s Response to the Gulf Coast Hurricanes> Our July 2006 report identified several significant limitations in DCMS s capacity and other system and procurement deficiencies that likely contributed to the challenges that SBA faced in providing timely assistance to Gulf Coast hurricane victims as follows: First, due to limited capacity, the number of SBA staff who could access DCMS at any one time to process disaster loans was restricted. Without access to DCMS, the ability of SBA staff to process disaster loan applications in an expeditious manner was diminished. Second, SBA experienced instability with DCMS during the initial months following Hurricane Katrina, as users encountered multiple outages and slow response times in completing loan processing tasks. According to SBA officials, the longest period of time DCMS was unavailable to users due to an unscheduled outage was 1 business day. These unscheduled outages and other system-related issues slowed productivity and affected SBA s ability to provide timely disaster assistance. Third, ineffective technical support and contractor oversight contributed to the DCMS instability that SBA staff initially encountered in using the system. Specifically, a DCMS contractor did not monitor the system as required or notify the agency of incidents that could increase system instability. 
Further, the contractor delivered computer hardware for DCMS to SBA that did not meet contract specifications. In the report released in February, we identified other logistical challenges that SBA experienced in providing disaster assistance to Gulf Coast hurricane victims. For example, SBA moved urgently to hire more than 2,000 mostly temporary employees at its Ft. Worth, Texas, disaster loan processing center through newspaper and other advertisements (the facility increased from about 325 staff in August 2005 to 2,500 in January 2006). SBA officials said that ensuring the appropriate training and supervision of this large influx of inexperienced staff proved very difficult. Prior to Hurricane Katrina, SBA had not maintained the status of its disaster reserve corps, which was a group of potential voluntary employees trained in the agency's disaster programs. According to SBA, the reserve corps, which had been instrumental in allowing the agency to provide timely disaster assistance to victims of the September 11, 2001, terrorist attacks, shrank from about 600 in 2001 to fewer than 100 in August 2005. Moreover, SBA faced challenges in obtaining suitable office space to house its expanded workforce. For example, SBA's facility in Ft. Worth had the capacity to house only about 500 staff, whereas the agency hired more than 2,000 mostly temporary staff to process disaster loan applications. While SBA was able to identify another facility in Ft. Worth to house the remaining staff, it had not been configured to serve as a loan processing center. SBA had to upgrade the facility to meet its requirements. Fortunately, in 2005, SBA was also able to quickly reestablish a loan processing facility in Sacramento, California, that had been previously slated for closure under an agency reorganization plan. The facility in Sacramento was available because its lease had not yet expired, and its staff was responsible for processing a significant number of Gulf Coast hurricane-related disaster loan applications. As a result of these and other challenges, SBA developed a large backlog of applications during the initial months following Hurricane Katrina. This backlog peaked at more than 204,000 applications 4 months after Hurricane Katrina. By late May 2006, SBA took about 74 days on average to process disaster loan applications, compared with the agency's goal of within 21 days. <3. Unprecedented Loan Application Volume and SBA's Limited Disaster Planning Contributed to Challenges in Providing Timely Assistance to Hurricane Victims> As we stated in our July 2006 report, the sheer volume of disaster loan applications that SBA received was clearly a major factor contributing to the agency's challenges in providing timely assistance to Gulf Coast hurricane victims. As of late May 2006, SBA had issued 2.1 million loan applications to hurricane victims, which was four times the number of applications issued to victims of the 1994 Northridge, California, earthquake, the previous single largest disaster that the agency had faced. Within 3 months of Hurricane Katrina making landfall, SBA had received 280,000 disaster loan applications, or about 30,000 more applications than the agency received over a period of about 1 year after the Northridge earthquake. However, our two reports on SBA's response to the Gulf Coast hurricanes also found that the absence of a comprehensive and sophisticated planning process contributed to the challenges that the agency faced. 
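The figures above lend themselves to a simple intake-versus-throughput illustration. The sketch below is not drawn from SBA data; the monthly intake and processing numbers are hypothetical and are meant only to show how a backlog on the order of 200,000 applications can accumulate within a few months when receipts outpace processing capacity.

```python
# Back-of-the-envelope sketch of backlog growth when application intake exceeds
# processing capacity. All monthly figures are hypothetical; the report states
# only that the actual backlog peaked at more than 204,000 applications about
# 4 months after Hurricane Katrina.
monthly_intake = [95_000, 105_000, 80_000, 60_000]    # hypothetical applications received per month
monthly_capacity = [20_000, 30_000, 45_000, 60_000]   # hypothetical applications processed per month

backlog = 0
for month, (received, processed) in enumerate(zip(monthly_intake, monthly_capacity), start=1):
    backlog = max(0, backlog + received - processed)
    print(f"Month {month}: backlog of roughly {backlog:,} applications")
```

Until processing capacity catches up with intake, the backlog keeps growing, which is why the capacity assumptions built into DCMS and SBA's staffing plans mattered so much.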
For example, in designing DCMS, SBA used the volume of applications received during the Northridge, California, earthquake and other historical data as the basis for planning the maximum number of concurrent agency users that the system could accommodate. SBA did not consider the likelihood of more severe disaster scenarios, nor did it, in contrast to insurance companies and some government agencies, use the information available from catastrophe models or disaster simulations to enhance its planning process. Since the number of disaster loan applications associated with the Gulf Coast hurricanes greatly exceeded that of the Northridge earthquake, DCMS's user capacity was not sufficient to process the surge in disaster loan applications in a timely manner. Additionally, SBA did not adequately monitor the performance of a DCMS contractor or stress test the system prior to its implementation. In particular, SBA did not verify that the contractor provided the agency with the correct computer hardware specified in its contract. SBA also did not completely stress test DCMS prior to implementation to ensure that the system could operate effectively at maximum capacity. If SBA had verified the equipment as required or conducted complete stress testing of DCMS prior to implementation, its capacity to process Gulf Coast-related disaster loan applications might have been enhanced. In the report we issued in February, we found that SBA did not engage in comprehensive disaster planning for other logistical areas, such as workforce or space acquisition planning, prior to the Gulf Coast hurricanes at either the headquarters or field office levels. For example, SBA had not taken steps to help ensure the availability of additional trained and experienced staff, such as (1) cross-training agency staff not normally involved in disaster assistance to provide backup support or (2) maintaining the status of the disaster reserve corps, as I previously discussed. In addition, SBA had not thoroughly planned for the office space requirements that would be necessary in a disaster the size of the Gulf Coast hurricanes. While SBA had developed some estimates of staffing and other logistical requirements, it largely relied on the expertise of agency staff and previous disaster experiences, none of which reached the magnitude of the Gulf Coast hurricanes, and, as was the case with DCMS planning, did not leverage other planning resources, including information available from disaster simulations or catastrophe models. <4. SBA Has Taken Steps to Better Prepare for Disasters, but Continued Commitment and Actions Are Necessary> In our July 2006 report, we recommended that SBA take several steps to enhance DCMS, such as reassessing the system's capacity in light of the Gulf Coast hurricane experience and reviewing information from disaster simulations and catastrophe models. We also recommended that SBA strengthen its DCMS contractor oversight and further stress test the system. SBA agreed with these recommendations. I note that SBA has completed an effort to expand DCMS's capacity. SBA officials said that DCMS can now support a minimum of 8,000 concurrent agency users as compared with only 1,500 concurrent users for the Gulf Coast hurricanes. Additionally, SBA has awarded a new contract for the project management and information technology support for DCMS. 
The contractor is responsible for a variety of DCMS tasks on SBA's behalf, including technical support, software changes and hardware upgrades, and supporting all information technology operations associated with the system. In the report released in February, we identified other measures that SBA had planned or implemented to better prepare for and respond to future disasters. These steps include appointing a single individual to coordinate the agency's disaster preparedness planning and coordination efforts, enhancing systems to forecast the resource requirements to respond to disasters of varying scenarios, redesigning the process for reviewing applications and disbursing loan proceeds, and enhancing its long-term capacity to acquire adequate facilities in an emergency. Additionally, SBA had planned or initiated steps to help ensure the availability of additional trained and experienced staff in the event of a future disaster. According to SBA officials, these steps include cross-training staff not normally involved in disaster assistance to provide backup support, reaching agreements with private lenders to help process a surge in disaster loan applications, and reestablishing the Disaster Active Reserve Corps, which had reached about 630 individuals as of June 2007. While SBA has taken a variety of steps to enhance its capacity to respond to disasters, I note that these efforts are ongoing and continued commitment and actions by agency managers are necessary. In June 2007, SBA released a plan for responding to disasters. While we have not evaluated the process SBA followed in developing its plan, according to the SBA plan, the agency is incorporating catastrophe models into its disaster planning processes, as we recommended in both reports. For example, the plan states that SBA is using FEMA's catastrophe model, which is referred to as HAZUS, in its disaster planning activities. Further, based on information provided by SBA, the agency is also exploring the use of models developed by private companies to assist in its disaster planning efforts. These efforts to incorporate catastrophe models into the disaster planning process appear to be at an early stage. SBA's plan also anticipates further steps to ensure an adequate workforce is available to respond to a disaster, including training and using 400 non-disaster program office staff to assist in responding to the 2007 hurricane season and beyond. According to SBA officials, about 200 of these staff members will be trained in reviewing loan applications and providing customer service by the end of this month, and the remainder will be trained by this fall. We encourage SBA to actively pursue initiatives that may further enhance its capacity to better respond to future disasters, and we will monitor SBA's efforts to implement our recommendations. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. <5. GAO Contact and Staff Acknowledgments> For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Wesley Phillips, Assistant Director; Triana Bash; Alison Gerry; Marshall Hamlett; Barbara S. Oliver; and Cheri Truett. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Small Business Administration (SBA) helps individuals and businesses recover from disasters such as hurricanes through its Disaster Loan Program. SBA faced an unprecedented demand for disaster loan assistance following the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma), which resulted in extensive property damage and loss of life. In the aftermath of these disasters, concerns were expressed regarding the timeliness of SBA's disaster assistance. GAO initiated work and completed two reports under the Comptroller General's authority to conduct evaluations and determine how well SBA provided victims of the Gulf Coast hurricanes with timely assistance. This testimony, which is based on these two reports, discusses (1) challenges SBA experienced in providing victims of the Gulf Coast hurricanes with timely assistance, (2) factors that contributed to these challenges, and (3) steps SBA has taken since the Gulf Coast hurricanes to enhance its disaster preparedness. GAO visited the Gulf Coast region, reviewed SBA planning documents, and interviewed SBA officials. What GAO Found GAO identified several significant system and logistical challenges that SBA experienced in responding to the Gulf Coast hurricanes that undermined the agency's ability to provide timely disaster assistance to victims. For example, the limited capacity of SBA's automated loan processing system--the Disaster Credit Management System (DCMS)--restricted the number of staff who could access the system at any one time to process disaster loan applications. In addition, SBA staff who could access DCMS initially encountered multiple system outages and slow response times in completing loan processing tasks. SBA also faced challenges training and supervising the thousands of mostly temporary employees the agency hired to process loan applications and obtaining suitable office space for its expanded workforce. As of late May 2006, SBA processed disaster loan applications, on average, in about 74 days compared with its goal of within 21 days. While the large volume of disaster loan applications that SBA received clearly affected its capacity to provide timely disaster assistance to Gulf Coast hurricane victims, GAO's two reports found that the absence of a comprehensive and sophisticated planning process beforehand likely limited the efficiency of the agency's initial response. For example, in designing the capacity of DCMS, SBA primarily relied on historical data such as the number of loan applications that the agency received after the 1994 Northridge, California, earthquake--the most severe disaster that the agency had previously encountered. SBA did not consider disaster scenarios that were more severe or use the information available from disaster simulations (developed by federal agencies) or catastrophe models (used by insurance companies to estimate disaster losses). SBA also did not adequately monitor the performance of a DCMS contractor or completely stress test the system prior to its implementation. Moreover, SBA did not engage in comprehensive disaster planning prior to the Gulf Coast hurricanes for other logistical areas, such as workforce planning or space acquisition, at either the headquarters or field office levels. While SBA has taken steps to enhance its capacity to respond to potential disasters, the process is ongoing and continued commitment and actions by agency managers are necessary. 
As of July 2006, SBA officials said that the agency had completed an expansion of DCMS's user capacity to support a minimum of 8,000 concurrent users as compared with 1,500 concurrent users supported for the Gulf Coast hurricanes. Further, in June 2007, SBA released a disaster plan. While GAO has not evaluated the process SBA followed in developing its plan, consistent with recommendations in GAO reports, the plan states that SBA is incorporating catastrophe models into its planning process, an effort which appears to be at an early stage. GAO encourages SBA to actively pursue the use of catastrophe models and other initiatives that may further enhance its capacity to better respond to future disasters.
<1. Background> The United States has approximately 360 commercial sea and river ports that handle more than $1.3 trillion in cargo annually. A wide variety of goods, including automobiles, grain, and millions of cargo containers, travel through these ports each day. While no two ports are exactly alike, many share certain characteristics, like their size, general proximity to a metropolitan area, the volume of cargo being processed, and connections to complex transportation networks designed to move cargo and commerce as quickly as possible, that make them vulnerable to physical security threats. Entities within the maritime port environment are also vulnerable to cyber- based threats because maritime stakeholders rely on numerous types of information and communications technologies to manage the movement of cargo throughout ports. Examples of these technologies include the following: Terminal operating systems: These are information systems used by terminal operators to, among other things, control container movements and storage. For example, the terminal operating system is to support the logistical management of containers while in the terminal operator s possession, including container movement and storage. To enhance the terminal operator s operations, the system can also be integrated with other systems and technologies, such as financial systems, mobile computing, optical character recognition, and radio frequency identification systems. Industrial control systems: In maritime terminals, industrial control systems facilitate the movement of goods throughout the terminal using conveyor belts or pipelines to various structures (e.g., refineries, processing plants, and storage tanks). Business operations systems: These are information and communications technologies used to help support the business operations of the terminal, such as communicating with customers and preparing invoices and billing documentation. These systems can include e-mail and file servers, enterprise resource planning systems,networking equipment, phones, and fax machines. Access control and monitoring systems: Information and communication technology can also be used to support physical security operations at a port. For example, camera surveillance systems can be connected to information system networks to facilitate remote monitoring of port facilities, and electronically enabled physical access control devices can be used to protect sensitive areas of a port. See figure 1, an interactive graphic, for an overview of the technologies used in the maritime port environment. See appendix III for a printable version. Move mouse over blue system names to get descriptions of the systems. See appendix III for noninteractive version of this graphic. The location of the entity that manages these systems can also vary. Port facility officials we interviewed stated that some information technology systems used by their facilities are managed locally at the ports, while others are managed remotely from locations within and outside the United States. In addition, other types of automated infrastructure are used in the global maritime trade industry. For example, some ports in Europe use automated ground vehicles and stacking cranes to facilitate the movement of cargo throughout the ports. <1.1. 
<1.1. The Nation and Its Ports Face an Evolving Array of Cyber-Based Threats> Like threats affecting other critical infrastructures, threats to the maritime information technology (IT) infrastructure can come from a wide array of sources. For example, advanced persistent threats, in which adversaries possess sophisticated levels of expertise and significant resources to pursue their objectives, pose increasing risk. Threat sources include corrupt employees, criminal groups, hackers, and terrorists. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary or political gain or mischief, among other things. Table 1 describes the sources of cyber-based threats in more detail. These sources of cyber threats may make use of various cyber techniques, or exploits, to adversely affect information and communications networks. Types of exploits include denial-of-service attacks, phishing, Trojan horses, viruses, worms, and attacks on the IT supply chains that support the communications networks. Table 2 describes the types of exploits in more detail. Similar to those in the United States, ports elsewhere in the world also rely on information and communications technology to facilitate their operations, and concerns about the potential impact of cybersecurity threats and vulnerabilities on these operations have been raised. For example, according to a 2011 report issued by the European Network and Information Security Agency, the maritime environment, like other sectors, increasingly relies on information and communications systems to optimize its operations, and the increased dependency on these systems, combined with the operational complexity and multiple stakeholders involved, makes the environment vulnerable to cyber attacks. In addition, Australia's Office of the Inspector of Transport Security reported in June 2012 that a cyber attack is probably the most serious threat to the integrity of offshore oil and gas facilities and land-based production. In addition, a recently reported incident highlights the risk that cybersecurity threats pose to the maritime port environment. Specifically, according to Europol's European Cybercrime Center, a cyber incident was reported in 2013 (and corroborated by the Federal Bureau of Investigation) in which malware was installed on a computer at a foreign port. The reported goal of the attack was to track the movement of shipping containers for smuggling purposes. A criminal group used hackers to break into the terminal operating system to gain access to security and location information that was leveraged to remove the containers from the port. <1.2. Federal Plans and Policies Establish Responsibilities for Securing Cyber-Reliant Critical Infrastructure> Port owners and operators are responsible for the cybersecurity of their operations, and federal plans and policies specify roles and responsibilities for federal agencies to support those efforts. In particular, the National Infrastructure Protection Plan (NIPP), a planning document originally developed pursuant to the Homeland Security Act of 2002 and Homeland Security Presidential Directive 7 (HSPD-7), sets forth a risk management framework to address the risks posed by cyber, human, and physical elements of critical infrastructure. 
It details the roles and responsibilities of DHS in protecting the nation's critical infrastructures; identifies agencies that have lead responsibility for coordinating with the sectors (referred to as sector-specific agencies); and specifies how other federal, state, regional, local, tribal, territorial, and private-sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. In addition, NIPP sets up a framework for operating and sharing information across and between federal and nonfederal stakeholders within each sector that includes the establishment of two types of councils: sector coordinating councils and government coordinating councils. The 2006 and 2009 NIPPs identified the U.S. Coast Guard as the sector-specific agency for the maritime mode of the transportation sector. In this role, the Coast Guard is to coordinate protective programs and resilience strategies for the maritime environment. Under NIPP, each critical infrastructure sector is also to develop a sector-specific plan to detail the application of its risk management framework for the sector. The 2010 Transportation Systems Sector-Specific Plan includes an annex for the maritime mode of transportation. The maritime annex is considered an implementation plan that details the individual characteristics of the maritime mode and how it will apply risk management, including a formal assessment of risk, to protect its systems, assets, people, and goods. In February 2013, the White House issued Presidential Policy Directive 21, which shifted the nation's focus from protecting critical infrastructure against terrorism toward protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents. The directive identified sector-specific agency roles and responsibilities to include, among other things, serving as a day-to-day federal interface for the prioritization and coordination of sector-specific activities. In December 2013, DHS released an updated version of NIPP. The 2013 NIPP reaffirms the role of various coordinating structures (such as sector coordinating councils and government coordinating councils) and integrates cyber and physical security and resilience efforts into an enterprise approach for risk management, among other things. The 2013 NIPP also reiterates the sector-specific agency roles and responsibilities as defined in Presidential Policy Directive 21. In addition, in February 2013 the President signed Executive Order 13636 for improving critical infrastructure cybersecurity. The executive order states that, among other things: the National Institute of Standards and Technology shall lead the development of a cybersecurity framework that will provide technology-neutral guidance; the policy of the federal government is to increase the volume, timeliness, and quality of cyber threat information sharing with the U.S. private sector; agencies with responsibility to regulate the security of critical infrastructure shall consider prioritized actions to promote cybersecurity; and DHS shall identify critical infrastructure where a cybersecurity incident could have a catastrophic effect on public health or safety, economic security, or national security.
<1.3. Federal Laws and Implementing Regulations Establish Security Requirements for the Maritime Sector> The primary laws and regulations that establish DHS's maritime security requirements include the Maritime Transportation Security Act of 2002 (MTSA), the Security and Accountability for Every Port Act of 2006 (SAFE Port Act), and the Coast Guard's implementing regulations for these laws. Enacted in November 2002, MTSA requires a wide range of security improvements for protecting our nation's ports, waterways, and coastal areas. DHS is the lead agency for implementing the act's provisions and relies on its component agencies, including the Coast Guard and FEMA, to help implement the act. The Coast Guard is responsible for security of U.S. maritime interests, including completion of security plans related to geographic areas around ports with input from port stakeholders. These plans are to assist the Coast Guard in the protection against transportation security incidents across the maritime port environment. The Coast Guard has designated a captain of the port within each of 43 geographically defined port areas across the nation who is responsible for overseeing the development of the security plans within his or her respective geographic region. The MTSA implementing regulations, developed by the Coast Guard, require the establishment of area maritime security committees across all port areas. The committees for each of the 43 identified port areas, which are organized by the Coast Guard, consist of key stakeholders who (1) may be affected by security policies and (2) share information and develop port security plans. Members of the committees can include a diverse array of port stakeholders, including federal, state, local, tribal, and territorial law enforcement agencies, as well as private sector entities such as terminal operators, yacht clubs, shipyards, marine exchanges, commercial fishermen, trucking and railroad companies, organized labor, and trade associations. These committees are to identify critical port infrastructure and risks to the port, develop mitigation strategies for these risks, and communicate appropriate security information to port stakeholders. The area maritime security committees, in consultation with applicable stakeholders within their geographic region, are to assist the Coast Guard in developing the port area maritime security plans. Each area maritime security plan is to describe the area and infrastructure covered by the plan, establish area response and recovery protocols for a transportation security incident, and include any other information DHS requires. In addition, during the development of each plan, the Coast Guard is to develop a risk-based security assessment that includes the identification of the critical infrastructure and operations in the port, a threat assessment, and a vulnerability and consequence assessment, among other things. The assessment is also to consider, among other things, physical security of infrastructure and operations of the port, existing security systems available to protect maritime personnel, and radio and telecommunication systems, including computer systems and networks, as well as other areas that may, if damaged, pose a risk to people, infrastructure, or operations within the port. 
Upon completion of the assessment, a written report must be prepared that documents the assessment methodology that was employed, describes each vulnerability identified and the resulting consequences, and provides risk reduction strategies that could be used for continued operations in the port. MTSA and its associated regulations also require port facility owners and operators to develop facility security plans for the purpose of preparing certain maritime facilities, such as container terminals and chemical processing plants, to deter a transportation security incident. The plans are to be updated at least every 5 years and are expected to be consistent with the port s area maritime security plan. The MTSA implementing regulations require that the facility security plans document information on security systems and communications, as well as facility vulnerability and security measures, among other things. The implementing regulations also require port facility owners and operators, as well as their designated facility security officers, to ensure that a facility security assessment is conducted and that, upon completion, a written report is included with the corresponding facility security plan submission for review and approval by the captain of the port. The facility security assessment report must include an analysis that considers measures to protect radio and telecommunications equipment, including computer systems and networks, among other things. Enacted in October 2006, the SAFE Port Act created and codified new programs and initiatives related to the security of the U.S. ports, and amended some of the original provisions of MTSA. For example, the SAFE Port Act required the Coast Guard to establish a port security exercise program. <1.3.1. Port Security Grant Funding> MTSA also codified the Port Security Grant Program, which is to help defray the costs of implementing security measures at domestic ports. According to MTSA, funding is to be directed towards the implementation of area maritime security plans and facility security plans among port authorities, facility operators, and state and local government agencies that are required to provide port security services. Port areas use funding from the grant program to improve port-wide risk management, enhance maritime domain awareness, and improve port recovery and resiliency efforts through developing security plans, purchasing security equipment, and providing security training to employees. FEMA is responsible for designing and operating the administrative mechanisms needed to implement and manage the grant program. Coast Guard officials provide subject matter expertise regarding the maritime industry to FEMA to inform grant award decisions. <2. Federal Stakeholders Have Taken Limited Actions to Address Cybersecurity in the Maritime Port Environment> DHS and the other stakeholders have taken limited steps with respect to maritime cybersecurity. In particular, the Coast Guard did not address cybersecurity threats in a 2012 national-level risk assessment. In addition, area maritime security plans and facility security plans provide limited coverage of cybersecurity considerations. While the Coast Guard helped to establish mechanisms for sharing security-related information, the degree to which these mechanisms were active and facilitated the sharing of cybersecurity-related information varied. 
Also, FEMA had taken steps to address cybersecurity through the Port Security Grant Program, but it has not taken additional steps to help ensure cyber-related risks are effectively addressed. Other federal stakeholders have also taken some actions to address cybersecurity in the maritime environment. According to DHS officials, a primary reason for limited efforts in addressing cyber- related threats in the maritime environment is that the severity of cyber- related threats has only recently been recognized. Until the Coast Guard and FEMA take additional steps to more fully implement their efforts, the maritime port environment remains at risk of not adequately considering cyber-based threats in its mitigation efforts. <2.1. The Coast Guard Did Not Address Cyber-Related Risks in a National-Level Risk Assessment for the Maritime Mode> While the Coast Guard has assessed risks associated with physical threats to port environments, these assessments have not considered risks related to cyber threats. NIPP recommends sector-specific agencies and critical infrastructure partners manage risks from significant threats and hazards to physical and cyber critical infrastructure for their respective sectors through, among other things, the identification and detection of threats and hazards to the nation s critical infrastructure; reduction of vulnerabilities of critical assets, systems, and networks; and mitigation of potential consequences to critical infrastructure if incidents occur. The Coast Guard completes, on a biennial basis, the National Maritime Strategic Risk Assessment, which is to be an assessment of risk within the maritime environment and risk reduction based on the agency s efforts. Its results are to provide a picture of the risk environment, including a description of the types of threats the Coast Guard is expected to encounter within its areas of responsibility, such as ensuring the security of port facilities, over the next 5 to 8 years. The risk assessment is also to be informed by numerous inputs, such as historical incident and performance data, the views of subject matter experts, and risk models, including the Maritime Security Risk Analysis Model. However, the Coast Guard did not address cybersecurity in the fourth and latest iteration of the National Maritime Strategic Risk Assessment, which was issued in 2012. While the assessment contained information regarding threats, vulnerabilities, and the mitigation of potential risks in the maritime environment, none of the information addressed cyber- related risks. The Coast Guard attributed this gap to its limited efforts to develop inputs related to cyber threats, vulnerabilities, and consequences to inform the assessment. Additionally, Coast Guard officials stated that the Maritime Security Risk Analysis Model, a key input to the risk assessment, did not contain information regarding cyber-related threats, vulnerabilities, and potential impacts of cyber incidents. The Coast Guard plans to address this deficiency in the next iteration of the assessment, which is expected to be completed by September 2014, but officials could provide no details on how cybersecurity would be specifically addressed. Without a thorough assessment of cyber-related threats, vulnerabilities, and potential consequences to the maritime subsector, the Coast Guard has limited assurance that the maritime mode is adequately protected against cyber-based threats. 
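The threat, vulnerability, and consequence construct described earlier can accommodate cyber scenarios alongside physical ones. The sketch below is a generic illustration only; it is not the Coast Guard's Maritime Security Risk Analysis Model, and all scenario names and scores are hypothetical values on a 1-to-5 scale.

```python
# Generic threat x vulnerability x consequence scoring, extended to include
# cyber scenarios. Not the Maritime Security Risk Analysis Model; all values
# are hypothetical.
scenarios = [
    # (scenario, threat, vulnerability, consequence), each scored 1-5
    ("Explosive attack on a terminal", 3, 2, 5),
    ("Malware in a terminal operating system", 4, 4, 4),
    ("Compromise of electronic access controls", 3, 4, 3),
]

for name, threat, vulnerability, consequence in sorted(
        scenarios, key=lambda s: s[1] * s[2] * s[3], reverse=True):
    print(f"{name}: relative risk score = {threat * vulnerability * consequence}")
```

Ranking even rough scores like these would give planners a starting point for comparing cyber and physical risks when allocating mitigation resources.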
Assessments of cyber risk would help the Coast Guard and other maritime stakeholders understand the most likely and severe types of cyber-related incidents that could affect their operations and use this information to support planning and resource allocation to mitigate the risk in a coordinated manner. Until the Coast Guard completes a thorough assessment of cyber risks in the maritime environment, maritime stakeholders will be less able to appropriately plan and allocate resources to protect the maritime transportation mode. <2.2. Maritime-Related Security Plans Provide Limited Coverage of Cybersecurity Considerations> MTSA and the SAFE Port Act provide the statutory framework for preventing, protecting against, responding to, and recovering from a transportation security incident in the maritime environment. MTSA requires maritime stakeholders to develop security documentation, including area maritime security plans and facility security plans. These plans, however, do not fully address the cybersecurity of their respective ports and facilities. Area maritime security plans do not fully address cyber-related threats, vulnerabilities, and other considerations. The three area maritime security plans we reviewed from the three high-risk port areas we visited generally contained very limited, if any, information about cyber-related threats and mitigation activities. For example, the three plans reviewed included information about the types of information and communications technology systems that would be used to communicate security information to prevent, manage, and respond to a transportation security incident; the types of information that are considered to be Sensitive Security Information; and how to securely handle and transmit this information to those with a need to know. However, the MTSA-required plans did not identify or address any other potential cyber-related threats directed at or vulnerabilities in the information and communications systems or include cybersecurity measures that port area stakeholders should take to prevent, manage, and respond to cyber-related threats and vulnerabilities. Coast Guard officials we met with agreed that the current set of area maritime security plans, developed in 2009, do not include cybersecurity information. This occurred in part because, as Coast Guard officials stated, the guidance for developing area maritime security plans did not require the inclusion of a cyber component. As a result, port area stakeholders may not be adequately prepared to successfully manage the risk of cyber-related transportation security incidents. Coast Guard officials responsible for developing area maritime security plan guidance stated that the implementing policy and guidance for developing the next set of area maritime security plans includes basic considerations that maritime stakeholders should take into account to address cybersecurity. Currently, the area maritime security plans are formally reviewed and approved on a 5-year cycle, so the next updates will occur in 2014 and will be based on recently issued policy and guidance. Coast Guard officials stated that the policy and guidance for developing the area security plans was updated and promulgated in July 2013 and addressed inclusion of basic cyber components. 
Examples include guidance to identify how the Coast Guard will communicate with port stakeholders in a cyber-degraded environment, the process for reporting a cyber-related breach of security, and direction to take cyber into account when developing a port's all-hazards-compatible Marine Transportation System Recovery Plan. Our review of the guidance confirmed that it instructs preparers to generally consider cybersecurity issues related to information and communication technology systems when developing the plans. However, the guidance does not include any information related to the mitigation of cyber threats. Officials representing both the Coast Guard and nonfederal entities that we met with stated that the current facility security plans also do not contain cybersecurity information. Our review of nine facility security plans from the organizations we met with during site visits confirmed that those plans generally have very limited cybersecurity information. For example, two of the plans had generic references to potential cyber threats, but did not have any specific information on assets that were potentially vulnerable or associated mitigation strategies. According to federal and nonfederal entities, this is because, similar to the guidance for the area security plans, the current guidelines for facility security plans do not explicitly require entities to include cybersecurity information in the plans. Coast Guard officials stated that the next round of facility security plans, to be developed in 2014, will include cybersecurity provisions. Since the plans are currently in development, we were unable to determine the degree to which cybersecurity information will be included. Without the benefit of a national-level cyber-related risk assessment of the maritime infrastructure to inform the development of the plans, the Coast Guard has limited assurance that maritime-related security plans will appropriately address cyber-related threats and vulnerabilities associated with transportation security incidents. <2.3. Information-Sharing Mechanisms Were Active and Shared Cybersecurity Information to Varying Degrees> Although the Coast Guard helped to establish mechanisms for sharing security-related information, the degree to which these mechanisms were active and shared cybersecurity-related information varied. As the DHS agency responsible for maritime critical infrastructure protection-related efforts, the Coast Guard is responsible for establishing public-private partnerships and sharing information with federal and nonfederal entities in the maritime community. This information sharing is to occur through formalized mechanisms called for in federal plans and policy. Specifically, federal policy establishes a framework that includes government coordinating councils composed of federal, state, local, or tribal agencies and encourages the voluntary formation of sector coordinating councils, typically organized, governed by, and made up of nonfederal stakeholders. Further, federal policy also encourages sector-specific agencies to promote the formulation of information sharing and analysis centers (ISAC), which are to serve as voluntary mechanisms formed by owners and operators for gathering, analyzing, and disseminating information on infrastructure threats and vulnerabilities among owners and operators of the sectors and the federal government. The Maritime Modal Government Coordinating Council was established in 2006 to enable interagency coordination on maritime security issues. 
Coast Guard officials stated that the primary membership consisted of representatives from the Departments of Homeland Security, Transportation, Commerce, Defense, and Justice. Coast Guard officials stated that the council has met since 2006, but had only recently begun to discuss cybersecurity issues. For example, at its January 2013 annual meeting, the council discussed the implications of Executive Order 13636 for improving critical infrastructure cybersecurity for the maritime mode. In addition, during the January 2014 meeting, Coast Guard officials discussed efforts related to the development of a risk management framework that integrates cyber and physical security resilience efforts. In 2007, the Maritime Modal Sector Coordinating Council, consisting of owners, operators, and associations from within the sector, was established to enable coordination and information sharing within the sector and with government stakeholders. However, the council disbanded in March 2011 and is no longer active. Coast Guard officials attributed the demise of the council to a 2010 presidential memorandum that precluded the participation of registered lobbyists in advisory committees and other boards and commissions, which includes all Critical Infrastructure Partnership Advisory Council bodies, including the Critical Infrastructure Cross-Sector Council, and all sector coordinating councils, according to DHS. The former chair of the council stated that a majority of the members were registered lobbyists, and, as small trade associations, did not have non-lobbyist staff who could serve in this role. The Coast Guard has attempted to reestablish the sector coordinating council, but has faced challenges in doing so. According to Coast Guard officials, maritime stakeholders that would likely participate in such a council had viewed it as duplicative of statutorily authorized mechanisms, such as the National Maritime Security Advisory Committee and area maritime security committees. As a result, Coast Guard officials stated that there has been little stakeholder interest in reconstituting the council. While Coast Guard officials stated that these committees, in essence, meet the information-sharing requirements of NIPP and, to some extent, may expand the NIPP construct into real world all hazards response and recovery activities, these officials also stated that the committees do not fulfill all the functions of a sector coordinating council. For example, a key function of the council is to provide national-level information sharing and coordination of security-related activities within the sector. In contrast, the activities of the area maritime security committees are generally focused on individual port areas. In addition, while the National Maritime Security Advisory Committee is made up of maritime-related private-sector stakeholders, its primary purpose is to advise and make recommendations to the Secretary of Homeland Security so that the government can take actions related to securing the maritime port environment. Similarly, another primary function of the sector coordinating council may include identifying, developing, and sharing information concerning effective cybersecurity practices, such as cybersecurity working groups, risk assessments, strategies, and plans. Although Coast Guard officials stated that several of the area maritime security committees had addressed cybersecurity in some manner, the committees do not provide a national-level perspective on cybersecurity in the maritime mode. 
Coast Guard officials could not demonstrate that these committees had a national-level focus to improve the maritime port environment's cybersecurity posture. In addition, the Maritime Information Sharing and Analysis Center was to serve as the focal point for gathering and disseminating information regarding maritime threats to interested stakeholders; however, Coast Guard officials could not provide evidence that the body was active or identify the types of cybersecurity information that were shared through it. They stated that they fulfill the role of the ISAC through the use of Homeport, a publicly accessible and secure Internet portal that supports port security functionality for operational use. According to the officials, Homeport serves as the Coast Guard's primary communications tool to support the sharing, collection, and dissemination of information of various classification levels to maritime stakeholders. However, the Coast Guard could not show the extent to which cyber-related information was shared through the portal. Though the Coast Guard has established various mechanisms to coordinate and share information among government entities at a national level and between government and private stakeholders at the local level, it has not facilitated the establishment of a national-level council, as recommended by NIPP. The absence of a national-level sector coordinating council increases the risk that critical infrastructure owners and operators would not have a mechanism through which they can identify, develop, and share information concerning effective cybersecurity practices, such as cybersecurity working groups, risk assessments, strategies, and plans. As a result, the Coast Guard would not be aware of and thus not be able to mitigate cyber-based threats. <2.4. Port Security Grant Program Provides Some Guidance for Cybersecurity Grants but Has Not Taken Additional Steps to Help Ensure Risks Are Addressed> Under the Port Security Grant Program, FEMA has taken steps to address cybersecurity in port areas by identifying enhancing cybersecurity capabilities as a funding priority in fiscal years 2013 and 2014 and by providing general guidance regarding the types of cybersecurity-related proposals eligible for funding. DHS annually produces guidance that provides the funding amounts available under the program for port areas and information about eligible applicants, the application process, and funding priorities for that fiscal year, among other things. Fiscal year 2013 and 2014 guidance stated that DHS identified enhancing cybersecurity capabilities as one of the six priorities for selection criteria for all grant proposals in these funding cycles. FEMA program managers stated that FEMA added projects that aim to enhance cybersecurity capabilities as a funding priority in response to the issuance of Presidential Policy Directive 21 in February 2013. Specifically, the 2013 guidance stated that grant funds may be used to invest in functions that support and enhance port-critical infrastructure and key resources in both physical space and cyberspace under Presidential Policy Directive 21. The 2014 guidance expanded on this by encouraging applicants to propose projects to aid in the implementation of the National Institute of Standards and Technology's cybersecurity framework, established pursuant to Executive Order 13636, and by providing a hyperlink to additional information about the framework.
In addition, the guidance refers applicants to the just-established DHS Critical Infrastructure Cyber Community Voluntary Program for resources to assist critical infrastructure owners and operators in adopting the framework and managing cyber risks. While these actions are positive steps toward addressing cybersecurity in the port environment, FEMA has not consulted individuals with cybersecurity-related subject matter expertise to assist with the review of cybersecurity-related proposals. Program guidance states that grant applications are to undergo a multi-level review for final selection, including a review by a National Review Panel composed of subject matter experts drawn from the Departments of Homeland Security and Transportation. However, according to FEMA program managers, the fiscal year 2013 National Review Panel did not include subject matter experts from DHS cybersecurity and critical infrastructure agencies such as the DHS Office of Cybersecurity and Communications, the DHS Office of Infrastructure Protection, or the Coast Guard's Cyber Command. As a result, the National Review Panel had limited subject matter expertise to evaluate and prioritize cybersecurity-related grant proposals for funding. Specifically, according to FEMA guidance, the proposal review and selection process consists of three levels: an initial review, a field review, and a national-level review. During the initial review, FEMA officials review grant proposals for completeness. During the field review, Coast Guard captains of the port, in coordination with officials of the Department of Transportation's Maritime Administration, review and score proposals according to (1) the degree to which a proposal addresses program goals, including enhancing cybersecurity capabilities, and (2) the degree to which a proposal addresses one of the area maritime security plan priorities (e.g., transportation security incident scenarios), among other factors. The captains of the port provide a prioritized list of eligible projects for funding within each port area to FEMA, which coordinates the national review process. In March 2014, FEMA program managers stated that cybersecurity experts were not involved in the National Review Panel in part because the panel has been downsized in recent years. For the future, the officials stated that FEMA is considering revising the review process to identify cybersecurity proposals early in the process in order to obtain relevant experience and expertise from the Coast Guard and other subject matter experts to inform proposal reviews. However, FEMA has not documented this new process or its procedures for the Coast Guard and FEMA officials at the field and national review levels to follow for the fiscal year 2014 and future cycles. In addition, because the Coast Guard has not conducted a comprehensive risk assessment for the maritime environment that includes cyber-related threats, grant applicants and DHS officials have not been able to use the results of such an assessment to inform their grant proposals, project scoring, and risk-based funding decisions. MTSA states that, in administering the program, national economic and strategic defense concerns based on the most current risk assessments available shall be taken into account. Further, according to MTSA, Port Security Grant Program funding is to be used to address Coast Guard-identified vulnerabilities, among other purposes.
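The field-level scoring and prioritization described above is a procedural exercise, but a brief sketch may help illustrate the mechanics. The Python fragment below is purely illustrative: the weights, scoring scale, class and function names, and example proposals are assumptions introduced here, not FEMA's or the Coast Guard's actual review formula.

```python
# Illustrative sketch only: hypothetical weights and scoring scale, not the
# actual Port Security Grant Program review formula.
from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    priority_alignment: int  # 0-5: degree the proposal addresses program priorities
                             # (e.g., enhancing cybersecurity capabilities)
    amsp_alignment: int      # 0-5: degree the proposal addresses an area maritime
                             # security plan priority (e.g., a TSI scenario)
    cost: float              # requested federal share, in dollars

def field_score(p: Proposal, w_priority: float = 0.6, w_amsp: float = 0.4) -> float:
    """Weighted score a field reviewer might assign (weights are assumptions)."""
    return w_priority * p.priority_alignment + w_amsp * p.amsp_alignment

def prioritize(proposals: list[Proposal]) -> list[Proposal]:
    """Return proposals ranked for submission to the national-level review."""
    return sorted(proposals, key=field_score, reverse=True)

if __name__ == "__main__":
    submitted = [
        Proposal("Terminal gate camera upgrade", 3, 4, 250_000),
        Proposal("Control system network segmentation", 5, 3, 400_000),
    ]
    for p in prioritize(submitted):
        print(f"{p.name}: score {field_score(p):.1f}")
```

In practice, reviewers also weigh factors beyond these two criteria, such as cost effectiveness, so any real prioritization would be more involved than this two-factor weighting.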
FEMA officials stated that the agency considers port risk during the allocation and proposal review stages of the program funding cycle. However, FEMA program managers stated that the risk formula and risk-based analysis that FEMA uses in the allocation and proposal review stages do not assess cyber threats and vulnerabilities. Additionally, during the field-level review, captains of the port score grant proposals according to (1) the degree to which a proposal addresses program goals, including enhancing cybersecurity capabilities, and (2) the degree to which a proposal addresses one of the area maritime security plan priorities (e.g., transportation security incident scenarios), among other factors. However, as Coast Guard officials stated, and our review of area maritime security plans indicated, current area maritime security plans generally contain very limited, if any, information about cyber-related threats. Further, a FEMA Port Security Grant Program section chief stated that he was not aware of a risk assessment for the maritime mode that discusses cyber-related threats, vulnerabilities, and potential impact. As discussed previously, a maritime risk assessment that fully addresses cyber-related threats, vulnerabilities, and consequences has not been conducted. Using the results of such an assessment to inform program guidance could help grant applicants and reviewers more effectively identify and select projects for funding that could enhance the cybersecurity of the nation's maritime cyber infrastructure. Furthermore, FEMA has not developed or implemented outcome measures to evaluate the effectiveness of the Port Security Grant Program in achieving program goals, including enhancing cybersecurity capabilities. As we reported in November 2011, FEMA had not evaluated the effectiveness of the Port Security Grant Program in strengthening critical maritime infrastructure because it had not implemented measures to track progress toward achieving program goals. Therefore, we recommended that FEMA, in collaboration with the Coast Guard, develop time frames and related milestones for implementing performance measures to monitor the effectiveness of the program. In response, in February 2014, FEMA program managers stated that the agency developed and implemented four management and administrative measures in 2012 and two performance measures to track the amount of funds invested in building and sustaining capabilities in 2013. According to a FEMA program manager, FEMA did not design the two performance measures to evaluate the effectiveness of the program in addressing individual program goals, such as enhancing cybersecurity capabilities, but to gauge the program's effectiveness in reducing overall maritime risk in a port area based on program funding. While these measures can help improve FEMA's management of the program by tracking how funds are invested, they do not measure program outcomes. In addition, in February 2012, we found that FEMA had efforts under way to develop outcome measures for the four national preparedness grant programs, including the Port Security Grant Program, but that it had not completed these efforts. Therefore, we recommended that FEMA revise its plan in order to guide the timely completion of ongoing efforts to develop and implement outcome-based performance measures for all four grant programs.
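The distinction drawn above between tracking funds invested and measuring outcomes can be made concrete with a small, entirely hypothetical sketch. The port names, dollar amounts, capability scores, and field names below are illustrative assumptions, not data from FEMA systems or an actual agency measure.

```python
# Hypothetical illustration of the difference between an output-style measure
# (funds invested) and an outcome-style measure (change in assessed capability).
# None of these figures or field names come from FEMA systems.

grants = [
    {"port": "Port A", "cyber_award": 400_000, "capability_before": 2.1, "capability_after": 3.0},
    {"port": "Port B", "cyber_award": 150_000, "capability_before": 1.8, "capability_after": 2.0},
]

# Output-style measure: how much money went to cybersecurity projects.
total_invested = sum(g["cyber_award"] for g in grants)

# Outcome-style measure: average improvement in an assessed cybersecurity
# capability score for ports that received funding.
avg_improvement = sum(g["capability_after"] - g["capability_before"] for g in grants) / len(grants)

print(f"Cybersecurity funds invested: ${total_invested:,}")
print(f"Average capability score improvement: {avg_improvement:.2f}")
```

An output-style figure such as total dollars awarded shows how much was spent; an outcome-style figure such as the change in an assessed capability score speaks to whether the spending improved security, which is the kind of measure the recommendation envisions.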
In January 2014, FEMA officials stated that they believe that the implementation of project-based grant application tracking and reporting functions within the Non-Disaster Grant Management System will address our February 2012 recommendation that the agency develop outcome measures to determine the effectiveness of the Port Security Grant Program. However, the officials did not provide details about how these functions will address the recommendation. While the development of the Non-Disaster Grant Management System is a positive step toward improving the management and administration of preparedness grants, FEMA officials stated that the deployment of these system functions has been delayed due to budget reductions, and the time frame for building the project-based applications and reporting functions is fiscal year 2016. Therefore, it is too early to determine how FEMA will use the system to evaluate the effectiveness of the Port Security Grant Program. Until FEMA develops outcome measures to evaluate the effectiveness of the program in meeting program goals, it cannot provide reasonable assurance that funds invested in port security grants, including those intended to enhance cybersecurity capabilities, are strengthening critical maritime infrastructure including cyber-based infrastructure against risks associated with potential terrorist attacks and other incidents. <2.5. Other Federal Agencies Have Taken Actions to Address Cybersecurity in the Maritime Port Environment> In addition to DHS, the 2010 Transportation Systems Sector-Specific Plan identified the Departments of Commerce, Defense, Justice, and Transportation as members of the Maritime Modal Government Coordinating Council. Many agencies, including others within DHS, had taken some actions with respect to the cybersecurity of the maritime subsector. For more details on these actions, see appendix II. <3. Conclusions> Disruptions in the operations of our nation s ports, which facilitate the import and export of over $1.3 trillion worth of goods annually, could be devastating to the national economy. While the impact of a physical event (natural or manmade) appears to have been better understood and addressed by maritime stakeholders than cyber-based events, the growing reliance on information and communications technology suggests the need for greater attention to potential cyber-based threats. Within the roles prescribed for them by federal law, plans, and policy, the Coast Guard and FEMA have begun to take action. In particular, the Coast Guard has taken action to address cyber-based threats in its guidance for required area and facility plans and has started to leverage existing information-sharing mechanisms. However, until a comprehensive risk assessment that includes cyber-based threats, vulnerabilities, and consequences of an incident is completed and used to inform the development of guidance and plans, the maritime port sector remains at risk of not adequately considering cyber-based risks in its mitigation efforts. In addition, the maritime sector coordinating council is currently defunct, which may limit efforts to share important information on threats affecting ports and facilities on a national level. Further, FEMA has taken actions to enhance cybersecurity through the Port Security Grant Program by making projects aimed at enhancing cybersecurity one of its funding priorities. 
However, until it develops procedures to instruct grant reviewers to consult cybersecurity-related subject matter experts and uses the results of a risk assessment that identifies any cyber-related threats and vulnerabilities to inform its funding guidance, FEMA will be limited in its ability to ensure that the program is effectively addressing cyber-related risks in the maritime environment. <4. Recommendations for Executive Action> To enhance the cybersecurity of critical infrastructure in the maritime sector, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following actions:
work with federal and nonfederal partners to ensure that the maritime risk assessment includes cyber-related threats, vulnerabilities, and potential consequences;
use the results of the risk assessment to inform how guidance for area maritime security plans, facility security plans, and other security-related planning should address cyber-related risk for the maritime sector; and
work with federal and nonfederal stakeholders to determine if the Maritime Modal Sector Coordinating Council should be reestablished to better facilitate stakeholder coordination and information sharing across the maritime environment at the national level.
To help ensure the effective use of Port Security Grant Program funds to support the program's stated mission of addressing vulnerabilities in the maritime port environment, we recommend that the Secretary of Homeland Security direct the FEMA Administrator to take the following actions:
in coordination with the Coast Guard, develop procedures for officials at the field review level (i.e., captains of the port) and national review level (i.e., the National Review Panel and FEMA) to consult cybersecurity subject matter experts from the Coast Guard and other relevant DHS components, if applicable, during the review of cybersecurity grant proposals for funding; and
in coordination with the Coast Guard, use any information on cyber-related threats, vulnerabilities, and consequences identified in the maritime risk assessment to inform future versions of funding guidance for grant applicants and reviews at the field and national levels.
<5. Agency Comments and Our Evaluation> We provided a draft of this report to the Departments of Homeland Security, Commerce, Defense, Justice, and Transportation for their review and comment. DHS provided written comments on our report (reprinted in app. IV). In its comments, DHS concurred with our recommendations. In addition, the department stated that the Coast Guard is working with a variety of partners to determine how cyber-related threats, vulnerabilities, and potential consequences are to be addressed in the maritime risk assessment, which the Coast Guard will use to inform security planning efforts (including area maritime security plans and facility security plans). DHS also stated that the Coast Guard will continue to promote the re-establishment of a sector coordinating council, and will also continue to use existing information-sharing mechanisms. However, DHS did not provide an estimated completion date for these efforts.
In addition, DHS stated that FEMA will work with the Coast Guard to develop the recommended cyber consultation procedures for the Port Security Grant Program by the end of October 2014, and will use any information on cyber-related threats, vulnerabilities, and consequences from the maritime risk assessment in future program guidance, which is scheduled for publication in the first half of fiscal year 2015. Officials from DHS and the Department of Commerce also provided technical comments via e-mail. We incorporated these comments where appropriate. Officials from the Departments of Defense, Justice, and Transportation stated that they had no comments. We are sending copies of this report to interested congressional committees; the Secretaries of Commerce, Defense, Homeland Security, and Transportation; the Attorney General of the United States; the Director of the Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected] or Stephen L. Caldwell at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objective, Scope, and Methodology
Our objective was to identify the extent to which the Department of Homeland Security (DHS) and other stakeholders have taken steps to address cybersecurity in the maritime port environment. The scope of our audit focused on federal agencies that have a role or responsibilities in the security of the maritime port environment, to include port facilities. We focused on the information and communications technology used to operate port facilities. We did not include other aspects of the maritime environment such as vessels, off-shore platforms, inland waterways, intermodal connections, systems used to manage water-based portions of the port, and federally managed information and communication technology. To identify federal agency roles and select the organizations responsible for addressing cybersecurity in the maritime port environment, we reviewed relevant federal law, regulations, policy, and critical infrastructure protection-related strategies, including the following: Homeland Security Act of 2002; Maritime Transportation Security Act of 2002; Homeland Security Presidential Directive 7, Critical Infrastructure Identification, Prioritization, and Protection, December 2003; Security and Accountability for Every Port Act of 2006; 2006 National Infrastructure Protection Plan; 2009 National Infrastructure Protection Plan; 2013 National Infrastructure Protection Plan; 2010 Transportation Systems Sector-Specific Plan; Presidential Policy Directive 21, Critical Infrastructure Security and Resilience, February 12, 2013; Executive Order 13636, Improving Critical Infrastructure Cybersecurity, February 12, 2013; and Title 33, Code of Federal Regulations, Chapter 1, Subchapter H. We analyzed these documents to identify federal agencies responsible for taking steps to address cybersecurity in the maritime environment, such as developing a risk assessment and information-sharing mechanisms, guiding the development of security plans in response to legal requirements, and providing financial assistance to support maritime port security activities. Based on our analysis, we determined that the U.S.
Coast Guard (Coast Guard) and Federal Emergency Management Agency (FEMA), within DHS, were relevant to our objective. We also included the Departments of Transportation, Defense, Commerce, and Justice as they were identified as members of the Maritime Modal Government Coordinating Council in the 2010 Transportation Systems Sector-Specific Plan. We also included other DHS components, such as U.S. Customs and Border Protection, National Protection and Programs Directorate, Transportation Security Administration, and United States Secret Service, based on our prior cybersecurity and port security work and information learned from interviews during our engagement. To determine the extent to which the Coast Guard and FEMA have taken steps to address cybersecurity in the maritime port environment, we collected and analyzed relevant guidance and reports. For example, we analyzed the Coast Guard's 2012 National Maritime Strategic Risk Assessment, Coast Guard guidance for developing area maritime security plans, the 2012 Annual Progress Report for the National Strategy for Transportation Security, the Transportation Sector Security Risk Assessment, and FEMA guidance for applying for and reviewing proposals under the Port Security Grant Program. We also examined our November 2011 and February 2012 reports related to the Port Security Grant Program and our past work related to FEMA grants management for previously identified issues and context. In addition, we gathered and analyzed documents and interviewed officials from DHS's Coast Guard, FEMA, U.S. Customs and Border Protection, Office of Cybersecurity and Communications, Office of Infrastructure Protection, Transportation Security Administration, and United States Secret Service; the Department of Commerce's National Oceanic and Atmospheric Administration; the Department of Defense's Transportation Command; the Department of Justice's Federal Bureau of Investigation; and the Department of Transportation's Maritime Administration, Office of Intelligence, Security and Emergency Response, and the Volpe Center. To gain an understanding of how information and communication technology is used in the maritime port environment and to better understand federal interactions with nonfederal entities on cybersecurity issues, we conducted site visits to three port areas: Houston, Texas; Los Angeles/Long Beach, California; and New Orleans, Louisiana. These ports were selected in a non-generalizable manner based on their identification as both high-risk (Group I) ports by the Port Security Grant Program, and as national leaders in calls by specific types of vessels (oil and natural gas, containers, and dry bulk) in the Department of Transportation Maritime Administration's March 2013 report, Vessel Calls Snapshot, 2011. For those port areas, we analyzed the appropriate area maritime security plans for any cybersecurity-related information. We also randomly selected facility owners from Coast Guard data on those facilities required to prepare facility security plans under the Maritime Transportation Security Act's implementing regulations. For those facilities whose officials agreed to participate in our review, we interviewed staff familiar with Coast Guard facility security requirements or information technology security, and analyzed their facility security plans for any cybersecurity-related items. We also included additional nonfederal entities such as port authorities and facilities as part of our review.
The results of our analysis of area maritime security plans and facility security plans at the selected ports cannot be projected to other facilities at the port areas we visited or other port areas in the country. We also met with other port stakeholders, such as port authorities and an oil storage and transportation facility. We met with the following organizations:
APM Terminals
Axiall
Cargill
Domino Sugar Company
Harris County, Texas, Information Technology Center
Louisiana Offshore Oil Port
Magellan Terminals Holdings, L.P.
Metropolitan Stevedoring
Port of Houston Authority
Port of Long Beach
Port of Los Angeles
Port of New Orleans
SSA Marine
St. Bernard Port
Trans Pacific Container Service
We determined that information provided by the federal and nonfederal entities, such as the type of information contained within the area maritime security plans and facility security plans, was sufficiently reliable for the purposes of our review. To arrive at this assessment, we corroborated the information by comparing the plans with statements from relevant agency officials. We conducted this performance audit from April 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.
Appendix II: Additional Federal Maritime Cybersecurity Actions
This appendix summarizes cybersecurity-related actions, if any, taken by other agencies of the departments identified as members of the Government Coordinating Council of the Maritime Mode related to the nonfederally owned and operated maritime port environment. <6. The Department of Homeland Security> <6.1. Integrated Task Force> Under Executive Order 13636, the Secretary of Homeland Security is to use a risk-based approach to identify critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security. The Secretary is also to apply consistent, objective criteria in identifying such critical infrastructure. Sector-specific agencies were to provide the Secretary with information necessary to identify such critical infrastructure. To implement Executive Order 13636, DHS established an Integrated Task Force to, among other things, lead DHS implementation and coordinate interagency and public- and private-sector efforts. One of the eight working groups that made up the task force was assigned the responsibility for identifying cyber-dependent infrastructure. Officials from DHS's Office of Infrastructure Protection who were responsible for the working group stated that, using the defined methodology, the task force examined the maritime mode as part of its efforts. <6.2. National Protection and Programs Directorate> Office of Cybersecurity and Communications The Office of Cybersecurity and Communications, among other things, is responsible for collaborating with public, private, and international partners to ensure the security and continuity of the nation's cyber and communications infrastructures in the event of terrorist attacks, natural disasters, and catastrophic incidents.
One division of the Office of Cybersecurity and Communications (Stakeholder Engagement and Cyber Infrastructure Resilience) offers to partner with critical infrastructure partners including those in the maritime port environment to conduct cyber resilience reviews. These reviews are voluntary and are based on the CERT Resilience Management Model, a process improvement model for managing operational resilience. They are facilitated by field-based Cyber Security Advisors. The primary goal of this program is to evaluate how critical infrastructure and key resource providers manage the cybersecurity of significant information. In addition, the Industrial Control Systems Cyber Emergency Response Team a branch of the National Cybersecurity and Communications Integration Center division within the Office of Cybersecurity and Communications directed the development of the Cyber Security Evaluation Tool, which is a self-assessment tool that evaluates the cybersecurity of an automated industrial control or business system using a hybrid risk- and standards-based approach, and provides relevant recommendations for improvement. We observed one maritime port entity engage with Office of Cybersecurity and Communications staff members to conduct a cyber resilience review. According to data provided by Office of Cybersecurity and Communications officials, additional reviews have been conducted with maritime port entities. In addition, three maritime port entities informed us they conducted a self-assessment using the Cyber Security Evaluation Tool. The Office of Infrastructure Protection is responsible for working with public- and private-sector critical infrastructure partners and leads the coordinated national effort to mitigate risk to the nation s critical infrastructure. Among other things, the Office of Infrastructure Protection has the overall responsibility for coordinating implementation of NIPP across 16 critical infrastructure sectors and overseeing the development of 16 sector-specific plans. Through its Protective Security Coordination Division, the Office of Infrastructure Protection also has a network of field-based protective security advisors, who are security experts that serve as a direct link between the department and critical infrastructure partners in the field. Two nonfederal port stakeholders identified protective security advisors as a resource for assistance in cybersecurity issues. Officials from Infrastructure Protection s Strategy and Policy Office supported the Coast Guard in developing the sector-specific plan and annual report for the maritime mode. <6.3. U.S. Customs and Border Protection> U.S. Customs and Border Protection (CBP) is responsible for securing America s borders. This includes ensuring that all cargo enters the United States legally, safely, and efficiently through official sea ports of entry; preventing the illegal entry of contraband into the country at and between ports of entry; and enforcing trade, tariff, and intellectual property laws and regulations. In addition, CBP developed and administered the Customs-Trade Partnership Against Terrorism program, a voluntary program where officials work in partnership with private companies to review the security of their international supply chains and improve the security of their shipments to the United States. 
Under this program, CBP issued minimum security criteria for U.S.-based marine port authority and terminal operators that include information technology security practices (specifically, password protection, establishment of information technology security policies, employee training on information technology security, and developing a system to identify information technology abuse that includes improper access). <6.4. United States Secret Service> Among other things, the Secret Service protects the President, Vice President, visiting heads of state and government, and National Special Security Events; safeguards U.S. payment and financial systems; and investigates cyber/electronic crimes. In support of these missions, the Secret Service has several programs that have touched on maritime port cybersecurity. The Electronic Crimes Task Force initiative is a network of task forces established in the USA PATRIOT Act for the purpose of preventing, detecting, and investigating various forms of electronic crimes, including potential terrorist attacks against critical infrastructure and financial payments systems. The Secret Service also conducts Critical Systems Protection advances for protective visits. This program identifies, assesses, and mitigates any risks posed by information systems to persons and facilities protected by the Secret Service. It also conducts protective advances to identify, assess, and mitigate any issues identified with networks or systems that could adversely affect the physical security plan or cause physical harm to a protectee. The advances support all of the Secret Service s protective detail offices by implementing network monitoring, and applying cyber intelligence analysis. Additionally, the program supports full spectrum protective visits, events, or venues domestically, in foreign countries, special events, and national special security events. In addition, Secret Service personnel in Los Angeles have engaged with maritime port stakeholders in Los Angeles and Long Beach in several ways. For example, Secret Service staff gave a general cybersecurity threat presentation to port stakeholders, though no specific cyber threats to the maritime port environment were discussed. In addition, Secret Service was requested by a local governmental entity to assist in assessing the cyber aspects of critical infrastructure. Secret Service officials stated that they are still very early on in this process and are currently working with the entity to identify the critical assets/components of the cyber infrastructure. The process is still in the information-gathering phase, and officials do not expect to release any sort of summary product until mid-2014 at the earliest. Officials stated that the end product would detail any potential vulnerabilities identified during the assessment and make recommendations for mitigation that the stakeholder could implement if it chooses. Secret Service officials also stated that an evaluation was conducted under the Critical Systems Protection Program with a maritime port stakeholder in the Houston area, but did not provide details regarding this evaluation. <6.5. Transportation Security Administration> The Transportation Security Administration (TSA) is the former lead sector-specific agency for the transportation systems sector. TSA currently co-leads the sector with the Department of Transportation and Coast Guard, and it supports, as needed, the Coast Guard s lead for maritime security. 
TSA also uses the Transportation Sector Security Risk Assessment to determine relative risks for the transportation modes. However, according to TSA officials, Coast Guard and TSA agreed in 2009 that the maritime modal risk assessment would be addressed in a separate report. TSA also established the Transportation Systems Sector Cybersecurity Working Group, whose meetings (under the Critical Infrastructure Partnership Advisory Council framework) have discussed maritime cybersecurity issues. <7. The Department of Commerce> Although components of the Department of Commerce do have maritime- related efforts under way, none are directly related to the cybersecurity of the port environment. Further, the National Institute of Standards and Technology (NIST) has not developed any specific standards related to the cybersecurity of maritime facilities within our scope. NIST has started to work with private sector stakeholders from different critical infrastructure sectors to develop a voluntary framework for reducing cyber risks to critical infrastructure, as directed by Executive Order 13636. It is developing this voluntary framework in accordance with its mission to promote U.S. innovation and industrial competitiveness. The framework has been shaped through ongoing public engagement. According to officials, more than 3,000 people representing diverse stakeholders in industry, academia, and government have participated in the framework s development through attendance at a series of public workshops and by providing comments on drafts. On February 12, 2014, NIST released the cybersecurity framework. Though representatives from numerous critical infrastructure sectors provided comments on the draft framework, only one maritime entity provided feedback, in October 2013. The entity stated that the framework provided a minimum level of cybersecurity information, but may not provide sufficient guidance to all relevant parties who choose to implement its provisions and suggestions. Additionally, the entity stated that it found the framework to be technical in nature and that it does not communicate at a level helpful for business executives. Department of Commerce officials stated that NIST worked to address these comments in the final version of the framework. <8. The Department of Transportation> The mission of the Department of Transportation is to serve the United States by ensuring a fast, safe, efficient, accessible, and convenient transportation system that meets our vital national interest and enhances the quality of life of the American people. The department is organized into several administrations, including the Research and Innovative Technology Administration, which coordinates the department s research programs and is charged with advancing the deployment of cross-cutting technologies to improve the nation s transportation networks. The administration includes the Volpe Center, which partners with public and private organizations to assess the needs of the transportation community, evaluate research and development endeavors, assist in the deployment of state-of-the-art transportation technologies, and inform decision- and policy-making through analyses. Volpe is funded by sponsoring organizations. In 2011, Volpe entered into a 2-year agreement with DHS s Control Systems Security Program to evaluate the use of control systems in the transportation sector, including the maritime mode. 
Under this agreement, Volpe and DHS developed a road map to secure control systems in the transportation sector in August 2012. The document discussed the use of industrial control systems in the maritime mode, and described high-level threats. It also established several goals for the entire transportation sector with near- (0-2 years), mid- (2-5 years), and long-term (5-10 years) objectives, metrics, and milestones. Volpe and DHS also developed a cybersecurity standards strategy for transportation industrial control systems, which identified tasks for developing standards for port industrial control systems starting in 2015. Volpe also conducted outreach to various maritime entities. According to Volpe officials, this study was conducted mostly at international port facilities and vessels (though U.S. ports were visited under a different program). The officials stated that the agreement was canceled due to funding reductions resulting from the recent budget sequestration. DHS officials gave two reasons why funding for Volpe outreach was terminated after sequestration. First, as part of a reorganization of the Office of Cybersecurity and Communications, there is a heightened focus on operational activities, and DHS characterized Volpe s assistance under the agreement as outreach and awareness. Second, the officials stated that because the demand for incident management and response continues to grow, a decision was made to stop funding Volpe to meet spending cuts resulting from sequestration and increase funding for cyber incident response for critical infrastructure asset owners and operators who use industrial control systems. <9. The Department of Justice> Although components of the Department of Justice have some efforts under way, most of those efforts occur at the port level. Specifically, the department s Federal Bureau of Investigation is involved in several initiatives at the local level, focused on interfacing with key port stakeholders as well as relevant entities with state and local governments. These initiatives are largely focused on passing threat information to partners. Additionally, the Bureau s Infragard program provides a forum to share threat information with representatives from all critical infrastructure sectors, including maritime. <10. The Department of Defense> While the Department of Defense has recognized the significance of cyber-related threats to maritime facilities, the department has no explicit role in the protection of critical infrastructure within the maritime sub- sector. Officials also said that the department had not supported maritime mode stakeholders regarding cybersecurity. In addition, though the Department of Defense was identified as a member of the Maritime Modal Government Coordinating Council in the 2010 Transportation Systems Sector-Specific Plan, the department was not listed as a participant in the 2013 or 2014 council meetings. Further, DHS, including the U.S. Coast Guard, had not requested support from Defense on cybersecurity of commercial maritime port operations and facilities. Appendix III: Full Text for Figure 1 on Examples of Technologies Used in Maritime Port Environments Figure 2 provides an overview of the technologies used in the maritime port environment (see interactive fig. 1) and includes the figure s rollover information. Appendix IV: Comments from the Department of Homeland Security Appendix V: GAO Contacts and Staff Acknowledgments <11. GAO Contacts> <12. 
Staff Acknowledgments> In addition to the contacts named above, key contributions to this report were made by Michael W. Gilmore (Assistant Director), Christopher Conrad (Assistant Director), Bradley W. Becker, Jennifer L. Bryant, Franklin D. Jackson, Tracey L. King, Kush K. Malhotra, Lee McCracken, Umesh Thakkar, and Adam Vodraska.
Related GAO Products
National Preparedness: FEMA Has Made Progress, but Additional Steps Are Needed to Improve Grant Management and Assess Capabilities. GAO-13-637T. Washington, D.C.: June 25, 2013.
Communications Networks: Outcome-Based Measures Would Assist DHS in Assessing Effectiveness of Cybersecurity Efforts. GAO-13-275. Washington, D.C.: April 3, 2013.
High Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013.
Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. Washington, D.C.: September 18, 2012.
Maritime Security: Progress and Challenges 10 Years after the Maritime Transportation Security Act. GAO-12-1009T. Washington, D.C.: September 11, 2012.
Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012.
IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012.
Homeland Security: DHS Needs Better Project Information and Coordination among Four Overlapping Grant Programs. GAO-12-303. Washington, D.C.: February 28, 2012.
Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011.
Port Security Grant Program: Risk Model, Grant Management, and Effectiveness Measures Could Be Strengthened. GAO-12-47. Washington, D.C.: November 17, 2011.
Coast Guard: Security Risk Model Meets DHS Criteria, but More Training Could Enhance Its Use for Managing Programs and Operations. GAO-12-14. Washington, D.C.: November 17, 2011.
Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011.
Cybersecurity: Continued Attention Needed to Protect Our Nation's Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011.
Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010.
Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010.
Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009.
Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008.
Homeland Security: DHS Improved its Risk-Based Grant Programs' Allocation and Management Methods, But Measuring Programs' Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008.
Maritime Security: Coast Guard Inspections Identify and Correct Facility Deficiencies, but More Analysis Needed of Program's Staffing, Practices, and Data. GAO-08-12. Washington, D.C.: February 14, 2008.
Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
Why GAO Did This Study U.S. maritime ports handle more than $1.3 trillion in cargo annually. The operations of these ports are supported by information and communication systems, which are susceptible to cyber-related threats. Failures in these systems could degrade or interrupt operations at ports, including the flow of commerce. Federal agencies—in particular DHS—and industry stakeholders have specific roles in protecting maritime facilities and ports from physical and cyber threats. GAO's objective was to identify the extent to which DHS and other stakeholders have taken steps to address cybersecurity in the maritime port environment. GAO examined relevant laws and regulations; analyzed federal cybersecurity-related policies and plans; observed operations at three U.S. ports selected based on being a high-risk port and a leader in calls by vessel type, e.g. container; and interviewed federal and nonfederal officials. What GAO Found Actions taken by the Department of Homeland Security (DHS) and two of its component agencies, the U.S. Coast Guard and Federal Emergency Management Agency (FEMA), as well as other federal agencies, to address cybersecurity in the maritime port environment have been limited. While the Coast Guard initiated a number of activities and coordinating strategies to improve physical security in specific ports, it has not conducted a risk assessment that fully addresses cyber-related threats, vulnerabilities, and consequences. Coast Guard officials stated that they intend to conduct such an assessment in the future, but did not provide details to show how it would address cybersecurity. Until the Coast Guard completes a thorough assessment of cyber risks in the maritime environment, the ability of stakeholders to appropriately plan and allocate resources to protect ports and other maritime facilities will be limited. Maritime security plans required by law and regulation generally did not identify or address potential cyber-related threats or vulnerabilities. This was because the guidance issued by Coast Guard for developing these plans did not require cyber elements to be addressed. Officials stated that guidance for the next set of updated plans, due for update in 2014, will include cybersecurity requirements. However, in the absence of a comprehensive risk assessment, the revised guidance may not adequately address cyber-related risks to the maritime environment. The degree to which information-sharing mechanisms (e.g., councils) were active and shared cybersecurity-related information varied. Specifically, the Coast Guard established a government coordinating council to share information among government entities, but it is unclear to what extent this body has shared information related to cybersecurity. In addition, a sector coordinating council for sharing information among nonfederal stakeholders is no longer active, and the Coast Guard has not convinced stakeholders to reestablish it. Until the Coast Guard improves these mechanisms, maritime stakeholders in different locations are at greater risk of not being aware of, and thus not mitigating, cyber-based threats. Under a program to provide security-related grants to ports, FEMA identified enhancing cybersecurity capabilities as a funding priority for the first time in fiscal year 2013 and has provided guidance for cybersecurity-related proposals. 
However, the agency has not consulted cybersecurity-related subject matter experts to inform the multi-level review of cyber-related proposals—partly because FEMA has downsized the expert panel that reviews grants. Also, because the Coast Guard has not assessed cyber-related risks in the maritime risk assessment, grant applicants and FEMA have not been able to use this information to inform funding proposals and decisions. As a result, FEMA is limited in its ability to ensure that the program is effectively addressing cyber-related risks in the maritime environment. What GAO Recommends GAO recommends that DHS direct the Coast Guard to (1) assess cyber-related risks, (2) use this assessment to inform maritime security guidance, and (3) determine whether the sector coordinating council should be reestablished. DHS should also direct FEMA to (1) develop procedures to consult DHS cybersecurity experts for assistance in reviewing grant proposals and (2) use the results of the cyber-risk assessment to inform its grant guidance. DHS concurred with GAO's recommendations.
<1. Introduction> Social Security forms the foundation for our retirement income system. In 1998, it provided approximately $264 billion in annual benefits to 31 million workers and their dependents. However, the Social Security program is facing significant future financial challenges as a result of profound demographic changes, including the aging of the baby boom generation and increased life expectancy. In response, different groups and individuals have advanced numerous proposals that have called for the creation of some sort of mandatory or voluntary individual accounts. To better understand the potential implications of individual accounts, the Chairman of the House Committee on Ways and Means asked GAO to determine how individual accounts could affect private capital and annuities markets as well as national savings, the potential risks and returns to individuals, and the disclosure and educational information needed for public understanding and use of an individual account investment program. <1.1. Social Security Has a Financing Problem> The Social Security program is not in long-term actuarial balance. That is, Social Security revenues are not expected to be sufficient to pay all benefit obligations from 1999 to 2073. Without a change in the current program, excess cash revenues from payroll and income taxes are expected to begin to decline substantially around 2008. Based on the Social Security Trustees latest best estimate projections, in 2014 the combined OASDI program will experience a negative cash flow that will accelerate in subsequent years. In addition, the combined OASDI trust funds are expected to be exhausted in 2034, and the estimated annual tax income will be enough to pay approximately 70 percent of benefits. Every year, Social Security s Board of Trustees estimates the financial status of the program for the next 75 years using three sets of economic and demographic assumptions about the future. According to the Trustees intermediate set of these assumptions (or best estimate), the nation s Social Security program will face both solvency and sustainability problems in the years ahead unless corrective actions are taken. Over the next 75 years, Social Security s total shortfall is projected to be about $3 trillion in 1998 dollars. Social Security s long-term financing problem is primarily caused by the aging of the U.S. population. As the baby boom generation retires, labor force growth is expected to slow dramatically. Beyond 2030, the overall population is expected to continue aging due to relatively low birth rates and increasing longevity. These demographic trends will require substantial changes in the Social Security benefits structure and/or revenues (i.e., taxes and/or investment returns). Without such changes, current Social Security tax revenues are expected to be insufficient to cover benefit payments in about 2014, less than 15 years from now. These trends in Social Security s finances will place a significant burden on future workers and the economy. Without major policy changes, the relatively smaller workforce of tomorrow will bear the brunt of financing Social Security s cash deficit. In addition, the future workforce also would likely be affected by any reduction in Social Security benefits or increased payroll taxes needed to resolve the program s long-term financing shortfall. As a result, without timely actions, certain generations could face the twin blows of higher burdens and reduced benefits. <1.2. 
Individual Accounts Proposed to Help Solve Social Security s Financing Problem> Proposals have been advanced by different groups to reform Social Security through individual accounts. Such proposals basically also try to restore the Social Security program s solvency and conserve its sustainability. In its report to the Social Security Commissioner, the 1994- 1996 Advisory Council on Social Security offered three alternative reform proposals, two of which would create individual accounts. The remaining proposal called for having the government invest the trust fund in financial assets, such as corporate equities. Numerous other proposals, also calling for individual accounts, have since been put forth by various organizations. Currently, therefore, there are a wide array of proposals that rely on some form of individual accounts. These proposals have in common the idea that to varying extents, individuals would manage their own individual accounts. The returns from these accounts would provide some or much of an individual s future retirement income. Social Security is currently structured as a defined benefit program. The current Social Security program s benefit structure is designed to address the twin goals of individual equity and income security including retirement income adequacy. The basis of the benefit structure is that these twin goals, and the range of benefits Social Security provides, are currently combined within a single defined benefit formula. Under this defined benefit program, the worker s retirement benefits are based on the lifetime record of earnings, not directly on the payroll tax he or she contributed. Alternatively, a number of individual account proposals introduce a defined contribution structure as an element of the Social Security program. A defined contribution approach to Social Security focuses on more directly linking a portion of the worker s contributions to the retirement benefits that will be received. The worker s contributions are invested in financial assets and earn market returns, and the accumulations in these accounts can then be used to provide income in retirement and an additional pre-retirement death benefit. One advantage of this approach is that the individual worker has more control over the account and more choice in how the account is invested. In essence, the defined contribution structure is similar to the current 401(k) or IRA systems. Some proposals combine defined contribution and defined benefit approaches into a two-tiered structure for Social Security. The aim is to maintain in some form the current existing system as a base tier and add an individual account component as a supplemental tier. Some proposals modify the existing benefit structure; and others propose features that provide guarantees of current law benefits or some other level, such as the poverty line. Other proposals have a more complicated formula including forms of matching. Thus, the relationship between contributions and benefits may be less direct. Under most of these proposals, individuals would receive part of their future benefits from a modified Social Security program and part from the accumulations from their individual account. <1.2.1. Four Main Characteristics of Individual Account Proposals> Most of the individual account proposals seek to create investment accounts that to varying extents are managed by the participants themselves. However, the actual details of how to structure individual accounts vary by each proposal. 
Individual account proposals are usually framed by four characteristics: (1) carve-out versus add-on; (2) mandatory versus voluntary participation; (3) range of investment options offered; and (4) distribution options (e.g., required annuitization or lump-sum pay- out). <1.2.1.1. Carve-out Versus Add-on> The first characteristic pertains to whether to carve-out a portion of Social Security s tax that is to be invested in financial assets or to add-on a percentage to the current tax that is to be invested in financial assets. OASDI has a payroll tax of 12.4 percent. A carve-out involves creating and funding individual accounts with a portion of the existing payroll tax. Thus, some portion of the 12.4 percent payroll tax, such as 2 percent, would be carved out of the existing Social Security cash flow and allocated to individual account investments. The resulting impact would be that revenues are taken out of Social Security and less is left to finance current benefits. Other proposals take a different approach and add-on individual accounts as a type of supplementary defined contribution tier. For instance, 2 percent would be added on to the current tax of 12.4 percent. The resulting effect of an add-on leaves the entire 12.4 percent payroll tax contribution available to finance the program while dedicating additional revenues for program financing either from higher payroll taxes and/or from general revenue. <1.2.1.2. Mandatory Versus Voluntary> The second characteristic of individual account proposals concerns whether to make investments in individual accounts mandatory or voluntary. Mandatory participation in individual accounts would require that each individual invest some percentage of his or her payroll tax contribution in financial assets such as equities. Voluntary participation in individual accounts could allow individuals to opt in or opt out of investing any portion of their payroll tax contributions into financial assets. Individuals would rely on the existing Social Security if they chose to opt out of participating in individual accounts. Other voluntary approaches allow individuals to contribute with or without matching to a retirement account. Additionally, mandatory or voluntary can also refer to the pay- out an individual receives upon retirement, such as a pay-out in the form of a lump sum. <1.2.1.3. Investment Choices> The third characteristic has to do with the degree of choice and flexibility that individuals would have over investment options. Some proposals would allow unlimited investment choices, such as investments in corporate equities, bonds, or real estate. Other proposals would offer a more limited range of choices, such as equity or bond indexed funds. Thus, individual account investments offer individuals some range of choice over how to accumulate balances for their retirement. <1.2.1.4. Annuitization Versus Lump-Sum> The final characteristic centers around how the accumulated earnings in individual accounts will be paid out. Preserving individual s retirement income prior to pay-out by prohibiting pre-retirement distributions or loans is also a requirement of most proposals. However, upon pay-out, some proposals would permit requiring annuities--contracts that convert savings into income and provide periodic pay-outs for an agreed-upon span of time in return for a premium. Other proposals suggest allowing the individual to withdraw the account balance in lumpsum or through gradual pay-outs. <1.2.2. 
Individual Accounts are Different From the Current Social Security Program> Among the changes implementing individual accounts would make to the current Social Security program is to move away from a pay-as-you-go system in the direction of an advanced funded system. <1.2.2.1. Pay-As-You-Go> Social Security is currently financed largely on a pay-as-you-go basis. Under this type of financing structure, the payroll tax revenues collected from today s workers are used to pay the benefits of today s beneficiaries. Under a strict pay-as-you-go financing system, any excess of revenues over expenditures is credited to the program s trust funds, which function as a contingency reserve. <1.2.2.2. Advanced Funding Through Individual Accounts> Advanced funding refers to building and maintaining total balances for Social Security, whether that is done through individual accounts or some other mechanism. Thus, although individual accounts are a form of advanced funding, the two terms are distinct. For instance, building up the balance in the Trust Funds is a form of advanced funding. The creation of individual accounts refers to a defined contribution system of accounts connected to Social Security and held in individuals names. Essentially, individual accounts would be advanced funded income arrangements similar to defined contribution plans or 401 (k) plans. Although privately held individual accounts are a widely discussed means to achieve advanced funding, there are other ways to achieve advanced funding. Another approach to advanced funding using private markets would have the government invest directly in private capital markets. Building up the Trust Fund using Treasury securities (marketable or nonmarketable) is another form of advanced funding, although it does not involve diversification gains. Proponents of individual accounts often state that advanced funding and asset diversification are benefits of their proposals. Yet, although advanced funding, individual accounts, and asset diversification are often linked, they are conceptually different. Diversification refers to investing in more than one asset and can be performed by individuals investing in individual accounts or by the government investing the trust fund in corporate equities stocks as well as corporate bonds. Any one of the three categories could change without changing the other. For instance, Social Security s Trust Funds are currently invested in nonmarketable Treasuries. Allowing the Trust Funds to invest in assets other than Treasuries would be diversifying without introducing individual accounts. Alternatively, individual accounts could be introduced whereby individuals are allowed to invest in only one asset--thereby introducing individual accounts without diversifying. <1.2.2.3. Savings Implications of Advanced Funding> Whether advanced funding through individual accounts increases national saving is uncertain. The nation s saving are composed of the private saving of individuals and businesses and the saving or dissaving of all levels of government. Supporters of advanced funding point out that individual accounts offer a way to increase national savings as well as investment and economic growth. Others suggest that the national saving claims of those favoring advanced funding through individual accounts may not be realized. 
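The distinction drawn above between pay-as-you-go financing and advance funding can be illustrated with a stylized sketch. The cash-flow figures and the rate of return below are hypothetical; the point is only that under pay-as-you-go, current revenues fund current benefits, while under advance funding, contributions are accumulated and earn a return before they are needed.

```python
# Stylized contrast between pay-as-you-go financing and advance funding,
# per the discussion above. All dollar amounts and the 5 percent return
# are hypothetical.

def pay_as_you_go(payroll_tax_revenue, benefits_paid):
    """Today's revenues pay today's benefits; any excess is credited to a
    contingency reserve (the trust funds)."""
    return payroll_tax_revenue - benefits_paid  # may be negative in deficit years

def advance_funded(balance, contribution, annual_return):
    """Contributions accumulate and earn a return until they are needed."""
    return (balance + contribution) * (1 + annual_return)

# $500 billion of revenue against $450 billion of benefits leaves $50 billion
# for the reserve; the same $50 billion, advance funded for one year at an
# assumed 5 percent return, grows to $52.5 billion.
surplus = pay_as_you_go(500e9, 450e9)
print(advance_funded(0.0, surplus, 0.05))
```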
Whether advanced funding through individual accounts increases national saving depends on a number of factors, including how individual accounts are financed (existing payroll tax, general revenues); how private saving responds to an individual account system; the structure of the individual account system (mandatory or voluntary); and the limitation or prohibition of pre-retirement distributions and loans to make sure retirement income is preserved. Furthermore, even if national saving increases as a result of individual accounts, individuals may or may not be better off. Saving involves giving up consumption today in exchange for increased consumption in the future. Some economists have stated that it is not necessarily the case that all increases in saving are worth the cost of foregone consumption. <1.3. Objectives, Scope, and Methodology> The Chairman of the House Committee on Ways and Means asked us to determine how individual accounts could affect (1) private capital and annuities markets as well as national savings, (2) potential returns and risks to individuals, and (3) the disclosure and educational information needed for public understanding and use of an individual account investment program. To determine the effect of individual accounts on the private capital and annuities markets, as well as risk and return issues, we interviewed economists and other officials, including both proponents and opponents of individual accounts. These included officials from think tanks as well as academicians who have studied Social Security reform. We also reviewed and analyzed several studies relating to the impact of individual accounts on the market as well as studies that had tried to assess the risk and return issues that would arise because of individual accounts. We also analyzed data from the Federal Reserve Flow of Funds as well as data provided by the insurance industry. Additionally, we talked to industry officials from both the insurance and securities industries to obtain their views, and we interviewed government agency officials as well. To determine the disclosure and educational requirements needed, we spoke to officials from the Securities and Exchange Commission (SEC), the Department of Labor's (DOL) Pension and Welfare Benefits Administration (PWBA), the Pension Benefit Guaranty Corporation, and the Social Security Administration (SSA). We also spoke to private sector officials about the educational requirements that would be needed for an individual account program. Additionally, we reviewed various studies that have examined the best ways to provide investment and retirement education. Because of the wide-ranging nature of the numerous proposals being advanced, our report focuses on the common, or generic, elements that underlie various proposals to reform Social Security financing rather than on a complete evaluation of specific proposals. We did our work in accordance with generally accepted government auditing standards between October 1998 and June 1999 in Washington, D.C., and New York, NY. We requested comments on a draft of this report from SSA, SEC, DOL, the Department of the Treasury, and the Federal Reserve Board. SSA provided written comments that are included in appendix I. A discussion of these comments appears at the end of chapters 2 and 3. SSA and the other agencies also provided technical and clarifying comments, which we incorporated in this report where appropriate. <2.
Capital and Annuities Markets Able to Absorb Individual Account Investments> Individual accounts can affect the capital markets in several ways depending on how the accounts are funded, how the funds are invested, how people adjust their own savings behavior in response to having individual accounts, and the restrictions placed on using funds in individual accounts for anything other than retirement income. Most of the proposals use either the Social Security cash flow or federal general revenues as a source of funds. As a result, the primary capital market effect is a purely financial one: borrowing in the Treasury debt market (or retiring less debt) to provide funding for investment in private debt and equity markets. Although the amounts involved are likely to be sizeable, the effect would primarily be one of redirecting funds and readjusting the composition of financial portfolios. There may also be some effect on the difference between the return on Treasury debt and that paid on riskier assets, although the effect is not likely to be large. Although substantial inflows into the private debt market could, in certain circumstances, result in some increased volatility, both the private equity and debt markets should be able to absorb the inflows without significant long-term disruption. There could eventually be a significant increase in the amount of new funds flowing into the annuities market. However, the magnitude of annuity purchases is likely to build gradually over time as more retirees build larger balances, allowing the market sufficient time to adjust. Another potential effect of individual accounts would be an increase or decrease in national savings the overall level of domestic financial resources available in the economy for the purpose of investing in plant and equipment. Whether individual accounts would increase or decrease national savings depends on how they are financed, how private savings changes as a result of individual accounts, and whether there are restrictions on households ability to borrow. <2.1. Redirection of Funds Could Affect Composition of Portfolios> Most proposals use either the Social Security cash flow or federal general revenues as a source of funds for individual accounts. The funds raised are then to be invested in private equity or debt markets. As a result, there would be an increase in the relative supply of Treasury debt available to the public and an increase in the relative demand for private debt and equities to be held in individual accounts. This redirection of funds selling Treasury debt for the cash to invest in private debt and equity is a purely financial effect. It is likely to result in a change in the composition of private sector holdings as businesses and households absorb the extra government debt and provide new or existing private debt and equity, thereby adjusting their portfolios. Whether the resources for individual accounts come from Social Security contributions or general revenues, the level of government debt held by the public would increase, or not fall as much as it otherwise would. The only cases in which an increase in debt held by the public would not occur would be those in which the resources come from an additional source of funding either a tax increase, an expenditure reduction, or the result of some voluntary private saving that would not otherwise have occurred. 
Increased government borrowing from the public could put some upward pressure on the interest rate at which the government borrows, if private sector borrowers are to be persuaded to hold the increased supply of government debt. Funds diverted to private equity and debt markets could have the effect of raising the prices and therefore lowering the yields (rates of return) on these higher risk assets. The combined effect could narrow somewhat the difference between the more risky and least risky assets. <2.1.1. Debt Held by the Public Will Likely Rise to Provide Funding> Whether resources used to finance individual accounts come from new revenues, additional borrowing, or surpluses, the amounts flowing into private capital markets are likely to be substantial. Funding of individual accounts will come directly or indirectly from increased government borrowing from private markets, unless funded by a tax increase or spending reduction. To fund most individual account proposals, the government would need to raise resources either by borrowing in the market or under a surplus scenario by not retiring as much maturing debt as it otherwise would. For certain proposals, changes in borrowing may not arise because these proposals rely on a tax increase or benefit reduction so that current cash flow is not affected. If the source of funding for individual accounts is a carve-out from the current Social Security cash flow, this loss in cash flow would have to be made up from increased borrowing, a reduction in benefits, or some other program change. Alternatively, if the source of funding is general revenues, either additional borrowing from the public or less debt retired will be necessary depending on whether the overall budget is in deficit or surplus. Only if the government raises taxes or reduces spending, and uses those revenues to finance individual accounts, is there not likely to be any effect on borrowing because the remaining cash flow would not be affected. <2.1.2. Funds Would Be Redirected Into Private Capital Markets> The uses of the funding for individual accounts will depend on the options available to investors and the choices they make within those options. To the extent that investors choose to invest in Treasury debt, there is that much less flowing into private capital markets, and any effects on those markets would be reduced. However, investors or their agents are likely to put at least some, if not most, of the funds into the private equity or debt market, and some proposals call for all of the funds to be invested in private markets. The size of this potential flow of funds into the private sector depends on whether individual account investments are mandatory or voluntary as well as the percentage of payroll that forms the basis for the program. The actual amounts allocated to private equity and debt will depend upon individual choice to the extent such choice is allowed, or on selected percentages if those are set by law. The initial annual dollar amount flowing into the capital markets as a result of individual account investments could be about $70 billion (2 percent of payroll) in 1998 dollars. According to our analysis of Social Security Administration (SSA) data, the effective taxable payroll for all working individuals will steadily increase well into the future. As a result, the annual dollar amount from individual account investments is likely to increase. 
For instance, our analysis of SSA data indicates that in the year 2020, the effective taxable payroll will be almost $11 trillion. On the basis of that dollar amount, if 2 percent is the designated percentage, the amount flowing into the private equity and debt markets from individual accounts would be about $220 billion in the year 2020. <2.2. Current Size of the Private Capital Markets> U.S. capital markets are the largest and most liquid in the world. The total market value of U.S. equities outstanding at the end of 1998 was about $15 trillion. The total value of corporate bonds outstanding in the United States was about $4 trillion at the end of 1998. The amount of Treasury debt outstanding was also about $4 trillion. As shown in table 2.1, the amounts outstanding for corporate equities and corporate bonds have been increasing. For instance, in 1997 there was about $13 trillion in equities outstanding, up from $10 trillion in 1996. The amount of corporate bonds outstanding increased from about $3 trillion in 1996 to about $4 trillion in 1998. On the basis of the current size of the corporate equity and bond markets, the amount representing individual accounts is likely to be a small percentage of private capital markets, at least for a number of years. For instance, using a payroll percentage of 2 percent, if $70 billion were to come from individual accounts, it would represent less than 0.5 percent of the $15 trillion in equity outstanding in 1998 and less than 2 percent of the $4 trillion in corporate bonds outstanding in 1998. Various officials have expressed concern that over time, individual account investments would represent significant portions of the corporate equities and bond markets. It is likely that investments from individual accounts could eventually rival current holdings of other major sectors of the market and represent a sizeable portion of equity and corporate bond holdings. For instance, if 2 percent of payroll is placed in individual accounts annually, SSA estimates that stock holdings in individual accounts could grow to between $1 trillion and $2 trillion in 1996 dollars over the next 15 years. The overall market will grow at about the market rate of return, although individual components may grow faster or slower depending on strategies and relative demands by mutual funds, pension plans, and other investors. For instance, as shown in table 2.2, the total value of equity holdings of mutual funds was $2.5 trillion in 1998, and the total value of their corporate and foreign bond holdings was about $339 billion. The holdings of various sectors, such as private pension plans, were about $2.2 trillion of equities and about $301 billion of corporate bonds in 1998. Thus, although individual account holdings are likely to increase over time, the holdings of many other sectors of the economy are also likely to rise, although certain individual sectors may not. In general, it is difficult to predict how rapidly the sum of these sectors' holdings will grow, especially in the presence of individual accounts. <2.2.1. Current Flows Into Private Capital Markets> Even if the annual flows from individual accounts into private capital markets were a small percentage of the total market value of outstanding debt and equities, these amounts could still represent a substantial increase in the annual flows into those markets. The actual amounts will depend on the options available to individuals as well as the choices they make.
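The magnitudes discussed above can be checked with simple arithmetic. The sketch below uses only figures cited in the text--a 2 percent carve-out, an effective taxable payroll of roughly $11 trillion in 2020, and the 1998 amounts of equities and corporate bonds outstanding; the implied 1998 payroll is inferred from the $70 billion figure.

```python
# Back-of-the-envelope check on the magnitudes cited above, using only
# figures that appear in the text (1998 dollars unless noted).

carve_out_rate = 0.02               # 2 percent of taxable payroll
flow_1998 = 70e9                    # initial annual flow cited in the text
payroll_2020 = 11e12                # SSA projection of effective taxable payroll in 2020

flow_2020 = payroll_2020 * carve_out_rate           # about $220 billion
implied_payroll_1998 = flow_1998 / carve_out_rate   # roughly $3.5 trillion

equities_outstanding_1998 = 15e12   # market value of U.S. equities, end of 1998
corporate_bonds_1998 = 4e12         # corporate bonds outstanding, end of 1998

print(flow_2020)                                    # 220e9
print(flow_1998 / equities_outstanding_1998)        # about 0.005, under 0.5 percent
print(flow_1998 / corporate_bonds_1998)             # about 0.018, under 2 percent
```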
If a large percentage of funds from individual accounts flowed into the equity markets, it could represent an increase of approximately 15 to 20 percent in the flow of funds into and out of the equity market, according to data from the Federal Reserve Flow of Funds. It is not clear that such an increase would have much effect on the pricing, or volatility, of the equity markets. However, the corporate bond market, which is smaller, could be affected, at least in the short term, depending on how much of the funds flow into the market and, to some extent, on the timing of those flows. <2.2.1.1. Current Stock Market Flows> Most U.S. equities markets are very liquid it is easy for investors to buy and sell equities without moving the price. Various sectors of the economy, such as the household sector, mutual funds, private pension plans, and life insurance companies, purchase and sell equities every day. The equities market is a secondary market in which much of the transaction volume and value reflects movement of equities between purchasers and sellers. The annual net purchases can be positive or negative, reflecting the difference between the value of new equities issued and the value of equities repurchased; however, the amounts purchased and sold by specific sectors can be quite large. For instance, the annual net purchases of equities were minus $3 billion in 1996, minus $79 billion in 1997, and minus $178 billion in 1998. As can be seen in table 2.3, the three largest purchasers bought in the range of $300 billion in securities each year from 1996 to 1998. In terms of sellers, the household sector sold almost $300 billion in 1996 and about a half of a trillion dollars in both 1997 and 1998. Annual flows within the equities market were in the hundreds of billions of dollars between 1996 and 1998. Over that period, mutual funds, life insurance companies, and state and local government retirement plans were the primary purchasers, and private pension plans and households were the major sellers of equities. Compared to these annual amounts, an additional tens of billions of dollars generated by individual accounts is not likely to cause major disruptions and could potentially be absorbed without significant price or volatility effects. There is a greater chance of some possible disruption, however, if all of the individual account funds were to flow in at once rather than regularly, but not too predictably, over the course of the year. For instance, $70 billion distributed evenly over the year would be unlikely to cause much disruption. However, concentrating that same flow into one quarter of the year could have some short-term effect on the market because it would represent a substantial increase in quarterly flows. As a result, to minimize the likelihood of disruption, it would make sense, to the extent practicable, to smooth out the inflows so that they do not all come into the market within a short time period. If the inflows are lumpy and predictable, the market may be able to anticipate the inflows and adjust prices somewhat, which could mean that individual account purchases would pay slightly higher prices than they otherwise would. <2.2.1.2. Corporate Debt Flows> The corporate debt markets are not as transparent as the corporate equities markets; for example, there are no central listings for the prices of the bonds or the volume of corporate bonds sold. They also do not have as much depth as the equities markets there are fewer buyers and sellers in the corporate bond markets. 
Many corporate bond transactions are done through private placements; i.e., they are not offered to the corporate debt market as a whole. The result is a market with less liquidity, reflected in a greater spread between the bid price (the price at which you could sell the bond) and the ask price (the price you would pay to buy the bond). As stated previously, the value of outstanding corporate debt is substantially less than the market value of corporate equities. On an annual flow basis, corporate debt issues have been running in the hundreds of billions of dollars over the last decade. However, some proportion of that is short term (less than 1 year in maturity), so the total is not easily comparable to the annual amounts of equities purchased and sold. As shown in table 2.4, the annual net purchases of corporate bonds by various sectors ranged from as low as $17 billion for state and local government retirement plans in 1996 to as high as $79 billion for life insurance companies in 1996. On the basis of annual flows, it is difficult to say what the effect on the bond market is likely to be. However, if we compare the corporate bond and equity markets, we can draw some tentative conclusions about the likelihood of individual accounts having a disruptive effect on either market. The corporate bond market is relatively smaller and less liquid than the equity market. As a result, an inflow into the bond market is more likely to affect the market price and the volatility of the market, compared to an equivalent inflow into the equity market, especially if it is concentrated in a short period of time. Any disruption is still likely to be short term in nature and can be mitigated if the inflow is spread over time, so that other market participants are less able to predict the inflows and raise prices in anticipation of the inflow. <2.2.1.3. Treasury Debt> Although there are various types of Treasury debt, the overall market for U.S. Treasuries is far more liquid and transparent than the corporate bond market. A large secondary market, in which Treasury securities are bought and sold subsequent to original issuance, helps make the Treasury market one of the most liquid in the world. Annual net purchases of Treasuries were $23 billion in 1997 and minus $55 billion in 1998. The effect on the Treasury debt market from a movement to individual accounts will depend not only on the choices available to individuals but also on the extent to which the government borrows from the private capital markets to fund individual accounts. As stated previously, to fund any individual account proposal that does not increase Social Security contributions, the government would need to raise resources either by borrowing in the market or by not retiring as much maturing debt as it otherwise would. The Treasuries market, therefore, could be affected in two ways: (1) by how much the government borrows to fund individual accounts, and (2) by how much individuals choose to invest in Treasuries. However, the depth and liquidity of the Treasury debt market is such that the market is unlikely to be significantly disrupted even by a large flow of funds resulting from individual accounts. <2.2.2. Effect of Individual Accounts on the Annuities Markets> Annuities protect against the possibility of outliving one's financial resources by guaranteeing a stream of income for the remainder of one's life, regardless of how long that may be.
Annuities basically convert savings into income and may be sold individually or as a group product. In a group annuity a pension plan provides annuities at retirement to a group of people under a master contract. It usually is issued by an insurance company to an employer plan for the benefit of employees. The individual members of the group hold certificates as evidence of their annuities. Depending on the structure of individual accounts, individuals may be required to purchase individual annuities or, similar to pension and other retirement plans, fall under a group annuity. One measure of the size of the annuities market is the level of the insurance industry s policy reserves the sum of all insurers obligations to their customers arising from annuity contracts outstanding. Each company is required by state insurance regulators to maintain its policy reserves at a level that will ensure payment of all policy obligations as they fall due. As shown in table 2.5, policy reserves for individual annuities were about $693 billion and for group annuities about $762 billion. Insurance industry officials told us that the annuities industry is likely to be able to absorb the flows from either mandatory or voluntary annuitization. Once again, we are talking about a movement of financial resources from one form to another rather than a new source of funds. The funds will be moved out of whatever investment instruments (assets) workers were using for accumulation purposes into a potentially different combination of assets held by companies supplying annuities. Insurance industry officials believe that, generally, annuities resulting from the liquidation of the individual accounts would be phased in gradually and over a number of decades. In the early years, few if any retirees would have built up substantial individual account balances. As time passes, both the number of retirees with individual account balances and the average size of those balances would gradually increase, allowing the industry and the market time to adjust without difficulty. One issue raised by insurance industry officials was that an individual account proposal that made annuity purchases mandatory at retirement could result in the demand for a significant number of very small annuities. For instance, at least initially, there would be many small accounts below $2,000. Currently, annuity purchases average about $100,000. Although the industry could absorb a significant number of small accounts, industry officials said that providing annuities that small could be uneconomical for the industry because the cost of issuing a monthly check, and other administrative costs, would be prohibitive. <2.3. Effect of Individual Accounts on National Savings Depends on Financing, Structure, and Behavioral Effects> Although the financial effects of individual accounts are an important consideration, a related but somewhat separate issue is the potential for individual accounts to increase or decrease national saving. Along with borrowing from abroad, national savings provides the resources for private investment in plant and equipment. The primary way in which a movement to individual accounts could change the overall capacity of the economy to produce goods and services would be if individual accounts were to lead to a change in the overall level of national saving. 
The extent to which individual accounts affect national saving depends on how they are financed (existing payroll tax, general revenues) the effect on government saving; how private savings the savings of households and businesses respond to an individual account system; the structure of the individual account system (mandatory or voluntary); and the limitation or prohibition of the pre-retirement distribution or loans to make sure retirement income is preserved. <2.3.1. Savings Affected by Funding Source> One important determinant of the effect of individual accounts on national savings is the funding source. There are several possible funding sources, although most involve a movement of funds from or through the federal government and each has its own effects on the federal government s portion of national saving. For some funding sources these savings effects are clearer than others. As previously stated, the funds can come from (1) within the current Social Security system, i.e., the surplus or current cash flows; (2) a change in the system resulting from increased payroll taxes or reduced benefits; or (3) outside the system using a general fund surplus or general revenues. Using either the Social Security surplus or more generally the current Social Security cash flow is likely to reduce government saving. If part of the cash flow is diverted to individual accounts but there is no change in the benefits paid or the taxes collected, the lost cash flow will either result in a smaller addition to the surplus or be replaced by borrowing. In either case the result is a reduction in the measured government surplus the sum of the Social Security surplus and the general fund surplus or an increase in the deficit. From the government s perspective, its saving has gone down to provide the resources for increased personal savings through individual accounts. This is a case of a carve-out from Social Security. If the resources for individual accounts are financed by additional Social Security taxes or reduced benefits instead, there will be no direct effect on government savings. The increased outlays for individual accounts will be offset by higher government revenues or lower government benefit payments. In the absence of other changes in Social Security cash flows, government savings remain constant, and any increase in private saving would be an increase in national saving. This is a case of an add-on to both Social Security and to the overall government budget. The most complicated case involves the use of funds that are outside of the Social Security system but part of the overall government budget. There are proposals to use the overall budget surplus or general government revenues as a source of funds for individual accounts. Although on its face this appears to reduce government savings by the amount diverted, the actual effect on government savings depends on what would have been done with the surplus or revenue if it had not been used to finance individual accounts. For example, if the resources would have been used to finance additional government spending, and the diversion of the funds to individual accounts means that such spending is not undertaken, government saving would not be reduced by individual accounts. In this case, any increase in private saving would be an increase in national saving. 
Similarly, if the resources would have been used to finance a tax cut, then diverting funds to individual accounts does not directly reduce government savings if the tax cut is not undertaken. In the case of a tax cut, national saving will go up if individual accounts generate more private saving than the tax cut. If the funds would have been used to pay down debt, the direct effect of diverting those resources to individual accounts would be to reduce government saving. The full effect on national saving depends on the extent to which individuals adjust their own savings behavior. If they do not adjust, national saving is on balance unaffected. To the extent individuals or businesses reduce their saving, national saving will fall. <2.3.2. Behavioral Effects Are Difficult to Predict> The effects of various individual account proposals on national saving depend not only on how the proposals affect government savings but also on how private savings behavior will respond to such an approach. Regardless of the financing source, the effect of individual accounts will be to raise, at least to some extent, the level of personal or household saving unless households fully anticipate and offset through a reduction in their own saving. For example, a carve-out from the existing Social Security cash flow would provide funding for individual accounts for everyone (under a mandatory approach) or for those who wished to participate (under a voluntary approach). Such a carve-out is likely to reduce government saving and raise private saving by an equivalent amount in the absence of any behavioral effects. If households are forgoing current consumption by saving for their retirement, then, in response to this potential increase in future retirement benefits, they may reduce, to a greater or lesser extent and in various ways, their own savings, including retirement saving. To the extent that household responses lead to reduced personal saving, national savings as a whole would fall under a carve-out. In general, the result would be similar under any proposal that reduced government saving to fund private saving through individual accounts. This includes proposals that use general revenues that would have been saved by the government; i.e., used to reduce the deficit or retire debt outstanding. The overall level of consumption in the economy is not likely to change as a result of the movement of funds. Any significant change in the level of consumption resulting from such proposals would result from some households reducing their retirement savings to fund consumption because they now had individual accounts. <2.3.3. Behavioral Change Depends on Preferences and Opportunities> The extent of these behavioral effects will depend on the structure of the program and any limitations that are placed on the use of funds in individual accounts, such as restrictions on preretirement withdrawals. If such a program is mandatory rather than voluntary, it is more likely to affect those households who currently either do not save or do not save as much as the amounts in their individual accounts. A mandatory program would increase savings for those who do not usually save, who are usually low-income people. Household behavior in response to individual accounts will depend on the extent that the household is currently saving for retirement and how the set of options available to households is changed by the presence of individual accounts. 
One group of households, those that are currently saving as much as they choose for retirement, given their income and wealth, would probably reduce their own saving in the presence of individual accounts. For those households for whom individual accounts closely resemble 401(k)s and IRAs, a shift to individual accounts might lead them to decrease their use of these accounts. They would have additional retirement income possibilities available and might choose to reduce their retirement or other saving to use for consumption in the present rather than in the future. However, unless they were target savers, i.e., savers who were trying to reach a specific retirement income goal, they might not reduce their other savings dollar for dollar with individual accounts. Therefore, we might expect some reduced saving by a significant number of households; for certain households, we might expect a substantial reduction. Under a voluntary approach, the households that are most likely to participate are those households that are currently saving but that face some constraint in terms of the type of retirement saving they can do or the amount of tax-preferred saving they are allowed. For example, someone whose employer offered only a defined benefit retirement plan or a defined contribution plan with very limited options might find that voluntary individual accounts offered a new opportunity. In addition, someone who was already contributing as much as he or she was legally allowed to tax-deferred savings would find a voluntary program attractive if it allowed an additional amount of tax-deferred saving. These and others who take advantage of a voluntary program may be more likely to reduce other forms of saving in response. Households that are currently not saving, either because they are resource constrained or because they are not forward-looking, would be forced to save some amount by a mandatory individual account system. Households in such situations may welcome the additional resources, especially if they do not come from a direct reduction in their own consumption. However, such households may also try to transform some of the additional resources into consumption if they are able to borrow from the accounts or otherwise tap into the accounts before retirement. To maintain retirement income adequacy and to keep savings from being dissipated, it may be necessary to prohibit or restrict borrowing or other methods of drawing down individual accounts prior to retirement. Even with such restrictions, it may not be possible to completely eliminate all options that households could use to indirectly increase consumption from individual accounts. For example, households with little or no retirement saving or other financial wealth could have wealth in some other form, such as home equity. It is conceivable that such households could borrow against that home equity as a way of turning their increased future consumption into present consumption. In addition to the effects of individual accounts on household savings there are also other potential indirect effects on private saving. For example, the incentives for employers to provide retirement benefits, either through defined benefit or defined contribution plans, could be affected by individual accounts. In addition, if less compensated workers in a defined contribution plan reduce their contributions to the plan, higher compensated workers may be required to reduce their own contributions under the antidiscrimination rules. 
Offsetting these tendencies to reduce saving, however, there are some economists who believe that individual accounts might encourage certain individuals to save more for retirement and thus not reduce their current savings. Such an effect is more likely to be present if there is some form of matching by the government as part of the individual account proposal. Others believe that to the extent that a lack of saving is based on people not taking a long enough view, the presence of individual accounts and watching them accumulate could give people a better sense of how saving small amounts can add up over time. This, plus observing how compounding works, could induce some to save who otherwise would not. National saving is more likely to be increased by some approaches to individual accounts than by others. Using sources of government funding that would more likely have resulted in spending rather than saving decreases the likelihood that government saving would be reduced. Proposals that are mandatory are more likely to increase private saving because a mandatory program would require that all individuals, including those who do not currently save, place some amount in an individual account. Certain prohibitions or restrictions on borrowing or other forms of preretirement distributions would also limit the ability of some households to reduce their savings in response to individual accounts. <2.4. Agency Comments> SSA commented that we needed to discuss the savings implications of the President s proposal. This report was not intended to comment on specific reform proposals. <3. Return and Risks Are Likely to be Higher With Individual Accounts> There is a risk/return trade-off for individuals under an individual account program; instituting such a program would likely raise both the risks and the returns available to participants compared to the current system. In order to receive higher returns, individuals would have to invest in higher risk investments. The return that individuals receive would depend on both their investment choices and the performance of the market. Individuals who earn the same wages and salaries and make the same contributions to Social Security could have different retirement incomes because of the composition of their portfolios and market fluctuations. As with any investment program, diversification and asset allocation could reduce the risks while still allowing an individual to earn potentially higher returns. Most advocates of individual accounts state that the expected return on investments under an individual account program would be much higher for individuals than the return under the current Social Security program. Proponents of individual accounts usually point out that equities have historically substantially yielded higher returns than U.S. Treasuries, and they expect this trend to continue. Others are skeptical about the claims for a continuation of such a high expected return on equities. They state that history may not be a good predictor of the future and that the expected premium generated by investing in equities has steadily been declining. Furthermore, they state that even if expected equity returns are higher than other investments, equity returns are risky. Thus, in order to determine what returns individuals might expect to receive on their individual account investments, the riskiness of the investment should be taken into account. 
Adjusting returns to include risks is important, but there are many ways to do this, and no clearly best way. Lastly, comparing the implicit rate of return that individuals receive on their Social Security contributions to expected rates of return on market investments may not be an appropriate comparison for measuring whether individuals will fare better under an individual account system. Such comparisons do not include all the costs implied by a program of individual accounts. In particular, the returns individuals would effectively enjoy under individual accounts would depend on how the costs of the current system are paid off. Rates of return would also depend on how administrative and annuity costs affect actual retirement incomes. <3.1. Instituting an Individual Account Program Means Greater Risk to Individuals for Potentially Greater Return> An individual account program would offer individuals the opportunity to earn market returns that are higher than the implicit returns to payroll under the current Social Security program. However, investing in private sector assets through individual accounts involves a clear trade-off-- greater return but more risk or more variability in future rates of return. Under the current Social Security program, risks are borne collectively by the government. Moving to an individual account program would mean that individuals reap the rewards of their own investments, but they also incur risk not only about future returns, but also the possibility of losing money and even having inadequate income for retirement. However, holding assets for the long term, diversification, and the proper asset allocation can mitigate certain risks and improve an individual s risk/return trade-off. <3.1.1. Risk/Return Trade-Off> A trade-off exists between risk and return in investments. If an individual is willing to consider the possibility of taking on some risk, there is the potential reward of higher expected returns. The capital markets offer a wide variety of investment opportunities with widely varying rates of return, which reflect variations in the riskiness of those investments. For instance, Treasury Bills are considered to be relatively risk free because they have almost no default risk and very little price risk. Alternatively, equities are considered to be relatively risky because the rate of return is uncertain. Because debt holders are paid out of company income before stockholders, equity returns are more variable than bonds. Overall, annual returns on equities are more volatile than returns on corporate bonds or Treasuries. On a long-term average basis, the market compensates for this greater risk by offering higher average returns on equities than on less risky investments. Thus, among the three types of investments, corporate equities are the riskiest investments but pay the highest returns, followed by corporate debt and then Treasuries. However, holding riskier investments such as equities over long periods of time can substantially diminish the risk of such investments. The degree of risk and the size of potentially higher returns with individual accounts depend on the equities chosen as well as the performance of the market. A stock s value is tied to the expected performance of the issuing company. If the company does well, investing in individual equities could be very lucrative for investors. However, if the company does poorly, investing in individual equities could result in low returns or losses to the investor. 
Many financial analysts go through intensive research to try and pick the best stocks. Choosing the right stock, however, can be mostly a matter of a random walk. <3.1.2. Diversification Improves Risk/Return Trade-Off> Individuals may mitigate the risk of holding equities and bonds by diversifying their portfolios and allocating their investments to adjust their risk exposure and to reflect their own risk tolerance and circumstances. Ultimately, the composition of an individual portfolio, along with the performance of the market, determines the return individuals receive and the risk they bear. In constructing a portfolio investors combine equities and bonds and other securities in such a way as to meet their preferences and needs, especially their tolerance for risk. Individuals manage their portfolios by monitoring the performance of the portfolios and evaluating them compared to their preferences and needs. Many people have been managing portfolios for years. There are, however, many others who either do not have portfolios or do not consider what they have as a portfolio. With individual accounts, all individuals would eventually have to manage their portfolios as they start to own various investments, especially if they have options over individual securities or types of securities. A well-diversified portfolio could help to diminish risk without lowering the return, thereby improving the risk/return trade-off. For instance, a properly selected combination of risky assets can have a lower risk than any of its individual assets because the risk is spread out among different assets allowing for gains in some assets to offset losses in others. Such portfolios could provide higher average returns over the long term than a single asset with equal risk. Furthermore, diversifying an equity portfolio across companies and industries reduces both default and concentration risk and reduces the likelihood that a portfolio s return will vary widely from the expected market return. In order to quantify the diversification of a portfolio, concepts like correlation and covariance are used to measure how much the returns on assets move in tandem with one another. For instance, if annual returns on different investments are not very correlated, their risks can offset each other even though they still individually earn higher average returns. Such techniques, however, are very sophisticated, require substantial data analysis, and would require the help of professional advisors for the average investor. However, there are ways for individuals to take advantage of many of the benefits of diversification without needing to calculate correlation and covariance measures. Indexing is one way to broadly diversify an equity portfolio and to match the approximate market return. Typically, investing in broad-based stock indexes such as the Standard & Poor s 500 index which represents about two-thirds of the value of the U.S. stock market diversifies an individual s portfolio by reducing the likelihood of concentrating investments in specific companies. Such investments also tend to reduce turnover and lower administrative costs because they do not involve as much research or expensive investment advice. A diversified stock portfolio, however, does not protect against the risk of a general stock market downturn. One way to mitigate U.S. stock market risk is to diversify into international markets. 
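The correlation and covariance concepts mentioned above can be illustrated with a simple two-asset example. The volatility and correlation figures below are hypothetical; the sketch shows only that, when two assets are less than perfectly correlated, the portfolio's standard deviation falls below the weighted average of the individual assets' standard deviations.

```python
# Illustration of the diversification point above: when two assets are not
# perfectly correlated, the portfolio's standard deviation is lower than the
# weighted average of the two assets' standard deviations. The volatility and
# correlation figures are hypothetical.

import math

def two_asset_risk(weight_1, sd_1, sd_2, correlation):
    """Standard deviation of a portfolio holding weight_1 in asset 1 and the
    remainder in asset 2."""
    weight_2 = 1 - weight_1
    variance = ((weight_1 * sd_1) ** 2 + (weight_2 * sd_2) ** 2
                + 2 * weight_1 * weight_2 * sd_1 * sd_2 * correlation)
    return math.sqrt(variance)

# A 60/40 mix of a stock fund (20 percent volatility) and a bond fund
# (7 percent volatility) with an assumed correlation of 0.2.
portfolio_sd = two_asset_risk(0.60, 0.20, 0.07, 0.2)
weighted_avg = 0.60 * 0.20 + 0.40 * 0.07
print(round(portfolio_sd, 3), round(weighted_avg, 3))  # about 0.129 versus 0.148
```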
An investor can also shield against general stock market risk by diversifying into other types of assets, such as corporate bonds. To minimize exposure to short-term stock market fluctuations, an investor can hold less risky, albeit lower yielding, assets to cover liquidity needs in the short run. Asset allocation can provide an approach to portfolio diversification. For example, percentages can be allocated to equities (including indexes), bonds, and Treasuries. These allocations will generally reflect preferences for risk as well as an individual s life-cycle phase. Those with a higher tolerance for risk and those who are younger would generally invest more in equities. Those in later life-cycle phases might invest more in bonds or Treasuries. <3.1.3. Individuals Bear Most of the Risk> The primary risk that individuals would face with diversified or indexed individual account investments is market risk, the possibility of financial loss caused by adverse market movements. When the stock market drops, prices of some equities fall and can stay depressed for a prolonged period of time. Although a long investment time horizon provides the individual more time to recover from short-term fluctuations, an individual also would have more time to encounter a prolonged stock market downturn. Thus, although long periods of time can help mitigate the effects of market risk, it does not disappear over time. Under most individual account programs, individuals would bear much if not all of the market risk. Although market risk would not increase with the introduction of an individual account program, more people would be exposed to it under an individual account program than are under the current Social Security system. Some individuals would do very well under such an individual account program, but others may not do as well and could experience a significant drop in their expected retirement income compared to others in the same age group or to the current Social Security program. Furthermore, those who are reluctant to invest in the stock market may not benefit from the potentially higher returns of equity investing. Thus, the investment choices individuals make, as well as the performance of the market, would determine the return they would receive under an individual account program. <3.1.3.1. Individual Returns May Vary Under an Individual Account Program> Individuals who retire at the same time may receive different pay-outs from individual account investments because of the choices they have made. Although some individuals could make the same choices, individuals are more likely to make different choices. In part, differences may come about due to luck; other differences may be more systematic. For instance, higher income people may be willing to take on more risk and possibly earn higher returns than lower income people. For this reason, higher income individuals could earn higher rates of return than lower income individuals under an individual account program, which is not the case under the current Social Security program. Many programs also provide for a default option for those who do not wish to take an active part in investing in individual accounts. One type of default option would provide investments in Treasuries with very low risk and a low return. Others could provide an asset allocation, possibly age related, with more equities included for younger workers and more Treasuries for older workers. Returns could vary across cohorts as well under an individual account program. 
Even if some cohorts made the same choices, given the volatility of the stock market, the returns could vary substantially across different time periods and affect cohorts differently. For instance, even if the market experienced no dramatic or long-lasting downturns, the market will create winners and losers depending on when and how individuals invest their individual account contributions and when they liquidate their holdings. As long as workers are aware of and accept the idea that returns may vary across individuals as well as cohorts, there will probably not be calls to fix benefit outcomes perceived as unfair. However, if large differences in outcomes become commonplace, many participants could become dissatisfied with the program and demand some payment from the government to make up for losses they incur or for substantial differences in outcomes. For instance, those who have incurred losses may expect the government to mitigate those losses when they do not receive the return they believe they were led to expect. Furthermore, individual accounts are at least in part an attempt to finance the unfunded liability with the excess returns of equities over nonmarketable Treasuries. To the extent that individuals receive low or even negative returns over time, individual account investments could actually lead to an increase in the unfunded liability of the current Social Security program. <3.2. The Expected Market Return for Individual Account Investments> The expected return from individual account investments is likely to be higher than the average implicit rate of return of the current system, but it is unlikely to be as high as many advocates presume. Advocates and opponents of individual accounts have estimated what the likely market return would be for an individual's investments under an individual account program. When discussing equity returns, advocates often point to the fact that equities have historically yielded higher returns than Treasuries. They expect returns on equities to continue to be higher than those on Treasuries and to boost individual returns on individual account investments. Other economists are skeptical that the higher returns presumed under an individual account program will be realized. They state that history may not be a good predictor of the future. Others state that even if expected equity returns are higher than those on other investments, equity returns are risky. For instance, the average historical return reveals nothing about how variable that return has been from year to year. Thus, in estimating an expected return on individual account investments, the riskiness of the investment should be taken into account. Estimating expected returns without mention of the risk and costs of the investments will overstate the benefits of investing in marketable securities because the return on marketable securities varies substantially with the riskiness of those investments. <3.2.1. Future Returns to Equities Uncertain> Advocates of individual accounts have stated that individuals would receive higher returns by investing in the stock market than they receive under the current Social Security program. Although comparing investment returns with the rate of return paid by Social Security is always problematic, advocates of individual accounts point out that the rate of return on equities has been significantly higher than other rates of return.
For instance, compounded annual average rates of return on equities have averaged about 7 percent per year since 1900 and 6 percent per year since 1957. Alternatively, the compounded annual average return on Treasuries has been between 1 and 2 percent per year on an inflation-adjusted basis, and long-term corporate bonds have averaged 2 percent. The capital markets generally offer higher potential rates of return on riskier investments such as equities. Figure 3.1 shows the annual returns of Standard & Poor s (S&P) 500 Index, which is a measure of the performance of the stocks of 500 large companies traded on the U.S. stock exchange. Actual nominal (non-inflation-adjusted) returns for large company stocks varied widely from the annualized average return over long periods and have ranged from a low of minus 26.5 percent in 1974 to a high of 52.6 percent in 1954. As can be seen in figure 3.1, returns are variable. An average return over a long period of time can obscure the reality that equity returns fluctuate substantially from year to year. There have also been years in which equities have yielded negative returns. For instance, over the past 70 years or so, equity returns were negative in nearly 1 out of every 4 years. Even taking into account the variability of returns, some analysts have suggested that historic U.S. returns may overstate future returns. They state that the equity markets in the United States have tended to outperform the equity markets in other countries. Thus, when relying on historical data as the basis for estimates of long-term market growth, if one looks not just at U.S. data, but also at the historical returns of other countries, then the high historical returns to equities in the United States could be an exception rather than the rule. Historical returns are the only empirical basis with which to judge equity returns, but there is no guarantee that the future will mirror the averages of the past in the United States as opposed to some subperiod of the U.S. market or, alternatively, returns to foreign stock markets. <3.2.1.1. Equity Premium Diminishing> In general, investors, tend to be averse to risk and demand a reward for engaging in risky investments. The reward is usually in the form of a risk premium an expected rate of return higher than that available on alternative risk-free investments. For instance, the historical advantage enjoyed by equity returns over the returns of other assets is what is known as the equity premium. The premium is said to exist because equities have historically earned higher rates of return than those of Treasuries to compensate for the additional risk associated with investing in equities. However, the equity premium has slowly been declining. Studies have shown that the equity premium has declined since the 1950s. A number of studies have attempted to measure the equity premium as well as explain its size. One study found that the premium appeared to be quite high in the 1930s and 1940s and was caused by the perception of the high volatility in the stock market in the late 1920s and the early 1930s. This led investors to favor less risky securities as opposed to equities, generating a high equity premium. However, as the volatility of stock market declined after the 1929 stock market crash, the appeal of investing in equities began to increase; and although an equity premium continues to exist, it has steadily declined. 
However, in the 1970s the equity premium increased somewhat from its general downward trend; this was attributed to inflation. The study concluded that decreases in the equity premium were the result of increases in expected bond rates and decreases in the expected rates of returns to equities. It has also been suggested that the shrinking premium reflects a structural change in that the economy appears less susceptible to recessions. To the extent that corporate profits fluctuate with general economic conditions, fewer downturns translate into less volatility in corporate earnings. If investors perceive that the outlook for corporate earnings is more certain and that equities may be less risky than they have been historically, equity investing might carry a lower premium and, therefore, relatively lower returns. As a result, the equity premium diminishes. It is unclear whether the equity premium will continue to decline. However, if individual accounts affect equity prices in the short run, the equity premium could decrease. For instance, if the demand for equities increases as a result of individual accounts, the prices of equities are likely to increase. This in turn lowers the expected return on equities. As the expected return on equities decreases, the equity premium decreases because the difference between the return on equities and the risk-free asset such as Treasury bills would diminish. The decreasing equity premium could imply that people do not view the stock market to be as risky as they once did. One possible implication is that if people view the stock market as not very risky, and they prove to be right, they will continue to invest in it, and the equity premium is likely to continue decreasing. Alternatively, if the stock market is in fact riskier than investors believe, then investors will be surprised by underperformance and volatility over time and will begin to reduce their equity holdings, which could eventually cause the equity premium to go back to values consistent with past decades. The size of the equity premium has implications for analyzing the benefits of an individual account program. The potential gain from equity investing under an individual account program depends on what future equity returns are and in particular how much return might be expected for taking on additional risk. A significant part of the gain that might be generated from diversifying into equities comes from the equity premium. To the extent that the equity premium continues to decline, individuals are unlikely to receive as high a return from stock investing as they have in the past. <3.2.2. The Returns of Investments> The return that individuals are likely to receive from individual account investments would depend on what they are allowed to invest in, e.g. stocks, bonds, indexed mutual funds, as well as the risk of the asset being invested in. When estimating expected returns under an individual account program, most proposals have tended to focus on equities. However, other assets may offer different returns. Corporate equities have tended to have higher market returns than other investments because they are riskier. Other investments, such as corporate bonds, have also tended to offer high yields. For instance, corporate bonds offer higher yields than Treasuries to entice investors to buy these securities, which have some risk of default. As in the case of corporate equities, investors are offered a higher reward for taking on the additional risk that the company may default. 
If an individual account system were to provide for mutual funds, depending on the type of mutual fund allowed, individuals would receive various returns. For instance, a government bond mutual fund may yield a lower return to investors than an equity indexed mutual fund. Overall, the capital markets offer higher market returns only by having investors take on additional risk. Thus, in estimating expected returns for individual account investments, it is important to consider not only the type of asset invested in but also the riskiness of the investment. <3.2.3. Adjusting the Rate of Return for Risk> Higher returns are possible for individuals investing through individual accounts than under the current Social Security program, but only if individuals take on more risk. Individuals should therefore be interested not only in the returns of various assets but also in the risks that have to be incurred to achieve higher returns under an individual account program. The difficulty is how to measure risk and how to adjust rates of return for risk so that investors can compare the returns of various investments. Risk is often considered to be the uncertainty of future rates of return, which in turn is equated with variability. In fact, one of the underlying concepts of risk is inherent volatility or variability. For instance, the variability of equity prices is among the key factors that cause investors to consider the stock market risky. The price at which an individual purchases shares of a company early in the morning is not guaranteed even later in the day. Bond prices also vary because of changing interest rates and inflation. <3.2.3.1. There Are Many Ways to Measure Risk> There are a number of different ways to try to measure variability or risk. All such measures give some estimate of the riskiness of investments. Classic risk measures such as the variance or the standard deviation are often used to measure the risk of an asset. However, these measures are often considered difficult for investors to understand and may not reflect how people perceive risk. For instance, investors do not generally take a symmetrical view of the variability of returns: downward deviations are perceived as economic risks, but upward deviations are regarded positively, as unexpected gains. Furthermore, quantifying uncertainty or risk is usually done using probability distributions. As long as the probability distribution falls symmetrically about the mean, or average--what is known as a normal distribution--the variance and standard deviation are adequate measures of risk. However, to the extent that the probability distributions are asymmetrical, as is the case with the returns from a combination of securities, those measures are not as meaningful in terms of measuring risk. Other ways to measure risk include (1) the value at risk (VAR)--how much the value of a portfolio can decline with a given probability in a given time period--and (2) the beta of a security--the tendency of a security's returns to respond to swings in the broad market. VAR is an approach used by money managers to measure the riskiness of their portfolios. It is an estimate of the maximum amount a firm could lose on a particular portfolio a certain percentage of the time over a particular period of time.
For example, if an investor wanted to put money into a mutual fund and wanted to know the value at risk for the investment over a given time period, the investor could determine the percentage or dollar amount that the investment could lose, e.g., a 2-percent probability that the investor could lose at least $50 of a $1,000 investment over a certain period of time. VAR models construct measures of risk using the volatility of risk factors, such as interest rates or stock indexes, which is helpful for mutual funds that have a wide variety of investments. Measuring the beta is another way to measure risk. In essence, if an investor wanted to know how sensitive a particular asset's return is to market movements, calculating the beta would do so. Beta measures the amount that investors expect the equity price to change for each additional 1-percent change in the market. The lower the beta, the less susceptible the stock's return is to market movements. The higher the beta, the more susceptible the stock's return is to market movements. Thus, the beta would measure the risk that a particular stock contributes to an individual's portfolio. <3.2.3.2. Adjusting for Risk> As previously stated, estimating a return on investments without taking into account the riskiness of the investment is likely to overstate the benefit of investing in that asset. Adjusting returns to account for risk is important because risk-adjusted returns are likely to be lower than unadjusted returns but more comparable across asset classes. There are different ways to adjust returns for risk, but there is no clear best way to do so. The appropriate risk-adjusted measurements depend on what is being evaluated. For instance, in terms of evaluating the returns of mutual funds, various risk-adjusted performance measures could be used. One measure is the Sharpe ratio, which measures the ratio of reward to volatility and is the most commonly used measure for determining the risk-adjusted performance of mutual funds. A high Sharpe ratio means that a mutual fund delivers a high return for the level of volatility of the fund's investments. Thus, if individuals were trying to determine the mutual fund that had the best combination of return for risk, they would choose the fund that had the highest Sharpe ratio. An alternative to the Sharpe ratio is the Modigliani measure, which measures a fund's performance relative to the market. The measure uses a broad-based market index, such as the S&P 500, as a benchmark for risk comparison. In essence, the measure is equivalent to the return a mutual fund would achieve if it had the same risk as a market index. Another measure is one calculated by Morningstar, Incorporated. Unlike the Sharpe ratio, which compares the risk-adjusted performance of any two mutual funds, Morningstar measures the risk-adjusted performance of mutual funds within the same asset class. It assigns ratings to mutual funds on the basis of a fund's risk-adjusted return and risk. Thus, if individuals wanted to know how various mutual funds did within their asset groups, they would look at the Morningstar rating. There are other risk-adjusted measures that are used. However, there is no clear best way to adjust a return for risk, and there is no one risk-adjusted measure that everyone agrees is the correct measure. Many of the measures are complicated and may require more sophistication to understand than could be expected of individual account investors.
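Although the report discusses these measures qualitatively, a small worked example may help make them concrete. The following sketch uses hypothetical annual returns and a hypothetical risk-free rate (not data from this report) to show how the standard deviation, the Sharpe ratio (average excess return divided by volatility), and a crude historical value-at-risk figure might be computed; it is a minimal illustration, not a recommended methodology.

```python
# Minimal sketch: computing the risk measures discussed above for a
# hypothetical series of annual returns. All figures are illustrative only.
import statistics

annual_returns = [0.12, -0.07, 0.21, 0.05, -0.15, 0.18, 0.09]  # hypothetical returns
risk_free_rate = 0.02                                          # hypothetical Treasury yield

mean_return = statistics.mean(annual_returns)
volatility = statistics.stdev(annual_returns)   # sample standard deviation

# Sharpe ratio: average excess return over the risk-free rate per unit of volatility.
sharpe_ratio = (mean_return - risk_free_rate) / volatility

# Crude historical value-at-risk: the return near the worst 10th percentile of the
# observed distribution (with so few observations, simply the worst year).
worst_decile_index = max(0, int(0.10 * len(annual_returns)) - 1)
historical_var = sorted(annual_returns)[worst_decile_index]

print(f"Average annual return:       {mean_return:.1%}")
print(f"Volatility (std. deviation): {volatility:.1%}")
print(f"Sharpe ratio:                {sharpe_ratio:.2f}")
print(f"Approximate 10-percent VaR:  {historical_var:.1%}")
```

A higher Sharpe ratio in this sketch simply means more average excess return per unit of volatility, which is the intuition behind the mutual fund comparisons described above.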
It should be noted, however, that although risk-adjusted rates of return are the appropriate measure for individual account investments, an investor s entire portfolio has a different risk than that of its individual components. Thus, risk-adjusted returns depend fundamentally on how portfolios are managed. <3.3. Comparing Rate of Return From Social Security to Expected Return With Individual Accounts Requires Careful Consideration> Comparing rates of return on Social Security and private market investments has frequently been discussed in evaluating options for reforming Social Security, but comparing the two does not capture all the relevant costs and benefits that reform proposals imply. Such comparisons often do not factor in the costs of disability and survivors insurance when determining a rate of return on Social Security contributions for retirement. Individual accounts would generally increase the degree to which retirement benefits are funded in advance. Today s pay-as-you-go Social Security program largely funds current benefits from current contributions, but those contributions also entitle workers to future benefits. The amount necessary to pay the benefits already accrued by current workers and current beneficiaries is roughly $9 trillion. Any changes that would create individual accounts would require revenues both to deposit in the new accounts for future benefits and to pay for existing benefit promises. Rate of return estimates for such a program should reflect all the contributions and benefits implied by the whole reform package, including the costs of making the transition. Administrative and annuity costs could also affect actual retirement incomes. <3.4. Agency Comments> SSA commented that we needed to clarify that comparisons between the rate of return implicit in the Social Security system and those of individual accounts were problematic for many reasons including the fact that Social Security provides survivors and disability insurance. We have further clarified issues regarding the rate of return comparisons and have referred to our forthcoming report that provides a more detailed discussion on comparing the rate of return implicit in the Social Security system with those of market investments. <4. Enhanced Education is Necessary for an Individual Account Program> Under many of the individual account programs that have been proposed, individual accounts to varying extents would be managed by participants themselves. To operate fairly and efficiently, such a system would have to provide participants with information adequate for their decisionmaking as well as to protect against misinformation that could impair that process. Existing SEC disclosure and antifraud rules and related doctrines provide for the disclosure of information that is material to an investment decision. However, such disclosure alone would not enable participants in an individual account program to understand how best to use such information for purposes of their retirement investment decisions. To provide participants with a clear understanding of the purpose and structure of an individual account program, an enhanced educational program would be necessary. Such an enhanced and broad-based educational effort would have to be undertaken in order to provide individuals with information they need and can readily understand, as well as with tools that can help both improve the decisionmaking process and awareness of the consequences of those decisions. 
Individuals would need education on the benefits of saving in general, the relative risk-return characteristics of particular investments, and how different distribution options can affect their retirement income stream. If a wide variety of choice is offered to individuals so that they could potentially choose less diversified investments, such as individual equities, a more broad-based educational program would be necessary. The wider the variety of choices, and thus the greater the potential risks, offered to individuals under an individual account program--especially a mandatory program--the more broad-based the education will need to be. If fewer, well-diversified choices are provided under an individual account program, the educational effort could be targeted more to the purpose of investing and the potential long-term consequences. It is also likely that some sort of provision, such as a default option--either a default to the defined benefit part of Social Security (staying in the current Social Security program) or to a mandatory allocation--may be needed for those individuals who, regardless of the education provided, will choose not to make investment choices. <4.1. The Significance of Disclosure Rules Would Depend Upon Available Investment Choices> Existing disclosure rules require that material information be provided about a particular instrument and its issuer. Such disclosure would be essential to an individual account program, with some rules having more significance than others, depending on the investment choices offered. For example, if participants were allowed to acquire corporate securities such as stocks and bonds, the disclosure and reporting requirements of the Securities Acts of 1933 and 1934, such as those applicable to the governance, activities, and financial status of the issuer, would be particularly important to participants choosing such instruments. If investment choices were limited to mutual funds, disclosure about the funds would have primary importance, and information about the issuers of the securities owned by the funds would be relatively less significant for participants. In addition, the Employee Retirement Income Security Act of 1974 (ERISA) requires disclosures in connection with pension plans (covered by Title I of ERISA). If products offered by banks and insurance companies were permitted, special disclosure rules would apply. <4.1.1. Disclosures in Connection with Securities and Pension Plans> The Securities Acts of 1933 and 1934 generally require disclosure and reporting of detailed information about an issuer of securities, such as its management, activities, and financial status. The Securities Act of 1933 (1933 Act) primarily focuses upon the disclosure of information in connection with a distribution of securities; the Securities Exchange Act of 1934 (1934 Act) concentrates upon the disclosure of information in connection with trading, transactions, and sales involving securities. The 1933 Act requires the disclosure of information intended to afford potential investors an adequate basis upon which to decide whether or not to purchase a new security and to prevent fraudulent conduct in connection with the offering. This disclosure generally takes place through a registration statement filed with SEC (and made available to the public, except for confidential information) and a related prospectus.
Both documents contain detailed factual information about the issuer and the offering, including statements about the specifics of the offering as well as detailed information about the management, activities, and financial status of the issuer. The 1934 Act, among other things, contains extensive reporting and disclosure requirements for issuers of securities registered under the act. Issuers must file current, annual, and quarterly reports with SEC, and the annual report must be distributed to security holders. The 1934 Act also governs brokers, dealers, and others involved in selling or purchasing securities. The act contains a broad prohibition against fraud in connection with securities transactions that frequently has served as a basis for disclosing to customers an abundance of details about a particular instrument or transaction. ERISA and DOL regulations require the administrator of a plan covered by Title I of ERISA to file certain information about the plan with DOL and distribute it to plan participants and beneficiaries receiving benefits. One of the principal disclosure documents, the summary plan description (SPD), must include information specified in the regulations, which includes details about the structure, administration, and operation of the plan as well as the participant s or beneficiary s benefits and rights under the plan. The SPD must be written in a manner calculated to be understood by the average plan participant and must be sufficiently comprehensive to apprise the plan s participants and beneficiaries of their rights and obligations under the plan. Moreover, in fulfilling these requirements the plan administrator is to take into account such factors as the level of comprehension and education of typical participants in the plan and the complexity of the plan. In addition to general reporting and disclosure requirements, DOL regulations contain special disclosure rules for participant-directed accounts. A participant-directed account plan is one that permits participants and beneficiaries to direct the investment of assets in their individual accounts. The special rules arise in the connection with the obligations of a fiduciary to a plan that permits such accounts. Under DOL regulations, a fiduciary can avoid liability for any loss arising from the participant s exercise of control over account assets, provided that the participant has the opportunity to exercise control over the account assets and may choose, from a broad range of investment alternatives, the manner in which assets are invested. The regulations further provide that a participant has the opportunity to exercise control only if, among other things, the participant is provided or can obtain information sufficient for him or her to make informed investment decisions. This information includes (a) a description of investment alternatives and associated descriptions of the investment objective, risk and return characteristics of each such alternative; (b) information about designated investment managers; (c) an explanation of when and how to make investment instructions and any restrictions on when a participant can change investments; and (d) a statement of fees that may be charged to an account when a participant changes investment options or buys and sells investments. <4.1.2. Disclosure in Connection With Mutual Fund Shares> The information that the 1933 and 1934 Acts require issuers to disclose pertains to details about the issuers of securities and the securities themselves. 
Such information is significant to a person investing in a specific issuer. For the purchaser of shares in an investment company, such as a mutual fund, which is by far the most prevalent form of investment company, information about the company itself, rather than individual issuers, is most significant. Mutual funds are subject to the Investment Company Act of 1940, which deals with the registration, formation, and operation of investment companies, as well as provisions of the 1933 and 1934 Acts governing disclosure and prohibiting fraud. Disclosure about the fund, such as information concerning its investment strategies and its management, is provided in the registration statement filed with SEC; the prospectus or an alternative, less detailed document known as a "profile"; and periodic reports filed with the Commission and distributed to shareholders. <4.1.3. Disclosure Concerning Certain Products Offered by Depository Institutions and Insurance Companies> The expansion of products offered by depository institutions (primarily federally insured banks and thrifts and their subsidiaries or affiliates) and insurance companies carries with it the potential for confusion about the nature and risk of investment products offered by such institutions. For example, bank sales of nondeposit instruments, such as mutual fund shares and variable annuities, could lead an investor to conclude that such instruments are federally insured bank products. Investment products sold by insurance companies, such as certain variable annuities and equity-indexed agreements, might be viewed as traditional insurance products, under which the insurer assumes the payment risk. If such products are securities, they are subject to the requirements of federal and state securities laws. The activities of institutions in connection with the products would be subject to regulation under the securities laws as well as regulation by their supervising agencies. <4.1.3.1. Nondeposit Bank Products> The federal bank regulators have promulgated rules, guidelines, and policies containing standards for disclosure in connection with a banking institution's involvement in sales of nondeposit instruments such as securities. These regulators issued an Interagency Statement on Retail Sales of Non-Deposit Investment Products (the "Interagency Statement"), together with subsequent statements, that focuses on issues specifically pertaining to the retail sale of investment products to customers on depository institution premises. Among other things, the standards seek to prevent customer confusion over whether such products are FDIC-insured, primarily through disclosure and separation of sales of investment products from other banking activities. New products being offered by insurance companies can also confuse investors about whether such a product is insurance (the insurer accepts the repayment risk) or a security (the purchaser of the product faces some or all repayment risk). States typically regulate disclosure about insurance products by prohibiting unfair, deceptive, or misleading statements about a product. However, to the extent such instruments are securities, their purchase and sale are subject to federal and state securities laws.
<4.1.4. Initiatives to Facilitate Understanding of Information> To address concerns about the effectiveness of disclosures regarding investing, particularly with respect to mutual funds, SEC and some states have established programs to provide for disclosing information to investors in a more understandable way. SEC's plain English program is an example. The Commission instituted the program because much of the disclosure provided in prospectuses and other documents often is complex, legalistic, and too specialized for investors to understand. Under this program, the Commission revised its rule for the presentation of information in a prospectus to require that the prospectus comply with plain English writing principles listed in the regulation. SEC also amended its Form N-1A, the registration form used by mutual funds, to provide for the use of plain English principles and simplified descriptions of information essential to an investor's evaluation of the fund. In March 1998, SEC adopted a rule permitting mutual funds to offer investors a new disclosure document called a "profile." The document summarizes key information about the fund, including its investment strategies, risks, performance, and fees, in a concise, standardized format. A fund offering a profile can give investors a choice about the amount of information they wish to consider before making a decision about investing in the fund. Investors have the option of purchasing the fund's shares on the basis of the profile, in which case they are to receive the fund's prospectus along with the purchase confirmation. Among other things, the new SEC rules are designed to reduce the complexity of information provided to mutual fund customers and the potential for confusion that sometimes accompanies such information. They are an attempt to make the disclosure of material information more useful to those who invest in mutual fund securities. <4.2. Enhanced Education Is Necessary for an Individual Account Program> Whether an individual account program is mandatory or voluntary, giving millions of working Americans the responsibility for investing part of their Social Security payroll taxes on their own requires enhanced education. Social Security has long been the foundation of the nation's retirement income system, providing a safety net and retirement income for millions of Americans. Introducing an individual account program would change the nature of the current Social Security program and would require increased education if people are to understand the individual account program and what may be required of them. Although education would be necessary regardless of whether the program was voluntary or mandatory, the government would have a special responsibility under a mandatory program to provide individuals with the basic investment knowledge that they would need in order to make informed investment decisions affecting their retirement. The extent to which enhanced education would be necessary would depend upon the available investment choices and the fees and expenses associated with an individual account program. An individual account program that offers many investment choices--especially one that is mandatory--would likely require a substantial amount of education because the wider the options provided to an individual, the greater the chances are that the individual could lose money.
If fewer, well-diversified options are offered under an individual account program, the individual has fewer risk factors to consider and the education can be more targeted. It would also be important to educate individuals about how to interpret the fees associated with individual account investments and how fees would affect their account balances. <4.2.1. Enhanced Education Is Important for All Individuals> The Social Security program includes workers from all levels of income, both those who currently invest in equity and bond markets and those who do not. It is unlikely that a one-size-fits-all educational effort would be appropriate for an individual account program. Because a mandatory individual account program would require everyone to participate, including those who do not currently make investment decisions, educational efforts would be especially crucial and would need to reach all individuals. <4.2.1.1. Enhanced Education Is Important for Those Who Do Not Currently Make Investment Decisions> Large segments of the working population do not currently make investment decisions for various reasons. For instance, some people do not believe that they have enough money to save, or at least to save in any vehicle other than a bank account. Others do not know the benefits of investing. Lastly, there are those who do not appear to understand the benefits of saving and investing or the necessity of doing so for retirement. Whatever the reason, millions of people have never made investment decisions. Investor education is especially important for individuals who are unfamiliar with making investment choices, including low-income and less well-educated individuals who may have limited investing experience. Thus, one of the primary areas of enhanced education under an individual account program would be to educate those who do not know the basics of saving or diversification, especially if the individual account program is mandatory. Those individuals and households who do not currently make investment decisions, but rely on Social Security as their primary source of retirement income, are likely to be the ones who are most affected by a mandatory individual account program and thus most in need of education. <4.2.1.2. Current Initiatives Focus on Saving, Fraud, and Retirement Income> Congress and various agencies and organizations have instituted programs to educate people about the benefits of saving and investing. In the Savings Are Vital to Everyone's Retirement Act of 1997, Congress mandated an education and outreach program to promote retirement income savings by the public. The act also required the Secretary of Labor, in consultation with other federal agencies selected by the President, to plan and conduct a National Summit on Retirement Savings. As part of this mandate, the act required the Secretary to bring together retirement and investment professionals, Members of Congress, state and local officials, and others to discuss how to educate the public--employers and individuals--about the importance of saving and about the tools available to enable individuals to retire and remain financially independent. Pursuant to this mandate, DOL sponsored the National Summit in 1998. Other efforts have been made to reach out to investors to educate them about how to protect themselves against fraud. SEC has realized that an important part of its role in combating fraud is to educate the public about what to be aware of and how to avoid being taken advantage of.
If investors are adequately informed about the risks associated with potential securities frauds, then they will be less likely to fall victim to scams. SEC has implemented several programs to advise the investing public about potential frauds. For instance, SEC has issued numerous pamphlets about what types of questions investors should ask about investment products and the people who sell them. Additionally, SEC has held local town meetings across the United States to discuss investment risks. It also coordinates the Facts on Savings and Investing Campaign with federal, state, and international securities regulators. SEC officials said that in order to have a successful education program, it is necessary to determine what people do and do not know. This has entailed determining people's level of literacy and math knowledge in order to design a program that could provide education for individuals with various levels of investment knowledge. DOL's Pension and Welfare Benefits Administration has several educational outreach efforts for encouraging employers to establish retirement programs and employees to save for retirement. The basic program is a joint effort with a wide range of private sector partners, including the American Savings Education Council, the Employee Benefit Research Institute, banks, insurance companies, consumer groups, retiree groups, participant rights groups, mutual funds, and other large companies. This joint effort was designed to provide very basic information to individuals and employers about the different types of savings vehicles available under the law and to encourage the private sector to provide employees with models of pension programs. The educational program tries to target special groups whose pension coverage is low, including such groups as women and minorities as well as small businesses; only about one-fifth of small businesses offer pension plans to their employees. DOL has issued numerous pamphlets on what individuals should know about their pension rights and what businesses can do to start pension plans for their employees. For instance, DOL regularly uses the Small Business Administration's newsletters to encourage members to establish pension plans and has developed a Web site for small businesses to give them information on various pension plan options, depending on how much each business can afford to contribute to a pension fund. These current programs have a limited ability to reach the overall population. One clear constraint is the low level of resources, including funding, directed to investor education. Another limitation is that they are targeted to circumscribed audiences, such as companies that do not have retirement programs as opposed to individuals who do not invest. Furthermore, most efforts reach those individuals who choose to take it upon themselves to find out what they need to do to save more or to learn how to make better investment decisions. Thus, even with the various targeted efforts undertaken, large segments of the population are still not being reached. <4.2.1.3. Education Is Also Important for Those Individuals Who Currently Make Investment Decisions> Numerous studies have looked at how well individuals who are currently investing understand investments and the markets.
On the basis of those studies, it is clear that among those who save through their company's retirement programs or on their own, there are large percentages of the investing population who do not fully understand what they are doing. For instance, one study found that only a little more than a third of American workers have tried to calculate how much money they would need to retire comfortably. Another study found that 47 percent of 401(k) plan participants believe that stocks are components of a money market fund, and 55 percent of those surveyed thought that they could not lose money in government bond funds. Another study on the financial literacy of mutual fund investors found that less than half of all investors correctly understood the purpose of diversification. Further, SEC reported that over half of all Americans do not know the difference between a stock and a bond, and only 16 percent say they have a clear understanding of what an IRA is. Although individuals who currently make investment decisions are likely to have some familiarity with investing, education would also be important for them because of their increased responsibility under an individual account program. Furthermore, according to the studies cited above, there would be a real need for enhanced education about such topics as investing, risk and return, and diversification. As the Chairman of the SEC has said, there is a wide gap between financial knowledge and financial responsibilities. Closing that knowledge gap is imperative under an individual account program. <4.2.2. Enhanced Education Is Important for an Individual Account Program> Moving to an individual account program would require a thorough educational effort for everyone to understand the program and how it differs from the current Social Security program. The government has much more responsibility for educating individuals under a mandatory program because people would effectively be forced by the government to save and to make decisions about what to do with those savings, as well as to bear the consequences of those decisions. Even with a default option for those who do not choose to participate, the government needs to explain why the option was provided and what its implications are. Many people do not understand the current Social Security program, how their contributions are measured, and how their benefits are computed, even though the program is over 60 years old. Yet millions of individuals rely on the program as their sole source of retirement income. In order to increase people's understanding of Social Security, SSA has implemented various efforts to educate people. Such efforts have included providing a 1-800 number for recipients to ask questions, conducting a public education campaign, and providing educational packages to individuals. Despite these efforts, SSA officials said that people still have a hard time understanding the program. Implementing an individual account program is likely to require enhanced education not only about individual accounts but also about how an individual account program would change the nature of Social Security and what that means for the individual. At a minimum, under an individual account program, educational efforts would be needed to help people understand how individual accounts would work and how the accounts would affect their retirement income security.
Many proposals do not specify what entity would be responsible for the public education program that would be needed for an individual account program. On the basis of the type of information experts in employee education say is needed, education about an individual account program could include the following information: Goals of the program: individuals need to know what the goals of the program are and why they are participating. Responsibilities: individuals need to know what their responsibilities are under the program. Retirement income: individuals need to know what their retirement income needs are and how those needs would be affected under an individual account program. Materials: individuals need materials that convey the message of the program and what will be required of them. <4.2.3. Amount of Education Necessary Is Directly Linked to the Choices Offered> The amount of education that would be necessary under an individual account program depends on the range and type of investment choices offered to individuals. There are basic issues that individuals will need to be educated about regardless of how the program is structured. Such issues include (1) the choices they have to make; (2) the consequences of those choices; (3) what the investment options are, such as stocks, bonds, and indexed mutual funds; (4) the rates of return of different investment vehicles; and (5) the risks of investment vehicles. However, as a wider variety of choice is offered to individuals, more education beyond the basics would be necessary because broader issues would need to be considered. With more variety of choice, investors would need to choose among various assets, which requires the investor to have certain skills to evaluate the risks and his or her own tolerance for risk. If the structure allows for an even broader variety of choices, such as real estate, the educational requirements would mount. When choices are limited to a few well-diversified options (such as a few indexed mutual funds), many decisions are made by those managing the funds or by rules governing the funds (such as what an indexed mutual fund can invest in). If the investor has the option of frequently moving funds from one investment to another, the educational effort needs to include analytical tools to aid such decisions and advice about the importance of a long-term horizon. Thus, the fewer, well-diversified choices offered, the less risk to the individual and the more targeted the education could be. A variety of choices may benefit people in that it offers them a wider selection from which to choose, allowing them to choose the option that is in line with their preferences. However, it also increases their risk in that they could potentially choose less diversified investments, such as individual equities, that could result in financial loss. Furthermore, the wider the variety of choice offered, the greater the need for people to consider other issues. For instance, because offering a wide variety of investment options is likely to promote competition among financial institutions to provide a range of investment vehicles, investors would need to be educated about fraud and how to avoid it. When Great Britain adopted an individual account program, individuals purchased unsuitable investments because of high-pressure sales tactics, resulting in individuals losing billions of dollars.
The Chairman of the SEC has stated that allowing a broad range of investment options under individual accounts provides opportunities for fraud and sales practice abuses. Thus, education about fraud becomes important. For example, an investor would need to know what to look for, what types of questions to ask, what type of advice is biased, what the investor's rights are, and what the law requires. When investment options are limited, the chances of fraud are reduced. Moreover, the wider the variety of choice that is offered to individuals, the more they will need education about the value of diversification and the possible consequences of not having a diversified portfolio. If choices are limited to indexed mutual funds, less education about diversification would be needed because indexed funds are by nature diversified. Education is also necessary for understanding risks and the various returns that are likely with different investment options. With a wider variety of investment options, understanding risk and being able to manage that risk become important. It is important to explain to people that historical returns may not always be good predictors of future returns, especially when risks are ignored. As stated in chapter 3, measuring risk and comparing risk-adjusted returns can be a difficult process. Furthermore, understanding the rates of return of various options and picking the appropriate investment vehicles become more difficult as more variety is offered. Individuals would need more expertise to understand differences in the rates of return of equities, bonds, equity mutual funds, indexed funds, and so on. <4.2.3.1. Fewer Investment Choices, Less Education Needed> If the program offers fewer, well-diversified choices, limits would be placed on the ways that people could lose money. The educational effort could, therefore, focus more on getting individuals to be informed participants in the program. Educational issues that become relevant when individuals are offered numerous options are of less concern when they are offered fewer, well-diversified options. With fewer, well-diversified investment choices, the educational effort could be more targeted to the purpose of retirement savings, e.g., educating people about how much they would need to save and invest for retirement or determining their goals for retirement. Other issues, such as compounding (the calculation of interest earned on a daily, quarterly, semiannual, or annual basis) and the impact of inflation on returns, are issues that individuals fundamentally need to understand. For example, with compound interest, individuals earn interest on the money they save and on the interest that money earns; if they invested $1,000 at 3 percent interest, they could double their money in about 24 years, but at 4 percent interest they could double it in about 18 years. With inflation, or rising prices, the money that individuals earn on their investments would potentially be worth less and less as prices rose. In addition, seemingly small annual fees can eat away at the accumulated value. Offering fewer, more well-diversified options enables the educational effort to be targeted to basic issues that would be helpful for individuals to understand in order to save for retirement.
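The doubling-time figures cited above reflect the familiar rule-of-72 approximation; exact annual compounding gives roughly 23 and 18 years. The sketch below, which uses hypothetical return, fee, and inflation figures rather than estimates from this report, illustrates the compounding, inflation, and fee effects just described.

```python
# Minimal sketch of the compounding, inflation, and fee effects described above.
# All rates, fees, and horizons are hypothetical and purely illustrative.
import math

def years_to_double(annual_rate: float) -> float:
    """Exact doubling time under annual compounding."""
    return math.log(2) / math.log(1 + annual_rate)

print(f"3 percent interest: money doubles in about {years_to_double(0.03):.0f} years")
print(f"4 percent interest: money doubles in about {years_to_double(0.04):.0f} years")

def real_value(principal: float, annual_return: float, annual_fee: float,
               inflation: float, years: int) -> float:
    """Inflation-adjusted value after fees (the fee is treated as a simple
    deduction from the annual return, a common simplification)."""
    net_growth = (1 + annual_return - annual_fee) / (1 + inflation)
    return principal * net_growth ** years

no_fee = real_value(1000, 0.05, 0.00, 0.02, 30)
with_fee = real_value(1000, 0.05, 0.01, 0.02, 30)
print(f"$1,000 after 30 years in real terms, no fee:       ${no_fee:,.0f}")
print(f"$1,000 after 30 years in real terms, 1 percent fee: ${with_fee:,.0f}")
```

In this hypothetical case, a 1 percent annual fee reduces the inflation-adjusted balance after 30 years by roughly a quarter, which is the sense in which seemingly small fees can eat away at accumulated value.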
<4.2.4. Default Option> Despite current efforts to increase people's awareness of the need to save more, many people are still not saving or making the retirement choices they need to make, effectively relying on Social Security to be their primary source of retirement income. It is unlikely that moving to individual accounts will result in active participation by all individuals. Thus, various officials have suggested that a default option be provided for those individuals who, regardless of educational effort, will not make investment choices. Default options could include a default to the defined benefit portion of Social Security (staying in the current Social Security program) or to some type of mandatory allocation. One example would be an investment vehicle in which, depending on the age of the individual, certain portions of the investment could be in equities and certain portions in bonds. The portion in bonds would increase with the age of the individual. Alternatively, the default option could be invested totally in Treasuries. As with any option, a default option with less risk is also likely to provide lower returns.
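The report does not prescribe a formula for such an age-based default. The sketch below uses the generic "bond share roughly equals age" rule of thumb, an assumption made purely for illustration, to show how a default allocation that shifts from equities toward bonds with age might look.

```python
# Illustrative age-based default allocation of the kind described above, in which
# the bond portion rises with age. The "bond share equals age" rule used here is a
# generic rule of thumb, not a formula proposed in this report.
def default_allocation(age: int) -> dict:
    bond_share = min(max(age, 0), 100) / 100.0
    return {"bonds": bond_share, "equities": 1.0 - bond_share}

for age in (25, 45, 65):
    mix = default_allocation(age)
    print(f"Age {age}: {mix['equities']:.0%} equities / {mix['bonds']:.0%} bonds")
```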
Why GAO Did This Study Pursuant to a congressional request, GAO provided information on the issues associated with individual social security accounts, focusing on how such accounts could affect: (1) private capital and annuities markets as well as national savings; (2) potential returns and risks to individuals; and (3) the disclosure and educational efforts needed to inform the public about such a program. What GAO Found GAO noted that: (1) individual investment accounts could affect the capital markets in several ways; (2) as a source of funds for the accounts, most proposals use either the cash collected from social security taxes or federal general revenues; (3) as a result, the primary capital market effect is a purely financial one: borrowing in the Treasury debt market to provide funding for investment in private debt and equity markets; (4) although the annual flows are likely to be sizeable, both the private debt and equity markets should be able to absorb the inflow without significant long-term disruption; (5) there could eventually be a significant increase in the amount of new funds flowing into the annuities market; (6) however, the magnitude of annuity purchases is likely to build gradually over time as more retirees build larger balances, allowing the market sufficient time to adjust; (7) individual account proposals could also affect the level of financial resources available for private investment by increasing or decreasing national savings; (8) the extent to which individual accounts affect national savings will depend on how they are financed, the structure of the program, and any behavioral responses of businesses and individuals; (9) national savings is more likely to increase if: (a) the government funds would have been spent but instead are not; (b) the program is mandatory and prohibits pre-retirement distributions; and (c) households do not fully adjust their retirement saving; (10) to the extent that households use the opportunities offered by an individual account program to invest in private equities and debt rather than Treasury securities, they could increase both the returns they receive and the risks they face compared to the Social Security program; (11) although asset diversification offers mitigation against certain risks, the returns that individuals receive would depend on and vary with their investment choices and the performance of the private debt and equity markets; (12) most advocates of individual accounts state that the expected future returns on private investments would be much higher for individuals than the implicit return available under the Social Security program; (13) some argue that historical returns may not be a good predictor of future returns; and (14) to provide participants with a clear understanding of the purpose and structure of an individual account program, an enhanced educational program would be necessary.
<1. Background> The Forest Service, within the U.S. Department of Agriculture, manages the 192-million-acre national forest system with its 155 national forests. The national forests generate receipts from a variety of resources, including recreation, grazing, and minerals; however, timber sale receipts have traditionally generated more than 90 percent of the total receipts. For example, in fiscal year 1996, timber sale receipts totaled about $576 million of the Forest Service's $638 million in receipts from all resources. Receipts from all resources, except timber, are deposited directly in the National Forest Fund (NFF), which is a receipts-holding account from which the Forest Service's obligations are distributed. For timber sale receipts, the Forest Service first distributes a portion of the receipts to two funds that are used for various timber sale activities, such as reforestation or preparing and administering future salvage sales. The remaining timber sale receipts are deposited in the NFF and combined with the receipts from other resources. Each forest has its own sub-NFF account that is accumulated at the regional level, and all regional NFF accounts are accumulated to develop the national NFF. At the end of the fiscal year, any amount not distributed from the NFF is deposited in the General Fund of the U.S. Treasury. (App. I provides additional information about the NFF and its receipts and distributions for fiscal years 1990 through 1996 and the Forest Service's projections for fiscal year 1997.) <2. The Decline in the Timber Harvests, Timber Receipts, and Returns to the Treasury> For fiscal years 1990 through 1996, the key indicators of the timber program--harvested volumes, timber receipts, and amounts available for return to the Treasury--decreased dramatically. As can be seen in table 1, for timber sales--the largest component of the Forest Service's receipts--harvested volumes decreased by 65 percent, receipts decreased by 55 percent, and the amounts available for return to the Treasury decreased by 86 percent. One of the reasons for the decline in the level of harvests was the listing of the northern spotted owl as a threatened species, which virtually halted all timber sales in the Pacific Northwest. The listing was followed by a decline in timber receipts and returns to the Treasury. However, the decline in the amounts available for return to the Treasury was even more severe because the Forest Service chose to make the payments for the spotted owl guarantee from the NFF during fiscal years 1994 and 1995. In fiscal year 1995, the amount available for return to the Treasury from the timber program dropped to a low of $1.5 million. In fiscal year 1996, the NFF lacked sufficient funds--by a deficit of $77.6 million--to meet its obligations, including the spotted owl payments. Therefore, the Forest Service exercised its authority to use the General Fund Appropriation, Northern Spotted Owl Guarantee, provided for by P.L. 103-66, as amended. <3. Actions Taken by the Forest Service to Maintain a Positive NFF Balance in Fiscal Year 1996> In fiscal year 1996, the Forest Service was faced with having insufficient funds available in the NFF to make its payments to the states--including the spotted owl guarantee--and to meet its other required obligations. The Forest Service took two actions to remedy this problem. First, the Forest Service transferred to the NFF a total of $56.1 million originally intended to be deposited in the Salvage Sale Fund and the Knutson-Vandenberg Fund.
However, even with this additional money, a shortfall of $17.8 million still remained in the NFF. The Forest Service's next action was to request the appropriation of about $135 million for the 1996 payments for the spotted owl guarantee authorized by the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66, as amended). <3.1. Forest Service Transferred $56.1 Million to the NFF> The Forest Service's first analysis of the estimated receipts for fiscal year 1996, performed in May 1996, showed that the NFF's anticipated receipts were dangerously low. The analysis, generally performed to estimate the payments to the states, resulted in the Forest Service's beginning a series of internal discussions to identify why the receipts were so low. While the Forest Service estimated that it would be able to cover the payments to the states, it also estimated that only $33.6 million would be available in the NFF to cover all other needs. Even though the Forest Service was aware as early as May 1996 that the NFF was projected to be dangerously low at the end of 1996, and informally discussed the potential shortage internally between April and August, it did not formally initiate procedures to activate the spotted owl guarantee appropriation until September 1996. Instead, on August 27, 1996, the Forest Service instructed its regions to transfer to the NFF, for the remainder of the fiscal year, funds that had been originally intended for deposit in the Salvage Sale Fund and the Knutson-Vandenberg Fund in order to make up for the shortfall. The memorandum pointed out that the problem was occurring for several reasons, including the reduction in total receipts, the requirement for the spotted owl guarantee payments to some states, the setting aside of funding for tripartite land exchanges by the national forests covered by the spotted owl guarantee, and the deposit of receipts in both the Salvage Sale Fund and the Knutson-Vandenberg Fund. The memorandum also stated that the regions needed to review the balances in their NFF, Salvage Sale Fund, and Knutson-Vandenberg Fund accounts and stressed that if the regions had a deficit in their NFF accounts, it should be offset by a transfer of funds from one of the other accounts. These adjustments resulted in a total of $56.1 million being transferred to the NFF--$35.6 million that would have been deposited in the Salvage Sale Fund and $20.5 million that would have been deposited in the Knutson-Vandenberg Fund. According to the Forest Service's records, the regions used a variety of approaches to make these accounting adjustments. While most regions made the adjustments at the regional level, some were made at the forest level, and one region was granted permission to make no adjustments at all. Although the regions and forests were told that the August and September accounting adjustments would be reversed, thus allowing them to deposit the funds into the Salvage Sale Fund and the Knutson-Vandenberg Fund as originally intended, this was not possible because the balance in the NFF is unavailable for disbursement after the close of the fiscal year. These funds must be returned to the Treasury, and therefore, the Forest Service's Salvage Sale Fund and the Knutson-Vandenberg Fund lost this amount for fiscal year 1996. <3.2. Forest Service Received $135 Million Spotted Owl Guarantee Appropriation> According to Forest Service officials, several situations arose after the initial analysis of the NFF shortfall.
In early summer, the Pacific Northwest Region sharply curtailed its timber harvesting program because of the extensive fire season it was experiencing, which reduced the estimated receipts from that region. In addition, several internal deliberations raised concerns about the budget implications of requesting the spotted owl guarantee appropriation, which necessitated additional discussions with congressional committees. Also, according to Forest Service officials, external concerns arose about the interpretation of the statutory amounts allowed under the legislation--that is, Office of Management and Budget (OMB) attorneys questioned whether the Forest Service was entitled to the entire spotted owl guarantee or just the shortfall. Because of the uncertainty about whether the Forest Service would receive the appropriation, the Forest Service needed to assure the U.S. Department of Agriculture that all external parties would agree to the request before it could be submitted. Thus, in early September 1996, the Forest Service started working with OMB to obtain its concurrence with the request for the spotted owl guarantee appropriation from the Treasury because of the $17.8 million shortfall in the NFF. In a letter dated September 19, 1996, the Forest Service requested that the Treasury provide the spotted owl guarantee appropriation for fiscal year 1996 of $135 million, as authorized by P.L. 103-66, as amended. In its request, the Forest Service stated that its national forest receipts had declined significantly in fiscal year 1996 and would not be sufficient to cover the full payments due the states, including the spotted owl guarantee. On October 3, 1996, the Treasury advised the Forest Service that while the Forest Service had the authority to obtain the spotted owl guarantee appropriation, funds could not be deposited directly into the NFF as requested and that the request must be resubmitted for a new General Fund expenditure account. Five weeks later, on November 7, 1996, the Forest Service resubmitted its request to the Treasury for a new General Fund expenditure account entitled "Payments to the States, Northern Spotted Owl Guarantee, Forest Service." According to a Forest Service official, this delay in resubmitting the request to the Treasury resulted from the higher-priority tasks of year-end closings. Because it was assured that it would receive the appropriation and that the moneys for the fiscal year 1996 payments would be received in fiscal year 1997, the Forest Service considered the year-end closings a higher priority. On November 26, 1996, the Treasury, with the concurrence of OMB, approved the request and provided a warrant of $135 million to the Forest Service to make the spotted owl guarantee payments. Because the deficit in the NFF was only $17.8 million, when the Forest Service placed the $135 million into the NFF, it created a balance of $115.9 million after final adjustments. Forest Service officials told us that they would return this amount to the Treasury; however, as of August 12, 1997, the Forest Service still retained the money in the NFF. <3.3. Poor Financial Management Contributed to the NFF Shortfall in Fiscal Year 1996> Our review of the fiscal year 1996 timber balances in the NFF revealed that many forests--especially in the Pacific Northwest Region--had negative year-end balances in their NFF accounts.
According to the Forest Service's records, the negative balances at the forest level resulted when these forests transferred funds from the NFF to the Salvage Sale Fund and the Knutson-Vandenberg Fund during the year, even when sufficient receipts had not been received on the particular sale in the current fiscal year. According to a Forest Service official, the forests and regions were not aware that the NFF is closed out annually. The Salvage Sale Fund and the Knutson-Vandenberg Fund, however, remain open. A negative balance in the NFF is, in effect, like writing a check without any money in the bank. Forests with negative NFF balances were relying on other forests having positive balances large enough to offset their negative amounts. For example, while 10 of the 19 forests in the Pacific Northwest Region had negative balances totaling about $37 million, the region as a whole had a positive balance of about $24 million. While we do not know the extent of all of these types of adjustments nationwide, we have reason to believe that their total amount would exceed $37 million. However, even if the amount were only $37 million, it would still mean, in effect, that the 10 forests in the Pacific Northwest Region deposited nothing in the NFF for the entire fiscal year. We believe that these adjustments contributed to the overall shortfall in the NFF and reflect a lack of sound financial management by the Forest Service. It is our view that such adjustments, if permitted, should be limited to the current year's receipts. <4. Unauthorized Use of the NFF to Make Spotted Owl Guarantee Payments in Fiscal Years 1994 and 1995> The Forest Service used the NFF in fiscal years 1994 and 1995 for the required spotted owl guarantee payments to certain counties in California, Oregon, and Washington. This was an unauthorized use of the fund. Instead, the Forest Service was required to use the appropriations specifically made available by the Congress in the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66, as amended) for the spotted owl guarantee and should continue to use this appropriation until fiscal year 2003, when it expires. The Forest Service is required to pay the states 25 percent of the gross receipts earned on national forests for use by the counties in which the receipts were earned. For specific counties in California, Oregon, and Washington, the listing of the northern spotted owl as a threatened species accounted for a substantial drop in the size of timber harvests and, therefore, a substantial drop in the receipts that the counties would have received. To reduce this fiscal impact, the Congress included the safety net spotted owl guarantee legislation in the yearly appropriations for fiscal years 1991, 1992, and 1993, and provided that the payments to the states be made out of the NFF, an indefinite appropriation. The Omnibus Budget Reconciliation Act of 1993 provided appropriations to make such payments to these states for fiscal years 1994 through 2003 and established the formulas for calculating the payments. The Forest Service did not use this authority in 1994 and 1995; rather, it elected to make the spotted owl guarantee payments from the NFF as it made its normal payments to the states. The Forest Service chose this method of payment because ample receipts were available in the NFF, which, if not used for the payment, would have been returned to the Treasury.
The Forest Service also told us that its decision not to use the spotted owl appropriation was articulated in its budget explanatory notes approved by OMB and submitted to the House and Senate Committees on Appropriations. The Forest Service should have used the spotted owl appropriation rather than the NFF to make the spotted owl guarantee payments for fiscal years 1994 and 1995. This specific appropriation was enacted in lieu of the Congress's prior practice in fiscal years 1991, 1992, and 1993 of providing annual appropriations from the Forest Service's receipts for this purpose. Using the specific appropriation is in keeping with 31 U.S.C. 1301(a), which provides that public funds may be used only for the purpose or purposes for which they were appropriated. This provision prohibits charging authorized items to the wrong appropriation and unauthorized items to any appropriation. Moreover, the Forest Service's disclosure in its budget submission to the Congress is not a substitute for legislation and, therefore, did not authorize continued payments from the NFF. <5. The Forest Service's Actions in Fiscal Year 1997 to Improve the NFF's Management> On January 29, 1997, the Deputy Chief, National Forest System, issued initial guidance to the regions on the actions they should take in the short term and discussed the long-term actions needed to manage these funds more effectively. In the short term, the regions were asked to implement a series of distribution priorities for timber sale receipts to ensure that funds are available to make the payments to the states and to meet other obligations, as well as to support critical elements of the reforestation and salvage sale programs. The guidance also required that each region initiate a sale review process to ensure that the trust funds and timber sale accounts are being managed in accordance with these priorities. According to Forest Management and Financial Management officials, the intent of the guidance was not to dictate a specific priority or action for each individual timber sale. Rather, the guidance was intended to establish a framework for managing overall receipts and to make the regions and the forests aware of their obligations and of the need to manage their programs to meet those obligations. For the long term, the January 29, 1997, guidance pointed out that solutions to the problem would require changes in both work processes and patterns of behavior and that effective controls would also require changes in accounting procedures. The Deputy's memorandum concluded that actions were clearly needed at all levels to tackle the problem and that, at the national level, an improved process is needed for program-level decisions to cover the required payments. To make progress in these areas, the Deputy said that he would appoint a task force in early 1997 to focus on long-term solutions that would ensure that sufficient money is available to make the 25-percent payments to the states. We contacted each of the regions to gain an understanding of how they were implementing the short-term actions discussed in the January 29, 1997, guidance. All of the regions told us that they would manage the timber receipts and corresponding deposits to the NFF from a forest or regional perspective rather than on a sale-by-sale basis. Most of the regions have instituted monitoring procedures, such as developing a spreadsheet showing projected total receipts and balances for each forest's NFF after required obligations are met.
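To make the nature of this monitoring concrete, the following is a minimal sketch of the kind of forest-level projection such a spreadsheet might contain. The forest names, dollar figures, and the flat 25-percent payment calculation are simplifying assumptions for illustration only; they are not Forest Service data, and the actual distribution rules are more detailed than shown here.

```python
# Illustrative sketch only. The forests, dollar amounts, and the flat
# 25-percent payment rule below are assumptions for illustration; they are
# not Forest Service data or a full statement of the distribution rules.

STATE_PAYMENT_RATE = 0.25  # 25-percent payments to the states

# Hypothetical projections for one region, in millions of dollars:
# (forest, projected receipts, other required NFF obligations)
forests = [
    ("Forest A", 12.0, 4.5),
    ("Forest B", 3.2, 2.8),
    ("Forest C", 0.9, 1.6),
]

def projected_balance(receipts, other_obligations):
    """Projected year-end NFF balance after the 25-percent payment and other obligations."""
    return receipts - (STATE_PAYMENT_RATE * receipts) - other_obligations

region_total = 0.0
for name, receipts, obligations in forests:
    balance = projected_balance(receipts, obligations)
    region_total += balance
    status = "projected shortfall" if balance < 0 else "ok"
    print(f"{name}: {balance:+.2f} million ({status})")

print(f"Region total: {region_total:+.2f} million")
```

A projection of this kind also makes visible the netting problem described above: in this hypothetical example, the region as a whole shows a positive total even though two of its forests project shortfalls.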
However, four regions said that because they had not experienced any problems in the past, they had instituted no special procedures. On May 2, 1997, the Forest Service provided additional guidance to the regions on how to correct some of the accounting adjustments made in August and September 1996. As pointed out earlier, these adjustments created a multitude of problems. For example, some of the regional and forest adjustments resulted in overpayments to the states of about $730,000 in fiscal year 1996. According to the Forest Service, these overpayments will be offset against the states' fiscal year 1997 payments. In addition, this guidance formally advised the regions that the amounts transferred to the NFF in fiscal year 1996 that had originally been intended for deposit in the Salvage Sale Fund and the Knutson-Vandenberg Fund would not be returned to each forest but instead would be returned to the Treasury. However, the guidance permitted each region to recover these funds out of fiscal year 1997 receipts to the extent that the region was able to meet all of its NFF requirements. In other words, if a forest earned fiscal year 1997 receipts beyond those needed for the payments to the states and for other NFF obligations (receipts that normally would have been deposited in the NFF), the forest could deposit the excess into the Salvage Sale Fund and the Knutson-Vandenberg Fund, up to the amount of the funds originally intended for those accounts that it had transferred to the NFF in fiscal year 1996. Forest Service officials told us that the regions had sufficient receipts in fiscal year 1997 to recover the $56.1 million they had transferred to the NFF in fiscal year 1996. The Forest Service is also projecting a year-end balance in the NFF of $127.5 million, to be returned to the Treasury. The Forest Service also told us that in early October 1997 it would request the fiscal year 1997 spotted owl guarantee appropriation, amounting to $129.9 million. According to Forest Service officials, because the Treasury account has already been established, they should not experience the same types of problems for fiscal year 1997. A final long-term action involved establishing, in late May 1997, the National Task Force for Trust Funds and Payments to the States, composed of regional and headquarters fiscal, accounting, and forest management representatives. The task force was charged with developing a national policy on the management of receipts and trust funds so that sufficient receipts would be available in the NFF to make the payments to the states and to meet the Forest Service's other mandatory obligations. According to the task force leader, the task force plans to provide definitive guidance on periodic monitoring of NFF balances; adjustments among the NFF, the Salvage Sale Fund, and the Knutson-Vandenberg Fund; and the allowable uses of excess NFF balances. The task force estimates that it will issue its final report in August 1997. In addition, the Forest Service told us that it eventually plans to incorporate the results of the task force's report into the Forest Service Manual and its fiscal and timber management handbooks. <6. Observations> Traditionally, the Forest Service has had a large timber program that returned hundreds of millions of dollars to the U.S. Treasury. However, the magnitude of the receipts returned to the Treasury masked some of the Forest Service's underlying financial management weaknesses.
Only in recent years, with the drastic reduction in timber sales and the corresponding decrease in receipts, has it become more apparent that the Forest Service's financial management of its receipts and trust funds is in need of improvement. Lured into a false sense of security by the historically large returns to the Treasury, the Forest Service was unprepared to handle the crises it faced in fiscal year 1996. The problems of insufficient funds in the NFF and the loss of $56.1 million by the other timber-related funds could have been lessened, if not avoided, if the Forest Service had had better financial controls over the adjustments made among the Salvage Sale Fund, the Knutson-Vandenberg Fund, and the NFF and more oversight of its funds management practices. The Forest Service's inability to initiate the spotted owl guarantee appropriation in a timely manner greatly contributed to the problems experienced at the forest, regional, and national levels. Moreover, the fiscal year 1996 occurrences are an illustration of the much larger fiscal accountability problems facing the Forest Service. In short, because the Forest Service no longer has the benefit of hundreds of millions of dollars as a cushion, it is incumbent on the agency to establish sound financial management controls. We have pointed out some of these weaknesses in two of our recent reports on the Knutson-Vandenberg Fund. On balance, while we believe that the establishment of the task force to review the management of the trust funds is a good first step, we also believe that the Forest Service has a long way to go toward solving its fiscal and accountability problems. <7. Recommendations> Because the Forest Service inappropriately made the spotted owl guarantee payments out of the National Forest Fund in fiscal years 1994 and 1995, its accounting records do not properly reflect the operations of the National Forest Fund for these years. Therefore, we recommend that the Secretary of Agriculture request that the Secretary of the Treasury establish the spotted owl appropriations account for fiscal years 1994 and 1995, pursuant to P.L. 103-66, as amended, and continue to use this authority until the termination of the statute in fiscal year 2003. We also recommend that the Secretary of Agriculture direct the Chief of the Forest Service to make the necessary accounting adjustments to properly reflect the use of the spotted owl appropriation, in lieu of the National Forest Fund, to make the spotted owl payments in fiscal years 1994 and 1995. <8. Agency Comments> We provided a draft of this report to the Forest Service for review and comment. We met with Forest Service officials, including the Deputy Director, Forest Management; the Director, Financial Management; the Director, Program Development and Budget Staff; the Acting Associate Deputy Chief, Operations; and a representative of the U.S. Department of Agriculture's Office of General Counsel. The Forest Service said that the information in our report accurately presented the operations of the National Forest Fund during fiscal years 1990 through 1997. The Forest Service acknowledged that it should have used the spotted owl guarantee appropriation instead of the NFF during fiscal years 1994 and 1995 and agreed with the recommendations for corrective action. We conducted our review at the Forest Service's headquarters and each of its regional offices.
We interviewed officials and reviewed and analyzed records of the Forest Service's headquarters fiscal, budget, and forest management staffs. We also interviewed and obtained information from the Division of Funds Management, U.S. Treasury, and the Agriculture Branch of the Office of Management and Budget. We did not independently verify the reliability of the data provided or of the systems from which they came. In addition, we did not attempt to determine what the results would have been if the Forest Service had used the proper appropriation to make the spotted owl guarantee payments in fiscal years 1994 and 1995, because we were specifically asked to provide a historical view of what actually occurred in fiscal years 1990 through 1996. We conducted our review from May 1997 through August 1997 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. We will then send copies to the Secretary of Agriculture and the Chief of the Forest Service. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (206) 287-4810. Major contributors to this report are listed in appendix II.
The National Forest Fund and Its Distributions, Fiscal Years 1990 Through 1996
The National Forest Fund (NFF), an indefinite appropriation, was established pursuant to the Act of March 4, 1907 (P.L. 59-242, as amended, 16 U.S.C. 499). This act provides that all moneys received from the national forests are deposited into a Department of the Treasury miscellaneous receipts account, the NFF. For timber sale receipts, the Forest Service first distributes a portion of the receipts into two funds that are used for various timber sale activities, such as reforestation or preparing and administering future salvage sales. The remaining timber sale receipts are deposited into the NFF and combined with the receipts from other resources. Moneys from the NFF are transferred to other specified Treasury accounts or funds to satisfy various legal obligations. Moneys remaining after meeting these obligations must be transferred to the Treasury at year-end. Basically, the NFF serves as a holding account for national forest receipts from such resources as grazing, mining, recreation, and timber, after payments are made from the timber receipts to the Salvage Sale Fund and the Knutson-Vandenberg Fund, which are available for use by the Forest Service. The statutes listed below provide the authority for making the distributions:
Payments to the States (Act of May 23, 1908, P.L. 60-136, as amended, 16 U.S.C. 500). This act requires that 25 percent of all receipts from each national forest be paid to the state in which the forest is located, to be used to benefit roads and schools in the counties where the receipts were earned. This payment was established as a substitute for property taxes on national forest lands because the federal government cannot be taxed by state or local governments. For purposes of calculating the payments to the states, receipts are defined as the amounts deposited in the Salvage Sale Fund and the Knutson-Vandenberg Fund, the amount of Purchaser Road Credits used, and the amount deposited in the NFF from all resources.
Payments to States Concerning Northern Spotted Owl (Department of the Interior and Related Agencies Appropriations Acts, 1991, 1992, and 1993, P.L. 101-512, P.L. 102-154, and P.L. 102-381, respectively). The Forest Service's appropriations acts for fiscal years 1991 through 1993 provided for payments to California, Oregon, and Washington for counties that had lost portions of the 25-percent payments to the states because of the listing of the northern spotted owl as a threatened species. These payments, which are made in lieu of the 25-percent payments to the states, are based on an average of the receipts from prior years. The Forest Service continued to make these payments from the NFF in fiscal years 1994 and 1995. The Forest Service was not authorized to make these payments from the NFF and should have used the spotted owl guarantee appropriation established specifically for that purpose by the Congress in the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66, as amended).
Payments to Minnesota (Act of June 22, 1948, 16 U.S.C. 577g). This act provides a special payment to the state of Minnesota for lands in the Boundary Waters Canoe Area in St. Louis, Cook, and Lake counties. Under the act, the Secretary of Agriculture pays 0.75 percent of the appraised value of certain Superior National Forest lands for distribution to the counties.
Roads and Trails Fund (Act of March 4, 1913, as amended, 16 U.S.C. 501). This provision specifies that 10 percent of all moneys received from the national forests during each fiscal year, except salvage sale receipts, is to be expended for the construction and maintenance of roads and trails within the national forests in the states where the receipts were collected. Since fiscal year 1982, the amount deposited into the Roads and Trails Fund has been transferred to the General Fund of the Treasury to offset annual appropriations for road and trail construction and maintenance.
Purchaser-Elect Roads (National Forest Management Act of 1976, P.L. 94-588, 16 U.S.C. 472a(i)). This act allows certain timber purchasers designated as small business concerns to elect to have the Forest Service build the roads required by their timber sale contracts. If the purchaser makes this election, the price paid for the timber includes the estimated cost of the roads. The Forest Service transfers this amount from the NFF to the purchaser-elect account.
Acquisition of National Forest Lands Under Special Acts (Act of June 11, 1940, 54 Stat. 297; Act of June 11, 1940, 54 Stat. 299; Act of May 26, 1944, 58 Stat. 227; and Act of Dec. 4, 1967, P.L. 90-171, 81 Stat. 531, 16 U.S.C. 484a). The first three acts provide for a special fund to acquire lands within critical watersheds to provide soil stabilization and the restoration of vegetation. The funds are available only for certain national forests in Utah, Nevada, and southern California. The final act provides for the replacement of National Forest System lands acquired by state, county, or municipal governments or public school authorities in land exchanges.
Range Betterment Fund (Federal Land Policy and Management Act of 1976, P.L. 94-579, as amended by the Public Rangelands Improvement Act of 1978, P.L. 95-514, 43 U.S.C. 1751). This act provides that 50 percent of all moneys received as fees for grazing domestic livestock on national forest lands in the 16 western states is to be credited to a separate account in the Treasury.
These funds are authorized to be appropriated and made available for on-the-ground rehabilitation, protection, and improvement of such lands.
Recreation Fee Collection Costs (Land and Water Conservation Fund Act of 1965, P.L. 88-578, 78 Stat. 897, as amended by the Omnibus Budget Reconciliation Act of 1993, P.L. 103-66, 16 U.S.C. 460l-6a(i)(1)). These acts authorize the Secretary of Agriculture in any fiscal year to withhold from certain fees collected an amount equal to the cost of collecting such fees, but not more than 15 percent of the fees collected. Such amounts are retained by the Secretary and are available for expenditure, without further appropriation, to cover the fee collection costs.
Tongass Timber Supply Fund (Alaska National Interest Lands Conservation Act of 1980, P.L. 96-487, 94 Stat. 1761, as amended). This act was intended to maintain the timber supply from the Tongass National Forest to the dependent industry at a rate of 4.5 billion board feet per decade and to protect the existing timber industry in southeast Alaska from possible reductions in the timber sale program as a result of wilderness and national monument designations in the Tongass National Forest. This fund was eliminated by the Tongass Timber Reform Act (P.L. 101-626), enacted in November 1990.
Timber Sales Pipeline Restoration Fund (Omnibus Consolidated Rescissions and Appropriations Act of 1996, P.L. 104-134). This act created a fund to receive a portion of the receipts from certain timber sales released under the fiscal year 1995 Supplemental Appropriations for Disaster Assistance and Rescissions Act, to be used for the preparation of additional timber sales that are not funded by annual appropriations and for the backlog of recreation projects.
In fiscal years 1990 through 1996, the Forest Service received almost $3.9 billion in National Forest Fund receipts and distributed about $2.6 billion to these various funds or accounts. The remaining $1.3 billion was returned to the U.S. Treasury. In addition, the $378 million deposited in the Roads and Trails Fund was also returned to the U.S. Treasury. Table I.1 provides the details, by fiscal year, of these transactions.
Table I.1: National Forest Fund Receipts and Distributions, Fiscal Years 1990 Through 1996
Table notes: Figures may not add because of rounding. At the end of the fiscal year, some adjustments are made to other funds before a final amount is determined as the amount in the NFF to be distributed. Since fiscal year 1982, the amount distributed to the Roads and Trails Fund has been returned to the Treasury to offset appropriations for road and trail construction.
Major Contributors to This Report
<9. Energy, Resources, and Science Issues> Linda L. Harmon, John P. Murphy, Victor S. Rezendes, Hugo W. Wolter, Jr.
<10. Office of General Counsel> Alan R. Kasdan
Why GAO Did This Study
Pursuant to a congressional request, GAO reviewed the Forest Service's use of its National Forest Fund, focusing on: (1) the timber harvest volumes, the timber receipts for fiscal years (FY) 1990 through 1996, and the timber sale funds returned to the Treasury from the National Forest Fund; (2) the actions taken by the Forest Service toward the end of FY 1996 to cover the shortfall in the National Forest Fund; (3) whether the Forest Service has been using the proper funding source for the spotted owl guarantee payment; and (4) the Forest Service's plans for FY 1997 to ensure that the National Forest Fund has sufficient funds to make the payments to the states.
What GAO Found
GAO noted that: (1) GAO's analysis of timber sales activities in FY 1990 through 1996 showed that the key indicators of the timber program--harvested volumes, timber receipts, and amounts available for return to the U.S. Treasury--have dramatically decreased; (2) in FY 1996, the Forest Service was faced with having insufficient funds available in the National Forest Fund to make the required payments to the states--including the legislatively required payment to compensate certain counties in California, Oregon, and Washington for the listing of the northern spotted owl as a threatened species (spotted owl guarantee)--and to meet its other required obligations; (3) in August and September 1996, the Forest Service transferred to the National Forest Fund a total of $56.1 million in timber sale receipts originally intended for deposit in other specific Forest Service funds; (4) however, even with this adjustment, a shortfall of $17.8 million remained; (5) in mid-September, the Forest Service requested that the Treasury make available $135 million appropriated under the Omnibus Budget Reconciliation Act of 1993 for the 1996 payment of the spotted owl guarantee; (6) the Forest Service received approval for the appropriation on November 26, 1996; (7) as of August 12, 1997, the National Forest Fund had a balance of about $116 million for FY 1996 activities; (8) the Forest Service plans to return this amount to the Treasury's General Fund; (9) the Forest Service used the National Forest Fund in FY 1994 and 1995 to make the spotted owl guarantee payments to certain counties in California, Oregon, and Washington; (10) this was an unauthorized use of the fund; (11) instead, the Forest Service was required to use the spotted owl guarantee appropriation specifically enacted for this purpose; (12) on January 29, 1997, the Forest Service: (a) provided initial guidance to its regions on the priority for the distributions of receipts to ensure that funds are available to make payments to the states and to meet other obligations; and (b) required the regions to initiate a review process to ensure that the receipts were managed in accordance with these priorities; (13) in May 1997, the Forest Service established a National Task Force for Trust Funds and Payments to the States; and (14) the task force was charged with developing a national policy for the management of receipts and trust funds so that there would be sufficient receipts available in the National Forest Fund to make the payments to the states and to meet other mandatory obligations.
<1. Background> Financing homes on trust lands presents unique difficulties. Because individuals do not hold unrestricted title to these lands, they cannot convey the title to lenders to secure financing. To help overcome these difficulties and promote homeownership among Native American, Native Hawaiian, and Pacific Islander veterans, the Congress established the Native American Veterans Direct Home Loan Program in 1992. Begun as a 5-year pilot, the program has been extended twice and is currently authorized through 2005. To support loans under the program, the Congress provided an appropriation of $4.5 million in 1993 that continues to be available for the lifetime of the program. This amount is sufficient to allow VA to make more than $58 million in home loans, and $26 million had been obligated for loans through February 2002. VA receives an additional $0.5 million each year for administration and outreach activities, including travel to meet with tribes and individuals on the mainland and in the Pacific. The program is intended to assist eligible veterans living on trust or equivalent lands to obtain loans at market rates to purchase, construct, or rehabilitate homes. On the mainland, most trust land is located on or near reservations, with about 55 million acres held in trust by the U.S. government for Indian tribes and individuals. In the Pacific, communally owned lands in American Samoa, Guam, and the Northern Marianas and 200,600 acres of Hawaiian homelands are also covered by this program. Under the program, individual loans are limited by law to the cost of the home or $80,000, whichever is less. However, the law permits VA to make exceptions to the loan limit if VA determines that the costs in an area are significantly higher than average housing costs nationwide. Loans are available only for single-family homes that the owner occupies, not for multifamily dwellings, rentals, or investment properties. To be eligible for a loan under this program, veterans must meet certain statutory requirements. Veterans must demonstrate that they (1) are honorably released from active military duty or are members of the Selected Reserve, including the National Guard, and have served the required length of time; (2) are creditworthy, that is, a satisfactory credit risk with stable and sufficient income to meet mortgage payments; (3) hold a meaningful interest in the trust or equivalent land on which their homes will be located that entitles them to use and occupy the land; and (4) are members of a federally recognized tribe or the equivalent that has signed a memorandum of understanding with VA. (A simplified illustration of these four requirements appears below.) A meaningful interest in the land may take the form of a long-term lease, an allotment, or another interest conveyed by the tribe or entity with jurisdiction over the land; on Hawaiian homelands, for example, it generally takes the form of a 99-year lease. The meaningful interest serves as security for the loan and must be transferable in the event of foreclosure. The tribe or other responsible entity must enter into a memorandum of understanding with VA to cover standards and procedures for foreclosure and related issues before any loans can be made under this program to an eligible veteran of that tribe. Lands held in trust for tribes are generally leased and, at foreclosure, cannot be taken out of trust status. Lands held in trust for individuals can be inherited and can lose their trust status at foreclosure.
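As an illustration of how the four statutory requirements described above fit together, the following is a minimal sketch that expresses them as a simple checklist. The field names, the check function, and the sample applicant are hypothetical and greatly simplified; they are not VA's actual eligibility or underwriting system.

```python
# Illustrative sketch only. The fields and checks below are hypothetical
# simplifications of the four statutory requirements described in the text;
# they are not VA's actual eligibility or underwriting system.

from dataclasses import dataclass

@dataclass
class Applicant:
    qualifying_service: bool        # honorably released or Selected Reserve member, required length of service
    creditworthy: bool              # satisfactory credit risk with stable, sufficient income
    meaningful_land_interest: bool  # transferable interest (e.g., long-term lease) in trust or equivalent land
    tribe_signed_mou: bool          # member of a federally recognized tribe (or equivalent) with a signed MOU

def unmet_requirements(applicant: Applicant) -> list:
    """Return the statutory requirements the applicant does not meet."""
    checks = {
        "qualifying military service": applicant.qualifying_service,
        "creditworthiness": applicant.creditworthy,
        "meaningful interest in trust land": applicant.meaningful_land_interest,
        "tribal memorandum of understanding with VA": applicant.tribe_signed_mou,
    }
    return [name for name, met in checks.items() if not met]

# Hypothetical example: a veteran who meets every requirement except the land interest.
veteran = Applicant(True, True, False, True)
print(unmet_requirements(veteran))  # ['meaningful interest in trust land']
```

In practice, as the sections that follow discuss, the creditworthiness and land-interest requirements have been the most common barriers for Native American applicants.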
VA administers the program through nine regional loan centers and its Honolulu Regional Office in Hawaii, shown on the map in figure 1 below. The five regional loan centers in the East, however, have a limited role in the program s operation, because many of the states they serve have few or no federally recognized tribes. However, the regional loan centers in Denver and St. Paul each serve eight states with federally recognized tribes, and the center in Phoenix serves three states with some of the largest concentrations of Native Americans in the country. In addition, Denver oversees loan activities conducted out of VA s Anchorage office in Alaska. The Honolulu Regional Office administers the program in the South Pacific. <2. Several Factors May Explain Disparity in Number of Loans Made to Different Groups> Native Hawaiian and Pacific Islander veterans have received more loans than Native American veterans during the lifetime of the program, and several factors may explain this difference. VA cannot address some of these factors, such as applicants income levels and credit history, or their lack of a meaningful interest in the land; VA also cannot address the availability of infrastructure on trust lands. Other factors that VA can address are program-related, such as loan limits and assistance with the mortgage process. <2.1. Number of Loans to Native Hawaiian Veterans and Pacific Islander Veterans Is Almost Five Times That of Native American Veterans> Four out of every five loans made under the program s auspices have been provided to Native Hawaiian or Pacific Islander veterans. Of 227 total loans, 143 have been made to Native Hawaiians and 46 to Pacific Islanders. Combined, the 189 loans made to these two groups greatly exceed the 38 loans made to Native American veterans. The year-by-year analysis in figure 2 indicates that the number of loans made to Native Americans has been relatively constant. While the number of loans for Native Hawaiians and Pacific Islanders combined has varied, it has consistently exceeded the number of loans made to Native Americans, averaging twice as many loans since 1998. In the first 5 years of the program, Native Hawaiians received most of the loans. VA officials in the Honolulu office explained that the number of loans made to Native Hawaiians peaked in 1995 and 1996 because the officials were able to grant 60 loans to veterans purchasing homes in two housing subdivisions. VA officials said that loans to Pacific Islanders rose in 1996 and 1997 because they made a conscious decision to launch the program in stages focusing initially on the nearest and easiest to serve areas of Hawaii and later moving on to promote the program in American Samoa, the Northern Marianas, and Guam. Although the number of loans completed in Hawaii in recent years has declined, VA officials anticipate an increase in the future as other eligible veterans obtain leases on Hawaiian homelands. <2.2. Factors That VA Cannot Address May Contribute to Lower Participation of Native American Veterans> Among the factors that VA cannot address that may contribute to lower participation of Native American veterans are low-income levels and unacceptable credit histories of potential applicants. To implement the statutory creditworthiness requirement, VA requires that an individual have sufficient income to qualify for a program loan. Based on 1990 census data, Native Americans had an average annual income of $16,800 while the average annual income of Native Hawaiians was $26,600. 
VA officials said that while they make every effort to assist applicants in qualifying for a loan, insufficient income and unacceptable credit history are still major barriers to loan approval for Native Americans who are found not to be creditworthy. Specifically, 23 of the 39 Native American loan applications in VA's St. Paul, Minnesota, region were denied because of applicants' insufficient income and unacceptable credit history. The remaining 16 were denied because of problems with land ownership. Officials at the Denver, Colorado, VA regional office also said that insufficient income and unacceptable credit history were the reasons for denying three of the seven Native American veterans' loan applications received in the region since the program's inception. VA's Honolulu field office stated that while insufficient income and unacceptable credit history have been a barrier for some veterans in American Samoa, other Native Hawaiians and Pacific Islanders have less difficulty meeting this eligibility requirement because of their higher incomes. Problems with establishing a meaningful interest in trust lands have also precluded some Native American veterans from obtaining a mortgage loan under this program. To obtain a mortgage loan, VA requires that veterans have a meaningful interest in the trust land on which their homes will be located. However, ownership of some Native American trust land has become fractionated as the ownership interests passed through several generations of multiple heirs, with an increasing number of people owning smaller shares of land over time. This land fractionation has increased at a rapid pace. Under such circumstances, a loan applicant would need to obtain the approval of everyone with shares in the land in order to mortgage it. For example, one applicant for a VA mortgage loan in the St. Paul region was unable to obtain a loan because he owned a 1/192nd interest in the trust land where he wanted to locate his home. This veteran would have had to obtain the approval of the other co-owners to mortgage the land. In addition, all four federally recognized tribes in Kansas informed VA that they were not interested in participating in the VA direct loan program because of the extent of fractionated land interests within their reservation boundaries. In VA's St. Paul region, 12 of the 39 loan applications denied since the program's inception were denied because the applicant had a fractionated interest in the land. This unique land ownership problem does not exist on the Hawaiian homelands because the land is leased. Another barrier for Native American veterans that VA cannot address is the lack of infrastructure needed for housing development on trust lands. The remoteness of some tribal lands has been an ongoing problem for housing development on Native American trust lands. In contrast to metropolitan areas, where basic infrastructure systems (such as sewers, electricity, and water supply) are already in place, building on remote trust lands requires the tribe or homeowner either to install infrastructure to support new housing or to build self-contained housing. For example, much of the housing constructed on Navajo and Sioux trust lands is scattered across remote sites. A builder on Navajo lands told us that providing infrastructure to remote home sites can cost over $20,000 per home, an amount that is often too expensive for the tribe or homeowners. For Hawaiian veterans, infrastructure costs do not present such a barrier.
The state of Hawaii, as part of its homeland development program, provides eligible Native Hawaiians (veterans and nonveterans) with infrastructure funding. For example, in a remote housing development containing homes purchased by Native Hawaiian veterans with VA loans, the Department of Hawaiian Homelands provided as much as $50,000 per lot for sewer, water, and electrical services. <2.3. VA Can Address Some Factors That May Contribute to Lower Participation of Native American Veterans> One program-related factor that VA can address that may have affected participation of Native American veterans is the $80,000 loan limit, established by the Congress when it created the program in 1992. VA has the authority to raise the loan limit for a geographic area if VA determines that the average housing cost in the area is significantly higher than the national average. Using this authority, VA officials said that the loan limit was raised to $120,000 for Hawaii, the Pacific Islands, and the state of Washington and to $100,000 for one tribe in New Mexico, to more closely approximate the housing costs in those areas. However, VA has not attempted to determine if the maximum loan limit should be raised for other Native American tribes. VA reported that they have not initiated increases for other areas because neither the VA regional loan centers nor the tribes have requested a change. Officials at the Denver Regional Loan Center said that the $80,000 loan limit may prevent some veterans from participating in this program. One tribal housing specialist with the nation s largest tribe, the Navajo, has directed veterans who wanted to purchase homes costing about $100,000 to other loan programs because he was not aware that VA could make exceptions to the $80,000 loan limit. In comparison, other federal programs that provide homeownership assistance to Native Americans on trust lands have loan limits between $144,000 and $278,000 (depending on the geographic region) and during 2001, guaranteed loans averaging $102,000. Finally, recent reports on mortgage lending concluded that Native Americans could benefit from homebuyer counseling and education. One report stated that Native Hawaiians could also benefit from homebuyer counseling and education. These two groups were found to have little experience with the mortgage lending process and the necessary steps required to obtain a mortgage loan. To overcome this barrier, the Director of VA s Honolulu field office said that local housing authorities and other organizations in Hawaii and the Pacific Islands provide mortgage counseling and homebuyer education that assist Native Hawaiians and Pacific Islanders to negotiate the homebuying and mortgage process. Officials at VA s Honolulu regional loan center said they help ensure that veterans receive the services and assistance of these organizations by actively communicating and partnering with them. VA s mainland regional loan centers have not established similar relationships with organizations to provide the same types of services for Native American veterans. For example, VA regional loan centers have not partnered with other organizations that focus on mortgage lending on trust lands, such as the One-Stop Mortgage Centers located on the Navajo reservation in New Mexico and Arizona, and the Oglala Sioux reservation in South Dakota. 
The centers are nonprofit organizations that specialize in mortgage lending and credit counseling, guiding potential Native American borrowers through the homebuying process, simplifying procedures, and educating potential borrowers about the types of home loans available on trust lands. We found that VA regional offices in Phoenix and St. Paul have had little contact with these centers and have not used them to identify, educate, and assist prospective borrowers. For example, a One-Stop Mortgage Center official in Arizona estimated that as many as 200 Navajo veterans who had visited the center and expressed an interest in homeownership did not receive complete information about the VA program because the One-Stop Mortgage Center s staff was not familiar with it. <3. VA Has Conducted Various Outreach Activities but Has Taken Limited Steps to Meet Assessment and Reporting Requirements> VA has conducted outreach but has taken limited steps to meet assessment and reporting requirements as specified in the program s authorizing legislation. Outreach requirements specified in the program s authorizing legislation state that VA, among other things, is to attend housing conferences, and provide information to veterans, tribal governments and organizations. VA has performed many of these activities. Other program requirements state that VA should annually assess and report to the Congress on the effectiveness of its outreach activities and annually report on the pool of eligible Native American, Native Hawaiian, and Pacific Islander veterans. VA has reported that it has undertaken extensive outreach activities but has not reported on how effective its outreach has been. VA s annual report has included information on how many Native American and Pacific Islander veterans identified themselves as such on the 1990 census but has not indicated the number of these individuals who would meet the program s eligibility requirements. However, VA said it will use new data available from the 2000 census to provide a more accurate count. <3.1. VA Has Conducted Various Outreach Activities> The direct loan program s authorizing legislation states that VA must, among other things, attend housing conferences and conventions; and produce and disseminate information to tribal governments, tribal veterans service organizations, and tribal organizations regarding the availability of such benefits. VA s regional loan center staff have attended and made presentations at housing conferences sponsored by the Department of Housing and Urban Development s Office of Native American Programs, the Department of Hawaiian Homelands, other Native American housing organizations, and Native American veterans forums. For example, the Phoenix regional loan center made a presentation on the direct loan program at the National Native American Veterans Symposium in February 2001. In addition, VA produced a video called Coming Home: Native American Veteran Home Loans that has been distributed to tribal officials and organizations. This video shows Native American veterans and tribal officials how the direct loan program may be used to help them achieve their homeownership goals. VA has also distributed information pamphlets and applications to interested veterans and organizations. VA s regional loan center staff has also traveled to tribal trust lands to meet and talk with tribal representatives and veteran liaison representatives to solicit their assistance in reaching tribal members who are veterans. For example, the Phoenix and St. 
Paul regional loan centers sent representatives to talk to tribes in those areas about the program. Furthermore, the Honolulu field office has expanded on these outreach activities to promote the program to Native Hawaiian veterans. Officials at the Honolulu office said they use local media, including radio, television, and newspaper to promote the program. In addition, officials said they use local housing organizations to inform veterans of the program. <3.2. VA Has Taken Limited Steps to Meet Assessment and Reporting Requirements> The program s authorizing legislation requires, among other things, that VA assess the effectiveness of outreach efforts it undertakes in connection with the direct loan program and report this assessment to the Congress annually. In its reports, VA states that the low level of program participation is not due to a lack of outreach on its part. However, program officials said that VA has not assessed the effectiveness of its outreach efforts. VA notified us that it plans to evaluate the Native American Veterans Direct Home Loan program as part of a larger study that it expects to complete in 2003, but an assessment of outreach effectiveness is not part of the planned work. The program s authorizing legislation directs VA to report annually on the pool of eligible veterans. To meet this requirement, VA reported that about 436,000 individuals had identified themselves as Native American and Pacific Islander veterans in the 1990 census. VA also acknowledged that the number of veterans obtained from census data did not wholly correlate to eligible veterans because the tally included veterans who were living in cities and who may not have been members of tribes thus, included were some veterans not eligible for leases or ownership of trust lands. Also, VA officials stated that they could not definitively quantify the pool of veterans who might be eligible for the program because they are dependent on veterans volunteering to identify their race and ethnicity. We analyzed the 1990 census data, however, and were able to distinguish Native American veterans who were living in tribal areas with trust lands from those living elsewhere. Our analysis revealed that there were approximately 18,000 veterans living on trust lands associated with about 50 federally recognized tribes. Although, it is likely that there are eligible veterans associated with the remaining federally recognized tribes, the data were not readily available. Further identification of eligible veterans might be possible with an examination of 2000 census data. Population data for Native Hawaiians who live on their equivalent of trust lands Hawaiian homelands were not collected in the 1990 census but were collected in the 2000 census. The more recent census will also identify Native American veterans living on trust lands associated with nearly 90 federally recognized tribes. VA program officials said they have asked VA s Office of Policy and Planning to analyze the 2000 census data and will use the data to provide a more current, accurate count of veterans eligible for the program. This analysis could allow VA to report to the Congress a more accurate count of the eligible pool of program participants. <4. Conclusions> Although the program is designed to help Native American, Native Hawaiian, and Pacific Islander veterans living on trust lands achieve homeownership, our review suggests that certain elements of the program may be barriers to participation for Native Americans. 
Some of these barriers are difficult to overcome; for example, problems with establishing meaningful interest in trust lands. But, VA can address some of the other barriers. For example, the program loan limit of $80,000 may be limiting the usefulness of the program to Native American veterans on some trust lands. By not partnering with other organizations, VA may be missing opportunities to get Native American veterans into the program and to guide them through the mortgage process to buy a home. Furthermore, by not assessing its outreach efforts, VA cannot be certain that it is effectively reaching the population that the program was designed to serve. While VA has not completely met requirements for reporting on the pool of eligible veterans, we are not making a recommendation because VA plans to use the 2000 census data to provide a more accurate count of eligible veterans. Changes to VA s loan program might improve the program s contribution to the federal effort to increase opportunities for Native American homeownership on trust lands. <5. Recommendations for Executive Action> To increase opportunities for participation for all Native American, Native Hawaiian, and Pacific Islander veterans, we recommend that the Secretary of Veterans Affairs: Direct regional loan centers to obtain local housing cost data for trust lands to determine the need for exceptions to the current loan limit. Additional exceptions should be granted if the data support such increases. Explore partnerships with local housing organizations, such as One-Stop Mortgage Centers, that assist and support Native Americans on trust lands with the mortgage lending process. Assess program outreach efforts to Native American, Native Hawaiian, and Pacific Islander veterans and report on this assessment to the Congress, as the program s authorizing legislation directs. <6. Agency Comments> We provided a draft of this report to VA for its review and comment. We received written comments on the draft report (see app. II). VA agreed that it could do more in delivering its benefits, and concurred with our recommendations. In addition, VA provided technical clarifications to the report, which we have incorporated into this report where appropriate. As part of its comments, VA suggested that we use statistical data from its National Survey of Veterans 2000 on veteran homeownership and income, rather than the general population. VA did not provide a copy of this unpublished survey, and we were unable to verify these data to determine their validity. Therefore, we did not include them in this report. <7. Scope and Methodology> To address the issues discussed in this report, we reviewed the statute, regulations, annual reports, and informational materials on VA s Native American Veterans Direct Home Loan program as well as our other work and related studies of Native American trust land issues. We also interviewed numerous officials in Washington, D.C., and elsewhere with responsibilities for the program or knowledge of Indian housing issues. To gain a fuller perspective on Native American housing and trust land issues, we interviewed HUD officials, who administer block grant and home-loan programs for Native Americans. We also interviewed an official at the Bureau of Indian Affairs, who is familiar with Native American tribes and trust land issues. 
We gained some perspective on the views of Native American veterans by interviewing officials from the National Congress of American Indians, the National American Indian Housing Council, and the Center for Minority Veterans as well as representatives of the Navajo Tribe and the Oglala Sioux Tribe. We selected these tribes because they are among the largest in the nation, have completed memorandums of understanding with VA, and are served by the two VA regional loan centers we visited. To determine if there was a disparity in program participation, we used VA data to calculate the number of loans made to Native Americans on the mainland by each regional loan center in every year since 1992 and compared the results with the number of loans made to Native Hawaiians and Pacific Islanders during the same period. To identify the factors contributing to the disparity, we interviewed program officials at VA headquarters in Washington D.C., and at four VA centers: Honolulu, Denver, St. Paul, and Phoenix. We selected these four centers because they are responsible for about 96 percent of all loans made under the program since its inception. During site visits to the St. Paul and Phoenix centers, we reviewed case files to determine the reasons that loan applications had been rejected and describe contacts with veterans and tribes. To see how income and other program requirements may affect eligibility for the program, we reviewed available census data on Native Americans incomes, poverty levels, veteran status, and residency on or near reservations. We also interviewed an official at the One-Stop Mortgage Center in Window Rock, Arizona, to discuss the significance of providing assistance with the mortgage process. To determine the steps VA has taken to meet outreach, assessment, and reporting requirements, we reviewed data provided at our request from all nine VA regional loan centers. This data provided information on VA program staffing, loan activity, and outreach efforts. We assessed the availability of information that VA could use to better identify the pool of eligible veterans by reviewing Census Bureau statistical reports and by interviewing officials concerning the availability of 2000 census data on Native Hawaiians and on Native Americans residing on trust lands. We conducted our work from November 2001 through May 2002 in accordance with generally accepted government auditing standards. We checked data that we obtained from federal agencies for internal consistency, but we did not independently verify the data. As arranged with your offices, we will also send copies of this report to the Secretary, Department of Veterans Affairs; the Ranking Minority Member of the Committee on Veterans Affairs, U.S. Senate; and the Committee on Veterans Affairs, U.S. House of Representatives. We will make copies available to others on request. In addition, this report is also available on GAO s Web site for no charge at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-2834. Key contacts and major contributors to this report are listed in appendix III. Appendix I: Other Federal Homeownership Programs for Native Americans on Trust Lands In addition to VA s Native American Veterans Direct Home Loan program, four other federal programs provide homeownership assistance to Native American individuals or tribes on trust lands on the mainland. 
The Department of Housing and Urban Development (HUD) administers two programs, and the Department of Agriculture administers two programs through its Rural Housing Service. Key aspects of each of these programs are shown in table 2. Appendix II: Comments from the Department of Veterans Affairs Appendix III: Contacts and Staff Acknowledgments <8. GAO Contacts> <9. Acknowledgments> Dwayne Curry, Shelia Drake, Patricia Elston, Colin Fallon, John McGrail, Michael Mgebroff, and William Sparling made key contributions to this report. Related GAO Products Welfare Reform: Tribes Are Using TANF Flexibility To Establish Their Own Programs. GAO-02-768. Washington, D.C.: July 5, 2002. Economic Development: Federal Assistance Programs for American Indians and Alaska Natives. GAO-02-193. Washington, D.C.: December 21, 2001. Indian Issues: Improvements Needed in Tribal Recognition Process. GAO-02-49. Washington, D.C.: November 2, 2001. Rural Housing: Options for Optimizing the Federal Role in Rural Housing Development. GAO/RCED-00-241. Washington, D.C.: September 15, 2000. Native American Housing: Information on HUD s Funding of Indian Housing Programs. GAO/RCED-99-16. Washington, D.C.: November 30, 1998. Native American Housing: Homeownership Opportunities on Trust Lands Are Limited. GAO/RCED-98-49. Washington, D.C.: February 24, 1998. Hawaiian Homelands: Hawaii s Efforts to Address Land Use Issues. GAO/RCED-94-24. Washington, D.C.: May 26, 1994. Veterans Benefits: Availability of Benefits in American Samoa. GAO/HRD-93-16. Washington, D.C.: November 18, 1992. Indian Programs: Profile of Land Ownership at 12 Reservations. GAO/RCED-92-96BR. Washington, D.C.: February 10, 1992.
What GAO Found Several federal programs have been developed to provide homeownership opportunities for Native Americans because private institutions have rarely supplied conventional home loans to Native Americans on trust lands. In 1992, Congress directed the Department of Veterans Affairs (VA) to create the Native American Veterans Direct Home Loan Program to assist veterans in purchasing, constructing, and improving homes. The Native American Veterans Direct Home Loan Program has been characterized by differences in the numbers served, with Native Hawaiians and Pacific Islanders together receiving almost five times as many loans as Native Americans. Several factors that apply to Native Americans, but not to Native Hawaiians and Pacific Islanders, may explain this difference. Long-standing barriers to lending on Native American trust lands include insufficient income and credit history, a lack of meaningful interest in land among many Native Americans, and insufficient infrastructure on trust lands. Other factors that VA can address include program limits that may be lower than housing costs for some trust lands and potential applicants' inexperience with the mortgage lending process. VA has conducted outreach but has taken limited steps to meet the assessment and reporting requirements specified in the program's authorizing legislation. To meet the outreach requirements specified in that legislation, VA attends housing conferences, distributes promotional materials, and responds to inquiries about the program.
<1. Introduction> The Military Sealift Command (MSC) provides ships for fleet support; special missions; and strategic sealift of equipment, supplies, and ammunition to sustain U.S. forces worldwide. While MSC uses a combination of government and privately owned ships to carry out this mission, all these ships have civilian crews who work either directly for MSC or for MSC s contract operators. <1.1. MSC s Contractor-Operated Ships> This report deals with contractor-operated ships, which account for 69 of the 200 ships in MSC s fleet (see table 1.1). Our review specifically focused on 40 ships in the 5 programs where MSC awarded long-term charter contracts for 3 or more ships. These programs include maritime prepositioning ships, T-5 tankers, oceanographic survey ships, T-AGOS surveillance ships, and fast sealift ships (see fig. 1.1). MSC spends over $400 million per year to operate and maintain these 40 ships. This figure includes payments for leasing the 18 privately owned ships in the group. <1.1.1. Maritime Prepositioning Ships> Maritime prepositioning ships rapidly deliver urgently needed Marine Corps equipment and supplies to a theater of operations during a war or contingency. These 13 privately owned ships are divided into three squadrons located in the Atlantic, Pacific, and Indian Oceans and carry everything from tanks and ammunition to food, water, and fuel. Each squadron can support a U.S. Marine Corps Expeditionary Brigade of 17,300 troops for 30 days. The maritime prepositioning ships were among the first ships to arrive in Saudi Arabia during Operation Desert Shield and in Somalia during Operation Restore Hope. <1.1.2. T-5 Tankers> The primary mission of the five privately owned T-5 tankers is point-to-point delivery of refined petroleum products to Department of Defense (DOD) users throughout the world. In addition, two of the tankers are equipped with modular fuel delivery systems, which allow them to refuel combatant ships at sea. At 30,000 tons displacement, the T-5 tankers are 3,000 tons larger than the contractor-operated sealift tankers that we reported on last year. In addition, the T-5s have ice-strengthened hulls and are approximately 10 years newer than the sealift tankers. During Operations Desert Shield and Desert Storm, MSC tankers provided fuel to naval fleet units operating in the Red Sea, the Persian Gulf, and the Gulf of Oman. <1.1.3. Fast Sealift Ships> The mission of the eight government-owned fast sealift ships is to provide rapid surge capability to U.S. armed forces throughout the world. They are the fastest roll-on/roll-off cargo ships in the world and are designed to carry bulky Army equipment such as tanks and helicopters. Combined, the eight ships can carry almost a full Army mechanized division. The fast sealift ships are normally maintained in a reduced operating status, with skeleton crews who perform preventive and corrective maintenance and basic operational checks. All eight ships are assigned to Fast Sealift Squadron One, in New Orleans, Louisiana, and they can be activated and underway from ports on the U.S. East and Gulf Coasts in 96 hours. Each of the fast sealift ships made up to seven trips to Saudi Arabia during Operations Desert Shield and Desert Storm. They were also involved with Operation Restore Hope. <1.1.4. T-AGOS Ocean Surveillance Ships> The mission of 7 of the 10 government-owned T-AGOS ships is to locate and track submarines. The remaining three have been converted to do counterdrug missions. 
These ships are homeported in Little Creek, Virginia, and Pearl Harbor, Hawaii, and are monitored by MSC field organizations located at these homeports. The T-AGOS ships operate towed array sensor systems to gather submarine acoustical data, especially to locate new and quieter submarines. <1.1.5. Oceanographic Ships> The mission of the four government-owned oceanographic ships is to support worldwide oceanographic survey programs with acoustical, biological, physical, and geophysical research. Their precision sonar systems permit continuous charting of a broad strip of ocean floor. The research conducted by these ships helps to improve the Navy s undersea warfare and enemy ship detection capabilities. <1.2. Maintenance and Repair of Contractor-Operated Ships> MSC s contract operators are tasked with providing personnel, equipment, tools, and supplies to maintain MSC s ships. They use three different levels of maintenance and repair to keep MSC s ships operational. The first level of maintenance and repair is performed by the ship s crew. It includes preventive maintenance and minor mechanical and electrical repairs. This work may be done during regular or overtime hours, and it may or may not be reimbursable under the terms of the applicable contract. The second level of maintenance and repair is industrial assistance, which is done by subcontractors. This work is beyond the capability of the ship s crew but does not require an overhaul. The subcontractors may actually maintain or repair the ship s equipment, or a technical representative may provide expertise to the ship s crew. Industrial assistance is usually reimbursable, either directly or through a budgeted system of payments. Overhauls are the third level of maintenance and repair. They can be scheduled, as required by Coast Guard regulations, or unscheduled, for example, to repair a damaged propeller. Since none of the MSC contract operators we reviewed function under firm fixed-price contracts, overhauls are directly reimbursable. <1.3. Objectives, Scope, and Methodology> The Ranking Minority Member of the Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs, asked us to examine the Military Sealift Command s contractor-operated ship programs. Specifically, we determined whether MSC has adequate management controls (1) to oversee contractors and prevent abuses and (2) to ensure contractual requirements are being met. To determine whether MSC has adequate oversight of the maintenance and repair work done on its contractor-operated ships, we reviewed MSC s engineering and maintenance and repair instructions, files, and manuals, including the Engineering Operations and Maintenance Manual. We also reviewed maintenance and repair invoices, visited a sample of ships, and interviewed responsible MSC personnel. We used the ships operational schedules to visit ships that were about to complete an overhaul. For four of the five programs we were able to visit a ship that was in for overhaul, but this was not possible for the T-5 tankers. Therefore, we visited a tanker that was in its full operational status. (App. I lists the ships that we visited.) During our ship visits, we interviewed crew members, contractor and shipyard officials, MSC field personnel, and Coast Guard and American Bureau of Shipping inspectors. We visited several fast sealift ships because they were all located at the same port. 
To determine MSC s effectiveness in establishing and administering contract requirements, we reviewed the contracts for each of the ship programs and compared and contrasted the requirements contained in those contracts. We then discussed the contract differences with cognizant MSC officials to determine why the differences existed and to determine what, if any, standardized procedures these officials used to establish and administer program requirements. We also reviewed numerous MSC instructions dealing with funding, billing, and invoice certification. We reviewed the Department of Defense s National Industrial Security Program Operating Manual and MSC s security and crew qualification files to verify the suitability of the crew members on MSC s contractor-operated ships. To determine the effectiveness of MSC s current organizational structure, we met with various MSC officials and discussed their responsibilities with regard to MSC s contractor-operated ship programs. We also reviewed MSC s Standard Operating Manual, the draft proposal Reinventing MSC, and the MSC Commander s June 1, 1995, update to the reinvention proposal. We then discussed the reorganization initiative with MSC s current program managers. We did not address this area in depth because MSC s reinvention management team and its working groups had not developed the program management organization s structure by the time we completed our audit work. We conducted our work between July 1994 and August 1995 in accordance with generally accepted government auditing standards. <2. MSC s Internal Controls Are Inadequate to Prevent Overpayments for Ship Repairs> An ongoing joint investigation by the Federal Bureau of Investigation and the Naval Criminal Investigative Service has led to guilty pleas by four former employees of MSO, Inc., an MSC contractor that operated 10 oceanographic vessels. The investigation revealed that these employees had fraudulently altered overtime records of other MSO employees (crew members), changing nonreimbursable overtime charges to overtime charges that are reimbursable. It is estimated that these fraudulent overcharges amounted to millions of dollars during a 3-year period. This case shows that oversight and basic internal controls are fundamental for any entity to ensure that payments are made accurately and correspond to goods and services actually received. During our review of MSC s contractor-operated ship programs, we found that those who approve and pay bills do not verify that MSC has received the goods or services it is paying for. Part of the reason for this practice is a disconnect between headquarters-level invoice reviewers and field-level personnel, whose main concern is the operation of the ships rather than the cost of their repair. In fiscal year 1994 alone, MSC spent $93.8 million to maintain and repair the ships in the five contractor-operated programs we reviewed. Given the large amounts of money spent on maintenance and repairs, it is imperative that MSC have effective controls over these expenditures. MSC lacks controls in three general areas: verification of crew-performed repairs, review of invoices for subcontracts, and oversight of repair work performed during overhauls. Though MSC s Comptroller is responsible for coordinating MSC s internal control program, he does not have the authority to ensure that MSC has a sufficient system of internal controls or that those controls are followed. <2.1.
Inadequate Documentation and Review of Crew-Performed Repairs Have Led to Overpayments> For three of MSC s contractor-operated ship programs, MSC has included in its contracts predetermined dollar amounts for crew-performed minor repairs that are to be done as part of the contracts fixed price. According to the contracts, these predetermined amounts, or minor repair thresholds, can be met in three ways. Contractors can apply toward the thresholds (1) overtime and straight time performed by extra crew (beyond those normally required), (2) overtime by the regular crew performing minor repairs, and (3) industrial assistance (work done by subcontractors, not by the ships crews). Contractors are to report how they meet their thresholds in minor repair reports. After contractors meet these minor repair thresholds, they can be reimbursed by MSC for all minor repairs. According to the contracts, the cleaning of the ship and preventive maintenance are part of the fixed price. They are not to be included in the contractors minor repair reports. In our review of minor repair reports, we found that, because of either inadequate supporting documentation, inadequate review, or both, contractors were meeting their thresholds in ways that are not allowed by the contracts or listing the same jobs more than once. Contractors for these three programs were essentially overstating their minor repair reports in the following ways: The contractor for one ship program was including in its minor repair reports the straight time hours of its regular crew. The contractor for a second program was including cleaning jobs in its minor repair reports. The contractor for a third program was listing the same jobs twice in its minor repair reports. For all three programs, the contractors were not submitting supporting documentation that matched their minor repair reports. According to an MSC instruction, proper knowledge of receipt or disposition of goods/services during the invoice certification process will reduce the chances of fraudulent claims being paid. However, MSC reviews minor repair reports and invoices for over-threshold repairs without adequate supporting documentation to show that work was done. Contractors for two of the three programs had been paid by MSC for over-threshold repairs. As of October 10, 1995, one of the contractors had received $685,946 from MSC for over-threshold repairs for fiscal years 1991 through 1995. MSC paid a second contractor $741,360 for over-threshold repairs for fiscal year 1994 alone. At the end of our review, MSC had not yet calculated whether the contractor for the third program had met or exceeded its minor repair thresholds. MSC had no plans to recover amounts for jobs that should not have been included as minor repairs. <2.1.1. MSC Failed to Detect Overstatement of Minor Repair Reports> The contract operator for the first of the three programs we discussed above included in its minor repair reports the straight time hours of its regular crew, but at the end of the 5-year contract period, MSC was not aware of this practice. MSC had never requested or reviewed the complete supporting documentation for the contractor s minor repair reports during the 5-year contract period that would have uncovered this practice. For the life of the contract, the contractor reported nearly $6 million in crew-performed repairs in their minor repair reports. Of this amount, MSC reimbursed the contractor $685,946 for over-threshold minor repairs. 
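To summarize the threshold mechanism described at the start of this section, the rule can be made concrete with a minimal sketch, written here in Python; the category names, job records, and dollar figures are hypothetical illustrations and are not drawn from MSC contracts. The sketch simply screens out work that is not contract-eligible and reimburses only the amount above the threshold, which is the kind of check the contracts imply.

```python
# Illustrative sketch of the minor repair threshold rule described above.
# Category names, job records, and dollar figures are hypothetical and are
# not drawn from MSC contracts.

ELIGIBLE = {"extra_crew_time", "regular_crew_overtime", "industrial_assistance"}

def over_threshold_reimbursement(jobs, threshold):
    """Return (eligible_total, reimbursable_amount, rejected_jobs).

    Only jobs in contract-eligible categories count toward the minor repair
    threshold, and only the amount above the threshold is reimbursable.
    """
    rejected = [job for job in jobs if job["category"] not in ELIGIBLE]
    eligible_total = sum(job["cost"] for job in jobs if job["category"] in ELIGIBLE)
    reimbursable = max(0.0, eligible_total - threshold)
    return eligible_total, reimbursable, rejected

# One claimed job was performed on straight time, so it should not count.
report = [
    {"category": "regular_crew_overtime", "cost": 12_000.0},
    {"category": "industrial_assistance", "cost": 9_500.0},
    {"category": "regular_crew_straight_time", "cost": 7_500.0},
]
print(over_threshold_reimbursement(report, threshold=15_000.0))
# -> (21500.0, 6500.0, [{'category': 'regular_crew_straight_time', 'cost': 7500.0}])
```

A check of this kind is only as reliable as the supporting documentation behind each job record, which is the weakness the findings below describe.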
MSC s contract allows its contractor to apply toward the minor repair threshold repairs done by the ship s regular crew on overtime, but not repairs done during straight-time work hours. Because MSC does not require the contractor to submit supporting documentation, however, it has no proof that the contractor has not manipulated the reporting of overtime. Field staff for this program told us that, at a recent meeting with MSC headquarters personnel, they had recommended that the contractor be required to submit crew overtime sheets as supporting documentation for its minor repair reports. However, MSC headquarters personnel have taken no action in response to this recommendation. For this program, we requested supporting documentation from the contractor for one ship s minor repair report, which totaled $25,859 and covered about 5 months. We reviewed this documentation to verify that the crew had actually listed this overtime work on their timesheets. We found that for this minor repair report, $8,406 of repairs had been performed by the ship s regular crew during straight-time hours. Another $860 was unsupported by crew overtime sheets. When we disclosed our findings to program officials, they stated that they were unaware that the contractor was not complying with the contract and said that they would investigate the matter further. It is particularly important that MSC fully review supporting documentation for the minor repair reports because the Naval Criminal Investigative Service has found erroneous overtime documentation practices on the part of ship contract operators. These practices involved (1) ship officers fraudulent rewriting of crew members overtime sheets, (2) the contractor s application of nonreimbursable work toward the minor repair thresholds, and (3) the double-billing of MSC for the same hours of work. During our review, we also found instances of double-billing and the application of nonreimbursable work toward minor repair thresholds. <2.1.2. MSC Allows Inclusion of Cleaning Jobs in Minor Repair Reports> Another contractor was including cleaning jobs, which are nonreimbursable, in its minor repair reports. MSC did not require documentation that would have allowed it to verify that the contractor s crew had actually done the work or that the work was in fact minor repairs, rather than cleaning and maintenance work. The contracting officer told us that MSC did not request this documentation because the paperwork was excessive and burdensome for MSC. According to the contract, the cleaning and maintenance of the ship are paid for in the fixed-price portion of the contract. Cleaning and maintenance work is not to be included in the contractor s listing of minor repairs; nor is it to be billed as a reimbursable expense. While the contract contains a list of sample minor repairs, it does not contain a similar list of cleaning jobs. We asked contracting officials whether such lists might clarify what jobs can and cannot be claimed as minor repairs and therefore be reimbursable. They told us that the contract was already too specific and that adding such a list would be adversarial to the contractor because distinguishing between cleaning and minor repairs is by nature subjective. During our review, we requested that the contractor for this second ship program provide supporting documentation for one of its minor repair reports. We reviewed this documentation for three ships for a 3-month period.
We traced the contractor-generated list of minor repairs back to original timesheets filled out by the crew members. For one ship, we found that of the $15,897 the contractor claimed to meet its minor repair threshold, $3,202 (or 20 percent) was unsupported by crew overtime sheets. In addition to this unsupported work, we found that at least 24 of the 131 jobs listed as minor repairs appeared to be cleaning or preventive maintenance. That is, 24 jobs which cost $2,445 were for wiping up oil; defrosting the icebox; cleaning the galley, oven, staterooms, and pantry; lubricating hoses; rotating stores; waxing floors; sweeping the deck; entering timesheet data; and other similar cleaning and preventive work. When MSC s invoice reviewer approved this list of minor repairs, he deducted only one job, which entailed waxing the decks. This deduction was for $487.65. For the other two ships lists of minor repairs, we found that the contractor had similarly claimed cleaning and maintenance jobs as minor repairs. These included sweeping, picking up trash, removing dust and dirt, stripping and waxing decks, and cleaning the galley and a shower, among others. For these two ship reports, the MSC reviewer made no deductions at all. <2.1.3. MSC s Invoice Review Process Does Not Ensure Minor Repair Reports Accuracy> In our review of minor repair reports for a third ship program, we found numerous instances in which the supporting documentation did not match the jobs listed in the minor repair reports. For example, we found instances in which the contractor had listed the same jobs twice. In addition, we found instances in which the contractor had claimed work done by individual crew members, but its minor repair report did not include timesheets as documentation to verify that these crew members were actually aboard the ships and had done the work as claimed. MSC personnel for this ship program review minor repair reports for engineering content only. That is, they review these reports only to verify that the costs are reimbursable under the contract, not to verify the accuracy of the reports or to take steps that would detect duplicate listings. <2.2. Documentation Submitted With Invoices Is Insufficient to Ensure That Subcontractors Prices Are Fair and Reasonable> Not only is MSC s oversight of crew repairs inadequate, but its review of invoices for subcontracted work (second-level maintenance) is insufficient to prevent excessive payments by MSC. First, MSC does not uniformly require contractors to provide supporting documentation with their invoices that would indicate that prices are fair and reasonable. Second, MSC headquarters invoice reviewers generally do not rely on available field staff to verify that the subcontracted work was done or that it was reasonably priced. <2.2.1. Requirements for Supporting Documentation for Invoices Are Inconsistent> Included in all of MSC s contracts for the operation of its ships are clauses stating that the government is obligated to pay only the costs it deems are fair and reasonable. In only one of its contracts, however, does MSC include requirements for the contractor to submit documentation with its invoices that would allow the invoice reviewer to determine whether the price of the goods or services is fair and reasonable. In this one contract, MSC states that without such documentation, it will not reimburse the contractor. 
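The documentation requirement in that one contract can be expressed as a simple approval gate: no reimbursement without evidence of competition or a written sole-source justification. The minimal sketch below, written in Python, illustrates such a gate; the field names, records, and dollar figures are hypothetical assumptions rather than the actual invoice formats MSC or its contractors use.

```python
# Hypothetical approval gate for subcontract invoices, modeled on the
# documentation requirement described above. Field names, records, and
# figures are illustrative only.

def screen_invoice(invoice):
    """Return (approve, reasons).

    An invoice passes only if it shows competitive quotes or a written
    sole-source justification, and the awarded price does not exceed the
    lowest quote received.
    """
    reasons = []
    quotes = invoice.get("quotes", [])
    if not quotes and not invoice.get("sole_source_justification"):
        reasons.append("no evidence of competition or sole-source justification")
    if quotes and invoice["price"] > min(quotes):
        reasons.append("awarded price exceeds lowest quote")
    return (not reasons), reasons

invoices = [
    {"id": "INV-001", "price": 4_800.0, "quotes": [4_800.0, 5_200.0, 5_950.0]},
    {"id": "INV-002", "price": 3_200.0, "quotes": []},  # no supporting documentation
]
for inv in invoices:
    print(inv["id"], screen_invoice(inv))
# INV-001 (True, [])
# INV-002 (False, ['no evidence of competition or sole-source justification'])
```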
According to MSC, its subcontract review for one contractor was heightened because this contractor s purchasing system is not reviewed by the Defense Contract Management Command (DCMC), which is part of the Defense Logistics Agency. DCMC declined to review this contractor s purchasing system because the dollar value of its subcontracts was so low. MSC stated that for all but this one contract, MSC has required the contractors to maintain DCMC-approved purchasing systems. We analyzed the April 1995 DCMC audit of a contractor for two of the ship programs in our review. The DCMC auditors evaluated, among other things, whether the contractor had awarded subcontracts competitively and performed adequate price analysis and negotiations. At the end of its review, DCMC approved the contractor s purchasing system. However, it noted several weaknesses in this system and recommended corrective action. For example, DCMC found that only 54.5 percent of the contractor s purchase orders had been awarded competitively. For purchase orders under $25,000, only 48 percent had been awarded competitively. Finally, DCMC found that for awards without competition, 63 percent of the purchase order files neglected to include detailed evidence of effective price analysis or negotiation. Among the agency s recommendations was that the contractor assure that effective price analysis is performed for each applicable single-sole source purchase order over $10,000 and to a lesser degree those under $10,000. The contractor notified MSC that it intended to implement DCMC s recommendations. Despite the weaknesses revealed in the DCMC audit of this contractor, MSC has not adjusted its oversight of the contractor s awarding of subcontracts under $25,000. On the basis of what is submitted by the contractor to support subcontract invoices, the MSC invoice reviewer has no way of knowing whether the subcontract was awarded competitively or not. Neither does the supporting documentation show whether or how the contractor determined that prices were fair and reasonable. We asked the invoice reviewer for this program whether he had ever made deductions based on his determination that the price charged was not reasonable. He said that he only remembered questioning the reasonableness of price in two cases, in 1991 and 1992. One involved whether a technical representative had flown first class or coach, and the other involved whether the technical representative had rented the appropriate rental car. In neither case did the invoice reviewer determine that a deduction was necessary. We believe that these cases involved determining allowability of costs rather than reasonableness of costs. That is, under the terms of MSC s contracts with its ship operators, government regulations on travel apply. Allowing a technical representative to fly first class and drive a luxury rental car would violate the terms of MSC s contracts. On the other hand, during our review of invoices for the ship program that does require documentation of fair and reasonable prices, we found that invoices consistently included evidence of competitive bidding or a justification for a sole-source subcontract. We also found several cases in which an MSC field unit had deducted amounts from the contractor s invoices for inadequate documentation. For example, the field unit had deducted amounts for repairs and for repair parts because documentation did not indicate that the charges were fair and reasonable. 
We also saw a case in which the field unit deducted fax and telephone charges because the contractor had not submitted a statement explaining the nature of calls made to ensure the calls had been made for official government business. By contrast, for the contractor whose subcontracting weaknesses were cited by DCMC, we saw an invoice for $1,456.73 for telephone calls for a 3-day period. The invoice contained no indication of whether any of these calls were for official government business, yet the invoice was approved for payment. In our review of this same contractor s invoices, we found an invoice whose price appeared excessive. This invoice was for $3,560 to provide labor, tools and material as necessary to replace twenty (20) lampshades . . . relamp and repair as necessary. The invoice included no evidence of whether this work had been awarded competitively, why it had not been done by the ship s crew, or how extensive the work was. Before approving this invoice for payment, the MSC invoice reviewer did not seek further information from the contractor. When we asked for an explanation of this invoice, the invoice reviewer said that he did not know whether the lamps had been repaired or whether the lampshades had simply been replaced. After we requested supporting documentation from the contractor on this invoice, we found that MSC had paid $260 per lamp to repair 10 lamps and replace their lampshades, when it could have purchased new lamps for $210 each (excluding the costs of installation). Work on the other 10 lamps was less extensive, ranging from simply replacing the lampshades to replacing the toggle switches and/or modifying the lamp bases. (See fig. 2.1 for an example of the type of lamp repaired.) We also found that the ship s crew includes a qualified electrician whose overtime labor rate is about half that charged by the subcontractor. On another ship in this program, lampshades were replaced by the third assistant engineer, also at an hourly overtime rate about half that charged by the subcontractor. The master and the chief engineer on this ship stated that they could see no reason to use subcontractors to repair lamps because it is such a simple task and fully within the crew s capability. <2.2.2. MSC Field Staff Generally Not Involved in the Invoice Review Process> MSC headquarters personnel who review invoices do not know whether goods have been delivered or services provided, as dictated in MSC invoice certification instructions. In their review of invoices, headquarters personnel are ensuring that what is charged by the contractors is allowable under the terms of the contract. However, they are not ensuring that parts were actually delivered or work was actually done. In effect, these reviewers are relying heavily on the integrity of the contractors and are essentially approving all invoices for items or services allowed by the contract. Field personnel, who could be used to personally verify that work has been done at reasonable costs, are primarily concerned with the condition and operation of the ships. A senior-level official from one field unit told us that when he wants something fixed, cost is not his main concern. On one program, MSC field personnel do not see invoices reflecting the cost of work performed as a result of their recommendations. In two of the five contractor-operated ship programs, field staff are located near the ships and visit them regularly. These personnel could be used to verify that work billed MSC has been done and is reasonably priced. 
They could easily check work performed on the ships as part of their routine inspections. For one program, field staff are already reviewing invoices. <2.2.3. No Controls to Prevent Contractors From Circumventing Requirement to Receive MSC s Prior Approval for Subcontracts> The MSC contracting officer has no visibility over many large-dollar repair expenditures for one ship program. MSC s contract with its contract operator on this program requires that the contractor first obtain MSC approval before subcontracting for industrial assistance that costs more than $25,000. This requirement is intended to help MSC ensure that it receives fair and reasonable prices for large repair jobs and that the work is needed. Because the contractor for this program breaks large jobs down into multiple smaller ones, it is evading the contractual requirement to obtain the contracting officer s prior approval. Contractor officials told us that they routinely split jobs into segments because these ships needed to be ready to go to sea with 4 days notice. They said that they split jobs into pieces because obtaining the MSC contracting officer s approval delays payment to the subcontractor. During our review, we found that MSC has known about this practice since 1990. In a 1990 memorandum to MSC s Contracts and Business Management Directorate the former director of engineering at MSC stated that although Contractors are required to obtain Contracting Officer approval for subcontracts in excess of $25,000, there are many instances where Contractors have instituted procedures that evade compliance. These procedures, he said, included issuing multiple work orders, each less than $25,000, to a single subcontractor. During our review, we asked MSC officials whether they had taken any action to prevent contractors from issuing multiple work orders and thereby evade the requirement to seek MSC s prior approval. They said they had not. In one case, the contract operator split a job totaling $143,740 into 18 separate jobs, each under the $25,000 threshold. This work was for ship cleaning that was done by the same subcontractor on the same ship over a 3-month period. After we requested that the contractor provide us with evidence that this work had been competitively awarded, we found that the contractor had obtained quotations from three subcontractors on the price per square foot for cleaning the ship. The contractor awarded the work to the lowest bidder based on a single price quotation. It then split the job into 18 smaller ones involving the cleaning of different parts of the ship. In another case, this same contractor submitted 71 separate invoices totaling $202,294 for welding-related work done by one subcontractor on one ship over a 4-month period. In many cases, multiple invoices were submitted to MSC on the same day. For example, 9 invoices were submitted on December 2, 1994; 12 were submitted on December 30, 1994; 18 were submitted on January 5, 1995; and 12 were submitted on February 10, 1995. Despite this pattern of billing, the MSC person responsible for reviewing these invoices said that he was not aware of the contractor s practice of splitting large jobs into smaller ones. During our review, we asked the contractor to provide documentation showing which of these 71 jobs had been competitively bid or justified as sole source. He was able to show that only 30 had been awarded competitively and that 7 had been awarded sole source because they were related to competitively bid work. 
The contractor did not supply documentation on the other 34 jobs. <2.3. MSC Does Not Consistently Verify That Overhaul Work Is Done and Prices Are Reasonable> MSC headquarters personnel review overhaul work packages and discuss them in detail with representatives from the contract operators engineering staffs before overhaul subcontracts are solicited and awarded. However, even though a ship s overhaul can cost MSC up to $6 million, MSC does not always have an MSC representative on-site during the overhauls to ensure that work contained in these work packages is actually done and that unforeseen repairs not specified in overhaul contracts are completed or are reasonably priced. This lack of assurance is due at least in part to the fact that MSC has no agencywide requirement for its representatives to be present during ship overhauls. This presence during an overhaul enables a representative of MSC to observe the condition of items of equipment when these items are opened and inspected and to determine the extent of needed repairs. In addition, the presence of an MSC representative enables MSC to monitor the extent of the repairs to prevent unneeded work. When an MSC representative is not present during an overhaul, MSC is relying entirely on the integrity and professionalism of the contract operator to protect the government s interest. Even when MSC representatives are present, the amount of involvement among them, contract operators representatives, and shipyard personnel varies because MSC has no written guidelines governing the authority and responsibilities of its representatives. For the three contractor-operated programs whose ships are owned by the government, we found that some MSC representatives significantly contributed to the contracting officer s ability to enforce the terms of MSC s contracts and to ensure that repairs were made in the best interest of the government. Other MSC representatives contributions were not as significant. Even though an MSC presence during overhauls helps to protect the government s interest, having an MSC representative on-site did not always ensure that MSC obtained negotiated prices on change orders. During one overhaul, we found that for $271,755 of a total $544,135 (about 50 percent) in change orders, the contract operator s and the shipyard s estimates were identical. For $427,111 of this change order work (about 78 percent), the negotiated prices between the shipyard and the contract operator were the shipyard s estimated prices. The lack of clear written guidance on the authority and responsibilities of the MSC representative contributed to MSC s failure to obtain negotiated prices on this overhaul. Because the MSC representative did not independently estimate change orders, MSC had no assurance that it did not pay excessive prices. During this overhaul, the MSC representative was simply providing the administrative contracting officer with a statement that funds were available for the work. He was not preparing independent government estimates. Such independent estimates form the basis on which the government can challenge prices charged by the shipyard. MSC does not have written guidance to address the oversight of work done by its contract operators extra crew members during overhauls. During overhauls, MSC s ships maintain skeleton crews to monitor alteration, maintenance, and repair work and to provide security for the ships. 
However, MSC sometimes authorizes its contractors to retain additional crew members during overhauls when the contractors can provide justification for the special work requiring their retention. MSC has no written guidance regarding oversight responsibilities for this work, and it has not established procedures for taking deductions if the authorized work is not completed. An MSC representative for one ship program told us that he routinely inspects the work of additional crew members during overhauls. However, the benefit of these inspections is questionable for two reasons. First, MSC does not use these inspections as a basis for taking contract payment deductions. The MSC representative who actually inspects the approved work items does not receive or review the bill for this work, and no one at MSC asks for the results of his inspection when the bill for the work is reviewed. Second, MSC does not require the contractor to obtain prior approval when changing the work items used to justify the extra crew members. The contracting officer for this program told us that she does not see why the contractor cannot deviate from the special work items it submitted as justification for its extra crew members. We visited one ship from this program on the last day of its overhaul. During that visit we observed, as did an MSC representative, that many of the work items used to justify the ship s extra crew had been only partially completed or not completed at all. According to the MSC representative, this was not an isolated case, since on other overhauls he found that the work used to justify the extra crew had not been completed. Later that day we were told by the ship s master and chief mate that the work items had changed, and we were given a handwritten list of changes that had not been approved by MSC. Until that time, the MSC representative had not known what jobs the extra crew members were actually doing. At the end of our review, MSC had still not received a bill for this work, 10 months after the completion of the overhaul. <2.4. MSC s Internal Controls Are Weak> As we discuss in this chapter, MSC s internal controls to prevent the possibility of contractor fraud and abuse are weak in many cases. MSC s Comptroller is responsible for the coordination of MSC s internal control program. However, according to the MSC Comptroller, he does not have direct authority to ensure the sufficiency of these controls or their implementation. In 1990, Congress mandated governmentwide financial management reform by enacting the Chief Financial Officers (CFO) Act (P. L. 101-576). This act was based at least in part on the finding of Congress that billions of dollars are lost each year through fraud, waste, abuse, and mismanagement among the hundreds of programs in the Federal Government. The Secretary of Defense has recognized that the CFO Act is a vehicle for improving DOD s financial operations. He has therefore directed that senior managers throughout DOD play a more active role in identifying, reporting, and correcting poor internal controls. This does not appear to have occurred at MSC. <2.5. Conclusions> MSC s oversight of ship repairs for its contractor-operated ships is inadequate to prevent overcharges. MSC lacks basic internal controls that would help to ensure that MSC is paying reasonable prices for work that is actually being done. 
Specifically, MSC lacks basic internal controls in its supervision of overhaul work, in its verification of crew-performed repairs, and in its review of invoices for subcontracts. Furthermore, though MSC s Comptroller is responsible for coordinating its internal controls, this person has no authority over internal controls throughout the agency. <2.6. Recommendations> We recommend that the Secretary of Defense direct the Commander of MSC to take the following actions:
- Institute MSC-wide procedures to ensure that contractors are (1) accurately reporting how they meet contract-defined thresholds for crew-performed minor repairs, (2) submitting adequate documentation with invoices for MSC to determine that prices are fair and reasonable, and (3) obtaining prior MSC approval for subcontracted work above thresholds required by the contracts.
- When practical, require that MSC representatives verify, through spot-checks, that minor repairs and industrial assistance paid for by MSC have actually been done and recommend deductions if necessary. These spot-checks could be done by MSC personnel as part of their normal inspections.
- When practical, require an MSC representative to verify, based upon physical observation, the satisfactory completion of work performed at various stages of overhauls of MSC contractor-operated ships.
- Provide written guidance defining the roles, responsibilities, and authority of MSC representatives in protecting the government s interests during overhauls and other major repair work.
- Consider expanding the responsibilities of MSC s Comptroller or creating a new position for a financial management expert to oversee the implementation of the above recommendations. If a new position is created, this person should report directly to the Commander of MSC. In addition to the existing duties of the Comptroller, this person would be responsible for setting minimal internal controls for all aspects of financial management throughout MSC and overseeing the implementation of these controls. The responsibilities of this position would be similar to those of a Chief Financial Officer established under the CFO Act of 1990.
<2.7. Agency Comments and Our Evaluation> In official oral comments, DOD partially concurred with the report and generally agreed with our recommendations. However, DOD generally disagreed with the details of the report and the conclusion that internal controls are weak. DOD did agree that there are opportunities for further improvements in the internal controls applied to contractor operation of MSC ships and said it has already implemented remedial measures. DOD also stated that in view of the unusual procurement situations highlighted in the report, the Commander of MSC is focusing additional attention on risk analysis and design of appropriate internal controls. We continue to believe, based on the findings discussed in this chapter, that MSC does not have an adequate system of internal controls in place. Recent fraudulent practices of a former MSC contractor and the continuing investigation by federal law enforcement agencies into MSC operations support our conclusion that MSC s internal controls are inadequate. <3. Management Control Weaknesses Impede MSC s Efforts to Effectively Manage Its Ship Programs> Effectively managed programs have three things in common. First, program requirements are carefully and systematically established based on past experience and input from customers and knowledgeable people throughout the organization.
Second, responsibility for monitoring program performance and ensuring that programs meet the established requirements is clearly delineated. Third, program managers are constantly looking for ways to improve program performance and to reduce costs. During our review, however, we found that MSC does not have the organizational structure or the standardized procedures necessary to effectively manage its contractor-operated ship programs. MSC does not have guidelines for systematically establishing personnel requirements such as citizenship and security requirements. Neither does it systematically compare contractual requirements with contractors performance in obtaining security clearances and trustworthiness evaluations for crew members. Finally, MSC has no formal system to coordinate ideas to improve the contractors performance or reduce the programs costs. Because its own management controls are weak, MSC relies heavily on its operating contractors to prevent contract abuses. The dangers of such a heavy reliance on contractors have been demonstrated through MSC s past experiences. For example, a now defunct ship management company billed and collected payments from MSC for fraudulent overtime aboard MSC s oceanographic ships. In another case, MSC management s poor oversight resulted in the deteriorated and unsafe condition of its sealift tankers and in the crewing of these ships with significant numbers of personnel who had been convicted of felonies. We reported on the condition of the sealift tankers and their crews in a 1994 report. MSC s fragmented lines of organizational authority represent a significant impediment to sound management controls. MSC recognized the problems caused by its current organizational structure and planned to begin implementing a new program management structure on October 1, 1995. Under MSC s new structure, accountability that was previously divided among various MSC headquarters departments and field levels will reside with a single individual, the program manager. <3.1. MSC Does Not Have Standard Procedures to Develop Personnel Requirements> Despite the fact that MSC s contract provisions can affect a ship program s operation for 20 years or more, MSC does not have standard procedures to develop personnel requirements in its contracts. The personnel from MSC s Operations Office, who are responsible for coordinating contract requirements with the ship s sponsors, told us they do not follow checklists or standard procedures to ensure that important personnel requirements are not overlooked. Neither do they routinely consult existing contracts for other programs prior to the award of new contracts. As a result of this lack of standard procedures, MSC failed to review the resumes of some ships crews, and some ships did not have U.S. citizenship, security clearance, or trustworthiness requirements for their crews. <3.1.1. Guidelines to Establish Crew Qualification Requirements Are Lacking> MSC has no guidelines to ensure that crew qualification requirements are consistently established. Qualified crew are critical, especially in situations such as underway refueling, where the chance of a collision at sea is significantly increased. Therefore, it is essential for ship owners, operators, and those who charter ships to take precautions to ensure that the crews are qualified. 
Although four of the five ship program contracts we reviewed require contractors to submit the resumes of key personnel to MSC for approval before the personnel are assigned to a ship, the fifth ship program s contracts do not. An MSC official in charge of the fifth ship program told us that MSC did not need to review the resumes of crew members. He said that contractors should not crew their ships with improperly licensed crew members because they could be fined by the Coast Guard. However, for one program that required resumes, the contractor did attempt to crew its ships with improperly licensed crew members. After its review of resumes, MSC rejected two of the contractor s nominees for master positions because they did not have the proper licenses and had never served as chief mates on the program s ships. <3.1.2. Citizenship Requirements Overlooked> MSC s lack of standard procedures contributed to a routine citizenship requirement clause being left out of the contracts for one contractor-operated ship program. While contracts for four of the ship programs we reviewed included clauses requiring all crew members to be U.S. citizens, the fifth program did not include this clause. The contracts for this fifth program were signed in October 1982 and April 1983, just months after one of the other programs had signed contracts in August 1982 requiring all crew members to be U.S. citizens. Military and civilian officials in MSC s Pacific and Far East Offices expressed concern that not all personnel aboard the T-5 tankers were U.S. citizens, and following the Persian Gulf War, MSC tried to add citizenship clauses to the T-5 contracts. When the contractor refused, MSC dropped the issue. The contract for this program still does not require all its crew members to be U.S. citizens, and only Coast Guard regulations limit the number of foreign nationals on these ships. While MSC s contracts for its other four contractor-operated ship programs require all the contractors personnel assigned to ships to be U.S. citizens, they do not require the contractors shore personnel to be U.S. citizens. MSC field personnel for one program said that MSC s failure to include this clause for shore personnel was an oversight on MSC s part. These field personnel said that the contractor, aware of this loophole, had proposed a port engineer who was not a U.S. citizen. However, this person was disapproved because a foreign national cannot hold a security clearance and thus would not have been able to deal with any ship maintenance or repair work that involved classified material. <3.1.3. MSC Has No Guidelines for the Establishment of Security Clearance Requirements in Contracts> Contracts for all five of the ship programs we reviewed require at least some security clearances for the ships crew members. However, no one at MSC has established guidelines for the inclusion of security clearance requirements in contracts. As a result, a key contract requirement was inadvertently left out in one case. Four of the ship programs we reviewed had security clearance requirements in their original contracts. The fifth program added security clearance requirements during the ninth year of its contracts through contract modifications. These modifications required all corporate officers and the master, chief mate, and radio operator of each ship to have secret clearances. <3.1.4.
No Trustworthiness Evaluations for Three Ship Programs Despite Concerns About Sabotage> Although the contracts for all five ship programs require some crew members to hold security clearances, only the T-AGOS and oceanographic ships contracts require noncleared crew members to pass trustworthiness evaluations. Some MSC officials stated that these two ship programs have more stringent requirements for trustworthiness evaluations because of their sensitive missions. However, the program manager for another program stated that security requirements for his ship program were based on the fact that the ships are subject to sabotage. Trustworthiness evaluations determine the loyalty of an individual by checking whether the individual has committed any prior act of sabotage, espionage, treason, or terrorism. For the three ship programs that do not require trustworthiness evaluations for their unlicensed crew members, MSC does not collect or review any background information about these crew members. The Coast Guard does require mariners working aboard U.S. vessels to hold merchant mariner documents that include a criminal record check every 5 years. However, MSC does not spot-check these documents. If MSC ships are subject to sabotage, trustworthiness evaluations should be required of all its ship crew members. <3.2. MSC Does Not Ensure That Contractors Comply With Requirements for Crew Trustworthiness and Security Clearances> No office in MSC is responsible for tracking trustworthiness evaluations and security clearances for MSC s contractor-operated ship programs to ensure that contractors are complying with contract requirements. MSC s Office of Security, Operations Office, and Operating Contracts Division are involved with the security clearances and trustworthiness evaluations of ship crews, but communication among these offices is poor. As a result, MSC cannot ensure that its crews are trustworthy or appropriately cleared, and untrustworthy individuals may be assigned to ships with sensitive missions for extended periods of time before they are removed. Though we did not document any unauthorized disclosures of classified material by contractor employees, we did find that 300 crew members who were later found to be untrustworthy had been assigned to MSC s ship programs for the time it took to conduct the trustworthiness evaluations. In one case, it took 23 months to determine that a crew member was untrustworthy. <3.2.1. Trustworthiness Evaluations Completed Years After Crew Are Allowed to Sail on Ships> Three separate offices in MSC headquarters have distinct roles in maintaining information on contractor-operated ship crews. The Operating Contracts Division and the Operations Office maintain crew lists. The Office of Security maintains a list of trustworthy contractor personnel. However, no one from any of these three offices compares these lists to ensure that all crew members are trustworthy. In addition, the Office of Security does not track the length of time between the date the contractor submits the crew members original paperwork to MSC and the date MSC completes trustworthiness evaluations. As a result, crew members who may sail aboard MSC contractor-operated ships as soon as their trustworthiness paperwork has been submitted may be found much later to be untrustworthy. Over the last 8 years, MSC s Office of Security has completed trustworthiness evaluations for approximately 2,900 of the crew members on its contractor-operated ships. 
It has found that 300 of these crew members did not meet the trustworthiness criteria contained in the Navy s security instruction and thus had to be removed from MSC s ships. Because the Office of Security destroys its original records after it makes trustworthiness determinations, we could not determine how long these 300 untrustworthy individuals had been assigned to the MSC ships with sensitive missions before they were removed. We were able, however, to determine how long it took to do 29 evaluations. We did this by matching a contractor s active crew list to MSC s trustworthiness file. Until MSC makes its trustworthiness evaluation, the contractor s active crew list contains the dates the crew members forms were submitted. Once the evaluation is made, these original dates are lost because they are changed to the date of the completed evaluation. Therefore, we had to match an old crew list (containing the dates the forms had been submitted) to recently completed evaluations in MSC s trustworthiness file. Eight of the 29 evaluations were completed within 4 months. However, in three of the five cases in which MSC determined that the crew members were untrustworthy, the evaluations took 10 or more months to complete (see table 3.1). During the intervening months, the untrustworthy crew members were eligible to sail on MSC ships with the most sensitive missions. Crew members who require security clearances are not assigned to MSC s ships until their clearances have been completed. Even though more than 10 percent of the crew members MSC evaluated over the last 8 years were found to be untrustworthy and were removed from its ships, trustworthiness evaluations are still processed slowly. For example, when we matched one contractor s August 1994 crew list to MSC s trustworthiness evaluation file (updated through March 1995), we found that MSC had completed 255 of the 341 evaluations required for the contractor s crew members, but it had not completed the remaining 86 evaluations (see table 3.2). The trustworthiness evaluation forms for 21 of the 86 crew members were submitted in 1994. However, the forms for one crew member had been submitted in August 1989, and MSC had still not completed its evaluation in March 1995, almost 6 years later. In addition, four of the contractor s shore personnel had access to the ships with sensitive missions, even though they did not have security clearances and were not required by the contract to undergo trustworthiness evaluations. MSC s trustworthiness evaluations for crew members on ships in MSC s other sensitive program were delayed as well. We reviewed January 1995 crew lists for all four ships in this program and found that MSC had completed only 39 of the 94 required trustworthiness evaluations. <3.2.2. MSC Does Not Ensure That Contractual Requirements for Security Clearances Are Complied With> While we did not document any unauthorized disclosures of classified material by the employees of MSC s contract operators, we found that MSC is vulnerable to unauthorized disclosures because it is not consistently enforcing requirements for its security clearances. All of MSC s contract operators must obtain their required clearances from the Defense Industrial Security Clearance Office, but MSC does not monitor all its contract operators to ensure that they are complying with this requirement. 
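Matching an active crew list to the trustworthiness file, as described above, is the kind of routine comparison that could be automated. The minimal sketch below, written in Python, flags crew members who have no completed evaluation and whose paperwork has been pending longer than a set number of days; the names, dates, and record layouts are hypothetical illustrations, not MSC data.

```python
# Hypothetical comparison of an active crew list against completed
# trustworthiness evaluations; names, dates, and record layouts are
# illustrative only.
from datetime import date

def flag_crew(active_crew, completed_evals, as_of, max_pending_days=120):
    """active_crew maps a crew member's name to the date paperwork was
    submitted; completed_evals maps names to completed-evaluation dates.
    Returns (not_evaluated, long_pending) lists of crew names."""
    not_evaluated, long_pending = [], []
    for name, submitted in active_crew.items():
        if name not in completed_evals:
            not_evaluated.append(name)
            if (as_of - submitted).days > max_pending_days:
                long_pending.append(name)
    return not_evaluated, long_pending

crew = {"A. Mariner": date(1994, 8, 1), "B. Oiler": date(1995, 1, 15)}
evals = {"B. Oiler": date(1995, 3, 1)}
print(flag_crew(crew, evals, as_of=date(1995, 3, 31)))
# -> (['A. Mariner'], ['A. Mariner'])  submitted August 1994, still pending
```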
For one program, MSC keeps lists of the contractors cleared personnel in three different places the Office of Security, the Operating Contracts Division, and the Operations Office. However, for another program, no one at MSC keeps track of the contractor s cleared personnel. There was confusion about who was responsible for this tracking, and when we interviewed personnel from MSC s Office of Security, Operating Contracts Division, Engineering Directorate, and Operations Office, we found that none of them had documentation showing that the officers on the ships held the proper clearances. In addition, when we visited one of this program s ships, the master told us that only he and the radio officer had secret clearances. The contract required the chief mate to have a secret clearance as well. Even when MSC does receive clearance letters from the contractors, it does not verify the clearances with the Defense Industrial Security Clearance Office or compare the clearance letters with the contractor s active crew lists to ensure the clearance lists are complete. Therefore, MSC cannot verify that all its contractor personnel and crew members have appropriate security clearances. <3.3. No Systematic Approach to Identify and Implement Best Practices> When we talked to MSC s program managers, they told us that MSC does not have a formal system for them to get together, share ideas, and evaluate the costs of different contracting techniques. As a result, MSC may be missing opportunities to implement best practices. For example, the contractor-operated ship programs we reviewed used two different contracting methods to control ship maintenance and repair costs. However, no one at MSC has compared the two contracting methods to determine whether one method is more cost-effective than the other and therefore should be adopted for all of MSC s contractor-operated ship programs. Under one method, MSC uses a yearly budget to predict the maintenance and repair costs of its T-5 tankers. The operating contractor submits a proposed budget to MSC 30 days prior to each annual operating hire period. This proposed budget is based on historical costs and planned maintenance that will be completed in the following year. Personnel from MSC s Engineering and Contracting Directorates review the proposed budget and develop their own estimates. MSC and the contractor then negotiate a final budget through a contract modification. The contractor must submit quarterly reports that separate parts and technical representative services for 24 different maintenance and repair categories. At the end of the year, the Defense Contract Audit Agency audits the contractor s actual maintenance and repair costs based on a stratified statistical sample of invoices. If actual costs exceed budgeted costs, MSC reimburses the contractor. If budgeted costs are higher than actual costs, the contractor credits MSC. When we reviewed one year s records for the T-5 tankers we found that three ships were under budget, and two were over budget. The actual maintenance and repair cost for all T-5 tankers combined was within 6 percent of the budget. According to the contracting officer, because this process worked so well on the T-5 tankers, he later incorporated it into most of his contracts for the maritime prepositioning ships. In awarding contracts for three other contractor-operated ship programs, MSC uses a threshold method to control its maintenance and repair costs. 
This method, however, has not accurately predicted maintenance and repair costs, and it does not attempt to do so. It attempts only to set a fixed price for a portion of the repair costs. Under the threshold method, MSC sets a level of maintenance for the contractor to accomplish each month. This threshold is generally expressed in terms of a number of overtime hours of work to be done by a particular crew member often the second engineer. The threshold method of controlling costs offers less flexibility than the budget method used on the T-5 tankers and maritime prepositioning ships because unlike the budget, the threshold remains constant over the life of these short-term contracts. Contractors do not always submit monthly maintenance reports, as required under the threshold method, and the level of maintenance and repair reported is rarely close to the threshold level. Consolidated maintenance and repair figures vary among programs and contractors, but the fiscal year 1994 figures for one ship program were almost twice the threshold level. The maintenance and repair cost for each ship in that program was 59 to 175 percent more than the ship s threshold level. The second program was 13 percent over threshold for the contract period. MSC awarded a new contract for the third program on May 23, 1995, but as of October 10, 1995, MSC still could not determine whether the operator under the previous contract was over or under the threshold. This was largely due to contractor delays in submitting reports. While the threshold method controls costs by setting a fixed price for all work up to the threshold level, maintenance and repair work above the threshold is fully reimbursable, and the contractors are not required to obtain prior approval for this work. MSC plans to expand its thresholds in the future by including preventive maintenance, cleaning, and other work that is excluded under the current thresholds. However, if MSC does not accurately predict the costs of this excluded work and increase the threshold amounts appropriately, the contractors could quickly reach the threshold levels and then be fully reimbursed for all additional work. <3.4. Fragmented Lines of Authority Impede Sound Management Controls> Until November 28, 1994, MSC had not formally designated program managers for any of its contractor-operated ship programs. However, on that date MSC s Commander directed the head of the Operations Office to formally appoint program managers for several ship programs. As a result, two individuals from the Operations Office were designated as program managers for the five contractor-operated ship programs we reviewed. One individual was designated as the program manager for the T-5 tankers and the fast sealift ships. The other was designated as program manager for the oceanographic, maritime prepositioning, and T-AGOS ships. Since these program managers are not assigned any staff outside the Operations Office, they rely on MSC s various headquarters and field organizations to cooperate in developing and administering their program requirements. That is, the legal, contracting, engineering, accounting, and security personnel who administer various parts of the contractor-operated ship programs are all located in different departments in MSC and report to the heads of their individual departments. Also, ship programs that are contractor-operated are not collocated but, rather, are spread out over several departments. 
Such an organization is not conducive to the uniform administration of contracts or to the dissemination of best practices. Ultimately, it has contributed to MSC s failure to ensure that its contractors comply with their contracts. Specifically, MSC s fragmented lines of authority have hindered enforcement of trustworthiness and security provisions. <3.4.1. Personnel Frustrated by MSC s Organization> Some MSC personnel we talked to were very frustrated with MSC s unclear lines of authority, especially with the chain of command for contracting issues. The contracting officer s representative for one program told us that upper-level management provides minimal leadership and the contracting officer s representative has little authority to act independently. Until recently, another program did not even have a contracting officer s representative. The contracting officer for that program designated a person in the Operations Office to serve as his contracting officer s representative on October 28, 1994. However, this person did not sign his authorization letter until August 29, 1995, the day after we had discussed our completed review with MSC officials. <3.4.2. MSC s Proposed Actions to Improve Accountability> MSC is planning a reorganization to clarify accountability, responsibility, and authority for its ship programs. Under the proposed reorganization, six program managers will oversee MSC s ship programs. Unlike the current program managers, these new program managers will have authority over staff members assigned to their programs from the field and from the Operating Contracts Division and the Engineering Directorate. MSC s new program management structure was scheduled for implementation beginning in October 1995. <3.5. Conclusions> MSC s plan to designate program managers and to establish formal lines of accountability from personnel in the field and from the Operating Contracts Division and the Engineering Directorate directly to the program managers will improve communication within ship programs and should improve MSC s ability to monitor contractors compliance with the terms of their contracts. However, MSC still will not have a system in place to systematically establish personnel requirements and to identify and implement best practices. The use of standardized procedures and best contracting practices is important for all ship programs, but it is especially critical for contractor-operated ship programs where a single contract may remain in effect for 20 years or more. <3.6. Recommendations> We recommend that the Secretary of Defense direct the Commander of MSC to take the following actions: Develop and require the use of standardized procedures by program managers and their staffs whenever possible to establish personnel requirements in their contracts. As part of MSC s upcoming reorganization, direct program managers to clarify accountability by (1) assigning a specific individual responsibility for each contract requirement and (2) periodically checking that contract provisions, such as those dealing with trustworthiness and security clearances, are correctly administered and met. Instruct program managers and contracting personnel to meet to discuss and evaluate ways to identify and implement best practices into their contractor-operated ship programs. <3.7. Agency Comments and Our Evaluation> DOD concurred with the recommendations contained in this chapter. 
However, it did not concur with our findings that (1) MSC does not have standard procedures to develop personnel requirements and (2) MSC has no systematic approach to identify and implement best practices. In addition, DOD only partially concurred with our findings that (1) MSC does not ensure that contractors comply with requirements for crew trustworthiness and security clearances and (2) fragmented lines of authority impede sound management. In disagreeing with the finding concerning standard procedures for personnel requirements, DOD stated that MSC evaluates lessons learned from operating contracts before issuing solicitations for new contracts. It also stated that, while MSC does not require 100 percent of its tanker crews to be U.S. citizens, all of them currently are. We maintain that MSC s failure to require 100 percent citizenship on its T-5 tankers indicates that MSC does not always evaluate lessons learned from other ship operating contracts. In contracts signed less than a year before the T-5 tanker contracts, MSC required that 100 percent of the maritime prepositioning ships crews be U.S. citizens. Furthermore, in contracts signed after the T-5 tanker contracts, MSC required that all crew members be U.S. citizens on T-AGOS, fast sealift, and oceanographic ships. Although all the crew members now on the tankers are U.S. citizens, this was not the case in the past. For example, past crews have included citizens from Romania and Yemen. In addition, there is no guarantee that 100 percent of future crew members will be U.S. citizens, since that is not an MSC requirement. In disagreeing with the finding concerning best practices, DOD stated that best practices are shared, but the budgeting system used for the maritime prepositioning ship and T-5 tanker programs is not appropriate for other ship programs because the circumstances and contract terms are different. In our report, we acknowledged the differences between the T-5 tankers and maritime prepositioning ships and the rest of the contractor-operated ships we reviewed. However, these differences do not preclude the sharing of best practices between the programs. Furthermore, MSC has not done a cost comparison between the two different methods of controlling maintenance and repair costs. Although DOD partially concurred with our finding concerning MSC s tracking of crew trustworthiness and clearances, it said that trustworthiness evaluations are done by the Defense Investigative Service and the reports should be destroyed following final action. As our report points out, trustworthiness determinations are made by MSC, not by the Defense Investigative Service. Although the Defense Investigative Service reports MSC uses during the trustworthiness evaluation process must be destroyed after a final determination is made, MSC can and should track whether or not crew members have trustworthiness evaluations. Although DOD partially concurred with our finding concerning fragmented lines of authority, it stated that lines of authority have always delineated responsibilities for contractor-operated ships. We maintain that the lines of authority and responsibility were not always clearly delineated in the past, particularly regarding contracting officers representatives. <4. Ships Visited During Our Review> We visited ships in Norfolk, Va. (two ships); Jacksonville, Fla.; Charleston, S.C.; and New Orleans, La. (five ships). <5. Major Contributors to This Report> <5.1.
National Security and International Affairs Division, Washington, D.C.> Sharon A. Cekala Joan B. Hawkins Joseph P. Walsh Michael J. Ferren Beverly C. Schladt Martin E. Scire
Why GAO Did This Study Pursuant to a congressional request, GAO reviewed the Military Sealift Command's (MSC) management of its contractor-operated ships, focusing on whether MSC has: (1) adequate management controls to oversee contractors and prevent abuses; and (2) sufficient oversight to ensure that contractual requirements are being met. What GAO Found GAO found that MSC: (1) does not require contractors to adequately document minor repairs, crew time, or subcontracted work; (2) does not adequately verify crew-performed repairs, review subcontractor invoices, or supervise overhaul work; (3) lacks sufficient internal controls to adequately manage its ship operation contracts; (4) has no guidelines for systematically establishing personnel requirements; (5) does not ensure that contractors comply with requirements for trustworthiness evaluations and security clearances; (6) has no formal system to identify and implement best practices that could improve contractor performance and reduce costs; and (7) has acknowledged its organizational problems and plans to designate program managers and establish formal lines of accountability.
<1. Background> Bankruptcy is a federal court procedure designed to help both individuals and businesses address debts they cannot fully repay as well as help creditors receive some payment in an equitable manner. Individuals usually file for bankruptcy under one of two chapters of the Bankruptcy Code. Under Chapter 7, the filer s eligible nonexempt assets are reduced to cash and distributed to creditors in accordance with distribution priorities and procedures set out in the Bankruptcy Code. Under Chapter 13, filers submit a repayment plan to the court agreeing to pay part or all of their debts over time, usually 3 to 5 years. Upon the successful completion of both Chapter 7 and 13 cases, the filer s personal liability for eligible debts is discharged at the end of the bankruptcy process, which means that creditors may take no further action against the individual to collect the debt. Child support is not a debt eligible for discharge. The Bankruptcy Reform Act, among other things, amended the Bankruptcy Code to require those filers with the ability to pay some of their debts to enter into repayment plans under Chapter 13 of the Bankruptcy Code instead of liquidating their assets under Chapter 7 and granting the debtor a discharge from eligible debts. During the first year of implementation under the Bankruptcy Reform Act, about 628,537 individuals filed for bankruptcy, based on the Administrative Office bankruptcy data we used for our national data match. <1.1. The Bankruptcy System> The bankruptcy system is complex and involves many entities in the judicial and executive branches of the federal government. (See fig. 1.) Within the judicial branch, 90 federal bankruptcy courts have jurisdiction over bankruptcy cases. The Administrative Office is the central support entity for federal courts, including bankruptcy courts, providing a wide range of administrative, legal, financial, management, and information technology services. It also maintains the U.S. Party/Case Index, which contains information collected from all 90 federal bankruptcy courts and allows courts to identify parties involved in federal litigation almost anywhere in the nation. The Director of the Administrative Office is supervised by the Judicial Conference of the United States. The Judicial Conference also considers administrative problems and policy issues affecting the federal judiciary and makes recommendations to Congress concerning legislation affecting the federal judicial system. The bankruptcy courts share responsibility for bankruptcy cases with the United States Trustee Program, which is part of the executive branch s U.S. Department of Justice (Justice). In all but six bankruptcy court districts in Alabama and North Carolina, the U.S. Trustee Program is responsible for appointing and supervising private bankruptcy case trustees who manage many aspects of individual bankruptcy cases. The Executive Office for U.S. Trustees at Justice provides general policy and legal guidance, oversees operations, and handles administrative functions for the U.S. Trustee Program. It also manages the Automated Case Management System, which functions as the U.S. Trustee Program s system for administering bankruptcy cases. Separate from the U.S. Trustee Program, the remaining six districts have judicial branch bankruptcy administrators (referred to as the Bankruptcy Administrator Program) who perform duties similar to those of the U.S. 
Trustees, including overseeing the administration of bankruptcy cases, maintaining a panel of private case trustees, and monitoring the transactions and conduct of parties in bankruptcy in those states. <1.2. The Child Support Enforcement Program> The federal government partners with states to operate the child support enforcement program, making available to parents a range of child support services, including establishing and enforcing child support orders. A child support order can be entered into voluntarily, ordered by a court, or established by a state agency through an administrative process. Once established, it generally legally requires a noncustodial parent to provide financial support to a custodial parent with at least one child. Nationwide, almost 10 million noncustodial parents had child support orders in place in June 2007, based on the Federal Case Registry maintained by OCSE. This registry, part of the Federal Parent Locator Service, contains information about individuals with child support cases and orders administered by state CSE agencies as well as individuals not part of the CSE program, but who had orders established after 1998. About 78 percent of these 10 million noncustodial parents had orders enforced through state CSE agencies; the remaining parents are not involved with a state agency in enforcing their orders. The CSE program makes services available, upon request, to any parent or other person with custody of a child (custodial parent) who has a parent living outside of the home (noncustodial parent). Parents that receive public assistance through the Temporary Assistance for Needy Families (TANF), Medicaid, and Foster Care programs receive CSE services free; others are charged a nominal fee not to exceed $25. TANF recipients are required to assign their rights to child support payments to the state. In fiscal year 2006, the state CSE agencies administered 15.8 million cases, providing a range of services, including establishing paternity and support orders, locating noncustodial parents, collecting and distributing child and medical support, and reviewing and modifying support orders. The majority of child support is collected through wage withholding, but state agencies also use other methods for enforcing child support orders. In 2006, about 69 percent of child support payments were collected through wage withholding, which involves employers withholding support from noncustodial parents wages and sending it to the appropriate state agency for distribution. Other methods include intercepting federal and state income tax refunds; liens against property; as well as withholding or suspending driver s licenses, professional licenses, recreational and sporting licenses, and passports of persons who owe past-due support. During fiscal year 2006, total distributed collections were almost $24 billion. Program costs for that year totaled $5.6 billion, of which $3.7 billion was federally funded. State agencies administer the CSE program, but the federal government plays a major role in supporting them. At the federal level, OCSE within the Administration for Children and Families of HHS provides a majority of program funding. It also establishes enforcement policies and guidance, provides state agencies with technical assistance, and oversees and monitors state programs. <1.3. 
Bankruptcy Reform Act s Treatment of Child Support> The Bankruptcy Reform Act included new provisions to help better ensure that noncustodial parents who file for bankruptcy continue paying child support and that child support payments are given a high priority in bankruptcy. One of these provisions clarifies that proceedings to establish or modify a domestic support obligation (e.g., child support) owed to a governmental unit (e.g., state CSE agencies) are exempt from the automatic stay. An automatic stay bars creditors from taking measures to collect a debt pending resolution of the bankruptcy proceeding. Another provision allows for the continued operation of wage withholding for domestic support obligations (e.g., child support). Further, the Bankruptcy Reform Act, for example, requires that noncustodial parents filing for Chapter 13 bankruptcy must be current on their child support obligations to confirm a repayment plan. In addition, the Bankruptcy Reform Act provides child support with the first priority for payment of unsecured claims, up from a seventh-level priority under previous Bankruptcy Code provisions. <1.4. Notifying Custodial Parents and State Child Support Enforcement Agencies of Bankruptcies> The Bankruptcy Code requires bankruptcy filers to submit a list of their creditors, which could include a custodial parent or state CSE agency, in their financial disclosures. The court, in general, is to provide listed creditors with notice of a meeting of creditors. A filer who knowingly and fraudulently conceals a debt owed to a creditor is subject to criminal penalties. In addition, the Bankruptcy Reform Act amended the Bankruptcy Code to require that child support claimants, such as custodial parents and state agencies, be specifically notified of the bankruptcies of parents having a domestic support obligation (DSO), a designation that includes child support and alimony. Case trustees are to send notices of the bankruptcy case to these parties after bankruptcy filers report in their paperwork that they have a DSO. Figure 2 shows court and trustee notification processes based on law, regulations, and guidelines. State agencies and custodial parents benefit from knowing about the bankruptcies of parents who owe child support. The notice to the custodial parent provides information about the state agency and his or her right to use its services. Knowing about the bankruptcy of a noncustodial parent is important so that the state agency or custodial parent can participate in and be a party to pertinent bankruptcy proceedings. Knowing about the bankruptcy also helps state agencies avoid violating any automatic stay that may be in place. Although the CSE program may continue using many of its collection tools, such as wage withholding, a few of these tools are still subject to the automatic stay. According to a state official, agencies can face penalties if they collect funds using tools subject to the automatic stay. <2. About 7 Percent of Those Who Filed for Bankruptcy Have Orders to Pay Child Support and Most Are Part of the CSE Program> Our data match using the national bankruptcy and OCSE data found that among the 628,537 individuals who filed for bankruptcy between October 17, 2005, and October 17, 2006, the first year of implementation of the Bankruptcy Reform Act, about 7.2 percent were noncustodial parents with orders to pay child support. This population represents just one-half of 1 percent of the 9.9 million noncustodial parents who have orders to pay child support. 
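As a quick arithmetic check, the match identified 45,346 such filers (discussed just below), and the reported percentages follow directly from the counts in this section:

```python
# Figures as reported by the national data match described above.
matched_filers = 45_346          # bankruptcy filers who are noncustodial parents with orders
all_filers = 628_537             # individuals filing Chapter 7 or 13 in the first year
parents_with_orders = 9_900_000  # noncustodial parents with orders (approximate)

print(f"{matched_filers / all_filers:.1%} of filers have orders to pay support")      # ~7.2%
print(f"{matched_filers / parents_with_orders:.2%} of parents with orders filed")     # ~0.46%
```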
While these proportions are small, they nevertheless represent 45,346 adults with orders to pay child support and at least as many children. About three-quarters (33,958) of bankruptcy filers with orders to pay support were receiving services from CSE programs in various states. At least half of these bankruptcy filers were past due on their payments. While data obtained for our study did not include the past due amounts owed by these parents, fiscal 2004 data reported by OCSE, the most recent available, show that of all noncustodial parents with orders who are part of the CSE program, the average total past due amount owed was about $9,400. A greater number of noncustodial parents filed for Chapter 7 than Chapter 13 bankruptcy. Nevertheless, proportionally more noncustodial parents filed for bankruptcy under Chapter 13 than did all filers (see table 1). Several experts on bankruptcy and child support as well as officials in some state agencies said the state agency is likely to play a role in Chapter 13 filings because under this chapter an individual repays some or all debt under a court-approved plan prior to a discharge. Past due child support is a debt that can be included in the repayment plan and state agencies may opt to continue collecting past due support through the state agency enforcement process or through the Chapter 13 repayment plan. In contrast, a large majority of filers under Chapter 7 have no assets available for liquidation, and thus no funds are available to pay creditors. Regardless of which chapter a noncustodial parent files under, collection of ongoing child support would continue if, for example, the filer had income and a wage withholding order in place. Although our study does not focus on custodial parents who are owed child support and who filed for bankruptcy, our match showed that a slightly higher percentage of bankruptcy filers were custodial parents than noncustodial parents. Specifically, custodial parents represented 10 percent of all those who filed for bankruptcy while noncustodial parents represented 7 percent. <3. A Routine, National Data Match Might Identify Filers Who Do Not Report Their Support Obligations and Reduce the Workload Associated with the Current Process> A national match of bankruptcy data with child support enforcement data conducted on a recurring basis might help identify filers who, for one reason or another, fail to report their child support obligations in their bankruptcy paperwork. The results of such a match would also reduce the research workload for state agencies by providing positive identification of bankruptcy filers with orders under the states purview by comparing the full SSNs of individuals in both databases. This step would allow state agencies to more quickly and accurately identify the relevant individuals in their records. Currently, some case trustees do not include the full SSN of the filer in their notifications to the state agencies, which imposes additional work on the state agency staff to make a positive identification. For case trustees in all but six bankruptcy districts in two states, guidance calls for them to provide full SSNs in the notices they send to state agencies. <3.1. 
A National Match of Federal Bankruptcy with Child Support Enforcement Data Might Identify Some Filers Who Do Not Report a Child Support Obligation> Conducting a national bankruptcy and child support enforcement data match on a recurring basis might identify some additional filers who have orders to pay child support but who do not report this obligation, as required, when they file for bankruptcy. In a test review of bankruptcy filings involving orders to pay child support in Texas, we found that an estimated 2 percent of filers who completed all of their bankruptcy paperwork may not have reported their child support obligations. (The results could be higher or lower in other states.) For these and other filers who fail to report their obligations in their paperwork, they may subsequently report these obligations at a later stage in the bankruptcy process when case trustees ask them under oath whether they have a domestic obligation. Almost all of the 16 case trustees we spoke with for this review said they always ask debtors this question under oath. <3.2. A Data Match Might Readily Provide State Agencies with Positive Identifications and Reduce the Workload Associated with the Current Trustee Notification Process> A data matching process in which OCSE conveys results to state agencies that positively identify bankruptcy filers would allow state agencies to process the information more efficiently and accurately than the current process, reducing state agency workload. State agency officials reported that their staff currently take steps when they receive notices from case trustees to determine whether the named individual is in their agency s database. A significant portion of the notices a state agency receives may pertain to noncustodial parents who are not part of that state s CSE program. For example, our national data match analysis identified about one-quarter of noncustodial parent filers with orders not administered as part of any state agency. Match results distributed to state agencies by OCSE would, in effect, pre-sort the orders, only sending to state agencies the information on bankruptcy filers whose orders are under their purview. Also, agency staff can have difficulty distinguishing among the noncustodial parents in their caseload with similar names when the notices do not contain the full SSNs. Federal agencies often use full SSNs when data matching or other information-sharing is used to help them meet program goals, such as improving collections or minimizing fraud, as long as they take the required steps to safeguard the personally- identifiable information in their possession. We found that it is not always the practice for case trustees to include full SSNs in their bankruptcy notices to state agencies, even though some guidance has been issued on this. In our selected six states, state agency officials said that trustee notices did not always contain full SSNs. In Alabama, Illinois, and New York, for example, agency officials estimate that half or more of the trustee notices they receive contain the filer s partial SSN. Of the 16 case trustees we interviewed, 5 said they do not include the full SSN in the notices they send to state agencies. Four of these five case trustees participating in the U.S. Trustee Program expressed a variety of reasons for not providing full SSNs, such as administrative convenience or some concerns about privacy, despite EOUST guidance instructing them to do so. In Alabama, where a bankruptcy administrator rather than a regional U.S. 
Trustee oversees case trustees, a trustee and the bankruptcy administrator said that their policy is to provide only a partial SSN to the custodial parent and state agency. In developing guidance for trustee noticing under the U.S. Trustee Program, EOUST officials told us that they worked closely with OCSE regarding what information to include in the notices going to state agencies. The guidance notes that state child support agencies have requested that the notices identify bankruptcy filers by name and SSN. The guidance also includes sample notices that trustees can use that indicate that the full SSN should be included for notifying the state agencies. EOUST officials told us that OCSE officials emphasized the importance of the full SSN for effective processing of notices. EOUST officials also said that providing the full SSN to state agencies is consistent with the Bankruptcy Reform Act. In addition, EOUST officials said that they provided training about the notices to case trustees, through the regional U.S. Trustees, as part of training on all aspects of the new bankruptcy reform provisions and posted the guidance on their external and internal Web sites. EOUST officials also said they had considered executive branch policies about privacy and security of personal identifiers and determined that its guidance was consistent with these policies. It is important to note that the notices from case trustees are not made available to the public and are not part of the bankruptcy case docket, which is publicly available. Officials from state agencies said similarly that they do not make this information in the notices publicly available. We have previously reported that SSNs can be useful tools to enhance program integrity through data matching; however, government agencies and courts need to take steps to prevent the improper disclosure of SSNs, including limiting the use and display of SSNs in public records (e.g., SSN truncation in all lien records). While EOUST officials acknowledged the importance of full SSNs in notices, they told us that they do not have authority to require case trustees to provide them. They said that case trustees are not directly supervised by, or employees of, EOUST. The EOUST officials also said that case trustees are required to administer a bankruptcy estate in accordance with applicable state laws. For case trustees who are overseen by judicial branch bankruptcy administrators in the six bankruptcy districts in Alabama and North Carolina, neither the Judicial Conference nor the Administrative Office has established an explicit policy about case trustees providing the filer s full 9-digit SSN in the notices sent to custodial parents and the state child support enforcement agencies. In addition to reducing state agencies workload, a routine data match would have the additional advantage of identifying those parents who may be part of the CSE program, but whose cases are administered by an agency in another state. In some cases the notices could go to the wrong state because the Bankruptcy Reform Act requires that notices be sent to the state in which the child support claimant, such as a custodial parent, lives, although some may live in a state other than the one administering CSE services. Also, more than one state may be involved in some case activity. 
For example, according to OCSE officials, a January 2000 national analysis showed that, of noncustodial parents with orders to pay child support, and who were past due on their payments, 24 percent resided in a state other than the state seeking collection of these payments. <4. Although a Data Match Is Technically Feasible, There Would Be Substantial Start- Up Costs as well as Some Policy Considerations> A national data match conducted on a recurring basis is technically feasible, although it would require modifications to existing systems at national and state levels, including many steps for effectively developing and implementing data matching that are costly. Moreover, bankruptcy and CSE program officials expressed concern about implementing an automated system that provides notification of noncustodial parent filers to state agencies because of potential duplication between any new automated system and the existing trustee notification process that was implemented as a result of the Bankruptcy Reform Act. In addition to these costs, bankruptcy officials cited some statutory and policy considerations to releasing their own data or to performing a data match. Weighing these factors and concerns against the benefits of conducting a data match is an important consideration. <4.1. A Data Match with Transmission of Results to State Agencies Is Technically Feasible, Though It Would Not Replace Notifications to Custodial Parents> Officials from the Administrative Office, EOUST, and CSE agencies said that it is technically feasible to provide information in their databases to the other system and then match records between the two systems on a routine basis. They also brought up legal and policy considerations, which we discuss in more detail later. The bankruptcy system and CSE program each have federal databases that use SSNs as key identifiers and contain the information that potentially can be used to identify, on a routine basis, bankruptcy filers with orders to pay child support. Both the Administrative Office and EOUST databases contain the full SSNs of filers for consumer bankruptcies. The EOUST database does not include bankruptcy filers in Alabama and North Carolina because these two states do not participate in the U.S. Trustee Program. OCSE maintains the Federal Case Registry, a national automated system containing limited data of noncustodial parents with orders to pay child support that are enforced through state CSE programs and those that are not, among other information. OCSE also maintains the Federal Offset Program file that contains information on individuals who owe past due child support who are part of the state CSE programs. Using the Federal Case Registry and its other automated systems, OCSE currently conducts routine data matches with other entities to help state agencies locate parents and enforce child support orders. For example, the registry helps state agencies identify noncustodial parents who are located or working in other states. By matching its data with data held by other agencies, such as the Social Security Administration, the Department of Defense, the Federal Bureau of Investigation, and the Internal Revenue Service, it can locate the parent s employer for state agencies, allowing them to issue income-withholding orders, among other actions. Moreover, an OCSE analysis estimated that its National Directory of New Hires Database matches result in about $400 million in child support collected annually. 
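At its core, such a match is a comparison of normalized SSNs across two extracts. The sketch below is a minimal illustration under assumed record layouts and field names; the actual Federal Case Registry, the U.S. Party/Case Index, and the safeguards required for the personally identifiable information involved are far more elaborate.

```python
import csv

def normalize_ssn(raw: str) -> str:
    """Return a nine-digit SSN string, or '' if the value is malformed."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits if len(digits) == 9 else ""

def load_ssn_index(path: str, ssn_field: str) -> dict:
    """Index a CSV extract by normalized SSN (hypothetical file layouts)."""
    index = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ssn = normalize_ssn(row.get(ssn_field, ""))
            if ssn:
                index[ssn] = row
    return index

def match_filers_to_orders(bankruptcy_path: str, registry_path: str) -> list:
    """Return (ssn, filer record, registry record) for bankruptcy filers who are
    noncustodial parents with child support orders."""
    filers = load_ssn_index(bankruptcy_path, "debtor_ssn")
    registry = load_ssn_index(registry_path, "ncp_ssn")
    common = filers.keys() & registry.keys()
    # Results for parents in a state CSE caseload could then be routed to that state.
    return [(ssn, filers[ssn], registry[ssn]) for ssn in common]
```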
Typically, OCSE conducts matches with entities that have information common among many individuals in its target population or that are expected to yield significant results. See figure 3. With regard to using the results of a data match, current technical capability differs among agencies. OCSE and some state agency officials we spoke with said that OCSE s Federal Case Registry could disseminate this information to the 54 state agencies after modifications to this system and state systems. Upon receiving an electronic notification that a noncustodial parent in their caseload has filed for bankruptcy, state agencies would also be able to identify custodial parents in their caseloads who are associated with these noncustodial parent filers. However, notifying the custodial parent about the bankruptcy is not currently part of the state agencies or OCSE s duties. Also, these agencies do not have much information on custodial parents who are not part of their state CSE programs. Alternatively, case trustees could use the match results to continue carrying out their statutorily required duty to notify these parents. However, EOUST officials told us they would need to build the capacity to transmit the match results from EOUST to case trustees who participate in the U.S. Trustee Program. <4.2. A Data Match Would Likely Involve Substantial Start- up Costs and Would also Duplicate a Part of the Current Notification System> Although electronic data sharing across government agencies is not uncommon, it can be a complex and costly undertaking. Data matching would need to be done frequently (e.g., weekly) to be useful, according to some state agency officials, and would likely involve developing automated interfaces to exchange data effectively on a recurring basis. In developing such systems, to reduce the risks to acceptable levels, following and effectively implementing accepted best practices in systems development and implementation (commonly referred to as disciplined processes) is important. It would include at a minimum defining the detailed requirements for the new or modified systems and interfaces, and thorough and complete testing to determine that new or modified systems will work as intended. Even when the agencies have effectively implemented the disciplined processes necessary to reduce risks to acceptable levels, a framework is needed to guide a data sharing project such as this. Specifically, agencies generally enter into written agreements when they share information for conducting data matches. Based on their experience, OCSE officials estimate that developing such agreements generally requires many months. Officials from OCSE and EOUST believe that system modifications that would precede data sharing would involve significant costs. They said, for example, they would need to build an exchange method that would allow for the secure exchange of data. Overall, OCSE officials estimate that their development costs would be between $2 million and $2.5 million and would take between 15 and 18 months to implement. Once a matching process is established, disseminating match results would not be a cost-free proposition. EOUST officials said that it would take a considerable effort to establish an internal process, either manual or automated, for disseminating the match results to the case trustees. 
While state agencies could accept match results from OCSE using an existing system, OCSE officials said that this would require building this capability into the state agencies respective automated systems. States would incur some of these up-front costs, according to these officials. Additional costs may be incurred at the county level, with officials at one state agency saying that counties, and not just the state, might need to modify their systems to receive matched data. Once the necessary interfaces and system changes have been developed and effectively implemented, there are ongoing operation and maintenance costs to consider. OCSE estimates annual costs of between $35,000 and $50,000, depending on which entity conducts the match. These costs would include computer processing time and staff resources for managing data transactions. For example, EOUST currently employs two full-time staffers to extract bankruptcy data weekly from the Administrative Office s bankruptcy case database, and a data match between the bankruptcy system and CSE program would likely involve staffing. Some Administrative Office, EOUST, and CSE officials expressed concern about implementing an automated system providing notification of noncustodial parent filers to state agencies because of potential duplication between any new automated system and the existing paper system, which was implemented as a result of the Bankruptcy Reform Act. If a new system duplicates the notices that state agencies now receive from bankruptcy case trustees, it could add to their workload. That is, state agencies would be receiving information about bankruptcy filers with child support obligations from both trustees and OCSE unless the Bankruptcy Code is amended. Overall, officials from several of the state agencies we talked with said that while conducting electronic matching and sending the results to their agencies could be useful to them, the costs might not warrant such a match. Moreover, according to OCSE officials, state agency directors they have communicated with about a potential data match have similarly noted this trade-off. <4.3. Officials of the Administrative Office Say That Their Current Policy Does Not Allow for a Data Match while Officials from Other Programs Say It Could Be Acceptable> Officials from the Administrative Office said that their current policy does not allow a data match while officials from EOUST and OCSE said that a data match would be acceptable if the match met specific privacy guidelines. Officials at the Administrative Office cited a policy against releasing and disseminating their bankruptcy data to OCSE. This federal judiciary policy specifically bars release of the names and SSNs of bankruptcy filers to HHS on the grounds that the judicial branch must remain an independent and objective adjudicator of creditor claims. Administrative Office officials also noted that data on bankruptcy filers is available at EOUST, which is responsible for managing bankruptcy cases and ensuring compliance with applicable laws and procedures. For their part, officials at EOUST stated that their policy on data sharing is guided by the Privacy Act the federal law governing federal agencies use and disclosure of records containing individuals personal information. 
The officials said that EOUST s current policy implementing the routine use exception of the Privacy Act does not support a match with the system of records in which the bankruptcy data are kept, because identifying bankruptcy filers with child support obligations is not part of its mission. However, if OCSE requested the bankruptcy data from EOUST and EOUST determined that this request falls within the law enforcement agency exception of the Privacy Act, then EOUST officials said that it could share its data with OCSE. According to OCSE officials, it would be acceptable for OCSE data to be matched with bankruptcy data and for OCSE to disseminate the results to state agencies on a recurring basis. However, OCSE officials noted that the match results could only be used for CSE program purposes. That is, EOUST or the Administrative Office could perform a match using CSE data and bankruptcy data and return the results to OCSE, but these entities could not use the CSE data or match results for their own purposes, such as sending match results to case trustees. With respect to sending match results to custodial parents outside the CSE program, OCSE officials said that OCSE would not be the appropriate entity to do this because it is neither authorized nor funded to interact with these parents in this way. <5. Conclusion> While matching federal bankruptcy data with child support records might facilitate the identification of some additional bankruptcy filers with child support obligations and improve the current system for notifying state agencies, these potential improvements seem modest in comparison to the costs, efforts, and statutory and policy considerations involved in implementing and maintaining a data matching system. As a result, it appears that instituting a routine data matching system may not be warranted. A relatively small percentage of bankruptcy filers have orders to pay child support. In addition, a process is currently in place to identify and notify custodial parents and state agencies of bankruptcy proceedings, as called for under the Bankruptcy Reform Act. Moreover, a data matching system with results transmitting electronically to state agencies would not offer a comprehensive alternative to the trustee notification system insofar as it would not transmit information to custodial parents and would partially duplicate the trustee notification process. Finally, legal and policy considerations would need to be addressed to institute data matching between these two systems. Although these challenges are not insurmountable and data matching can be a useful tool, in this case, there is an alternative that should improve information sharing between case trustees and state child support agencies within the current system of trustee notices. Notwithstanding EOUST guidance calling for case trustees to provide the full SSNs, some case trustees only provide partial SSNs. Although EOUST cannot require case trustees to provide the full SSN, its examination of the trustee notification process might identify reasons for case trustees not providing the full SSNs as well as measures to help encourage the provision of full SSNs in notices to state agencies. Without EOUST more actively encouraging case trustees to provide full SSNs, state agencies may continue to experience more difficulties than necessary in accomplishing the child support goals of the Bankruptcy Reform Act. 
While neither the Judicial Conference nor the Administrative Office has developed similar guidance for bankruptcy administrators, the same reasons exist for state agencies having full SSNs, regardless of which program supervises case trustees. These reasons warrant some examination of the trustee notification process in the bankruptcy administrator districts. <6. Recommendations for Executive and Judicial Branch Action> To help improve the bankruptcy trustee notification process for state child support enforcement agencies called for under the Bankruptcy Reform Act, we are making two recommendations. First, we recommend that the Attorney General direct the Director of the Executive Office for U.S. Trustees to more actively encourage case trustees to provide state agencies the full SSNs of bankruptcy filers. This could be accomplished, for example, by working with case trustees to identify and address any issues related to implementation of the current guidance, such as lack of clarity in the guidance or concerns about preserving the security of SSNs. Second, we recommend that the Judicial Conference of the United States work with bankruptcy administrators in the six bankruptcy court districts in Alabama and North Carolina not subject to EOUST guidance to examine whether case trustees should provide state agencies with the full SSN of bankruptcy filers. This might be done in the following ways: Inform bankruptcy administrators and the bankruptcy court judges in those six districts about the importance of including the full SSN, how this information would be used by state agencies if provided, and to do so in a way that preserves the security of the information. Work with the bankruptcy administrators and bankruptcy court judges in those six districts to identify and if possible, address any issues or concerns, including the security of the information, related to the use of full SSNs in the notices. <7. Agency Comments and Our Evaluation> We provided Justice, the Administrative Office, and HHS with a draft of this report for their review and comments. The U.S. Trustee Program at Justice said that it supported the recommendation and would continue to work with the private case trustees, including through their national associations, to identify and address impediments to ensuring that full SSNs are provided to state CSE agencies. Its written comments are included in appendix II. Officials from the Administrative Office, in commenting orally on the draft, said that in light of our recommendation, they would review in the bankruptcy districts in Alabama and North Carolina the entire process in place for notifying state CSE agencies to see if the process is working correctly and take action as needed. They also provided technical comments that we incorporated as appropriate. In addition, HHS provided technical comments that we incorporated as appropriate. We are sending electronic copies of this report to the directors of the Administrative Office of United States Courts and the Executive Office for U.S. Trustees at the Department of Justice; the Secretary of Health and Human Services; appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you have any questions about this report. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology <8. Objectives> The objectives of this report were to determine (1) What percent of bankruptcy filers are parents who have orders to pay child support? (2) In what ways, if any, might matching national bankruptcy and child support enforcement data on a routine basis facilitate the identification of bankruptcy filers who have child support obligations? (3) What is the feasibility and estimated cost of conducting such a data match on a routine basis? <9. Scope and Methodology> To conduct our work we reviewed relevant laws, rules and regulations, and guidance that affect the bankruptcy process and child support enforcement (CSE) program, including the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005, Title IV-D of the Social Security Act, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, and the Privacy Act. We also interviewed bankruptcy and CSE program officials. This information was also used to review the national court, bankruptcy, and the CSE data systems that might be used for a potential recurring, national data match. To identify the proportion of parents with orders to pay child support who filed for bankruptcy nationwide, we worked with the U.S. Department of Health and Human Services Office of Child Support Enforcement (OCSE) to develop an analysis plan. This plan outlined how they would match their national CSE data with a national extract of personal bankruptcy filers that we obtained from the Administrative Office of United States Courts (the Administrative Office). The national CSE data from the Federal Case Registry, as of June 2007, contained information about individuals who are participants of the CSE program and individuals who are not participants of the CSE program but had orders established after 1998 to pay child support. The national CSE data also included data from the Federal Offset Program file, which contains only current information about noncustodial parents that participate in the CSE program who owe past due child support. The bankruptcy data from the U.S. Party/Case Index included names and Social Security numbers (SSNs) of all individuals that filed for Chapter 7 or Chapter 13 bankruptcy between October 17, 2005, and October 17, 2006, the first year of implementation under the Bankruptcy Reform Act. We recognize that the difference in time frames for the bankruptcy and CSE data could mean that we over-or under-counted individuals in this population. For example, we may have under-counted if a noncustodial parent s order ended in May 2007, but this noncustodial parent filed for bankruptcy on August 1, 2006. However, we determined that this was not a significant methodological limitation for the purposes of testing this data match and our analysis. From the Administrative Office we received 839,597 records of bankruptcy case data. After cleaning the data, 642,709 records were left for our work. Records were removed for the following reasons: missing SSN, bad SSN (more or less than nine digits), text strings instead of SSN, duplicates, and bankruptcy chapters other than 7 and 13. We had several communications with the system administrators to clarify our reasoning before dropping any records. 
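Applied to a hypothetical extract, the record-removal rules just described might look like the following sketch; the file name and column names are illustrative assumptions, not the Administrative Office s schema.

```python
import csv

VALID_CHAPTERS = {"7", "13"}

def clean(records: list) -> list:
    """Apply the removal rules described above to a list of record dicts."""
    kept, seen = [], set()
    for rec in records:
        ssn = rec.get("ssn", "").strip()
        if not ssn.isdigit() or len(ssn) != 9:   # missing SSN, text strings, or not nine digits
            continue
        if rec.get("chapter", "").strip() not in VALID_CHAPTERS:  # chapters other than 7 and 13
            continue
        key = tuple(sorted(rec.items()))          # drop exact duplicate records
        if key in seen:
            continue
        seen.add(key)
        kept.append(rec)
    return kept

# Hypothetical extract file:
with open("bankruptcy_extract.csv", newline="") as f:
    cleaned = clean(list(csv.DictReader(f)))
print(f"{len(cleaned):,} records retained after cleaning")
```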
We were told that although the system has data checks, no automatic cleaning is performed. Rather, notices are sent to the district courts and it is left to them to correct the data. We assessed the reliability of the respective bankruptcy and CSE data by reviewing existing information about these data and the systems that produced them, interviewing agency officials knowledgeable about these data, and performing electronic testing. Because of OCSE s legal concerns, we agreed that OCSE would not provide us with child support case data. Instead, OCSE performed the test match of the bankruptcy data and national CSE data itself to meet certain specifications we provided, and included some information to allow us to assess the work performed. Because we were not provided with the underlying data, our ability to conduct electronic testing as a part of the data reliability assessment was limited. For analyses such as these, electronic testing of the data is generally a routine part of the reliability assessment. However, based on interviews of knowledgeable officials and reviews of relevant documentation, and because OCSE routinely performs SSN checks with the Social Security Administration, we have sufficient reason to believe that the OCSE data are reliable for the purpose of this report. In preparation for matching, we eliminated duplicate SSNs from the data within each bankruptcy chapter, which brought our total to 630,075 individuals who filed for bankruptcy. This total double-counts the 1,538 individuals who filed for both Chapter 7 and Chapter 13 bankruptcy. To help determine the potential benefits of data matching on a routine basis, we conducted a match ourselves of national bankruptcy filings with CSE data in Texas to ascertain whether bankruptcy filers volunteered their child support obligations when they filed for bankruptcy. Among the six states we contacted, Texas was readily able to provide us with an extract of its child support caseload. Our universe totaled 1,931 individuals, which included noncustodial parents with child support orders who were participating in the Texas CSE program at some point between October 17, 2005, and October 17, 2006, and who filed for bankruptcy between October 17, 2005, and October 17, 2006. From this universe, we then selected a simple random probability sample of 100 noncustodial parents. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Each sample element was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that the interval ranging from less than 1 percent to over 7 percent contains the true percentage of our study population who completed all of their bankruptcy paperwork and did not report their child support obligations.
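As a simplified illustration of how a confidence interval like the one above can be computed from a simple random sample of a finite population, the sketch below uses a normal approximation with a finite population correction. The sample count of non-reporters (2 of 100) is assumed for illustration only and is not a figure from this report; an exact method, which GAO likely used, produces the kind of asymmetric interval reported above.

```python
import math

# Illustrative inputs; the actual number of sampled non-reporters is not stated in this report.
N = 1931   # study population: Texas noncustodial parents with orders who filed for bankruptcy
n = 100    # simple random sample size
x = 2      # assumed number of sampled filers who had not reported their obligation

p_hat = x / n
# Standard error of a proportion from a simple random sample, with the finite population correction.
se = math.sqrt(p_hat * (1 - p_hat) / n) * math.sqrt((N - n) / (N - 1))

# Normal-approximation 95 percent confidence interval; with so few non-reporters, an exact
# (e.g., Clopper-Pearson) interval would extend higher on the upper end, as in the report.
lower = max(0.0, p_hat - 1.96 * se)
upper = min(1.0, p_hat + 1.96 * se)
print(f"estimate {p_hat:.1%}, approximate 95% CI [{lower:.1%}, {upper:.1%}]")
```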
To conduct our review of the bankruptcy case files for the Texas sample, we developed a data collection instrument to gather information systematically from the selected bankruptcy files and used the Administrative Office s electronic public access service to review all bankruptcy filings and to record whether the child support obligation was reported in the bankruptcy paperwork. Bankruptcy filers (and their attorneys) can report these obligations in a number of places in the paperwork. We did not determine whether the individuals who neglected to report their obligations eventually did so when asked by a case trustee. The results of this case file review cannot be generalized nationwide; however, they can be generalized to the population of 1,931 noncustodial parents with IV-D orders on record in the automated system of the Texas State CSE program who also filed for bankruptcy nationwide, and they are intended for illustrative purposes. Moreover, it is possible that we identified some individuals as non-reporters due to a timing issue rather than their not disclosing a current obligation. While we attempted to match the time frames of the bankruptcy and child support data as closely as possible, it is possible that an individual s child support status on the exact date they filed for bankruptcy might not have been captured in our data match. We determined that this timing issue was not a significant methodological limitation because we found so few filers who did not report their child support obligations. To help us understand the potential benefits as well as the feasibility and cost of data matching on a routine basis, we interviewed officials in both the bankruptcy system and the CSE program, including officials representing federal, regional, and state entities. In interviews with these officials, we also discussed the technical, legal, financial, and security challenges that data matching would entail for all parties. We spoke with officials in the Administrative Office, the Executive Office for U.S. Trustees, and OCSE. We also interviewed officials at state agencies in Alabama, California, Illinois, New York, Texas, and West Virginia. We chose these six states for their diverse geography, caseload sizes, and administrative structures. Our work at the six state agencies focused on the notices they receive from case trustees under the new DSO provisions of the Bankruptcy Reform Act rather than the notices they receive from bankruptcy courts under the more general requirement that all creditors specified in bankruptcy filings are to be notified by the courts. Additionally, we interviewed 5 regional U.S. Trustees and 1 bankruptcy administrator in Alabama and the 16 case trustees who report to them in bankruptcy districts in these six states. We conducted this performance audit from December 2006 to January 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Justice Appendix III: GAO Contact and Staff Acknowledgments <10. GAO Contact> <11. Acknowledgments> In addition to the contact named above, Denise M.
Fantone, Acting Director; Gale Harris, Assistant Director; James Whitcomb, Analyst-in- Charge; Susan Higgins; and Sara Pelton made significant contributions to this report. In addition, Ron La Due Lake, Cynthia Grant, and John Chris Martin provided assistance in data collection and analytical support; Linda Watson, Ellen Wolfe, and Jessikah Foulk provided assistance in data collection; Susan Bernstein provided writing assistance; Jim Rebbe and Geoff Hamilton provided legal assistance; and Lise Levie provided technical assistance.
Why GAO Did This Study Recognizing the importance of child support, the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 requires that if a parent with child support obligations files for bankruptcy, a bankruptcy trustee must notify the relevant custodial parent and state child support enforcement agency so that they may participate in the case. The act also required GAO to study the feasibility of matching bankruptcy records with child support records to assure that filers with child support obligations are identified. GAO therefore (1) identified the percent of bankruptcy filers with obligations nationwide, (2) examined the potential for routine data matching to facilitate the identification of filers with child support obligations, and (3) studied the feasibility and cost of doing so. GAO interviewed child support enforcement and bankruptcy officials at the federal level and in six states. GAO also conducted a nationwide test data match and reviewed national bankruptcy filings for people with support obligations in Texas for an indication of whether filers are failing to provide this information. What GAO Found Nationwide, about 7 percent of individuals who filed for bankruptcy between October 17, 2005, and October 17, 2006--the first year of the bankruptcy act implementation--were noncustodial parents with child support orders. They, in turn, represented about one-half of 1 percent of the 9.9 million noncustodial parents with orders to pay child support. While these proportions are small, they represented 45,346 adults and at least as many children. Routine data matching might identify individuals who have not reported their child support obligations. However, GAO estimated from a random sample file review that 98 percent of noncustodial parents nationwide with orders in Texas had volunteered this information when they filed. (The results could be higher or lower in other states.) Another potential benefit would be to reduce the workload for state child support agencies by providing positive identification of bankruptcy filers with orders under the states' purview by comparing the full social security numbers (SSNs) of individuals in both bankruptcy and child support databases. This would help address the current situation state agency officials described, in which significant numbers of the notices they receive from bankruptcy trustees included only partial SSNs of the named person, imposing additional work on staff to make a positive identification in their databases. For bankruptcy case trustees participating in the U.S. Trustee Program, we found this to be the case, even though program guidance--covering 84 of the 90 bankruptcy districts--calls for case trustees to provide full SSNs in notices sent to state agencies. These notices are not part of any public record and trustee program officials said this use of the full SSNs is consistent with executive branch policies designed to guard privacy. For the remaining six districts, administered under a separate program, no guidance has been developed. A data matching system is technically feasible, but it would be a complex and costly undertaking, and would involve addressing some statutory and policy considerations. Regarding notifying state agencies of the match results, federal child support enforcement officials said that their national automated system could disseminate this data after modifications to federal and state systems. 
However, a data matching system would not offer a comprehensive alternative to the trustee notification system, because it would not transmit information to custodial parents. Regarding cost, bankruptcy and child support enforcement officials said that the development and implementation of an automated interface between two separate databases is a complex and costly undertaking, requiring modifications to each, with many steps required to assure that the matching system is developed and deployed without critical flaws and allows for the secure exchange of data. Also, bankruptcy officials cited some statutory and policy considerations regarding releasing their own data or performing a data match. A data matching system would also duplicate a portion of the current trustee notification process. In view of these findings, instituting a data matching system may not be warranted, especially if the case trustees can provide full SSNs of bankruptcy filers when notifying state agencies.
<1. Background> Dramatic increases in computer interconnectivity, especially in the use of the Internet, continue to revolutionize the way our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous. Vast amounts of information are now literally at our fingertips, facilitating research on virtually every topic imaginable; financial and other business transactions can be executed almost instantaneously, often 24 hours a day; and electronic mail, Internet Web sites, and computer bulletin boards allow us to communicate quickly and easily with a virtually unlimited number of individuals and groups. However, in addition to such benefits, this widespread interconnectivity poses significant risks to the government s and our nation s computer systems and, more important, to the critical operations and infrastructures they support. For example, telecommunications, power distribution, water supply, public health services, national defense (including the military s warfighting capability), law enforcement, government services, and emergency services all depend on the security of their computer operations. On the other hand, the speed and accessibility that create the enormous benefits of the computer age, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. Table 1 summarizes the key threats to our nation s infrastructures, as observed by the Federal Bureau of Investigation (FBI). Government officials remain concerned about attacks from individuals and groups with malicious intent, whether for crime, terrorism, foreign intelligence gathering, or acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, degrade the integrity of, or deny access to data. In addition, the disgruntled organization insider is a significant threat, since these individuals often have knowledge that allows them to gain unrestricted access and inflict damage or steal assets without possessing a great deal of knowledge about computer intrusions. As greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation s defense and intelligence communities increasingly rely on commercially available information technology (IT), the likelihood increases that information attacks will threaten vital national interests. As the number of individuals with computer skills has increased, more intrusion or hacking tools have become readily available and relatively easy to use. A hacker can literally download tools from the Internet and point and click to start an attack. Experts also agree that there has been a steady advance in the sophistication and effectiveness of attack technology. Intruders quickly develop attacks to exploit vulnerabilities discovered in products, use these attacks to compromise computers, and share them with other attackers. In addition, they can combine these attacks with other forms of technology to develop programs that automatically scan the network for vulnerable systems, attack them, compromise them, and use them to spread the attack even further.
Along with these increasing threats, the number of computer security incidents reported to the CERT Coordination Center has also risen dramatically, from 9,859 in 1999 to 82,094 in 2002, with 76,404 reported in just the first half of 2003. And these are only the reported attacks. The Director of the CERT Centers stated that he estimates that as much as 80 percent of actual security incidents go unreported, in most cases because (1) the organization was unable to recognize that its systems had been penetrated or there were no indications of penetration or attack or (2) the organization was reluctant to report. Figure 1 shows the number of incidents reported to the CERT Coordination Center from 1995 through the first half of 2003. According to the National Security Agency, foreign governments already have or are developing computer attack capabilities, and potential adversaries are developing a body of knowledge about U.S. systems and methods to attack these systems. Since the terrorist attacks of September 11, 2001, warnings of the potential for terrorist cyber attacks against our critical infrastructures have also increased. For example, in February 2002, the threat to these infrastructures was highlighted by the Special Advisor to the President for Cyberspace Security in a Senate briefing when he stated that although to date none of the traditional terrorist groups, such as al Qaeda, have used the Internet to launch a known assault on the United States infrastructure, information on water systems was discovered on computers found in al Qaeda camps in Afghanistan. Also, in his February 2002 statement for the Senate Select Committee on Intelligence, the director of central intelligence discussed the possibility of cyber warfare attacks by terrorists. He stated that the September 11 attacks demonstrated the nation s dependence on critical infrastructure systems that rely on electronic and computer networks. Further, he noted that attacks of this nature would become an increasingly viable option for terrorists as they and other foreign adversaries become more familiar with these targets and the technologies required to attack them. Since September 11, 2001, the critical link between cyberspace and physical space has been increasingly recognized. In his November 2002 congressional testimony, the Director of the CERT Centers at Carnegie Mellon University noted that supervisory control and data acquisition (SCADA) systems and other forms of networked computer systems have been used for years to control power grids, gas and oil distribution pipelines, water treatment and distribution systems, hydroelectric and flood control dams, oil and chemical refineries, and other physical systems, and that these control systems are increasingly being connected to communications links and networks to reduce operational costs by supporting remote maintenance, remote control, and remote update functions. These computer-controlled and network-connected systems are potential targets for individuals bent on causing massive disruption and physical damage, and the use of commercial, off-the-shelf technologies for these systems without adequate security enhancements can significantly limit available approaches to protection and may increase the number of potential attackers. The risks posed by this increasing and evolving threat are demonstrated in reports of actual and potential attacks and disruptions.
For example: On February 11, 2003, the National Infrastructure Protection Center (NIPC) issued an advisory to heighten the awareness of an increase in global hacking activities as a result of the increasing tensions between the United States and Iraq. This advisory noted that during a time of increased international tension, illegal cyber activity often escalates, such as spamming, Web page defacements, and denial-of-service attacks. Further, this activity can originate within another country that is party to the tension, can be state sponsored or encouraged, or can come from domestic organizations or individuals independently. The advisory also stated that attacks may have one of several objectives, including political activism targeting Iraq or those sympathetic to Iraq by self-described patriot hackers, political activism or disruptive attacks targeting U.S. systems by those opposed to any potential conflict with Iraq, or even criminal activity masquerading or using the current crisis to further personal goals. According to a preliminary study coordinated by the Cooperative Association for Internet Data Analysis (CAIDA), on January 25, 2003, the SQL Slammer worm (also known as Sapphire ) infected more than 90 percent of vulnerable computers worldwide within 10 minutes of its release on the Internet, making it the fastest computer worm in history. As the study reports, exploiting a known vulnerability for which a patch has been available since July 2002, Slammer doubled in size every 8.5 seconds and achieved its full scanning rate (55 million scans per second) after about 3 minutes. It caused considerable harm through network outages and such unforeseen consequences as canceled airline flights and automated teller machine (ATM) failures. Further, the study emphasizes that the effects would likely have been more severe had Slammer carried a malicious payload, attacked a more widespread vulnerability, or targeted a more popular service. In November 2002, news reports indicated that a British computer administrator was indicted on charges that he broke into 92 U.S. computer networks in 14 states; these networks belonged to the Pentagon, private companies, and the National Aeronautics and Space Administration during the past year, causing some $900,000 in damage to computers. According to a Justice Department official, these attacks were one of the biggest hacks ever against the U.S. military. This official also said that the attacker used his home computer and automated software available on the Internet to scan tens of thousands of computers on U.S. military networks looking for ones that might suffer from flaws in Microsoft Corporation s Windows NT operating system software. On October 21, 2002, NIPC reported that all the 13 root-name servers that provide the primary roadmap for almost all Internet communications were targeted in a massive distributed denial of service attack. Seven of the servers failed to respond to legitimate network traffic, and two others failed intermittently during the attack. Because of safeguards, most Internet users experienced no slowdowns or outages. In July 2002, NIPC reported that the potential for compound cyber and physical attacks, referred to as swarming attacks, is an emerging threat to the U.S. critical infrastructure. As NIPC reports, the effects of a swarming attack include slowing or complicating the response to a physical attack. 
For example, cyber attacks can be used to delay the notification of emergency services and to deny the resources needed to manage the consequences of a physical attack. In addition, a swarming attack could be used to worsen the effects of a physical attack. For instance, a cyber attack on a natural gas distribution pipeline that opens safety valves and releases fuels or gas in the area of a planned physical attack could enhance the force of the physical attack. Consistent with this threat, NIPC also released an information bulletin in April 2002 warning against possible physical attacks on U.S. financial institutions by unspecified terrorists. In August 2001, we reported to a subcommittee of the House Government Reform Committee that the attacks referred to as Code Red, Code Red II, and SirCam had affected millions of computer users, shut down Web sites, slowed Internet service, and disrupted business and government operations. Then in September 2001, the Nimda worm appeared, using some of the most significant attack profile aspects of Code Red II and 1999 s infamous Melissa virus, allowing it to spread widely in a short amount of time. Security experts estimate that Code Red, SirCam, and Nimda have caused billions of dollars in damage. <2. Significant Weaknesses Persist in Federal Information Security> To better understand the risks facing DOD systems, it is useful to consider the overall status of information security for the federal government. Our analyses of information security at major federal agencies have shown that federal systems were not being adequately protected from computer-based threats, even though these systems process, store, and transmit enormous amounts of sensitive data and are indispensable to many federal agency operations. For the past several years, we have analyzed audit results for 24 of the largest federal agencies and found that all 24 had significant information security weaknesses. As reported in November 2002, our latest analyses of reports issued from October 2001 through October 2002 continued to show significant weaknesses in federal computer systems that put critical operations and assets at risk. Weaknesses continued to be reported in each of the 24 agencies included in our review, and they covered all six major areas of general controls: the policies, procedures, and technical controls that apply to all or a large segment of an entity s information systems and help ensure their proper operation. These six areas are (1) security program management, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented; (2) access controls, which ensure that only authorized individuals can read, alter, or delete data; (3) software development and change controls, which ensure that only authorized software programs are implemented; (4) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (5) operating systems controls, which protect sensitive programs that support multiple applications from tampering and misuse; and (6) service continuity, which ensures that computer-dependent operations experience no significant disruptions. Figure 2 illustrates the distribution of weaknesses for the six general control areas across the 24 agencies.
Although our analyses showed that most agencies had significant weaknesses in these six control areas, as in past years analyses, weaknesses were most often identified for security program management and access controls. For security program management, we identified weaknesses for all 24 agencies in 2002 the same as reported for 2001, and compared to 21 of the 24 agencies (88 percent) in 2000. Security program management, which is fundamental to the appropriate selection and effectiveness of the other categories of controls, covers a range of activities related to understanding information security risks; selecting and implementing controls commensurate with risk; and ensuring that controls, once implemented, continue to operate effectively. For access controls, we found weaknesses for 22 of 24 agencies (92 percent) in 2002 (no significant weaknesses were found for one agency, and access controls were not reviewed for another). This compares to access control weaknesses found in all 24 agencies for both 2000 and 2001. Weak access controls for sensitive data and systems make it possible for an individual or group to inappropriately modify, destroy, or disclose sensitive data or computer programs for purposes such as personal gain or sabotage. In today s increasingly interconnected computing environment, poor access controls can expose an agency s information and operations to attacks from remote locations all over the world by individuals with only minimal computer and telecommunications resources and expertise. Our analyses also showed service-continuity-related weaknesses at 20 of the 24 agencies (83 percent) with no significant weaknesses found for 3 agencies (service continuity controls were not reviewed for another). This compares to 19 agencies with service continuity weaknesses found in 2001 and 20 agencies found in 2000. Service continuity controls are important in that they help ensure that when unexpected events occur, critical operations will continue without undue interruption and that crucial, sensitive data are protected. If service continuity controls are inadequate, an agency can lose the capability to process, retrieve, and protect electronically maintained information, which can significantly affect an agency s ability to accomplish its mission. Further, such controls are particularly important in the wake of the terrorist attacks of September 11, 2001. These analyses of information security at federal agencies also showed that the scope of audit work performed has continued to expand to more fully cover all six major areas of general controls at each agency. Not surprisingly, this has led to the identification of additional areas of weakness at some agencies. These increases in reported weaknesses do not necessarily mean that information security at federal agencies is getting worse. They more likely indicate that information security weaknesses are becoming more fully understood an important step toward addressing the overall problem. Nevertheless, the results leave no doubt that serious, pervasive weaknesses persist. As auditors increase their proficiency and the body of audit evidence expands, it is probable that additional significant deficiencies will be identified. Most of the audits represented in figure 2 were performed as part of financial statement audits. At some agencies with primarily financial missions, such as the Department of the Treasury and the Social Security Administration, these audits covered the bulk of mission-related operations. 
However, at agencies whose missions are primarily nonfinancial, such as DOD and the Department of Justice, the audits may provide a less complete picture of the agency s overall security posture because the audit objectives focused on the financial statements and did not include evaluations of individual systems supporting nonfinancial operations. Nevertheless, in response to congressional interest, beginning in fiscal year 1999, we expanded our audit focus to cover a wider range of nonfinancial operations, a trend we expect to continue. Audit coverage for nonfinancial systems has also increased as agencies and their IGs reviewed and evaluated their information security programs as required by GISRA. To fully understand the significance of the weaknesses we identified, it is necessary to link them to the risks they present to federal operations and assets. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the degree of risk caused by security weaknesses is extremely high. The weaknesses identified place a broad array of federal operations and assets at risk. For example, resources, such as federal payments and collections, could be lost or stolen; computer resources could be used for unauthorized purposes or to launch attacks on other systems; sensitive information, such as taxpayer data, social security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of espionage or other types of crime; critical operations, such as those supporting national defense and emergency services, could be disrupted; data could be modified or destroyed for purposes of fraud or disruption; and agency missions could be undermined by embarrassing incidents that result in diminished confidence in their ability to conduct operations and fulfill their fiduciary responsibilities. <3. Congress Consolidates and Strengthens Federal Information Security Requirements> Concerned with accounts of attacks on commercial systems via the Internet and reports of significant weaknesses in federal computer systems that make them vulnerable to attack, on October 30, 2000, Congress enacted GISRA, which was signed into law and became effective November 29, 2000, for a period of 2 years. GISRA supplemented information security requirements established in the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996 and was consistent with existing information security guidance issued by OMB and the National Institute of Standards and Technology (NIST), as well as audit and best practice guidance issued by GAO. Most importantly, however, GISRA consolidated these separate requirements and guidance into an overall framework for managing information security and established new annual review, independent evaluation, and reporting requirements to help ensure agency implementation and both OMB and congressional oversight. GISRA assigned specific responsibilities to OMB, agency heads and CIOs, and IGs. OMB was responsible for establishing and overseeing policies, standards, and guidelines for information security. This included the authority to approve agency information security programs, but delegated OMB s responsibilities regarding national security systems to national security agencies.
OMB was also required to submit an annual report to the Congress summarizing results of agencies independent evaluations of their information security programs. OMB released its fiscal year 2001 report in February 2002 and its fiscal year 2002 report in May 2003. GISRA required each agency, including national security agencies, to establish an agencywide risk-based information security program to be overseen by the agency CIO and ensure that information security is practiced throughout the life cycle of each agency system. Specifically, this program was to include periodic risk assessments that consider internal and external threats to the integrity, confidentiality, and availability of systems, and to data supporting critical operations and assets; the development and implementation of risk-based, cost-effective policies and procedures to provide security protections for information collected or maintained by or for the agency; training on security responsibilities for information security personnel and on security awareness for agency personnel; periodic management testing and evaluation of the effectiveness of policies, procedures, controls, and techniques; a process for identifying and remediating any significant deficiencies; procedures for detecting, reporting, and responding to security incidents; an annual program review by agency program officials. In addition to the responsibilities listed above, GISRA required each agency to have an annual independent evaluation of its information security program and practices, including control testing and compliance assessment. The evaluations of non-national-security systems were to be performed by the agency IG or an independent evaluator, and the results of these evaluations were to be reported to OMB. For the evaluation of national security systems, special provisions included having national security agencies designate evaluators, restricting the reporting of evaluation results, and having the IG or an independent evaluator perform an audit of the independent evaluation. For national security systems, only the results of each audit of an evaluation are to be reported to OMB. With GISRA expiring on November 29, 2002, on December 17, 2002, FISMA was enacted as title III of the E-Government Act of 2002 to permanently authorize and strengthen the information security program, evaluation, and reporting requirements established by GISRA. Among other things, FISMA also requires NIST to develop, for systems other than national security systems, (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. In addition, FISMA requires each agency to develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. This inventory is also to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. <4. 
DOD Highlights Initiatives, But Also Reports Weaknesses> DOD has undertaken several initiatives to improve its information security, including the development of an overall IA strategy and the issuance of information security policy and guidance. However, information that DOD s CIO and IG submitted for fiscal year 2002 GISRA reporting showed that a number of challenges remain for the department in implementing both its policies and procedures and the statutory information security requirements. These challenges are indicated by the material weaknesses DOD reported related to its IA capabilities and its performance data, which showed that further efforts are needed to implement key requirements. <4.1. DOD Efforts to Improve Information Security> Overall, the DOD CIO reported in its fiscal year 2002 GISRA report that the department has an aggressive IA posture and highlighted several initiatives to improve its IA program. In particular, DOD has developed an overall IA strategic plan to define the department s goals and objectives and to provide a consistent departmentwide approach to information assurance. Further, according to a DOD official, DOD is aligning its strategic initiatives to objectives in this plan and is developing milestones and performance measures to gauge success. Specific plan goals include: protecting information to ensure that all information has a level of trust commensurate with mission needs; defending systems and networks to ensure that no access is uncontrolled and that all systems and networks are capable of self-defense; and creating an IA-empowered workforce that is trained, highly skilled, knowledgeable, and aware of its role in assuring information. The plan also identified specific objectives for each goal. For example, to meet the goal of protecting information to ensure that all information has a level of trust commensurate with mission needs, DOD identified objectives including defining data protection requirements, applying protection mechanisms across the enterprise, and developing robust mechanisms that protect information. In addition, DOD has developed a complementary implementation mechanism for IA known as Defense in Depth that uses a multilayered approach with defense mechanisms on successive layers at multiple locations. Other initiatives highlighted in the DOD CIO s fiscal year 2002 GISRA report included establishing a number of senior-level bodies that discuss, brief, and shape the future of IA efforts such as the CIO Executive Board and the Military Communications-Electronics Board and issuing information security policy directives, instructions, manuals, and policy memorandums. During fiscal year 2003, DOD has continued its efforts to implement IA departmentwide by issuing additional policy and guidance. Specifically, in October 2002, it issued DOD Directive 8500.1 to establish policy and assign responsibility for IA management. Further, in February 2003, DOD issued DOD Instruction 8500.2, which prescribes a framework for implementing the department s IA program and establishes baseline levels of assurance for information systems. <4.2. Material Weaknesses Identified By DOD> DOD reported eight material weaknesses in fiscal year 2002 for which it said it is undertaking aggressive action to improve and expand its IA capabilities. 
The actions DOD identified to address the eight deficiencies are: completing the implementation of the Information Assurance Vulnerability Alert process to all services and agencies; ensuring that effective computer security policies and procedures are distributed in a timely manner; improving DOD business processes to ensure that all systems are protected; decreasing the time necessary for correction of reported weaknesses; ensuring that computer security policies are enforced and security capabilities are tested regularly; ensuring that training is conducted for all network personnel (ranging from awareness training for all personnel to specific network defense training for system and network administrators); increasing access security through the use of electronic tokens; and increasing security through certificates (for authentication and nonrepudiation). <4.3. DOD Reports Show Further Efforts Needed to Implement Key Information Security Requirements> OMB s fiscal year 2002 reporting instructions included new high-level management performance measures that the agencies and IGs were required to use to report on agency officials performance, such as the number and percentage of systems that have been assessed for risk and that have an up-to-date security plan. In addition, OMB s reporting instructions for fiscal year 2002 stated that agencies were expected to review all systems annually. OMB explained that GISRA requires senior agency program officials to review each security program for effectiveness at least annually, and that the purpose of the security programs discussed in GISRA is to ensure the protection of the systems and data covered by the program. Thus, a review of each system is essential to determine the program s effectiveness, and only the depth and breadth of such system reviews are flexible. DOD reported data for most performance measures as required. However, as agreed with OMB, DOD reported these data for only a sample of its systems and networks rather than for all systems. As a result, DOD cannot ensure that these performance measures accurately reflect the information security status of its thousands of systems or that potential weaknesses for all systems have been identified for correction. Further, reporting on only a sample of systems limited the usefulness of OMB s analysis of the governmentwide status of IT security reported in its fiscal year 2002 report to the Congress, which considered data for only DOD s sample of systems in measuring the overall progress by 24 large agencies. DOD indicated in its report that because of its size and complexity, the collection of specific metrics required sizable lead time to allow for the collection and approval process by each military service and agency. For this reason, DOD focused its fiscal year 2002 GISRA efforts on (1) a sample of 366 of its networks (241 unclassified and 125 classified) and (2) a sample of 155 systems that were selected from the sample of systems used for DOD s fiscal year 2001 GISRA review. Although DOD reported performance measure data for both the sample of networks and the sample of systems, OMB s report to Congress provided comparative results primarily for the sample of 155 systems. However, as discussed later in this statement, DOD did report that 96 percent of its sample of networks was certified and accredited to operate. OMB s fiscal year 2002 GISRA report to the Congress summarized both agency and overall results for certain key measures for 24 large federal agencies.
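As a purely arithmetic illustration of how the percentage measures discussed in the remainder of this section are derived for DOD s 155-system sample, the sketch below recomputes the reported counts as percentages and percentage-point changes. The counts are taken from this statement, the exact composition of figure 3 is not reproduced here, and this is not GAO s or OMB s analysis code.

```python
# Counts reported later in this statement for DOD's 155-system sample; None where no
# fiscal year 2001 figure is reported.
SAMPLE_SIZE = 155

measures = {
    "Assessed for risk":                    (125, 106),
    "Up-to-date security plan":             (130, 103),
    "Certified and accredited":             (95, 85),
    "Controls tested and evaluated":        (35, 43),
    "Contingency plan prepared":            (131, 103),
    "Contingency plan tested in past year": (None, 32),
}

for name, (fy01, fy02) in measures.items():
    pct02 = 100 * fy02 / SAMPLE_SIZE
    if fy01 is None:
        print(f"{name:38s} FY02 {pct02:3.0f}%")
    else:
        pct01 = 100 * fy01 / SAMPLE_SIZE
        print(f"{name:38s} FY01 {pct01:3.0f}%  FY02 {pct02:3.0f}%  change {pct02 - pct01:+3.0f} points")
```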
Subject to the limitation of DOD s data, figure 3 summarizes DOD results for six of these measures for the 155 systems and shows that most of these measures actually decreased from fiscal year 2001 to fiscal year 2002. DOD attributed the decreases to inaccuracies in the fiscal year 2001 data. Discussion of these and other measures follows figure 3 and includes a comparison of DOD results to results for other agencies as presented in our recent testimonies before a subcommittee of the House Government Reform Committee. Agencies are required to perform periodic threat-based risk assessments for systems and data. Risk assessments are an essential element of risk management and overall security program management and, as our best practice work has shown, are an integral part of the management processes of leading organizations. Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. Our reviews of federal agencies, however, frequently show deficiencies related to assessing risk, such as security plans for major systems that are not developed on the basis of risk. As a result, the agencies had accepted an unknown level of risk by default rather than consciously deciding what level of risk was tolerable. OMB s performance measure for this requirement mandated that agencies report the number and percentage of their systems that have been assessed for risk during fiscal year 2001 and fiscal year 2002. DOD reported that for its sample of 155 systems, 68 percent (106) had risk assessments for fiscal year 2002 as compared to 81 percent (125) for fiscal year 2001, a decrease of 13 percentage points. In comparison, our overall analyses of reporting for this measure for all 24 agencies (including DOD) showed that for fiscal year 2002, 11 agencies reported that they had assessed risk for 90 to 100 percent of their systems, and of the remaining 13, 8 reported less than 50 percent. <4.3.1. Systems With Up-to-Date Security Plans> An agency head is required to ensure that the agency s information security plans are practiced throughout the life cycle of each agency system. In its reporting instructions, OMB required agencies to report whether the agency head had taken specific and direct actions to oversee that program officials and the CIO are ensuring that security plans are up to date and practiced throughout the life cycle of each system. Agencies also had to report the number and percentage of systems that had an up-to-date security plan. Regarding the status of agencies security plans, DOD reported that for its sample of 155 systems, 66 percent (103) had up-to-date security plans for fiscal year 2002, a decrease from the 84 percent (130) reported for fiscal year 2001. In comparison, our overall analysis for all 24 agencies showed that for fiscal year 2002, 7 agencies reported that they had up-to-date security plans for 90 to 100 percent of their systems, and of the remaining 17 agencies, 9 reported up-to-date security plans for less than 50 percent of their systems. <4.3.2. Systems Certified and Accredited> As one of its performance measures for agency program official responsibilities, OMB required agencies to report the number and percentage of systems that have been authorized for processing following certification and accreditation.
Certification is the comprehensive evaluation of the technical and nontechnical security controls of an IT system to support the accreditation process that establishes the extent to which a particular design and implementation meets a set of specified security requirements. Certification provides the necessary information to a management official to formally declare that an IT system is approved to operate at an acceptable level of risk. Accreditation is the authorization of an IT system to process, store, or transmit information, granted by a management official that provides a form of quality control and challenges managers and technical staff to find the best fit for security, given technical constraints, operational constraints, and mission requirements. The accreditation decision is based on the implementation of an agreed upon set of management, operational, and technical controls, and by accrediting the system, the management office accepts the risk associated with it. DOD has established a standard departmentwide process, set of activities, general tasks, and a management structure to certify and accredit information systems and maintain the IA and security posture throughout the life cycle of the system. A companion manual, the DOD Information Technology Security Certification and Accreditation Process (DITSCAP) Application Manual, provides implementation guidance to standardize the certification and accreditation process throughout DOD. The DOD CIO reported that the department is implementing the DITSCAP process, but realizes the actual process is complex, lengthy, and costly; and several internal agencies are exploring efforts to streamline DITSCAP. DOD reported that for fiscal year 2002, 55 percent (85) of its sample of 155 systems was authorized for processing following certification and accreditation a decrease from the 61 percent (95) reported for fiscal year 2001. For this particular measure, DOD also reported that in fiscal year 2002, 96 percent (352) of its 366-network sample was certified and accredited to operate. In comparison, our overall analysis for all 24 agencies showed that for fiscal year 2002, only 3 agencies reported that 90 to 100 percent of their systems were authorized for processing following certification and accreditation, and of the remaining 21 agencies, 13 reported that less than 50 percent of their systems were authorized, including 3 that reported that none were authorized. According to the DOD IG s fiscal year 2002 GISRA report, the certification and accreditation data reported by the department for fiscal year 2001 included systems that were certified and accredited either under the DITSCAP or another process. In addition, in analyzing a sample of the systems used for the department s fiscal year 2001 GISRA reporting, the IG found the certification and accreditation status for some systems was incorrectly reported. <4.3.3. Security Control Testing and Evaluation> An agency head is responsible for ensuring that the appropriate agency officials evaluate the effectiveness of the information security program, including testing controls. Further, the agencywide information security program is to include periodic management testing and evaluation of the effectiveness of information security policies and procedures. 
Periodically evaluating the effectiveness of security policies and controls and acting to address any identified weaknesses are fundamental activities that allow an organization to manage its information security risks cost-effectively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. Further, management control testing and evaluation as part of the program reviews can supplement control testing and evaluation in IG and our audits to help provide a more complete picture of the agencies security postures. As a performance measure for this requirement, OMB required agencies to report the number and percentage of systems for which security controls have been tested and evaluated during fiscal years 2001 and 2002. DOD reported that for fiscal year 2002, it had tested and evaluated controls for only 28 percent (43) of the 155-system sample a slight increase from the 23 percent (35) reported for fiscal year 2001. In comparison, our overall analysis for all 24 agencies showed that for fiscal year 2002, only 4 agencies reported they had tested and evaluated controls for 90 to 100 percent of their systems, and of the remaining 20 agencies, 10 reported less than 50 percent. <4.3.4. System Contingency Plans> Contingency plans provide specific instructions for restoring critical systems, including such items as arrangements for alternative processing facilities, in case the usual facilities are significantly damaged or cannot be accessed. These plans and procedures help to ensure that critical operations can continue when unexpected events occur, such as temporary power failure, accidental loss of files, or major disaster. Contingency plans should also identify which operations and supporting resources are critical and need to be restored first and should be tested to identify their weaknesses. Without such plans, agencies have inadequate assurance that they can recover operational capability in a timely, orderly manner after a disruptive attack. As another of its performance measures, OMB required agencies to report the number and percentage of systems for which contingency plans had been prepared and had been tested in the past year. DOD reported that of its 155-system sample, 66 percent (103) of its systems had contingency plans for fiscal year 2002 a decrease from the 85 percent (131) reported for fiscal year 2001. However, more significantly, DOD also reported that for fiscal year 2002, only 21 percent (32) of its sample of systems had contingency plans that had been tested within the past year. In comparison, our overall analysis for all 24 agencies showed that for fiscal year 2002, only 2 agencies reported they had tested contingency plans for 90 to 100 percent of their systems, and of the remaining 22 agencies, 20 reported less than 50 percent, including 1 that reported none had been tested. <4.3.5. Incident-Handling Capabilities> Agencies are required to implement procedures for detecting, reporting, and responding to security incidents. Although even strong controls may not block all intrusions and misuse, organizations can reduce the risks associated with such events if they promptly take steps to detect intrusions and misuse before significant damage can be done. In addition, accounting for and analyzing security problems and incidents are effective ways for an organization to gain a better understanding of threats to its information and of the cost of its security-related problems. 
Such analyses can also pinpoint vulnerabilities that need to be addressed to help ensure that they will not be exploited again. In this regard, problem and incident reports can provide valuable input for risk assessments, help in prioritizing security improvement efforts, and be used to illustrate risks and related trends in reports to senior management. In March 2001, we reported that over the past several years, DOD had established incident response capabilities for the military services and enhanced computer defensive capabilities across the department. However, we also identified six areas in which DOD faced challenges in improving its incident response capabilities, including (1) coordinating resource planning and priorities for incident response across the department; (2) integrating critical data from systems, sensors, and other devices to better monitor cyber events and attacks; (3) establishing a departmentwide process to periodically and systematically review systems and networks on a priority basis for security weaknesses; (4) ensuring that components across the department consistently and fully report compliance with vulnerability alerts; (5) improving the coordination and suitability of component-level incident response actions; and (6) developing departmentwide performance measures to assess incident response capabilities and thus better ensure mission readiness. Although DOD was aware of these challenges and had undertaken some initiatives to address them, the initiatives were not complete at the time of our review. We recommended that DOD act to address these challenges to better protect its systems and networks from cyber threats and attacks. Currently, DOD reports that it has made progress in addressing many of these challenges. For fiscal year 2002 GISRA reporting, OMB required agencies to report several performance measures related to detecting, reporting, and responding to security incidents. These included the number of agency components with an incident-handling and response capability, whether the agency and its major components share incident information with the Federal Computer Incident Response Center (FedCIRC) in a timely manner, and the numbers of incidents reported. OMB also required that agencies report on how they confirmed that patches have been tested and installed in a timely manner. In its fiscal year 2002 GISRA report, the DOD CIO reported that essentially all its components have an incident handling and response capability and that DOD has made significant progress in developing its computer network defense capabilities, including the January 2001 issuance of DOD Directive O-8530.1, Computer Network Defense, which established computer network defense policy, definition, and department responsibilities. The CIO also reported that through its computer network defense capabilities, DOD could monitor, analyze, detect, and respond to unauthorized activity within DOD information systems and computer networks. In addition, the CIO reported that each of the major military services has a robust computer emergency response team (CERT) and integrated network operations centers. Further, the report states that the DOD CERT works closely with FedCIRC on all incidents within the .gov Internet domain and, along with other service and agency CERTs, shares incident information with FedCIRC within 10 minutes to 48 hours depending on the seriousness of the incident. 
The Joint Task Force for Computer Network Operations and the DOD CERT take responsibility for incidents within the .mil Internet domain. In comparison to DOD, our analyses of agencies fiscal year 2002 GISRA reports showed that most agencies reported that they have established incident-response capabilities. For example, 12 agencies reported that for fiscal year 2002, 90 percent or more of their components had incident handling and response capabilities, and 8 others reported that they provided these capabilities to components through a central point within the agency. <4.3.6. Security Training for Employees and Contractors> Agencies are required to provide training on security awareness for agency personnel and on security responsibilities for information security personnel. Our studies of best practices at leading organizations have shown that such organizations took steps to ensure that personnel involved in various aspects of their information security programs had the skills and knowledge they needed. They also recognized that staff expertise had to be frequently updated to keep abreast of ongoing changes in threats, vulnerabilities, software, security techniques, and security monitoring tools. Among the performance measures for these requirements, OMB mandated that agencies report the number and percentage of employees including contractors who received security training during fiscal years 2001 and 2002, and the number of employees with significant security responsibilities who received specialized training. In response to these measures, the DOD CIO reported that it provides departmentwide, component-level security training and periodic updates for all employees, but that actual numbers and the percentage of agency employees who received security training in fiscal year 2002 were not available at the time of its report. For employees with significant security responsibilities, the CIO reported that specialized security and technical training is provided to persons empowered to audit, alter, or affect the intended behavior or content of an IT system, such as system/network administrators and information systems security officers. Additional training is also provided for others, such as CERT members, computer crime investigators, and Web masters/site managers. However, performance measure data reported for employees with significant security responsibilities showed that of 39,783 such employees, 42 percent (16,812) received specialized training in fiscal year 2002 a decrease of 9 percentage points from the 51 percent reported for fiscal year 2001. In comparison with other major federal agencies, for specialized training for employees with significant security responsibilities, our analyses showed that 12 agencies reported 50 percent or more of their employees with significant security responsibilities had received specialized training for fiscal year 2002, with 5 of these reporting 90 percent or more. Of the remaining 12 agencies, 9 including DOD reported that less than half of such employees received specialized training, 1 reported that none had received such training, and 2 did not provide sufficient data for this measure. <4.3.7. Security of Contractor- Provided Services> Agencies are required to develop and implement risk-based, cost-effective policies and procedures to provide security protection for information collected or maintained by or for the agency. 
In its fiscal year 2001 GISRA report to the Congress, OMB identified poor security for contractor-provided services as a common weakness, and for fiscal year 2002 reporting, included performance measures to help indicate whether the agency program officials and CIO used appropriate methods, such as audits and inspections, to ensure that services provided by a contractor are adequately secure and meet security requirements. For fiscal year 2002 GISRA, the DOD CIO reported that it had insufficient time and resources to accurately collect requested performance measure data. The CIO also reported that execution and verification of contractor services and facilities are managed at the subagency levels, and that agency program officials use audits or inspections to ensure that contractor-provided services are adequately secure and meet statutory information security requirements, OMB policy, and NIST guidance. The DOD IG did not review the status of contractor-provided services for compliance with GISRA, but did identify several reports issued from August 2001 to July 2002 by military service audit agencies that discussed weaknesses in background investigations. Screening of contractor or subcontractor employees as a condition for physical or computer systems access is a recommended safeguard, and depending on the program or system criticality or information sensitivity, can range from minimal checks to complete background investigations. <5. Challenges to Implementing an Effective Information Security Management Program> As previously discussed, our past analyses of audit results for 24 of the largest federal agencies showed that all 24 had significant weaknesses in security program management, which covers a range of activities related to understanding information security risks; selecting and implementing controls commensurate with risk; and ensuring that controls, once implemented, continue to operate effectively. Establishing a strong security management program requires that agencies take a comprehensive approach that involves both (1) senior agency program managers who understand which aspects of their missions are the most critical and sensitive and (2) technical experts who know the agencies' systems and can suggest appropriate technical security control techniques. We studied the practices of organizations with superior security programs and summarized our findings in a May 1998 executive guide entitled Information Security Management: Learning From Leading Organizations. Our study found that these organizations managed their information security risks through a cycle of risk management activities. These activities, which are now among the federal government's statutory information security requirements, included assessing risks and determining protection needs, selecting and implementing cost-effective policies and controls to meet those needs, promoting awareness of policies and controls and of the risks that prompted their adoption among those responsible for complying with them, and implementing a program of routine tests and examinations for evaluating the effectiveness of policies and related controls and reporting the resulting conclusions to those who can take appropriate corrective action. 
Although GISRA reporting provided performance information on these areas, it is important for agencies to ensure that they have the appropriate management structures and processes in place to strategically manage information security, as well as ensure the reliability of performance information. For example, disciplined processes can routinely provide the agency with timely, useful information for day-to-day management of information security. Also, developing management strategies that identify specific actions, time frames, and required resources may help to significantly improve performance. In January 1998, DOD announced its plans for DIAP, a program intended to promote integrated, comprehensive, and consistent IA practices across the department. In February 1999, the department issued an approved implementation plan, which described, at a high level, the program's goals, objectives, and organizational structure, and confirmed its responsibility for the planning, coordination, integration, and oversight of Defense-wide computer security initiatives. In March 2001, we reported that DIAP had made progress in addressing IA, but that the department had not yet met its goals for promoting integrated, comprehensive, and consistent practices across DOD. The program's progress was limited by weaknesses in its management framework and unmet staffing expectations. DOD had not established a performance-based management framework for IA improvement at the department level. As a result, DOD was unable to accurately determine the status of IA across the department, the progress of its improvement efforts, or the effectiveness of its initiatives. Also, understaffing kept the program from fulfilling its central role in planning, monitoring, coordinating, and integrating Defense-wide IA activities, and changes in the composition and authority of other key organizations interacting with DIAP left it without a consistent and fully supportive environment for its operations. We concluded that achieving this program's vision for information superiority would require the commitment of DOD to proven IA management practices. To improve progress toward the department's goals, we made recommendations to the Secretary of Defense in the areas of component commitments to DIAP and executive-level monitoring of the program. We also recommended that the DOD CIO institute performance-based management of DIAP through a defined budget and performance objectives, and that the program manager take steps to address the program's unmet goals. DOD has made some progress in addressing our previous recommendations and, as discussed previously, during fiscal year 2003, DOD issued guidance to establish policy and assign responsibility for IA management and to prescribe a framework for implementing the department's IA program and establish baseline levels of assurance for information systems. Despite such steps, OMB reported in its fiscal year 2002 report to the Congress that the overall results of the Defense audit community's assessment of the DOD fiscal year 2001 GISRA reporting reinforced the position that DOD does not have mechanisms in place for comprehensively measuring compliance with federal and Defense information security policies and ensuring that those policies are consistently practiced throughout the department. In summary, DOD has taken positive steps through its policy and guidance to establish information security as a priority for the department. 
However, as its fiscal year 2002 GISRA reporting showed, further effort is needed to fully implement statutory information security requirements departmentwide and to expand future FISMA reporting to all systems. Significant improvement will likely require DOD to establish departmentwide processes that routinely provide information for day-to-day management of information security and to develop management strategies that identify specific actions, time frames, and required resources. With the first agency reporting under FISMA due in September 2003, updated information on the status of DOD's efforts will be available for continued congressional oversight. Mr. Chairman, this concludes my written testimony. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-3317. I can also be reached by e-mail at [email protected]. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Department of Defense (DOD) faces many risks in its use of globally networked computer systems to perform operational missions--such as identifying and tracking enemy targets--and daily management functions--such as paying soldiers and managing supplies. Weaknesses in these systems, if present, could give hackers and other unauthorized users the opportunity to modify, steal, inappropriately disclose, and destroy sensitive military data. GAO was asked, among other things, to discuss DOD's efforts to protect its information systems and networks from cyber attack, focusing on its reported progress in implementing statutory information security requirements. What GAO Found In its fiscal year 2002 report on efforts to implement information security requirements under Government Information Security Reform law, DOD reported that it has an aggressive information assurance program and highlighted several initiatives to improve it. These initiatives included developing an overall strategy and issuing numerous departmentwide information security policy documents. DOD's reporting highlighted other accomplishments, but acknowledged that a number of challenges remain for the department in implementing both its policies and procedures and statutory information security requirements. DOD reported several material control weaknesses, which included needing to decrease the time necessary for correcting reported weaknesses and ensuring that computer security policies are enforced and security capabilities are tested regularly. Further, performance data DOD reported for a sample of its systems showed that further efforts are needed to fully implement key information security requirements, such as testing systems' security controls, throughout the department. Although DOD has undertaken its Defense-wide Information Assurance Program to promote integrated, comprehensive, and consistent practices across the department and has recently issued both policy guidance and implementation instructions, it does not have mechanisms in place for comprehensively measuring compliance with federal and Defense information security policies and ensuring that those policies are consistently practiced throughout DOD.
<1. Marketplace Lending> Marketplace lending connects consumers and small businesses seeking online and timelier access to credit with individuals and institutions seeking investment opportunities. Marketplace lenders use traditional and may use less traditional types of data and credit algorithms to assess creditworthiness and underwrite consumer loans, small business loans, lines of credit, and other loan products. <1.1. What It Is and How It Works> The marketplace lending subsector originated as person-to-person lending where individual investors financed loans to consumers. The investor base for online marketplace lenders has expanded to include institutional investors such as hedge funds and financial institutions. Additionally, a market has emerged for securitizations of marketplace lending loans, including both consumer and small business loan-backed offerings. Marketplace lending firms have evolved to offer a wide variety of loan products and services to consumers and small businesses and have recently begun to offer mortgages, life insurance, and auto loans. Although a number of marketplace lending models exist, publications we reviewed highlighted two common models: direct lenders and platform lenders. Direct lenders, also known as balance sheet lenders, use capital obtained from outside sources to fund loans and often hold loans on their balance sheet. Examples of direct lenders include CAN Capital, Kabbage, and SoFi. Platform lenders partner with depository institutions to originate loans that are then purchased by the lender or by an investor through the platform. Examples of platform lenders include LendingClub Corporation, Prosper, and Upstart. However, there are various permutations based on these two common models. For example, direct lenders like OnDeck have developed hybrid models, selling some whole loans to institutional investors while retaining servicing responsibilities. The marketplace lending process for the two models typically begins with a prospective borrower filling out an online application on the marketplace lending platform's website. Marketplace lenders use traditional credit data (e.g., credit scores, income, and debt repayment history) but, according to publications we reviewed, may also use less traditional data such as monthly cash flow and expenses, educational history, payment and sales history, and online customer reviews to assess creditworthiness and underwrite loans. After assessing the creditworthiness and needs of the applicant, the marketplace lender will approve or deny the borrower's loan request. Generally, the loan will include a principal amount and an interest amount, and the marketplace lender may charge a servicing fee for collecting and transmitting payments and handling collections in case of a default. Funding a borrower's request depends on the business model of the marketplace lender. Direct lenders typically originate the loan, hold most or all of the loans on their own balance sheets, earn interest on the loans, and carry credit risk for the entire loan (that is, the risk that the borrower does not repay) (see fig. 1). These lenders can raise funds to make loans by issuing equity to institutional investors (in addition to other means). Platform lenders match investors (institutional or individual) to loans that a depository institution, such as a bank, originates (see fig. 2). If the loan is made and transferred to investors, the platform lender services the account. Investors have the option of either partially or fully funding a loan. 
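To make the loan economics described above concrete, the following minimal Python sketch computes the fixed monthly payment, total cost of capital, and an approximate annual percentage rate for a hypothetical marketplace term loan with an upfront origination fee. It is our own illustration, not any lender's pricing model; the principal, note rate, term, and fee are assumed values chosen only for the example.

# Illustrative only: hypothetical terms, not any specific lender's pricing.

def monthly_payment(principal, annual_rate, months):
    """Standard fixed-payment amortization formula."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    return principal * r / (1.0 - (1.0 + r) ** -months)

def approximate_apr(principal, fee, payment, months):
    """Solve (by bisection) for the annualized rate that equates the
    discounted payment stream to the net amount actually received
    (principal minus the upfront fee)."""
    net_proceeds = principal - fee

    def present_value(annual_rate):
        r = annual_rate / 12.0
        return sum(payment / (1.0 + r) ** m for m in range(1, months + 1))

    low, high = 0.0, 5.0  # search between 0 percent and 500 percent APR
    for _ in range(100):
        mid = (low + high) / 2.0
        if present_value(mid) > net_proceeds:
            low = mid  # payments are worth more than the proceeds; rate must be higher
        else:
            high = mid
    return (low + high) / 2.0

principal, annual_rate, months, origination_fee = 25_000, 0.18, 12, 750
pmt = monthly_payment(principal, annual_rate, months)
total_cost_of_capital = pmt * months + origination_fee - principal
apr = approximate_apr(principal, origination_fee, pmt, months)

print(f"Monthly payment:       ${pmt:,.2f}")
print(f"Total cost of capital: ${total_cost_of_capital:,.2f}")
print(f"Approximate APR:       {apr:.1%} (vs. {annual_rate:.0%} stated note rate)")

Because the fee reduces the amount the borrower actually receives, the computed APR exceeds the stated note rate; this gap is one reason metrics such as total cost of capital and APR, discussed later under industry self-regulatory efforts, matter for comparing loan offers.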
<1.2. Who Uses It> Consumers: can use term loans from marketplace lenders to cover personal expenses (such as home or medical expenses); consolidate debt; or refinance student loans, among other reasons. According to Treasury, three marketplace lenders offer consumer loans ranging from $1,000 to $40,000. Treasury also indicated that marketplace lending firms generally provide consumer loans to prime and near-prime borrowers, although some marketplace lending firms target subprime borrowers or applicants without credit scores or with a limited credit history. Small Businesses: can use short- and fixed-term loans, lines of credit, and merchant cash advances from marketplace lenders, among other products and services, to finance business expenses and expansions, among other reasons. According to a Federal Reserve Bank of Cleveland publication, limited data are available about the types of small businesses that use online lenders, why they have chosen to apply, how successful they are in obtaining funds, and how satisfied they are with their experiences as borrowers. <1.3. Potential Benefits> Lower costs: Marketplace lenders' online structure may reduce overhead costs because not all firms have brick-and-mortar locations. In addition, the algorithms used by marketplace lenders to underwrite credit decisions may result in lower underwriting costs when compared to banks' underwriting costs. Expanded access to credit: Marketplace lending may expand credit access to underserved populations that may not meet traditional lending requirements or that seek smaller loans than those that banks traditionally offer. Faster service: According to Treasury, marketplace lenders can provide funding decisions within 48 to 72 hours from when applications are submitted. According to an SBA Office of Advocacy publication, LendingClub Corporation advertises that potential applicants can receive a quote within minutes and that its approval and funding process typically takes 7 days, Kabbage Inc. can provide same-day approval for small business loans, and OnDeck can provide funding within 24 hours. According to representatives from one industry organization we spoke with, faster service is beneficial to small businesses that may need quick access to credit in an emergency, such as a restaurant that needs its oven or refrigerator repaired to continue operations. <1.4. Potential Risks> Payment term transparency: Marketplace lending firms offer various loan types and terms, particularly for small business loans. It can be difficult for small businesses to understand and compare loan terms such as the total cost of capital or the annual percentage rate. According to a Federal Reserve 2015 survey, one reason for small business borrowers' dissatisfaction with online lenders was a lack of transparency. Small business borrower protections: Current federal laws and regulations applicable to marketplace lending generally apply to consumer loans and not small business loans or other commercial loans. For example, the Truth in Lending Act, which, among other things, requires the lender to show the cost and terms to the borrower, applies to consumer loans but generally not small business loans. According to Treasury, small business loans under $100,000 share common characteristics with consumer loans, yet do not receive the same protections. 
However, the report also notes that small business loans may receive protection under the enforcement of fair lending laws under the Equal Credit Opportunity Act. Use of less traditional data in credit decisions: Unlike traditional lending companies that look at a person s credit reports (which include reported installment credit and revolving credit), publications we reviewed indicate that some marketplace lenders also take into account or have considered using less traditional data (e.g., utilities, rent, telephone bills, educational history) during the underwriting process. However, according to Treasury, data-driven algorithms used by marketplace lenders carry the risk for potential fair lending violations. According to staff from FTC, marketplace lenders must ensure that their practices meet fair lending and credit reporting laws. The use of less traditional data also introduces the risk that the data used are inaccurate and concerns that consumers may not have sufficient recourse if the information being used is incorrect. Uncertainty about performance in full credit cycle: According to publications we reviewed, the marketplace lending subsector experienced considerable growth following the 2007-2009 economic downturn in an environment with tightened lending standards and low interest rates. In addition, little is known about how the industry will perform in other economic conditions such as a recession, which could lead to delinquency and defaults of marketplace loans. According to the Congressional Research Service (CRS), it is also possible that loan servicing could be disrupted in the event the marketplace lender goes out of business. <1.5. Industry Trends> Partnerships: According to Treasury, some marketplace lenders have sought partnerships with traditional banks and community development financial institutions (CDFI) in various models. According to a CRS report, in a white label partnership, a traditional bank sets underwriting standards, originates the loan, and holds the loan once issued. The bank can integrate a marketplace lending firm s technology services to originate the loan. For example, JPMorgan Chase & Co. partnered with OnDeck to offer small business loans to JPMorgan Chase & Co. customers. In referral partnerships, banks refer customers who do not meet a bank s underwriting standards, or who are seeking products the bank does not offer, to a marketplace lender. In turn, the bank may collect a fee from the marketplace lender. Referrals may also allow CDFIs to reach customers that may otherwise not be served. For example, in 2015, Regions Bank, Fundation Group LLC (an online small business marketplace lender), and TruFund (a CDFI) partnered to provide small loans to underserved small businesses. Self-regulatory efforts: A number of self-regulatory marketplace lending efforts were established with the intent of developing responsible innovation and mitigating and reporting risks to potential borrowers seeking marketplace lending products. However, limited information is available on the impact of these efforts. Four examples are discussed below. The Marketplace Lending Association (MLA) was established in April 2016 to represent the marketplace lending industry. MLA states that one of its goals is to support responsible growth in the marketplace lending sector. The Online Lenders Alliance represents firms offering loans online. The Alliance provides resources including a consumer hotline, a portal to report fraud, and consumer tips. 
In 2016, three small business lending platforms formed the Innovative Lending Platform Association. The Association developed the Straightforward Metrics Around Rate and Total Cost (SMART) Box tool, which uses clear and consistent pricing metrics, metric calculations, and metric explanations to help small businesses understand and assess the cost of their small business finance options. For example, some metrics described in the SMART Box tool include total cost of capital, annual percentage rate calculations, and average monthly payment amounts. In 2015, the Responsible Business Lending Coalition launched the Small Business Borrowers' Bill of Rights to foster greater transparency and accountability across the small business lending sector. <1.6. Regulation and Oversight> The regulation of marketplace lenders is largely determined by the lender's business model and the borrower or loan type. For example, marketplace lenders that provide services through an arrangement with a federally regulated depository institution may be subject to examination as a third-party service provider by the federal prudential regulator. The federal prudential regulators have provided third-party guidance or vendor risk management guidance to depository institutions that describes the risk assessment, due diligence and risk monitoring, and oversight that depository institutions should engage in when they deal with third parties, including marketplace lenders. Depending on the facts and circumstances, including the type of activities being performed, marketplace lenders may be subject to federal consumer protection laws enforced by CFPB and FTC. Also, CFPB and FTC maintain databases of consumer complaints. In March 2016, CFPB announced it would begin accepting consumer complaints about marketplace lenders. However, according to CFPB staff, CFPB's complaint system does not specifically categorize complaints for marketplace lending because consumers may not know whether to categorize those services as such. FTC encourages consumers to file a complaint if they believe they have been the victim of fraud, identity theft, or other unfair or deceptive business practices. According to FTC staff, fintech is not a category within FTC's consumer complaint database, and marketplace lending complaints are generally categorized as consumer loan complaints. As previously discussed, certain regulations generally apply to consumer loans but may not apply to small business loans or other commercial loans. However, FTC has authority under Section 5 of the Federal Trade Commission Act to protect, among others, small businesses that are consumers of marketplace lending products or services from unfair or deceptive business acts or practices. At the federal level, we previously noted that SEC regulates the offer and sale of securities to investors through disclosure requirements and antifraud provisions that can be used to hold companies liable for providing false or misleading information to investors. 
The Securities Act of 1933 generally requires issuers that make a public offering of securities to register the offer and sale of their securities with SEC and provide investors with disclosures that include information about the company issuing securities, such as risk factors and financial information. According to staff from SEC, certain transactions by marketplace lenders may be exempt from the registration requirements of the Securities Act of 1933 depending on the particular facts of their securities offerings. At the state level, state securities regulators are generally responsible for registering certain securities products and, along with SEC, investigating securities fraud. Table 1 provides examples of federal laws and regulations relevant to marketplace lending. Marketplace lenders are subject to state-level laws in each state in which they are licensed to conduct business. Specifically, some marketplace lenders that originate loans directly to consumers or businesses (e.g., a direct marketplace lender) are generally required to obtain licenses and register in each state in which they provide lending services. According to officials from CSBS, state regulators then have the ability to supervise these lenders, ensuring that the lender is complying with state and federal lending laws. CSBS officials noted that the states leverage the Nationwide Multistate Licensing System (NMLS) to facilitate compliance with state-by-state licensing mechanisms. NMLS is intended to enable firms to complete one record to apply for state licensing that fulfills the requirements of each state, for states that participate in the system. Some agencies have taken a number of steps to understand and monitor the fintech industry, including the marketplace lending subsector. For example, in May 2016, Treasury issued a whitepaper on marketplace lending. In November 2016, SEC hosted a fintech forum where industry representatives and regulators discussed capital formation (including marketplace lending and crowdfunding) and related investor protections. On December 2, 2016, the Comptroller of the Currency announced its intent to make special-purpose national bank charters available to fintech companies, such as marketplace lenders. OCC published a paper discussing issues related to chartering special-purpose national banks and solicited public comment to help inform its path moving forward. OCC plans to evaluate prospective applicants' reasonable chance of success, appropriate risk management, effective consumer protection, fair treatment and access, and capital and liquidity position. <2. Mobile Payments> Mobile payments allow consumers to use their smartphones or other mobile devices to make purchases and transfer money. Consumers and businesses use these devices to make and receive payments instead of relying on the physical use of cash, checks, or credit and debit cards. <2.1. What It Is and How It Works> According to publications we reviewed, there are different ways to make mobile payments, including the use of a mobile wallet. Mobile wallets are electronic versions of consumers' wallets that offer consumers the convenience of faster transactions without having to enter credit or debit card information for each transaction. Using a mobile wallet, consumers can store payment card information and other information often needed to complete a payment on their mobile devices for later use. 
Generally, mobile wallets replace sensitive information with randomly generated numbers, a process called tokenization, which provides greater security when making a payment, and then transmit this information using existing credit and debit card networks. A variety of companies provide mobile wallets, including Apple, Google, and Samsung; merchants such as Starbucks, Walmart, and CVS; and financial institutions such as JPMorgan Chase & Co. and Citibank. Consumers may use mobile wallets to make payments to other consumers, referred to as person-to-person (P2P) payments, or to businesses, referred to as person-to-business (P2B) payments, either in mobile applications, through mobile browsers, or in person at a store's point-of-sale terminal. In addition, other providers, such as PayPal or Venmo, allow individuals to create accounts to receive and make payments. P2P payments: Consumers can transfer value from a bank account (checking or savings), stored funds in a mobile wallet, credit/debit card, or prepaid card to another consumer's account. P2P methods use the Internet, mobile applications, or text messages and generally move funds through the automated clearing house (ACH) network or debit and credit card networks. A variety of fintech firms provide P2P services. For example, current P2P providers include PayPal, Venmo, and Google; social networks such as Facebook and Snapchat; and financial institutions such as Bank of America Corporation and JPMorgan Chase & Co. P2B payments: Consumers can also use their mobile devices to make payments to businesses, either in stores or remotely from the device. In stores, consumers can use mobile wallets to pay a business for goods or services at compatible point-of-sale terminals. These transactions rely on various technologies to transfer payment data between the consumer's mobile device and the business, including quick response (QR) codes and wireless communication technologies that enable the payment information to be transferred by allowing compatible devices to exchange data when placed in very close proximity to each other (see fig. 3). 
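As a conceptual illustration of the tokenization process described above, the short Python sketch below substitutes a randomly generated token for a card number so that the merchant terminal never handles the underlying account credentials. This is our own simplification, not any provider's or card network's actual implementation; real payment tokenization also involves issuers, networks, and cryptographic domain controls that are omitted here.

# Simplified illustration only; real payment tokenization adds issuers, card
# networks, and cryptographic domain restrictions.
import secrets

class TokenServiceProvider:
    """Maps limited-use tokens to the real card number (PAN)."""
    def __init__(self):
        self._vault = {}  # token -> PAN, held only by the token service

    def issue_token(self, pan):
        token = "9" + "".join(str(secrets.randbelow(10)) for _ in range(15))
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        return self._vault.get(token)

def pay_in_store(tsp, token, amount):
    """The merchant terminal sees only the token, never the real PAN."""
    pan = tsp.detokenize(token)
    if pan is None:
        return "declined: unknown token"
    return f"approved: ${amount:.2f} charged to card ending {pan[-4:]}"

tsp = TokenServiceProvider()
device_token = tsp.issue_token("4111111111111111")  # stored in the mobile wallet
print("Token stored on device:", device_token)
print(pay_in_store(tsp, device_token, 12.50))

The security benefit the publications describe follows from this substitution: if the device or merchant system is compromised, only the token is exposed, not the card credentials held by the token service.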
<2.2. Who Uses It> The Federal Reserve's 2016 report on Consumers and Mobile Financial Services found that of those with a mobile phone in 2015, 30 percent of individuals ages 18 to 29 and 32 percent of individuals ages 30 to 44 made mobile payments. By comparison, 13 percent of those ages 60 or over made a mobile payment (see fig. 4). From 2011 to 2014, the same general trend was true: younger adults were more likely to make a mobile payment than older age groups. However, the survey results are not comparable because the definition of mobile payments was revised for the 2015 survey. According to a survey by the Pew Charitable Trusts of over 2,000 consumers, 46 percent of the U.S. population reported having made a mobile payment. Specifically, 39 percent of mobile payments users were millennials and 33 percent were between the ages of 35 and 50, compared to 29 percent of users over the age of 50. Underbanked: FDIC and the Federal Reserve have found that underbanked consumers use mobile financial services. According to a 2015 survey by FDIC, 20 percent of households in the United States were underbanked, meaning that the household had an account at an insured institution but also obtained financial services and products outside of the banking system. According to qualitative research conducted by FDIC in 2016, underbanked consumers stated that they used P2P payments and a variety of financial products to manage their day-to-day finances. The Federal Reserve's 2015 survey indicated that a higher percentage of underbanked consumers used mobile payments than fully banked respondents (34 percent versus 20 percent). <2.3. Potential Benefits> Convenience and efficiency: According to publications we reviewed, mobile wallets offer consumers the convenience of instant transactions without having to enter credit card information, PINs, and shipping addresses each time they make a purchase. Mobile wallets can also streamline the checkout time. For example, consumers can wave their smartphone in front of an in-store terminal to make a purchase, which can be faster than swiping a credit or debit card. Data security: Mobile payments can be protected by various security mechanisms, such as codes that must be entered to access a mobile device. According to publications we reviewed, mobile wallets may also improve data security by replacing a consumer's payment card information with a randomly generated number, or token. Mobile payments can use this token to transact with a merchant, which better protects consumer account credentials. <2.4. Potential Risks> Many of the potential risks associated with mobile payments are the same as those that exist with traditional payment products. Some examples of those risks are discussed below. Data security: Data security risks include the possibility of payment and personal data being lost or vulnerable to theft because of consumers' reliance on the use of smartphones or other mobile communication devices. According to the Federal Reserve's 2015 survey, respondents identified concerns about the security of the technology as one of the main reasons they do not use mobile payments. Security concerns include a smartphone being hacked, the loss or theft of a smartphone, and the possibility that a company does not sufficiently protect mobile transactions, among other concerns. Human error and confusion: According to publications we reviewed, mobile payment methods can create operational risk from human error. For example, consumers can deposit or send money to the wrong person when using P2P payments (e.g., if they type in the wrong phone number). Mobile payment methods can also increase consumer confusion regarding protections based on the underlying funding source. According to FDIC, consumers may not understand which regulators supervise the parties providing mobile payments and may be unsure which consumer protections apply. <2.5. Industry Trends> Mobile Payment Activities: According to the Federal Reserve's 2015 survey, the three most common mobile payment activities among mobile payments users with smartphones were paying bills through a mobile phone web browser or app (65 percent), purchasing a physical item or digital content remotely using a mobile phone (42 percent), and paying for something in-store using a mobile phone (33 percent). Partnerships: Some industry stakeholders we spoke with said that the relationship between banks and mobile payment firms has shifted toward more partnerships because banks and mobile payment firms recognize mutual benefits. For example, mobile payment firms can benefit from banks' experience with regulatory compliance, and banks can remain competitive by meeting the needs of their customers. <2.6. 
Regulation and Oversight> The regulatory and oversight framework for mobile payments consists of a variety of federal and state regulation and oversight. Determining which laws apply to mobile payments depends on several factors, including agency jurisdiction, mobile payment providers' relationship to depository institutions, and the type of account used by a consumer to make a mobile payment. Three of the federal prudential regulators (the Federal Reserve, FDIC, and OCC) are authorized to examine and regulate the provision of certain services provided by mobile payment providers for federally insured banks and thrifts. For example, these regulators can examine mobile payment providers that are considered third-party service providers of a regulated depository institution if the payment provider offers services to customers on behalf of a depository institution. The federal prudential regulators can also take enforcement actions against mobile payment providers if the provider is an institution-affiliated party of the bank. CFPB has consumer protection authority over certain nonbank institutions and enforcement jurisdiction over entities that offer or provide consumer financial products or services. In October 2016, CFPB issued a final rule to add prepaid cards and some of the payment services that fintech providers are offering, such as PayPal, to the definition of accounts covered under regulations applicable to electronic fund transfer systems such as automated teller machine transfers, telephone bill-payment services, point-of-sale terminal transfers in stores, and preauthorized transfers from or to a consumer's account (such as direct deposit and Social Security payments). According to CFPB staff, the rule is aimed at providing wide-ranging protections to consumers holding prepaid accounts. Although this rule largely focuses on prepaid cards, the protections also extend to P2P payments and certain mobile wallets that can store funds. Nonbank providers of financial products and services, including mobile payment providers and prepaid card providers, may be subject to FTC consumer protection enforcement actions. According to FTC staff, FTC has brought and settled enforcement actions alleging unfair or deceptive conduct by wireless providers offering mobile payment services. Finally, at the federal level, the Federal Communications Commission (FCC) has jurisdiction over wireless providers, which provide the devices used for mobile payments or sometimes collect such payments through their customers' billing statements. According to FDIC, to date, no federal laws or regulations specifically govern mobile payments. However, to the extent a mobile payment uses an existing payment method, the laws and regulations that apply to that method also apply to the mobile payment. Table 2 provides examples of federal laws and regulations relevant to mobile payment transactions. State regulators also have authority to regulate mobile payment providers. For example, most states have licensing and regulatory authority over money service businesses that provide money transfer services or payment instruments, which can include mobile payment providers. For instance, fintech firms such as PayPal and Google Wallet are subject to state money transmitter laws. State regulators have made efforts to make the state licensing process less burdensome by conducting multistate exams and using NMLS to facilitate these processes. 
According to interviews we conducted, some agencies formed working groups to monitor and understand mobile payments. These examples are listed below. In January 2010, the Federal Reserve started the Mobile Payments Industry Working Group to facilitate discussions as to how a successful mobile payments (as opposed to mobile banking) system could evolve in the United States. The working group meets several times annually to share information and ideas. In addition, the Federal Reserve established a multidisciplinary working group focused on analyzing potential innovation in fintech, including payments. FDIC established a formal FinTech Steering Committee and two working groups, one of which focuses in part on mobile payments. CFPB met with payment innovators through its Project Catalyst. CSBS formed an Emerging Payments and Innovation Task Force in 2013 to study changes in payment systems to determine the potential impact on consumer protection, state law, and banks and nonbank entities chartered or licensed by the states. <3. Digital Wealth Management> Digital wealth management platforms, including robo-advisors, use algorithms based on consumers' data and risk preferences to provide digital services, including investment and financial advice, directly to consumers. Digital wealth management platforms provide services including portfolio selection, asset allocation, banking and account aggregation, and online risk assessments. <3.1. What It Is and How It Works> According to data from SEC, there were over 12,000 SEC-registered investment advisers in 2016. However, according to staff from SEC, because digital wealth management firms register as investment advisers and are not all separately counted or categorized, the total number of these entities is not known. Digital wealth management firms incorporate technologies into their portfolio management platforms primarily through the use of algorithms designed to optimize wealth management services. Fully automated platforms have features that let investors manage their portfolios without direct human interaction. Examples of current digital wealth management firms include Betterment, Wealthfront, Personal Capital, BlackRock's Future Advisor, and Acorns. Publications we reviewed indicate that digital wealth management platforms typically collect information on customers and their financial history using online questionnaires. These questionnaires may cover topics such as the customer's age, income, investment horizon, risk tolerance, and expected returns, among other information. Digital wealth management platforms allow customers who need to connect multiple accounts, often across multiple providers, to create a holistic picture of their wealth and more easily manage their finances across multiple asset classes and firms. Digital wealth management platforms use the information entered by the customer to help the customer select a risk profile. The firms then use algorithms to generate a suggested investment strategy for the customer based on that risk profile. Platforms can automatically rebalance customers' portfolios in response to the performance of the underlying investments and the customers' goals (see fig. 5). 
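To make the risk-profile-to-portfolio step described above more concrete, the following minimal Python sketch maps a questionnaire-derived risk profile to a target asset allocation and computes the trades needed to bring a drifted portfolio back to that target. It is our own illustration; the target mixes, drift threshold, and fund names are assumptions for the example, not any platform's actual model.

# Illustrative only: hypothetical allocations and thresholds, not any firm's algorithm.

# Target mixes keyed by a questionnaire-derived risk profile (assumed values).
TARGET_ALLOCATIONS = {
    "conservative": {"stock_fund": 0.30, "bond_fund": 0.60, "cash": 0.10},
    "moderate":     {"stock_fund": 0.60, "bond_fund": 0.35, "cash": 0.05},
    "aggressive":   {"stock_fund": 0.85, "bond_fund": 0.10, "cash": 0.05},
}

def rebalance_orders(holdings, risk_profile, drift_threshold=0.05):
    """Return buy/sell amounts (in dollars) that move each position back to
    its target weight, trading only when drift exceeds the threshold."""
    target = TARGET_ALLOCATIONS[risk_profile]
    total = sum(holdings.values())
    orders = {}
    for asset, weight in target.items():
        current_value = holdings.get(asset, 0.0)
        drift = current_value / total - weight
        if abs(drift) > drift_threshold:
            # Positive drift means overweight (sell); negative means underweight (buy).
            orders[asset] = round(weight * total - current_value, 2)
    return orders

# A drifted portfolio after a stock rally (hypothetical values).
portfolio = {"stock_fund": 72_000, "bond_fund": 24_000, "cash": 4_000}
for asset, amount in rebalance_orders(portfolio, "moderate").items():
    action = "BUY" if amount > 0 else "SELL"
    print(f"{action} ${abs(amount):,.2f} of {asset}")

In practice, platforms layer cash-flow handling, trade minimums, taxes, and economic assumptions on top of this basic drift check; the SEC and FINRA staff caution discussed below about flawed embedded assumptions applies to exactly these kinds of parameters.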
Adviser-assisted digital wealth management platforms combine a digital client portal and investment automation with a virtual financial adviser, who typically conducts simple financial planning and periodic reviews over the phone. Examples of current platforms in this category include Personal Capital, Future Advisor, and LearnVest. To further differentiate themselves, these platforms may offer value-added services, such as asset aggregation capabilities that enable more holistic advice than fully automated digital wealth managers can provide, based on a comprehensive view of client assets and liabilities, as well as expense tracking and advice on budgeting and financial-goal planning. <3.2. Potential Benefits> Increased access to wealth management services: Publications we reviewed indicated that digital wealth management platforms may expand access to underserved segments such as customers with smaller asset amounts than those of traditional consumers of wealth management services. For example, some platforms may not require customers to maintain minimum balance amounts. Traditional firms may require minimum investment amounts of $250,000, whereas some digital platforms require a minimum of approximately $500 or no minimum at all. Convenience: Regardless of location or the time of day, investors with a smartphone, tablet, or computer can make changes to their data and preference inputs, send instructions, access their portfolios, and receive updated digital advice. Lower fees: According to publications we reviewed, digital wealth management platforms may charge lower fees, such as investment trade fees, than traditional wealth management firms. <3.3. Potential Risks> Some of the potential risks associated with digital wealth management platforms may be similar to those that exist with traditional wealth management services. Examples of those risks are discussed below. Insufficient or incomplete information from customers: According to publications we reviewed, some digital wealth management platforms generate investment outputs based on information provided by the client from questionnaire responses. A traditional wealth manager is able to ask and clarify questions and request follow-up information to capture a customer's full finances and goals. However, automated responses may not allow the platform to capture a full picture of the customer's circumstances or short-term goals, for example, whether the customer may need investment money to buy a new home. If the customer does not understand a question, or does not answer it completely, the platform may not assess the customer's full financial circumstances. For example, if a customer provides conflicting information on his or her finances, the digital wealth management platform may not have a full picture of the client's financial condition, and the customer may end up with an undesired portfolio. Inaccurate or inappropriate assumptions: Staff of SEC's Office of Investor Education and Advocacy (OIEA) and FINRA issued an investor alert on May 8, 2015, which cautioned that assumptions that underlie the algorithms used by digital wealth management firms could be incorrect. For example, the alert states that the platform may be programmed to use economic assumptions that will not react to shifts in the market. Specifically, if the platform assumes that interest rates will remain low but interest rates rise instead, the platform's output will be flawed, which could adversely affect investors. Consumer Data Protection: To use digital wealth management platforms, customers must enter personal information. 
According to an investor alert issued by SEC and FINRA staff, digital wealth management platforms may be collecting and sharing personal information for purposes unrelated to the platform. The alert cautions customers to safeguard personal information. <3.4. Industry Trends> According to publications we reviewed, fintech firms, including at least one digital wealth management platform, are using or have considered using innovative technologies such as machine learning and artificial intelligence. For example, one platform is intended to track consumers financial account activity and apply user behavior to the advice it delivers. Hybrid services have evolved that combine traditional wealth management and digital wealth management. For example, in 2015 Vanguard implemented a service that offers investors an option of consulting with a human advisory representative in addition to its automated investment platform. Traditional wealth management firms also offer digital wealth management services. For example, in 2015, Charles Schwab developed Intelligent Portfolios, available to customers with $5,000 in savings, and Deutsche Bank launched a robo-advisor within its online investment platform. <3.5. Regulation and Oversight> SEC regulates investment advisers, which generally includes firms that provide digital wealth management platforms. Other federal and state agencies have a role with respect to oversight of digital wealth management firms, depending upon the services a digital wealth management platform provides. SEC and state securities regulators share responsibility for the oversight of investment advisers in accordance with the Investment Advisers Act of 1940 (Advisers Act). SEC subjects digital wealth management firms to the same regulations as traditional investment advisers and requires digital wealth management firms that manage over $110 million in assets to register as investment advisers. The Advisers Act generally requires anyone in the business of receiving compensation for providing investment advice to others regarding securities to register with SEC or one or more states. SEC s supervision of investment advisers includes evaluating their compliance with federal securities laws by conducting examinations, including reviewing disclosures made to customers. It also investigates and imposes sanctions for violations of securities laws. SEC held a forum in November 2016 that discussed fintech innovation in the financial services industry, including the impact of recent innovation in investment advisory services, which includes digital wealth management. In January 2017, SEC s Office of Compliance Inspections and Examinations announced that electronic investment advice is a 2017 examination priority. In February 2017, SEC s Division of Investment Management issued guidance for robo-advisers that provide services directly to clients over the Internet. SEC s Office of Investor Education and Advocacy issued an Investor Bulletin that provided information to help investors using robo-advisers to make informed decisions in meeting their investment goals. State securities regulators generally have registration and oversight responsibilities for investment adviser firms that manage less than $100 million in client assets, if they are not registered with SEC. According to staff from SEC, state securities regulators can bring enforcement actions against firms with assets of any amount for violations of state fraud laws. 
For example, the state of Massachusetts Securities Division issued a policy in April 2016 stating that fully automated robo-advisers may be inherently unable to carry out the fiduciary obligations of a Massachusetts state-registered investment adviser. The policy states that until regulators have determined the proper regulatory framework for automated investment advice, robo-advisers seeking state registration will be evaluated on a case-by-case basis. FINRA, a self-regulatory organization, is also responsible for regulating broker-dealers doing business with the public in the United States. Broker-dealers can use digital investment advice tools to provide investment services to clients. According to FINRA staff, FINRA may test the use of digital wealth management technologies by broker-dealers as part of its examinations. According to FINRA staff, FINRA has taken one enforcement action against a broker-dealer offering clients robo- adviser-like functionality. In March 2016, FINRA issued a report to share effective practices related to digital investment advice tools and remind FINRA-registered broker-dealers of their obligations under FINRA rules, including that broker-dealers are required to supervise the types of businesses in which they engage. CFTC has oversight authority with respect to commodity trading advisers under the Commodity Exchange Act. According to CFTC officials, digital wealth management firms that meet the statutory definition of a commodity trading adviser would be subject to the same oversight and compliance obligations as other traditional commodity trading advisers. The act generally requires that commodity trading advisers register with CFTC. Digital wealth management firms are subject to consumer protection laws that are enforced by FTC. FTC is charged with protecting consumers against unfair or deceptive acts or practices in commerce. According to FTC staff, FTC enforces applicable consumer protection laws in regard to fintech services, such as digital wealth management, just as it applies those laws to other products and services. According to staff from CFPB, certain aspects of digital wealth management such as data aggregation, credit, or linked deposit accounts may also be subject to consumer oversight authority by CFPB. In April 2016, the Department of Labor (DOL) adopted a regulation that would expand the circumstances in which those who provide retirement investment advice, including digital wealth management firms, would have to abide by a fiduciary standard, acting prudently and in the best interest of their clients. The rule was scheduled to be applicable in April 2017. However, the President issued a memorandum on February 3, 2017, that directed the Secretary of DOL to examine the fiduciary duty rule to determine whether it may adversely affect the ability of Americans to gain access to retirement information and financial advice. In April 2017, DOL extended the applicability date for an extra 60 days. <4. Distributed Ledger Technology> Distributed ledger technology (DLT) was introduced in 2009 as a technology intended to facilitate the recording and transferring of bitcoin, a virtual currency, specifically using blockchain. DLT has the potential to be a secured way of conducting transfers of digital assets in a near real-time basis potentially without the need for an intermediary. <4.1. What It Is and How It Works> DLT is a generic technology for a distributed database, while blockchain is one type of DLT. 
According to one study we reviewed, DLT involves a distributed database maintained over a network of computers connected on a peer-to-peer basis, such that network participants can share and retain identical, cryptographically secured records in a decentralized manner. A network can consist of individuals, businesses, or financial entities. One type of DLT is blockchain, which is a shared ledger that records transactions in a peer-to-peer network. Blockchain is a series of digital blocks of information (transactions) that are chained together. The party initiating a transaction sends a message represented as a block to a network of participants that can include financial institutions, financial market participants, and regulators. For a transaction to be included, network participants must validate the transaction. Once a transaction has been confirmed, details of the transaction are recorded on the blockchain and can be visible to network participants (see fig. 6). DLT solutions can have different types of access control. For example, there may be permissionless (public) ledgers, which are open for anyone to contribute data and cannot be owned, or permissioned (private) ledgers, which may have one or many owners, and only the owners can add records and verify the contents of the ledger. According to one study, permissioned DLT is not fully decentralized. According to publications we reviewed, an important feature of blockchain is that transactions added to a ledger are validated by network participants. This validation process is referred to as a consensus mechanism. Consensus mechanisms can help prevent the problem of double spending. Publications we reviewed indicate there are different kinds of consensus mechanisms, including proof-of-work and proof-of-stake. Proof-of-work may be used in permissionless DLT, and proof-of-stake may be used in permissioned DLT. Consensus mechanisms also incorporate security aspects, such as cryptography and digital signatures, which are described below: Cryptography is used to encrypt data to ensure transactions are valid and provide identity verification. For example, during asset transfers, a form of cryptography known as public key cryptography usually forms the foundation of the transaction validation process. Digital signatures are based on cryptography and are used in DLT to certify the authenticity of transactions (i.e., to show that a person is the true owner of an indicated digital identity). When a person creates and sends a DLT transaction, the transaction must also bear that person's digital signature. <4.2. Who Uses It> According to publications we reviewed, agencies, financial institutions, and industry stakeholders have identified potential uses for DLT in the financial services industry, including the clearing and settlement of financial transactions. Examples of these transactions include private trades in the equity market and insurance claims processing and management. DLT can also incorporate smart contracts. Smart contracts can automate different kinds of processes and operations. For example, smart contracts can facilitate the automation of complex, multiparty transactions, such as the payment of bonds and insurance coupons. According to one study, there are several versions of smart contracts composed using computer code. 
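As a conceptual illustration of the hash-chained block structure and validation described above, the short Python sketch below shows how each block commits to the hash of the previous block, so that altering an earlier record breaks the chain. It is our own simplification with hypothetical transactions; it omits consensus, digital signatures, and the peer network that a real distributed ledger requires.

# Simplified illustration only; real DLT adds consensus, signatures, and a peer network.
import hashlib
import json

def block_hash(block):
    """Hash the block's contents, including the previous block's hash."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain, transactions):
    previous_hash = block_hash(chain[-1]) if chain else "0" * 64
    block = {"index": len(chain), "transactions": transactions,
             "previous_hash": previous_hash}
    chain.append(block)
    return chain

def chain_is_valid(chain):
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, [{"from": "Bank A", "to": "Bank B", "amount": 100}])
add_block(ledger, [{"from": "Bank B", "to": "Insurer C", "amount": 40}])
print("Ledger valid?", chain_is_valid(ledger))                   # True

# Tampering with an earlier record invalidates every later block.
ledger[0]["transactions"][0]["amount"] = 1_000_000
print("Ledger valid after tampering?", chain_is_valid(ledger))   # False

In an actual distributed ledger, many participants hold copies of such a chain, and a consensus mechanism such as proof-of-work or proof-of-stake, as noted above, determines which new blocks are accepted, which is what makes tampering detectable across the network.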
<4.3. Potential Benefits> Transparency: According to publications we reviewed, DLT has the potential to facilitate transparency between financial institutions, regulators, and other financial market participants. DLT can increase transparency between participants by creating a shared record of activity to which participants have access in real time. Changes by any participant with the necessary permission to modify the ledger are immediately reflected in all copies of the ledger. Because distributed ledgers can be designed to be broadly accessible and verifiable, the technology could enhance financial market transparency. Efficiencies: According to publications we reviewed, DLT can enhance efficiencies in securities and payment clearing and settlement times. Specifically, DLT has the potential to reduce settlement times for securities transactions by facilitating the exchange of digital assets during the same period of time as the execution of a trade. According to staff from SEC, while the financial services industry is moving toward shortening settlement cycles, DLT may offer efficiencies should it be deployed in securities clearance and settlement functions. In 2015, SEC requested comments on how blockchain technology could facilitate the role of a transfer agent and, separately, in 2016 requested comments on the utility of DLT in shortening the settlement cycle for most broker-dealer securities transactions. In addition, conducting international money transfers through DLT can provide real-time settlement. <4.4. Potential Risks> Like most new technologies, DLT can pose certain risks and uncertainties, which market participants and financial regulators and agencies will need to monitor. Operational risk, including security risk: According to a publication by the Board of Governors of the Federal Reserve System, operational failures include errors or delays in processing, system outages, insufficient capacity, fraud, and data loss and leakage. According to a FINRA report, given that DLT involves sharing information over a network, it poses security-related risks. The Financial Stability Oversight Council noted that market participants have limited experience working with distributed ledger systems, and it is possible that operational vulnerabilities associated with such systems may not become apparent until they are deployed at scale. According to officials from CSBS, permissionless DLT presents security risks (e.g., anti-money-laundering and Bank Secrecy Act) that can be mitigated. <4.5. Industry Trends> Publications we reviewed suggest some financial institutions have taken several approaches to adopt DLT. For example, some financial institutions have initiated blockchain projects, joined a multiparty consortium, or announced partnerships to examine DLT's potential. In addition, the largest securities depository and a large stock exchange have used DLT. According to the World Economic Forum, 80 percent of banks are expected to initiate blockchain projects by 2017. The R3 industry consortium, made up of over 50 financial institutions, designed a DLT platform named Corda for recording and managing financial agreements. The Depository Trust and Clearing Corporation proposed to build a derivatives distributed ledger solution for post-trade processing. Through this initiative, the Depository Trust and Clearing Corporation seeks to reduce costs and increase efficiencies in the post-trade process. In December 2015, the stock exchange Nasdaq enabled its first trade on a blockchain using its Linq ledger through a private blockchain developer. Nasdaq Linq is a digital ledger technology that leverages a blockchain to issue and record transfers of shares of privately held companies. <4.6. 
Because DLT continues to develop, it is not yet clear how DLT and its components will be regulated under the existing legal and regulatory system. Additionally, it is unclear whether new regulation will need to be created, because DLT can present new and unique challenges. According to the Financial Stability Oversight Council, financial regulators should monitor how a DLT network can affect regulated entities and their operations. Representatives of financial regulators have noted the importance of implementing DLT in a manner that is transparent and satisfies regulatory requirements. With respect to virtual currencies, federal and state regulators have taken varied approaches to regulation and oversight. For example, in 2015, CFTC stated that it considers bitcoin and other virtual currencies to be included in the definition of a commodity under the Commodity Exchange Act. SEC's Office of Investor Education and Advocacy has stated that the rise of bitcoin and other virtual and digital currencies creates new concerns for investors. Two bureaus within the Department of the Treasury treat bitcoin in different ways: the Financial Crimes Enforcement Network (FinCEN) determined that certain virtual currency businesses would be money transmitters under the Bank Secrecy Act, subject to regulation as money services businesses, while the Internal Revenue Service treats bitcoin as property for U.S. federal tax purposes. FTC can apply the Federal Trade Commission Act to combat unfair or deceptive acts or practices in or affecting commerce, which includes virtual currencies. In addition, approximately 44 states have issued licenses to companies that use virtual currency in their business model. The existing regulatory complexity for virtual currencies suggests that regulatory approaches for future applications of DLT will also be complex. According to interviews we conducted, some agencies and one industry association formed working groups to monitor and understand DLT and virtual currencies. These examples are listed below. In 2015, CFTC formed a working group on blockchain, distributed ledger technology, and virtual currencies to study their application to the derivatives market and promote understanding and communication across the agency. In 2017, the group broadened its focus to cover other aspects of fintech and changed its name to the FinTech Working Group. In 2016, the Federal Reserve established a working group that is looking at financial innovation across a broad range of responsibilities, including payments and market infrastructures, supervision, and financial stability. In November 2013, SEC formed an internal Digital Currency Working Group to build expertise; identify emerging risk areas for potential regulatory, examination, and enforcement action; and coordinate efforts within SEC in the digital and virtual currency space. In November 2016, the group changed its name to reflect that its efforts had expanded beyond digital and virtual currencies into related distributed ledger technologies and their applications. According to SEC staff, the Distributed Ledger Technology Working Group plans to evaluate when and how distributed ledger technology will be used within the securities market. In 2016, FDIC established the FinTech wholesale working group of intra-agency experts to monitor work in the areas of DLT, blockchain, and smart contracts.
In 2015, the Chamber of Digital Commerce formed an alliance to provide technical assistance and periodic informational sessions on Bitcoin, other digital currencies, and broader uses of blockchain. <5. Agency Comments and Our Evaluation> We provided a draft of this report for review and comment to CFPB, CFTC, CSBS, FDIC, the Federal Reserve, FINRA, FTC, NCUA, OCC, SBA, SEC, and Treasury. We incorporated technical comments we received from these agencies, as appropriate. In addition, we received written comments from NCUA and CSBS, which are summarized below and reprinted in appendixes II and III. In its written comments, NCUA acknowledged that regulators face challenges understanding the risk of the rapidly evolving financial technology industry and the challenge of balancing regulations and guidance to address those risks against stifling innovation. NCUA noted that it continues to evaluate risks and monitor the evolving market impact driven by fintech companies and to indirectly supervise activities through credit unions to the extent possible. In its written comments, CSBS noted that it had formed a task force to study fintech developments and determine the potential impact on consumer protection, state law, and banks and nonbank entities chartered or licensed by the states. CSBS also provided additional information about the state regulatory system for marketplace lending, mobile payments, and distributed ledger consumer products while noting that the states actively license and supervise companies engaged in these services. CSBS also noted that the states have work under way to improve the Nationwide Multistate Licensing System with a technological overhaul to improve compliance with state licensing requirements. We are sending copies of this report to the congressional requesters, agencies, and other interested parties. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Agencies with Oversight Responsibilities Related to Financial Technology Firms Regulation of financial technology (fintech) firms depends on the extent to which the firms provide a regulated service and the format in which the services are provided. Table 3 explains the basic functions of federal and state regulators and agencies with oversight responsibilities related to the following subsectors: marketplace lending, mobile payments, digital wealth management, and distributed ledger technology. Appendix II: Comments from the National Credit Union Administration Appendix III: Comments from the Conference of State Bank Supervisors Appendix IV: GAO Contact and Staff Acknowledgments <6. GAO Contact> <7. Staff Acknowledgments> GAO staff who made major contributions to this report include Harry Medina (Assistant Director), Lauren Comeau (Analyst in Charge), Namita Bhatia-Sabharwal, Chloe Brown, Pamela Davidson, Janet Eackloff, Cody Goebel, Davis Judson, Silvia Porres, Tovah Rom, Jessica Sandler, and Jena Sinkfield. Bibliography Accenture. The Rise of Robo-Advice: Changing the Concept of Wealth Management, 2015. Becker, Krista. Mobile Payments: The New Way to Pay? Federal Reserve Bank of Boston Emerging Payments Industry Briefing, February 2007. 
BlackRock. Digital Investment Advice: Robo Advisors Come of Age, September 2016. Board of Governors of the Federal Reserve System. Consumers and Mobile Financial Services 2016. March 2016. Board of Governors of the Federal Reserve System. Consumer Compliance Outlook, Fintech Special Edition, 3rd ed. Philadelphia, Pa.: 2016. Chamber of Digital Commerce, Smart Contracts Alliance. Smart Contracts: 12 Use Cases for Business & Beyond. Washington, D.C.: December 2016. Congressional Research Service. Marketplace Lending: Fintech in Consumer and Small-Business Lending. September 6, 2016. Consumer Financial Protection Bureau. Project Catalyst report: Promoting consumer-friendly innovation. Washington D.C.: October 2016. Crowe, Marianne; Susan Pandy, David Lott, and Steve Mott, Is Payment Tokenization Ready for Primetime? Perspectives from Industry Stakeholders on the Tokenization Landscape, Federal Reserve Bank of Atlanta and Federal Reserve Bank of Boston, June 11, 2015. Department of the Treasury. Opportunities and Challenges in Online Marketplace Lending. May 10, 2016. Deloitte. Digital Disruption in Wealth Management - Why Established Firms Should Pay Attention to Emerging Digital Business Models for Retail Investors, 2014. EY, Advice Goes Viral: How New Digital Investment Services Are Changing the Wealth Management Landscape, 2015. Federal Deposit Insurance Corporation. Supervisory Insights, Marketplace Lending. Winter 2015. Federal Deposit Insurance Corporation. Supervisory Insights, Mobile Payments: An Evolving Landscape. Winter 2012. Federal Deposit Insurance Corporation. 2015 FDIC National Survey of Unbanked and Underbanked Households. October 20, 2016. Federal Deposit Insurance Corporation. Opportunities for Mobile Financial Services to Engage Underserved Consumers Qualitative Research Findings. May 25, 2016. Federal Reserve Bank of Cleveland. Click, Submit: New Insights on Online Lender Applications from the Small Business Credit Survey. Cleveland, Ohio: October 12, 2016. Federal Trade Commission Staff Report. Paper, Plastic or Mobile? An FTC Workshop on Mobile Payments. March 2013. Financial Industry Regulatory Authority. Report on Digital Investment Advice. March 2016. Financial Industry Regulatory Authority. Distributed Ledger Technology: Implications of Blockchain for the Securities Industry. January 2017. Financial Stability Oversight Council. 2016 Annual Report. Washington, D.C.: June 21, 2016. GAO. Person-to-Person Lending: New Regulatory Challenges Could Emerge as the Industry Grows, GAO-11-613. Washington, D.C.: July 7, 2011. GAO. Virtual Currencies: Emerging Regulatory, Law Enforcement, and Consumer Protection Challenges. GAO-14-496. Washington, D.C.: May 29, 2014. GAO. Financial Regulation: Complex and Fragmented Structure Could be Streamlined to Improve Effectiveness, GAO-16-175. Washington, D.C.: February 25, 2016. GAO. Data and Analytics Innovation: Emerging Opportunities and Challenges, Highlights of a Forum, GAO-16-659SP. Washington D.C.: September 2016. International Organization of Securities Commissions. IOSCO Research Report on Financial Technologies (Fintech). February 2017. McQuinn, Alan, Weining Guo, and Daniel Castro. Policy Principles for Fintech, Information Technology & Innovation Foundation, October 2016. Mills, David; Kathy Wang, Brendan Malone, Anjana Ravi, Jeff Marquardt, Clinton Chen, Anton Badev, Timothy Brezinski, Linda Fahy, Kimberley Liao, Vanessa Kargenian, Max Ellithorpe, Wendy Ng, and Maria Baird (2016). 
Distributed ledger technology in payments, clearing, and settlement, Finance and Economics Discussion Series 2016-095. Washington: Board of Governors of the Federal Reserve System. Mills, Karen Gordon, and Brayden McCarthy. The State of Small Business Lending: Innovation and Technology and the Implications for Regulation. Harvard Business School working paper 17-042 (2016). Office of the Comptroller of the Currency. Exploring Special Purpose National Bank Charters for Fintech Companies. Washington, D.C.: December 2016. Office of the Comptroller of the Currency. Comptroller s Licensing manual Draft Supplement, Evaluating Charter Applications from Financial Technology Companies. Washington, D.C.: March 2017. Office of the Comptroller of the Currency. OCC Summary of Comments and Explanatory Statement: Special Purpose National Bank Charters for Financial Technology Companies. Washington, D.C.: March 2017. Pew Charitable Trusts. Who Uses Mobile Payments? Survey findings on consumer opinions, experiences. May 2016. Professor Mark E. Budnitz. Pew Charitable Trusts, The Legal Framework Of Mobile Payments: Gaps, Ambiguities, and Overlap. February 10, 2016. Qplum. What is Robo-Advising. Jersey City, NJ: May 5, 2016. Segal, Miriam. Small Business Administration Office of Advocacy. Peer- to-Peer Lending: A Financing Alternative for Small Businesses, Issue Brief Number 10. Washington, D.C.: September 9, 2015. S&P Global Market Intelligence. An Introduction to Fintech: Key Sectors and Trends. October 2016. S&P Global Market Intelligence. 2016 U.S. Digital Lending Landscape. Charlottesville, Virginia: December 2016. The Clearing House. Ensuring the Safety & Security of Payments, Faster Payments Symposium. August 4, 2015. The Conference of State Bank Supervisors and Money Transmitter Regulators Association. The State of State Money Services Businesses and Regulation and Supervision. May 2016. United Kingdom Government Office for Science. Distributed Ledger Technology: beyond block chain. December 2015. United States Postal Service, Office of the Inspector General, Blockchain Technology: Possibilities for the U.S. Postal Service, Report No. RARC- WP-16-011. May 23, 2016. World Economic Forum. The Future of Financial Infrastructure: An ambitious look at how blockchain can reshape financial services. August 2016.
Why GAO Did This Study Advances in technology and the widespread use of the Internet and mobile communication devices have helped fuel the growth in fintech products and services, such as small business financing, student loan refinancing, mobile wallets, virtual currencies, and platforms to connect investors and start-ups. Some fintech products and services offer the potential to expand access to financial services to individuals previously underserved by traditional financial institutions. GAO was asked to review a number of issues related to the fintech industry, including how fintech products and services are regulated. This report, the first in a series of planned reports on fintech, describes four commonly referenced subsectors of fintech and their regulatory oversight. GAO conducted background research and a literature search of publications from agencies and other knowledgeable parties. GAO also reviewed guidance, final rulemakings, initiatives, and enforcement actions from agencies. GAO interviewed representatives of federal agencies, including the federal prudential regulators, state supervision agencies, trade associations, and other knowledgeable parties. GAO is making no recommendations in this report. What GAO Found The financial technology (fintech) industry is generally described in terms of subsectors that have or are likely to have the greatest impact on financial services, such as credit and payments. Commonly referenced subsectors associated with fintech include marketplace lending, mobile payments, digital wealth management, and distributed ledger technology. Marketplace lenders connect consumers and small businesses seeking online and timelier access to credit with individuals and institutions seeking profitable lending opportunities. Marketplace lenders use traditional data, and may use less traditional data, along with credit algorithms to underwrite consumer loans, small business loans, lines of credit, and other loan products. Mobile payments allow consumers to use their smartphones or other mobile devices to make purchases and transfer money instead of relying on the physical use of cash, checks, or credit and debit cards. There are different ways to make mobile payments, including the use of a mobile wallet. Digital wealth management platforms use algorithms based on consumers' data and risk preferences to provide digital services, including investment and financial advice, directly to consumers. These platforms provide services including portfolio selection, asset allocation, account aggregation, and online risk assessments. Distributed ledger technology was introduced to facilitate the recording and transferring of virtual currencies, specifically using a type of distributed ledger technology known as blockchain. Distributed ledger technology has the potential to be a secure way of conducting transfers of digital assets on a near real-time basis, potentially without the need for an intermediary. Regulation of these subsectors depends on the extent to which the firms provide a regulated service and the format in which the services are provided.
For example, a marketplace lender may be subject to: federal regulation and examination by the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency in connection with certain services provided to depository institutions by the lender; state licensing and regulation in the states in which the lender conducts business; securities offering registration requirements administered by the Securities and Exchange Commission if the lender publicly offers securities; and/or enforcement actions by the Bureau of Consumer Financial Protection and the Federal Trade Commission for violations of certain consumer protection laws. To learn about the fintech industry, some agencies hosted forums, formed working groups, and published whitepapers and regulatory guidance.
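To illustrate, in very simplified form, the algorithm-driven advice described above under digital wealth management, the following sketch maps a hypothetical risk-questionnaire score to a portfolio mix. The function name, risk scale, and percentages are illustrative assumptions and are not taken from any platform or publication cited in this report.

```python
# A highly simplified sketch of rule-based asset allocation of the kind a
# digital wealth management ("robo-advice") platform might apply to an online
# risk assessment. The risk scale, thresholds, and asset classes are assumed
# for illustration and are not taken from any platform or publication cited here.
def allocate_portfolio(risk_score: int) -> dict:
    """Map a 1-10 risk questionnaire score to a stock/bond/cash mix."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk score must be between 1 and 10")
    stocks = 0.20 + 0.07 * (risk_score - 1)            # 20% to 83% equities
    bonds = max(0.10, 0.70 - 0.06 * (risk_score - 1))  # taper bonds as risk rises
    cash = max(0.0, 1.0 - stocks - bonds)              # remainder held as cash
    return {"stocks": round(stocks, 2), "bonds": round(bonds, 2), "cash": round(cash, 2)}


if __name__ == "__main__":
    print(allocate_portfolio(3))  # conservative mix
    print(allocate_portfolio(9))  # aggressive mix
```

Actual platforms layer portfolio construction, rebalancing, and account aggregation on top of this kind of mapping; the sketch shows only the basic idea of translating a consumer's stated risk preference into an allocation.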
<1. Background> DOD's counterdrug mission focuses on supporting local, state, federal, and foreign government agencies in addressing the illegal drug trade and narcotics-related terrorism. DOD conducts its mission in three primary areas: detecting and monitoring drug trafficking into the United States, sharing information on illegal drugs with U.S. and foreign government agencies, and building the counterdrug capacity of U.S. and foreign partners. The National Guard identifies three state-specific projects as comprising its counterdrug program: state plans, counterdrug schools, and counterthreat finance. The authority to provide funding for the first state project, state plans, began in 1989 when DOD was authorized by Congress under section 112 of Title 32 of the United States Code to fund the National Guard's drug interdiction and counterdrug activities. Each participating state counterdrug program must develop an annual plan of activities, in coordination with the state's Governor and Attorney General. In developing their plans, states use annual guidance issued by DOD outlining the department's domestic counterdrug program priorities. Once the state plans have been developed, they are reviewed by National Guard counterdrug program officials, and are then sent to DOD for approval. National Guard policy states that state counterdrug programs can provide assistance to interagency partners in five mission areas: reconnaissance, technical support, general support, civil operations, and counterdrug training. In 2006, Congress provided authority to the Chief of the National Guard Bureau (NGB) to operate up to five counterdrug schools. These five schools, located in Florida, Iowa, Mississippi, Pennsylvania, and Washington, provide training in drug interdiction and counterdrug activities to personnel from federal agencies; state, local, and tribal law enforcement agencies; community-based organizations; and other non-federal governmental and private organizations. In 2011 the program added a third state project, counterthreat finance, to assist interagency partners with investigations of drug trafficking and transnational criminal organizations' money laundering schemes. Appendix II provides funding information by project, and appendix III provides details on the state plans' activities and supported organizations. The National Guard counterdrug program is part of DOD's larger counterdrug effort. Congress appropriates funds to DOD's Drug Interdiction and Counterdrug Activities, Defense account, and DOD is authorized to transfer Drug Interdiction account funds to other armed services and defense agencies appropriation accounts. It is from this account that DOD funds the National Guard's participation in domestic interdiction and counterdrug activities. In his fiscal year 2016 budget, the President requested approximately $850.6 million for this account to support DOD-wide drug interdiction efforts. Budget data provided by DOD identify $87.9 million intended for the National Guard counterdrug program's state-specific projects, a little more than 10 percent of the overall fiscal year 2016 Drug Interdiction account request. <2. Funding Was Generally Steady for Past Decade but Decreased in Fiscal Year 2015> The National Guard counterdrug program budget data provided by DOD show that for fiscal years 2004 through 2014 the program's total directed funding ranged between $219.3 million and $242.1 million, with a peak of $247 million in fiscal year 2013, but in fiscal year 2015 funding was reduced substantially.
Congress appropriates funds into DOD's Drug Interdiction account but, through its committee reports, provides direction to DOD on the specific amounts to allocate for the counterdrug program. Based on DOD data, in every year since fiscal year 2004, Congress has directed funding above DOD's requested amount, keeping program amounts generally steady through 2014. In fiscal year 2013, when DOD began to reduce the amount of funding within the budget request for this program in order to prioritize funding for other DOD counterdrug programs, Congress directed program amounts generally comparable to those of prior years. Specifically, in fiscal year 2013, DOD requested $117 million for the National Guard counterdrug program, about a 40 percent decrease from the prior year's request. From fiscal years 2013 to 2016, DOD reduced its budget request for counterdrug intelligence and technology support, as well as for domestic efforts such as those supported by the National Guard, more than for international interdiction support activities. DOD officials stated that by decreasing requested funding for the counterdrug program they planned to address spending limits required by the Budget Control Act of 2011 and to fund counternarcotics programs in locations deemed a priority, such as Central and South America. According to DOD's data, Congress directed $130 million more than requested in fiscal years 2013 and 2014. These additions offset DOD's reduced request and kept overall counterdrug program funding generally steady. DOD's data show that its budget request for the counterdrug program continued to decline, from $112.1 million in fiscal year 2014 to $89.5 million in fiscal year 2015. In fiscal year 2015 Congress directed $86 million more than DOD requested for the program, ultimately leaving the program with lower total funding of $175.5 million. Figure 1 details DOD's budget data on the counterdrug program's congressionally directed funding, including DOD's request and the increases above that request. According to DOD's data, in recent years the program has not obligated all of the funding allocated to it from the Drug Interdiction account. In fiscal years 2004 through 2010 the program obligated at least 95 percent of its allocation. However, from fiscal years 2011 through 2014 the program's obligations fluctuated between 83 percent and 96 percent of DOD's allocations, partly due to the timing and amount of allocations received by the program. Funds transferred or allocated from the Drug Interdiction account to various other DOD drug interdiction accounts or programs, including the National Guard program, can be transferred back to the account upon a determination that all or part of the funds are not necessary and remain unobligated. Once funds are returned to the Drug Interdiction account, they are available for reallocation to other DOD counterdrug programs for obligation. Figure 2 details the counterdrug program's obligations from fiscal years 2004 through 2014. NGB and state counterdrug program officials stated that DOD's internal transfer process for the Drug Interdiction account causes delays in when funds become available for the program, thereby affecting the program's ability to obligate funds for planned activities. For example, state program officials stated that in many cases the program cannot provide long-term analytical support, such as investigative and counterthreat finance analysts, throughout the year, and must wait for additional funding before assigning personnel.
In some instances, the program can offer partial-year support, but some interagency partners may not accept support for only part of the year because it is difficult for them to provide the necessary training and access to appropriate databases for investigative casework before the fiscal year ends and funding for the position is no longer available. DOD is examining whether it can improve the transfer process in order to reduce delays. According to DOD's data, DOD has reallocated some of the National Guard counterdrug program's unobligated funds that were returned to the Drug Interdiction account to other DOD counterdrug programs. Specifically, in fiscal years 2013 and 2014, DOD reallocated a total of $51.8 million of amounts returned to the Drug Interdiction account from the National Guard's counterdrug program to counternarcotic capacity building efforts in the U.S. Africa Command and U.S. Southern Command areas of responsibility. <3. NGB Has Performance Measures, but Does Not Use the Information Collected to Inform State-Level Programs or Oversee the Counterdrug Schools> The NGB has developed performance measures to report on its counterdrug program; however, we found that the information collected is not used to evaluate and inform funding for state-level programs or oversee the counterdrug schools' training. Without performance information to inform funding decisions for state-level programs and oversee the counterdrug schools, DOD and Congress cannot ensure that the counterdrug program achieves its desired results and uses its resources most efficiently. <3.1. NGB Has Developed a Set of Performance Measures to Report on the Counterdrug Program's Activities> In 2012 the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats issued the Counternarcotics and Global Threats Performance Metrics System Standard Operating Procedures to be used in the development and documentation of performance metrics for all DOD counternarcotics activities. In response to the guidance, National Guard counterdrug program officials stated that they developed a set of performance measures for use by their program. In fiscal year 2015 the counterdrug program included 26 performance measures that officials stated they used to evaluate the counterdrug program and report on its aggregate performance. These measures include indicators such as the number of cases supported, analytic products produced, students trained, mobile training courses delivered, and reconnaissance hours flown. Appendix V provides details on each of the 26 measures. Our review of the counterdrug program's fiscal year 2015 performance measures against key attributes of successful performance measures identified by GAO found that the set of measures provided information across the program's broad goals, measured three of the program's five core activities, and had limited overlap with each other. We also found that the individual performance measures were linked to the overall objectives of the program and were focused on measurable goals. Some key attributes, such as clarity, reliability, and objectivity, were reflected to varying degrees, but we found that the National Guard had actions underway to better define and document the program's individual performance measures to improve the clarity and reliability of those individual measures.
In February 2015 National Guard officials completed the Fiscal Year 2015-2016 Counterdrug Analyst Performance Metrics Guide and stated that they were drafting guides for other program activities. <3.2. NGB Does Not Use Its Performance Information to Evaluate and Inform Funding for State-Level Programs and Oversee the Counterdrug Schools> We found that the NGB does not use the performance information it collects to help evaluate and inform funding for state-level programs and oversee the type of training offered by counterdrug schools. We have previously reported that setting useful performance measures can assist oversight; with them, program managers can monitor and evaluate the performance of the program's activities, track how the activities contribute to attaining the program's goals, or identify potential problems and the need for corrective measures. According to leading practices for results-oriented management, to ensure that performance information will be both useful and used in decision making throughout the organization, agencies need to consider users' differing policy and management information needs. Performance measures should be selected specifically on the basis of their ability to inform the decisions made at each organizational level, and they should be appropriate to the responsibilities and control at each level. NGB officials stated that they are using performance information to report on the program's aggregate performance to DOD and to respond to other requests for information, such as regarding whom the program supports. DOD officials further stated that they use performance information on an ad hoc basis to inform the funding request for the Drug Interdiction transfer account, but that they do not collect information that could be used to evaluate the effectiveness of individual state-level programs or could be used in decision making about funding distributions to states. Such information could include a measure of the quality of the support provided by the National Guard to interagency partners, among other things. Instead, NGB officials were making funding distribution decisions for individual state programs based solely on assessments of threat. According to NGB officials, in 2012 they began using a model to determine the severity of the drug threat in each state and using the assessments of threat to determine funding levels for state counterdrug programs to implement their plans. NGB officials stated that to employ the threat-based resourcing model, NGB uses statistics from national-level databases to develop a distribution percentage for each state that reflects its relative drug threat. This percentage is then applied to the funding provided to the National Guard's counterdrug program. In fiscal year 2015 the amount distributed to the states was $146.1 million. Table 1 shows the distribution percentage to the states and territories, and table 8 in appendix VI provides a detailed breakout by state. Moreover, during the course of our review, we found that the performance information collected did not assist the DOD Counternarcotics Program in overseeing the type of training offered by the counterdrug schools. Specifically, the performance measures employed by the NGB focused on the number of students trained and the number of courses available, among other aspects.
The officials stated these measures were not useful in the evaluation of the counterdrug school s training activities because they did not provide information on the type of training being offered, such as whether it had a counterdrug focus. In addition, DOD Counternarcotics Program officials acknowledged that they did not have a full understanding of the counterdrug schools activities. To improve their oversight of the schools, DOD Counternarcotics Program officials began a review in December 2014 of the counterdrug schools activities to assess their training efforts. In May 2015, based on the preliminary findings of the review, the DOD Counternarcotics Program included guidance in its memorandum, Preparation of the Fiscal Year 2016 National Guard State Drug Interdiction and Counterdrug Activities Plan, that clarified the mission of the counterdrug schools and the department s priorities for their training, including that all training offered be explicitly linked to counterdrug efforts. As a result, the counterdrug schools are required to submit annual training plans that detail course offerings for review by the NGB and DOD to ensure that the training is focused on DOD s priorities. However, the guidance did not include any changes to the performance information that would be collected by the NGB on the counterdrug schools. We continue to believe that collecting additional performance information, such as on the type of training offered, could help inform evaluations and identify any need for corrective actions in the future for the counterdrug schools. According to NGB officials, their current performance measures were developed in response to DOD guidance to report on the program s aggregate performance to support DOD s annual performance summary report to ONDCP. NGB officials stated that the guidance did not specifically require them to assess the performance of state-level programs; therefore, they did not fully consider the types of measures or information that would be useful to evaluate the effectiveness of individual state-level programs and oversee the counterdrug schools. NGB officials stated that their performance measures were evolving and they believed incorporating performance information in future funding distribution decisions for state programs would be helpful. Officials stated that they were working to develop an approach that uses performance information to inform future funding decisions. Without performance information to evaluate state-level programs and oversee the counterdrug schools, DOD and Congress cannot ensure that the counterdrug program achieves its desired results and uses its resources most efficiently. <4. Conclusions> The National Guard s counterdrug program was established more than 25 years ago to assist efforts of the Governors of 50 states, the District of Columbia, and three U.S. territories in addressing illicit drug production, trade, and consumption. In recent years DOD has sought to focus its counterdrug efforts on international interdiction support activities with less emphasis on other activities including supporting domestic efforts like the National Guard s counterdrug program. Congress has resisted the reductions to domestic efforts, and has directed increased funding to the program. Given the resources that the program offers to individual states and the interagency partners it supports, it is important to ensure that the program uses these resources efficiently and effectively. 
While threat is an important factor to consider in funding distributions, performance information can also be used to better inform such decisions. DOD and NGB have taken steps to develop performance measures, but DOD has used performance information only on an ad hoc basis to inform the funding request for the Drug Interdiction transfer account, and has not used performance information to evaluate the effectiveness of individual state programs or to oversee training offered by the counterdrug schools. Therefore, the effectiveness of state efforts is not being considered in DOD's funding distribution decisions, and useful information is not being collected to support oversight of the counterdrug schools' training. Without an approach that enables decision makers to objectively judge the performance of all elements of the program, neither DOD nor Congress will have assurance that the counterdrug program is achieving its goals in an effective manner. <5. Recommendation for Executive Action> To ensure that resources are being efficiently applied to meet the National Guard counterdrug program's objectives, we recommend that the Secretary of Defense direct the National Guard Bureau, in consultation with the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats, to take the following two actions: Identify additional information needed to evaluate the performance of the state programs and oversee counterdrug schools' training; and Subsequently collect and use performance information to help inform funding distribution decisions to state programs and to conduct oversight of the training offered by the counterdrug schools. <6. Agency Comments and Our Evaluation> In the written comments on a draft of this report, DOD concurred with our two recommendations and identified specific steps it planned to take to address them. With respect to the first recommendation, to identify additional information needed to evaluate the performance of state programs and to oversee the counterdrug schools' training, DOD stated that it will hold discussions with the counterdrug program's stakeholders to reassess the current performance criteria and to identify new performance criteria to allow it to assess the support the program provides. DOD then will evaluate the criteria to ensure they reflect the current information needs of the program, both internally and externally, and meet national objectives. These steps, once implemented, should help DOD obtain useful information to better inform decision making and to conduct oversight of the program and would satisfy the intent of our recommendation. With respect to the second recommendation, to collect and use performance information to help inform funding distribution decisions to state programs and to conduct oversight of the training offered by the counterdrug schools, DOD stated that it will apply the criteria it identifies to evaluate the effectiveness of each state program in providing support and meeting its objectives. Furthermore, DOD stated that it would take steps to assist states with any needed corrective-action plans. These steps, once implemented, should help to ensure that the program uses resources efficiently and effectively and would satisfy the intent of our recommendation. DOD's comments are printed in their entirety in appendix VII. DOD also provided technical comments, which we incorporated into the report as appropriate. We also provided a draft of this report to DOJ, DHS, and ONDCP for review and comment.
DOJ, DHS, and ONDCP officials provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Attorney General of the United States, and the Director of National Drug Control Policy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Scope and Methodology To address our objectives, we reviewed documentation and interviewed officials from the Department of Defense (DOD) who oversee and manage the National Guard s counterdrug program, select state counterdrug programs, and select interagency partners that receive support from state counterdrug programs. Our analysis focused on the state-level operations of the National Guard s counterdrug program, which includes three state-specific projects: 1) state plans, 2) counterdrug schools, and 3) counterthreat finance. We excluded any counterdrug program projects that were specific to federal operations. Also, we used a nongeneralizable case study approach to obtain the perspectives of state counterdrug program officials and interagency partners receiving support from the program. Specifically, we selected 8 of the 53 participating states and territories identifying 2 states within each of the four counterdrug program regions (selecting 1 state with high and 1 state with low drug threat assessments) that also had a counterdrug school or an international boundary. The 8 states that we included in our review were: Connecticut, Florida, Iowa, Mississippi, Pennsylvania, Texas, Utah, and Washington. In the states selected for case study, we interviewed state counterdrug program officials and officials from the following interagency partners, where applicable: High Intensity Drug Trafficking Areas (HIDTA), Drug Enforcement Administration, Customs and Border Protection, and U.S. Immigration and Customs Enforcement s Homeland Security Investigations. We selected interagency partners based on their receiving support from the counterdrug program and on logistics associated with travel. In addition, we obtained and analyzed information fiscal years 2011 through 2014 from a National Guard counterdrug program database that included descriptive statistics of the number of staff days by mission category, support activities, and supported organization. To ensure the accuracy and reliability of the information from the database, we took steps to review the data fields for consistency and missing data; we found that these data were sufficiently reliable for the purposes of the audit. To identify the changes in funding for the National Guard counterdrug program, we conducted an analysis of relevant appropriations and program budget-related documents provided by DOD for fiscal years 2004 through 2015. We began our analysis with fiscal year 2004 data to ensure that our review included data covering at least a 10-year period. 
To ensure the reliability of our data, we reviewed documentation on funding distributions and financial management policy and interviewed knowledgeable officials about DOD s Drug Interdiction and Counterdrug Activities account, and about how counterdrug program funds are transferred from the account. We also reviewed financial documentation and interviewed DOD, counterdrug program, and interagency partner officials to obtain information on obligations of available funding. We determined that the data were sufficiently reliable for the purposes of this audit. To assess the extent to which the performance information is used to evaluate the counterdrug program s activities, we reviewed documentation and interviewed counterdrug officials about program activities, types of performance information collected, and funding levels for individual state counterdrug programs. First, we evaluated the counterdrug program s 26 fiscal year 2015 performance measures against nine key attributes of successful performance established by GAO. Next, we evaluated the counterdrug program s use of performance information against leading practices for results-oriented management that help agencies develop useful performance measures and use performance information for management decision making as identified by GAO through a review of literature and interviews with experts and staff from five U.S. agencies. Specifically, we interviewed officials from: National Guard Bureau Counterdrug Program o Connecticut Counterdrug Program o Florida Counterdrug Program Multijurisdictional Task Force Training Center Midwest Counterdrug Training Center Regional Counterdrug Training Academy Northeast Counterdrug Training Center o Texas Counterdrug Program o Utah Counterdrug Program o Washington Counterdrug Program Western Region Counterdrug Training Center National Guard Bureau Budget Execution Office Deputy Assistant Secretary of Defense for Counternarcotics and Office of the Undersecretary of Defense, Comptroller Drug Enforcement Administration (DEA) o DEA Miami Division o DEA Houston Division o DEA Philadelphia Division o DEA Denver Division o DEA New Orleans Division o DEA Seattle Division o DEA St. Louis Division o DEA New England Division o DEA Office of Training Federal Bureau of Investigation United States Marshals Service Executive Office for United States Attorneys Department of Homeland Security: Federal Law Enforcement Training Centers Homeland Security Investigations (HSI) o HSI Special Agent in Charge, Miami, Florida o HSI Special Agent in Charge, Seattle, Washington o HSI Special Agent in Charge, Houston, Texas Customs and Border Protection (CBP) o CBP, Miami, Florida Sector Intelligence Unit o CBP, Spokane, Washington Oroville Station o CBP, Spokane, Washington Sector Intelligence Unit o CBP, Laredo, Texas Special Operation Detachment United States Coast Guard High Intensity Drug Trafficking Areas: North Florida HIDTA South Florida HIDTA Rocky Mountain HIDTA Houston HIDTA Philadelphia/Camden HIDTA New England HIDTA Gulf Coast HIDTA Northwest HIDTA Midwest HIDTA Office of National Drug Control Policy We conducted this performance audit from August 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Obligations by State Project The National Guard identifies three state-specific projects as comprising its counterdrug program state plans, counterdrug schools, and counterthreat finance. Table 2 provides the obligations by each state project. Appendix III: Overview of National Guard State Plans Counterdrug Activities and Supported Organizations, Fiscal Years 2011 through 2014 The National Guard s state plans include 15 support activities, which are grouped into five broad mission categories, as shown in table 3. The National Guard counterdrug program collects information on the activities and supported organizations and uses staff days to measure its resource investment. Our analysis of this information found that from fiscal years 2011 through 2014 the state plans invested most of their staff days in the mission categories of technical support and reconnaissance. During this period, the number of staff days invested in civil operations decreased, as shown in figure 3. Of the 15 support activities, investigative case and analyst support was the support activity most frequently provided from fiscal years 2011 through 2014, as shown in table 4. Among the various categories of supported organizations, law enforcement received the most support from the state plans, as shown in table 5. Lastly, the federal agencies to which state plans provided the most support were the Department of Justice and Department of Homeland Security. The specific components that received the most support included the Drug Enforcement Administration, Customs and Border Protection, and Immigration and Customs Enforcement, as shown in table 6. Appendix IV: Overview of DOD s Drug Interdiction and Counterdrug Activities Account Fund Transfer Process After Congress appropriates amounts to the Drug Interdiction account, there are multiple steps by various organizations before the funds are received by each individual state counterdrug program. To begin each transfer process, DOD Counternarcotic Program officials prepare and submit to the Office of the Under Secretary of Defense (Comptroller) a DD1415-3, which details the allocation of funds by appropriation or budget activity account for each program. If no defense appropriations act has been passed and DOD is operating under a continuing resolution, amounts transferred are based on a rate-per-day formula developed by DOD. Once a defense appropriation act is enacted, the Comptroller is required to submit to Congress the department s intended budget execution based on the appropriation act and congressional directions as expressed in House and Senate Appropriation committee reports. This report, which DOD calls the base for reprogramming and transfer authorities (DD1414), is to be submitted no later than 60 days from the enactment of an appropriation. After this baseline is submitted, Comptroller officials review and approve the DD1415-3 and forward it to the Office of Management and Budget. Once approved by the Office of Management and Budget, the Comptroller issues a funding authorization document to transfer funds to the military services appropriation accounts (such as military personnel or operation and maintenance). 
The military services then transfer funds to appropriation accounts managed by the Army National Guard and Air National Guard, which, in turn, distribute the funds to each state National Guard participating in the program. Figure 4 outlines the fund transfer process to the counterdrug program. The National Guard Bureau's Counterdrug Program office coordinates the process involving the DOD Counternarcotics Program, the Army and Air National Guard budget and financial management offices, and the individual state counterdrug programs. Appendix V: National Guard Counterdrug Program's Fiscal Year 2015 Performance Measures In fiscal year 2015 counterdrug program officials used 26 performance measures to report on the program's aggregate performance to DOD and respond to requests for information, as shown in table 7. Appendix VI: Overview of the Threat-Based Resource Model and Funding Amounts by State and Territory Each state within the counterdrug program develops an annual plan of activities, in coordination with the state's Governor and Attorney General, that identifies its counterdrug priorities and how it intends to obligate its available funds. To develop these plans, states use annual guidance from DOD that identifies approved activities for the counterdrug program. For instance, investigative case support, ground and aerial reconnaissance, and counterthreat finance analysis are approved activities. The threat-based resource model uses 22 variables to assess the drug threat across the 53 counterdrug programs. Almost half of the variables are based on information from the National Seizure System database. Other variables are based on information from federal agencies such as the Substance Abuse and Mental Health Services Administration and the Federal Bureau of Investigation. To ensure that every state has a viable counterdrug program, the Chief of the National Guard Bureau established $500,000 as the minimum level of funding for each state. According to counterdrug program officials, this amount enables all the states to maintain some capability to address drug threats while limiting the impact on states with higher threats. Table 8 provides details on the state plans' distribution percentages by state and territory for fiscal year 2015. The amount of funding each state receives depends on that state's distribution percentage and available funds for the state plans project (a simplified illustration of this calculation appears in the sketch following the Related GAO Products list below). Table 9 details the funding distributed to each state and territory in fiscal years 2014 and 2015. Appendix VII: Comments from the Department of Defense <7. DOD's comments refer to GAO report number GAO-15-533. Given that GAO is issuing its final report in fiscal year 2016, it has changed the report number to GAO-16-133.> Appendix VIII: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Rich Geiger (Assistant Director), Tom Jessor, Linda S. Keefer, Susan C. Langley, Amie Steele Lesser, Felicia M. Lopez, Tobin J. McMurdie, Carol D. Petersen, Richard Powelson, Caitlin N. Rice, Michael D. Silver, Sabrina C. Streagle, and Cheryl A. Weissman made key contributions to this report. Related GAO Products Budget Issues: Effects of Budget Uncertainty From Continuing Resolutions on Agency Operations. GAO-13-464T. Washington D.C.: March 13, 2013. Drug Control: Initial Review of the National Strategy and Drug Abuse Prevention and Treatment Programs. GAO-12-744R. Washington D.C.: July 6, 2012.
Office of National Drug Control Policy: Agencies View the Budget Process as Useful for Identifying Priorities, but Challenges Exist. GAO-11-261R. Washington D.C.: May 2, 2011. Drug Control: DOD Needs to Improve Its Performance Measurement System to Better Manage and Oversee Its Counternarcotics Activities. GAO-10-835. Washington D.C.: July 21, 2010. Preliminary Observations on the Department of Defense s Counternarcotics Performance Measurement System. GAO-10-594R. Washington D.C.: April 30, 2010. Continuing Resolutions: Uncertainty Limited Management Options and Increased Workload in Selected Agencies. GAO-09-879. Washington D.C.: September 24, 2009.
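To illustrate the threat-based distribution approach and the $500,000 minimum funding level described in appendix VI above, the following is a simplified sketch. The proportional redistribution step, the function name, and the example figures are our own assumptions for illustration; they do not reproduce NGB's actual model, which derives each state's percentage from 22 threat variables.

```python
# A simplified sketch of a threat-based funding distribution with a minimum
# per-state floor, loosely following appendix VI. The proportional
# redistribution step, the function name, and the example figures are
# assumptions; NGB's actual 22-variable model is not reproduced here.
def distribute_funding(total_funds, threat_shares, floor=500_000):
    """Apply threat-based shares to available funds, then enforce a floor.

    threat_shares: dict mapping state -> share of total assessed threat (sums to 1.0)
    """
    allocations = {state: share * total_funds for state, share in threat_shares.items()}

    # States whose threat-based amount falls below the floor are raised to it;
    # the remaining funds are split proportionally among the other states.
    low = {s for s, amount in allocations.items() if amount < floor}
    remaining = total_funds - floor * len(low)
    high_total = sum(share for s, share in threat_shares.items() if s not in low)
    for state, share in threat_shares.items():
        allocations[state] = floor if state in low else share / high_total * remaining
    return allocations


if __name__ == "__main__":
    # Hypothetical example: three states sharing $10 million of state plans funding.
    shares = {"State A": 0.60, "State B": 0.37, "State C": 0.03}
    print(distribute_funding(10_000_000, shares))
```

In this hypothetical example, State C's threat-based amount ($300,000) falls below the floor and is raised to $500,000, with the remaining funds redistributed to the higher-threat states in proportion to their shares.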
Why GAO Did This Study Since 1989 the National Guard has received hundreds of millions of dollars to help enhance the effectiveness of state-level counterdrug efforts by providing military support to assist interagency partners with their counterdrug activities. The program funds the drug interdiction priorities of each state Governor; counterdrug-related training to interagency partners at five counterdrug schools; and state-level counterthreat finance investigations, all of which are part of DOD's broader counterdrug efforts. Senate Report 113-176 included a provision for GAO to conduct an assessment of the state operations of the National Guard's counterdrug program. This report: (1) identifies the changes in funding for the program since fiscal year 2004, and (2) assesses the extent to which performance information is used to evaluate the program's activities. GAO analyzed the program's budgets and obligations data, performance measures, and program guidance, and interviewed knowledgeable officials. What GAO Found The National Guard Bureau (NGB) counterdrug program's budget data show that funding has ranged from about $219.3 million to $242.1 million in fiscal years 2004 through 2014–with a peak of $247 million in fiscal year 2013–but in fiscal year 2015 funding was reduced substantially. Based on Department of Defense (DOD) data, every year since 2004 Congress has directed funding above the requested amount, thus keeping program amounts steady through 2014. In fiscal year 2013, DOD reported requesting $117 million for the program, about a 40 percent decrease from the prior year's request. While DOD reduced its request, however, Congress in fiscal years 2013 and 2014 directed funding at generally comparable amounts from prior years. In fiscal year 2015 Congress directed less of an increase above DOD's request, leaving the program with lower total funding of $175.5 million. The NGB has developed performance measures to report on its counterdrug program; however, the information collected is not used to evaluate and inform funding for state-level programs or oversee the counterdrug schools' training. GAO has previously reported that setting useful measures is important for oversight; without them, managers cannot monitor and evaluate the performance of programs' activities. NGB officials stated that they developed the current measures in response to DOD guidance to report on the program's aggregate performance and did not fully consider the types of measures or information that would be useful to evaluate individual state-level programs and oversee the counterdrug schools. Without collecting and using useful performance information to evaluate state-level programs and oversee the counterdrug schools, DOD and Congress cannot ensure that the counterdrug program is achieving its desired results and is distributing its funding most efficiently. What GAO Recommends GAO recommends that DOD (1) identify additional information needed to evaluate the performance of state programs and oversee counterdrug schools' training; and (2) subsequently collect and use performance information to help inform funding distribution decisions to state programs and to conduct oversight of the training offered by the counterdrug schools. DOD concurred with GAO's recommendations.
<1. Background> Since the first Winter Olympic Games in 1924, the event has grown from 258 athletes representing 16 countries to, in the case of the 2006 Turin Olympics, approximately 5,000 athletes and coaches from 85 countries. In addition, an estimated 1.5 million spectators and 10,000 media personnel attended the 2006 Winter Olympic Games. While the stated goal of the Olympic movement is to contribute to building a peaceful and better world, its history includes tragedy and terror as well. At the 1972 Munich Games, Palestinian terrorists attacked the Israeli Olympic team, resulting in the deaths of 11 Israeli athletes. The 1996 Atlanta Olympic Games were marred by a pipe-bomb explosion that killed 1 person and injured 110 others. One of the International Olympic Committee requirements for countries bidding to host the Games is to ensure the security of the participating athletes and spectators, which is an increasingly challenging task in today s environment of terrorist threats. According to State documents, Italy spent approximately $110 million on security operations for the 2006 Winter Games. In addition, the Italian government designated 15,000 law enforcement personnel, along with military and intelligence support, to provide for overall security for the Winter Games. Italy s Ministry of Interior designated the Prefect of Turin as the local government authority responsible for providing security inside the official venues of the Winter Games. Italy faced the challenge of hosting an Olympics amid a heightened terrorist threat environment. Al-Qaeda has made threats to coalition partners in Iraq and Afghanistan in the past. Moreover, in March 2004, a terrorist attack on commuter trains in Madrid, Spain, killed nearly 200 people, and, in July 2005, a terrorist attack on commuter trains and a bus in London, England, killed over 50 people and injured more than 700. In addition, there is a known presence of Italy-based international terrorist cells and domestic anarchist groups that actively target multinational corporations, critical infrastructure, and government facilities. Italy has highly advanced antiterrorism capabilities and has recently taken additional antiterrorism measures, such as enactment of improved antiterrorism laws and increased physical security measures. Since 2001, several extremist plots in Milan and Rome have been detected and prevented. In addition to these explicit terror threats, Italy has faced difficult security challenges at other recent major events, such as the meeting of the Group of Eight in Genoa in 2001, at which activists clashed violently with Italian police forces, and the funeral of Pope John Paul II in April 2005. In planning for the 2006 Winter Games, winter conditions presented another set of challenges. The Winter Games were located in the remote, northwest corner of Italy, with venues spread over an extensive land area. Although smaller in scale than the 2004 Athens Summer Games, the 2006 Winter Olympic venues included mountain locations that were as far as 60 miles away from Turin, with limited access routes to these mountain sites. (See fig. 1 for locations of venues for the 2006 Winter Olympic Games in Turin, Italy.) Locating suitable lodging for U.S. security and support personnel near key venues was necessary. Furthermore, the distance from the nearest U.S. presence the U.S. Consulate in Milan, which is located about a 90-minute drive from Turin required the establishment of a temporary U.S. post in Turin to support U.S. 
security efforts and serve as a platform for U.S. activities. Although the host government has the ultimate responsibility for providing security for the Olympics, the United States has a vested interest in ensuring the safety of its citizens in all Olympic locations. In 2001, the United States began planning its security assistance for the 2004 Athens Summer Olympics, responding both to the heightened worldwide anxiety following the September 11 attacks and to Greece's request for international advice on its security plan. Despite widespread fears of a potential terrorist attack on the Olympics, Greece hosted a safe and secure event with no terrorist incidents. With the conclusion of the Athens Games, the United States began planning for security support to the 2006 Turin Winter Olympic and Paralympic Games. This security support is provided by the United States under general executive branch policy guidance and individual agency authorities. For example, State officials cited the use of Presidential Decision Directive 62, which extends the U.S. counterterrorism policy in Presidential Decision Directive 39 to the protection of Americans overseas. State is the lead agency for ensuring the protection of American citizens overseas. According to U.S. officials in Italy, up to 20,000 Americans attended the 2006 Turin Olympics daily. The next Olympics will be the Summer Games in Beijing, China, August 8 to 24, 2008, followed by the Paralympics, September 6 to 17, 2008. Venues for these Games will be spread out across seven cities in China, presenting unique logistical and coordination challenges for security support efforts. According to State officials, over 1 million spectators are expected to attend the 2008 Beijing Games, including a large number of Americans. The next Winter Olympic Games will be held in Vancouver, Canada, February 12 to 28, 2010, followed by the Paralympics, March 12 to 21, 2010. The close proximity of these Games to the United States presents distinctive challenges, such as border security issues. In 2012, the United Kingdom will host the Summer Olympic and Paralympic Games in London. Past terrorist incidents in London and an ongoing terrorist threat climate are likely to present security challenges for these Games. <2. U.S. Security Support Was Based on Italian Security Capabilities, Supported by Coordinated U.S. Government Effort> The security support that the United States provided for the 2006 Winter Games was largely based on an understanding of Italy's advanced security capabilities, gained through a long-standing U.S.-Italian counterterrorism and military partnership. The U.S. Embassy in Rome led the coordinated governmentwide effort, delegating responsibility for the coordination of U.S. activities in Turin to the U.S. Consulate in Milan, and was supported by a Washington, D.C.-based interagency group. <2.1. U.S. Security Support Informed by an Understanding of Italian Security Capabilities, but without a Formal Assessment of Italy's Olympic Security Plans> Following the Athens Games in the summer of 2004, the United States began planning for (1) the security support it would provide to the Italian government and (2) the protection of U.S. citizens who would be participating in or attending the 2006 Winter Games. In October 2004, the United States held a 3-day interagency conference in Milan for Italy- and U.S.-based officials who would be working on the 2006 Winter Games to share lessons learned from the U.S. effort in Athens and to begin determining U.S.
security support for Turin. The United States largely based its security support approach on its understanding of Italian security capabilities, gained from long-standing counterterrorism and military relationships with Italy. In particular, the United States and Italy have an established partnership as members of the North Atlantic Treaty Organization and, more recently, as coalition partners in Iraq. According to U.S. officials, the Italian government s sensitivities about formally sharing certain information limited the United States ability to formally assess Italy s operational plans for providing security for the 2006 Winter Games. Agency officials noted that this made U.S. efforts to plan security and emergency contingencies during the Games more difficult. However, the U.S. Olympic Security Coordinator and other key officials were able to use established relationships with their Italian counterparts to develop a working knowledge of Italy s plans and capabilities for providing security to the 2006 Winter Games and to plan U.S. security efforts. For example, U.S. officials met with their Italian security and law enforcement counterparts to receive information on Italy s security structure and Olympic security integration plan. In addition, over several months in 2005, State reported that more than 20 technical meetings were held between Italian authorities and U.S. representatives from the U.S. Mission in Italy and various federal law enforcement and intelligence agencies to coordinate bilateral cooperation during the Games. Moreover, Italian government representatives held meetings in 2005 with representatives from the United States and other interested governments to discuss Italy s security plans for the Games. For example, in September 2005, the Italian government hosted an international seminar on security concepts for the management of major sporting events, with law enforcement representatives from 11 countries. Furthermore, in October 2005, U.S. representatives were invited by the Italian government to observe its antiterrorism drills testing the efficiency and response capability of the local law enforcement, emergency, and rescue systems in four Italian cities. On the basis of its understanding of Italy s security capabilities, the United States identified specific training and security support that could be provided to support Italy s security efforts. In March 2005, the Italian government requested a consolidated list of the offers of U.S. security assistance, and the interagency working group in Washington identified the available sources and capabilities among the represented agencies to prevent duplication of efforts. In the spring of 2005, this interagency working group provided a comprehensive offer of security support for the 2006 Winter Games, comprising a variety of specific offers from several U.S. agencies. Italy accepted a number of these offers, including crisis management expertise, an assessment of Turin s international airport, and mapping assistance. In addition to direct security assistance to Italy, the United States conducted several exercises to test its own strategies for supporting Italy s security efforts and ensuring the protection of U.S. citizens during the 2006 Winter Games. In November and December 2005, the United States conducted several tabletop exercises to test strategies for ensuring the protection of U.S. 
citizens, including a joint crisis management exercise that focused on a theoretical terrorist attack in Italy and a crisis management exercise held over several days in Rome, Milan, and Turin to test U.S. crisis planning and execution in preparation for the Games. Figure 2 provides a timeline of U.S. security support activities for the 2006 Winter Games. <2.2. U.S. Olympic Security Support Was Coordinated by the U.S. Embassy in Rome and Supported by Interagency Efforts> The U.S. Embassy in Rome, under the leadership of the U.S. Ambassador to Italy, led the coordinated interagency efforts in Italy under one centralized U.S. government mission. The U.S. Ambassador delegated the responsibility for coordinating and overseeing U.S. interagency efforts in Turin to the U.S. Consul General in Milan. Located approximately 90 minutes from Turin, the Consulate in Milan oversaw the establishment of a temporary U.S. presence in Turin to centralize U.S. agency operations and the efforts of key U.S. officials serving as designated point persons for coordinating security and logistical arrangements. To centralize all U.S. activities in Turin leading up to and during the Games, the U.S. Mission in Italy established a U.S. Olympic Coordination Office in Turin to coordinate U.S. security support activities. Under the direction of the U.S. Ambassador and through the U.S. Consul General in Milan, this office served as the center for U.S. security operations and other activities. In particular, during the Games, this office housed a U.S. Olympic command group, comprising senior representatives from State, DOD, and the Federal Bureau of Investigation (FBI). The U.S. command group was responsible for providing, in the event of a request for assistance by the Italian government, specialized expertise in a variety of areas, including security operations, crisis operations, terrorism investigations, consequence management, and intelligence collection and dissemination. In addition, this office provided limited consular services to American citizens and included a public diplomacy office to liaise with the press and support VIP visits during the Games. U.S. efforts in Italy were supported by a Washington, D.C.-based interagency working group, the International Athletic Events Security Coordinating Group, which included representatives from the intelligence community, State, DHS, DOD, DOE, and DOJ, among others. Specifically, State's Bureau of Diplomatic Security (DS) and Office of the Coordinator for Counterterrorism serve as cochairs of this interagency working group. Chaired by State, the interagency working group facilitated and coordinated, on the domestic side, agencies' contributions for the 2006 Winter Games. While this interagency working group has been a useful forum for coordinating the domestic side of U.S. efforts in providing security support to overseas athletic events, it operates without written operational guidance and without the authority for tasking participating agencies in planning for future Olympic Games, according to State and DOJ officials. Specifically, the interagency working group does not have a charter or mission statement that establishes the roles and responsibilities of this group and its members. U.S. officials from State and DOJ indicated that, although U.S. support for Turin was coordinated through the interagency working group, the group's lack of clear authority creates confusion over what direction and guidance it can provide for U.S. operations in support of future Olympic Games.
This confusion has impacted the efforts of some subgroups formed, in late 2004, by the interagency working group to provide guidance in several key areas. For example, of the subgroups that focus on logistics, transportation security, law enforcement, and intelligence support, only the intelligence support subgroup meets on a regular basis. The subgroup for law enforcement support met only a few times, in part due to the lack of clearly defined roles and responsibilities of participating members. According to a State official from DS, the interagency working group relied on U.S. officials in Italy to inform the group of what support was needed. The same official indicated that clearly defined authorities and responsibilities for the group s participating agencies may support the implementation of effective subgroups in the future. In addition, DS established a major events coordination unit after the 2004 Athens Summer Games, in part to coordinate U.S. security support for major sporting events overseas, according to a State official. However, this unit does not have written guidance for implementing coordination responsibilities for future Olympic events and other major sporting events. Without formal guidance, planning for future Olympic events could be complicated, as roles and responsibilities will have to be redefined on a case-by-case basis. Moreover, given the regular turnover of key staff, such written guidance could facilitate the continuity of future planning efforts. To plan and coordinate U.S. security support operations, State and DOJ officials in Turin worked closely with representatives from the interagency working group. However, these officials in Turin experienced difficulties in planning for interagency operations, in part due to problems in obtaining timely information and decisions from the agencies. For example, State and DOJ officials had difficulty identifying requirements for operations, such as space and classification requirements, due to communication challenges with the interagency community in Washington. According to State officials in Italy and Washington, the lack of proper communication capabilities in the U.S. Olympic Coordination Office in Turin made it difficult to obtain timely information from the agencies. The interagency working group worked to coordinate responses to these officials in Turin. However, State and FBI officials in Turin indicated that many of the agencies did not fully understand the communication infrastructure and logistical requirements until they sent representatives to Turin in the fall of 2005. FBI officials have stated that although the interagency working group works to support the needs of the U.S. effort overseas, it has no authority to task agencies to meet deadlines, which leads to confusion over what direction and guidance should be followed or given by this group. <3. Several U.S. Agencies Contributed to Security Support for 2006 Winter Games; United States Spent Millions on Security Support Activities, but Lacks Formal Mechanism for Coordinating Financial Requirements> Several U.S. agencies contributed to the U.S. security support effort in Turin, identifying more than $16 million in costs over fiscal years 2005 and 2006 to arrange and provide for this support. State initially paid for agencies shared costs, which were reimbursed by funds from the participating agencies; however, no formal mechanism exists for coordinating financial requirements associated with providing security support. <3.1. Several U.S. 
Agencies Provided Security Support for the 2006 Winter Games> Approximately 20 U.S. agencies, or their component entities, contributed to security efforts for the 2006 Winter Games. These contributions ranged from crisis management and investigative expertise to the provision of equipment, training, and communications and logistical support. Key agencies that contributed to the U.S. effort in Turin included State, DHS, DOD, DOE, DOJ, and intelligence agencies. State coordinated the U.S. interagency efforts in both Italy and Washington, D.C., and also provided security advice and other assistance to U.S. athletes, spectators, and commercial investors. See table 1 for key contributions of U.S. agencies for the 2006 Winter Games. State operated under Presidential Decision Directives 62 and 39, which extend U.S. responsibility for protection of Americans overseas and direct State as the lead agency to ensure the protection of American citizens overseas. Furthermore, the Omnibus Diplomatic Security and Antiterrorism Act of 1986 directs DS to develop and implement policies and programs for the security of U.S. government operations, including the protection of all U.S. government personnel (other than those under military command) on official duty abroad, and the establishment and operation of security functions at all U.S. government missions. DS is responsible for the establishment and operation of post security and protective functions abroad, and for liaison with host nation officials to ensure the safety of official U.S. citizens. The Bureau of Consular Affairs is responsible for assisting private Americans traveling and residing abroad. Under State's leadership, other agencies' individual authorities were used to provide specific assistance to the Italian government. For example, the Transportation Security Administration (TSA) has the authority under the Aviation and Transportation Security Act to deploy federal air marshals on select flights from the United States to Italy, and TSA did so for the 2006 Winter Games. <3.2. United States Spent Millions, but Lacks a Formal Mechanism for Coordinating Financial Requirements for Security Support> We surveyed the U.S. agencies identified as contributing security support in advance of and during the 2006 Winter Games. These agencies identified more than $16 million in costs in fiscal years 2005 and 2006 to arrange and provide security support activities for the 2006 Winter Games, with funds from multiple accounts. U.S. agencies did not receive specific Olympic-related appropriations during this period. Of the $16 million, agencies reported to us that they spent more than $5 million in travel costs, including airfare, lodging, and per diem costs for staff who traveled overseas in 2005 and 2006 to provide security support for the 2006 Winter Games. The reported costs during this period do not capture the entirety of costs for activities in support of the 2006 Winter Games. For example, while reported costs include the salaries of key personnel who filled Olympic-related coordination roles, they do not capture the salaries and benefits of other U.S. officials who worked to support the U.S. effort for the Games as part of their regular duties. State paid for lodging and other administrative support needs associated with establishing U.S. operations in Turin in advance, often to secure limited housing at a lower rate, and these costs were later reimbursed by the participating agencies. Specifically, the U.S.
Embassy in Rome paid for initial deposits on hotels because some agencies in Athens had struggled to identify available funding, often several years in advance of the Games, for their housing and logistics needs. For Turin, some agencies provided funds to State in advance of the Games, particularly for lodging deposits, while additional reimbursements were made after the Games. According to a State finance official in Rome, State spent $140,000 on lodging contracts and $720,000 on joint administrative services associated with U.S. interagency operations in Turin. The U.S. Embassy in Rome was later reimbursed by participating agencies, including State, for their portion of these joint administrative services. According to State officials in Italy, although the U.S. Embassy in Rome was able to fund these expenses, it struggled to cover them, particularly as costs rose due to the changing requirements of the agencies in outfitting suitable space for their operations. These changing requirements made it difficult for budget personnel at the U.S. Embassy in Rome to identify total joint administrative costs in order to obtain funds from State and other agencies in a timely manner. Although the interagency working group coordinates the domestic side of agency support for U.S. efforts at major international sporting events, it does not have a formal mechanism for addressing funding issues associated with providing this support. State and DOJ officials told us that it would be easier to plan and budget for future Olympic security support activities overseas, which often begin several years in advance of the Games, if a framework were available for identifying costs and determining how these costs will be funded as early as possible. Such a framework would also be useful for anticipating resource needs, coordinating budgetary requests, and addressing potential funding issues associated with providing U.S. security support to future overseas Games. Agencies have reported their expenditures associated with providing security support for both the 2004 Summer Games in Athens and the 2006 Winter Games in Turin. Although the total reported expenditures for providing security support to these overseas Games are not directly comparable, in part due to the differing sizes of the Games and the differing nature of U.S. security support, they can be helpful in identifying future costs. For both Games, State and DOD reported the two largest portions of costs associated with providing U.S. security support. For the 2004 Summer Games, State and DOD spent $15 million and $12.2 million, respectively. For the 2006 Winter Games, State and DOD spent $6.9 million and $6.6 million, respectively. See figure 3 for key agencies' reported expenditures for security support to the 2004 Summer Games and the 2006 Winter Games. The nature of U.S. security support provided by key agencies differed between the 2004 Summer Games in Athens and the 2006 Winter Games in Turin. For the Athens Games, the majority of costs identified by the agencies were travel costs for U.S. personnel supporting the Games and for training programs provided to Greek officials and security personnel. Agencies reported that they spent more than $9 million on training programs provided to Greek officials and security personnel, including the costs for building and executing the consequence management military exercises and FBI forensics training, as well as for translating training materials and providing translators at the training sessions.
For the Turin Games, U.S. agencies reported that they spent $95,000 on training programs for Italian officials and security personnel. As previously mentioned, the majority of the Turin Games costs identified by the agencies were for U.S. personnel travel and for salary, benefits, and related expenditures for staff who were hired to fill Olympic-related coordination roles. <4. Security Planning Lessons Learned Were Applied in Turin and Additional Lessons Were Identified for the Beijing and Other Future Games> Key lessons learned from the 2004 Summer Games were applied in the planning efforts for Turin, including (1) planning early for U.S. security support, (2) designating key U.S. officials to lead and deliver unified messages, and (3) centralizing U.S. resources and interagency operations. U.S. agencies are currently collecting lessons learned from the Turin Games, for distribution to agencies involved in security planning for the Beijing Games and other future Olympic Games. According to U.S. officials involved in the Turin Games, these lessons include the importance of (1) establishing a fully equipped, temporary operations center at the location of the Olympics when a U.S. presence is not nearby; (2) establishing clear roles and responsibilities for U.S. agencies in event planning and crisis response efforts; and (3) planning for Olympic-related expenditures over several fiscal years. <4.1. U.S. Government Agencies Applied Key Lessons Learned from the Athens Games to the Turin Games> As we reported in 2005, key lessons learned from the Athens Games that were highlighted in numerous agency after-action reports and in an interagency lessons learned conference in Milan were applied to the security planning for the Turin Games. These lessons included the importance of (1) planning early for U.S. security support, (2) designating key U.S. officials to lead efforts and deliver unified messages, and (3) centralizing U.S. agency operations and intelligence activities. <4.1.1. Planning Early for U.S. Security Support Activities> Many agency after-action reports from Athens and U.S. officials' comments indicated the importance of planning early for providing crisis response support, counterterrorism and intelligence support, and other capabilities, coupled with an understanding of host country security capabilities that an existing and cooperative bilateral relationship affords. Such early insight enables advance planning of baseline support, including logistics as well as training and military exercises to enhance the host country's capabilities. Furthermore, early planning of baseline U.S. support enables agencies to coordinate their efforts and plan more efficiently and effectively, including arranging accommodations, vehicle rentals, and communications infrastructure. For example, advance notification of the expected U.S. agency presence would allow for planning of support infrastructure, including the operations and intelligence center. U.S. officials planning for the Turin Games identified the importance of this lesson and began planning immediately after the Athens Games, almost 1½ years in advance of the Turin Games. This lesson is being applied to the 2008 Beijing Games, as the United States has already begun its planning efforts over 2 years in advance of the Games. According to U.S. officials in Beijing, U.S. officials in Greece; Italy; and Washington, D.C., have shared this lesson with their counterparts in Beijing. The U.S.
Mission in Beijing is taking steps to plan for baseline support and identify the types of security support that the United States may provide for the Beijing Games. While U.S. agencies are focusing on the Beijing Games, they also are beginning to assess potential roles for U.S. security support for the 2010 Vancouver Winter Games. These plans are still in the early stages, although bilateral U.S.-Canada state and federal security and transportation officials have already met to discuss Canada's Olympic planning process. The Vancouver Games, located in close proximity to the U.S. border, will present new and different challenges for U.S. security support, such as cross-border security issues. <4.1.2. Designating Key U.S. Officials to Lead Efforts and Present Unified Message> The designation of certain U.S. officials to serve as point persons for U.S. security support efforts is another key lesson from Athens that was applied in Turin. In Athens, the U.S. Embassy had designated individuals to be responsible for political, security, and logistics arrangements, which helped to avoid separate requests for assistance from U.S. agencies and minimized overlap among and overreach by participating U.S. agencies. Athens- and Washington-based officials recommended this strategy for future use. In September 2004, the U.S. Ambassador to Italy delegated organizational responsibility and overall coordination authority for U.S. efforts in Turin to the U.S. Consul General in Milan. In November 2004, State appointed a U.S. Olympic Security Coordinator to serve in Turin as a U.S. focal point for contacts with the host government and to work with the Consul General to develop and communicate a coordinated U.S. message, specifically on matters related to security support. This individual was tasked with crafting and ensuring a consistent message and setting consistent expectations for the host country and multilateral community regarding planned U.S. security support efforts. In addition, in January 2005, an FBI liaison arrived in Italy to serve as the FBI's point of contact for its security support efforts in Turin. According to State and FBI officials, the U.S. Olympic Security Coordinator and the FBI liaison worked closely together to plan for and coordinate U.S. security support operations in Turin. To coordinate the logistical arrangements and needs for U.S. operations in Turin, State appointed a U.S. Olympic Coordinator, who arrived in April 2005. This individual served as a U.S. focal point for contacts with the host government, the Turin Olympic Organizing Committee, and the U.S. Olympic Committee and worked with the U.S. Consul General in Milan to develop and communicate interagency information in a coordinated and understandable way. In addition, a dedicated Web site was developed as a ready source of information for Americans on security matters, while also offering helpful advice on other matters, such as how to replace lost passports and locate English-speaking pharmacies. This lesson is being applied to the 2008 Beijing Games through State's appointment of an Olympic Coordinator in June 2005, a Minister Counselor for Olympic Coordination in December 2005, and a Deputy Olympic Security Coordinator in January 2006, and through the U.S. Ambassador's designation of an Olympic Coordination Office at the U.S. Mission in Beijing to coordinate all arrangements (political, security, and logistical) for U.S. security support to the 2008 Summer Games. According to U.S.
officials involved in planning for the 2008 Summer Games, providing consistent, clear, and targeted information for Beijing is needed to avert possible confusion within the Chinese government regarding which U.S. agency to speak with to obtain specific assistance. The strategy also will help ensure that U.S. citizens and interests receive consistent information on security and other critical issues. <4.1.3. Centralizing U.S. Activities in One Location> Many U.S. officials noted that the key lesson from Athens that was applied in Turin was the centralization of all U.S. activities in one location. U.S. officials involved in the Athens Games recommended that operations and intelligence centers for future Olympics be colocated to ensure the efficient delivery and dissemination of information among U.S. agencies. U.S. officials planning for Turin identified the importance of this lesson and planned to better centralize resources by colocating all participating U.S. agencies and their functions in one facility in Turin, including operations and intelligence activities and consular services for U.S. citizens. According to U.S. officials who worked on the Turin Games, the colocation of all U.S. agencies and activities in one facility resulted in good coordination, and eliminated many planning and operations problems that had been experienced in Athens. This lesson has been communicated by Washington-, Athens-, and Italy- based personnel to their counterparts in China and has been incorporated into planning efforts for the Beijing Games. According to U.S. officials in Beijing, they are following the Turin model of centralizing U.S. resources, to coordinate interagency needs that will be specific to the Beijing Games and to identify any training or security support that may be provided to the Chinese government. By July 2006, U.S. officials in Beijing plan to have established a U.S. Olympic Coordination Office outside of the U.S. Embassy in Beijing to coordinate U.S. operations leading up to and during the Beijing Games. <4.2. U.S. Agencies Identified Additional Lessons Learned in Turin> U.S. agencies have begun to collect lessons learned from the Turin Games and disseminate them to their Beijing Games counterparts. According to U.S. officials involved in the Turin Games, key lessons from Turin included the importance of (1) establishing a temporary, fully equipped, operations center at the location of the Olympics when a U.S. presence is not nearby; (2) establishing clear roles and responsibilities for U.S. agencies in event planning and operations; and (3) planning early for Olympic-related costs. Officials at State, DOJ, and other key agencies are currently completing after-action reports that are expected to highlight aspects of security support that went well and should be replicated in the future, where feasible, and what aspects could be improved upon. At the time of our review, State and DOJ expected to complete their after-action reports in June 2006. In addition, the Washington-based interagency working group is completing an after-action report that is expected to discuss issues specific to the support provided by this group. According to State, the interagency working group s after-action report is expected to be completed later this year. <4.2.1. Establishing a U.S. Operations Center at the Location of Games> The lack of a U.S. presence in Turin demonstrated the importance of establishing a fully equipped operations center at the location of the Games. 
Acquiring and outfitting suitable space for an interagency operations center require advance planning, particularly when a U.S. presence is not nearby. In Turin, which is a 90-minute drive from the nearest U.S. Consulate, the U.S. Mission faced unique challenges in establishing a temporary but suitable space for centralizing interagency operations, particularly those related to logistics, communications, and resources. For example, the U.S. Consulate in Milan used its staff to provide logistical support to the U.S. coordinators in Turin, such as the establishment of work space and other administrative support services. Since the U.S. Olympic Coordination Office was not established until July 2005, the U.S. coordinators in Turin worked from their homes and traveled between Milan and Turin to coordinate the U.S. efforts. In addition, proper space and classification requirements of participating agencies were difficult to identify in early planning efforts, in part due to the lack of proper communication capabilities between U.S. officials in Turin and U.S. agencies in Washington, D.C. Agency officials in Italy and Washington attributed this difficulty, in part, to this being the first time that the United States had attempted to establish a temporary U.S. facility to coordinate security support provided by all participating U.S. agencies. Although these challenges were resolved in time for the Turin Games, U.S. officials in Italy and Washington stated that authoritative decision making is necessary for budgeting and identifying requirements for setting up an interagency operations center. Due to the presence of a U.S. Embassy in Beijing and three U.S. Consulates near other Olympic venues, U.S. operations in Beijing will not require the establishment of a fully equipped U.S. operations center. However, a shortage of space at the U.S. Mission requires the establishment of a U.S. Olympic Coordination Office outside of the U.S. Embassy in Beijing. U.S. officials in Beijing and at the interagency working group in Washington have begun discussing the communication, infrastructure, and other logistical requirements for centralizing and coordinating U.S. agency security support efforts before and during the Beijing Games. In addition, U.S. officials have held preplanning discussions for the 2010 Vancouver Games regarding work space and operating requirements. <4.2.2. Establishing Clear Roles and Responsibilities for U.S. Agencies> The Turin Games and, to some degree, the Athens Games demonstrated the importance of establishing clear roles and responsibilities for U.S. agencies in the planning and operational stages of U.S. security support efforts. While security support for the Turin Games was generally well coordinated, U.S. agency officials at State and DOJ have stated that the overall U.S. effort lacks a clear strategy for security support operations at future overseas sporting events. These officials indicated that clear guidance for U.S. agencies' roles and responsibilities would identify authorities for decision making and responsibilities during both the planning and operational stages of the U.S. efforts. U.S. officials in Turin prepared an operational plan, approved by the U.S. Mission in Rome, which outlined the missions of all participating agencies and identified reporting authorities for U.S. operations.
However, according to State and DOJ officials, although State is the lead agency for ensuring the protection of American citizens overseas, the United States does not have a strategy that clearly outlines the authorities responsible for planning operations at future Olympic Games. <4.2.3. Planning Early for Several Years of Olympic-Related Costs> U.S. support for the Turin Games demonstrated the importance of planning early for Olympic-related costs. In particular, State and DOJ officials noted the importance of identifying funding sources early to make advance payments for housing and logistical needs. In Turin, State struggled to identify funds to secure space, communication, and transportation arrangements, among other expenses, for interagency operations. Although U.S. officials in Italy were able to secure funding for housing and space for U.S. operations, they indicated that it was difficult to obtain timely decisions from the interagency community to budget and identify requirements for the establishment of a U.S. presence in Turin. State officials in Italy indicated that they were unable to address these issues until the fall of 2005, when agency representatives came to Italy for operational planning meetings. According to a State finance official in Italy, funds for the joint administrative costs were easier to obtain once the U.S. coordinators in Turin and the interagency community were able to identify operational requirements. In addition, several U.S. officials in Italy and Washington, D.C., stated that, for future overseas Games, it would be easier for agencies to identify and plan for their portion of Olympic-related expenditures if a framework were available for identifying costs and addressing funding issues associated with providing security support. U.S. officials in Italy and Washington, D.C., have shared this lesson with their counterparts in Beijing. According to U.S. officials in Beijing, they have already begun to address housing and logistics needs, such as planning to make initial deposits on hotels early to avoid high costs for accommodations as the Games draw nearer. However, these officials indicated potential problems with identifying funds early enough to cover expenditures for this fiscal year. <5. Planning Efforts Are Under Way to Identify U.S. Security Support for 2008 Beijing Games; Efforts Face Unique Challenges> The United States is currently taking steps to coordinate a U.S. security presence and identify the types of security support that the United States may provide for the 2008 Beijing Games. U.S.-Chinese counterterrorism cooperation is limited, and U.S. officials have stated that they lack knowledge of China's capabilities to handle security for the Olympics. In addition, technology transfer and human rights issues present new and different challenges for U.S. security support to these Games. The U.S. government, led by State, is actively working to identify and establish a U.S. security presence to support the interests of its athletes, spectators, and commercial investors during the Games. The U.S. Ambassador to China has designated a U.S. Olympic Coordination Office to be responsible for all arrangements (political, security, and logistical) of U.S. efforts for the Beijing Games. In January 2006, State appointed a U.S. Deputy Olympic Security Coordinator to serve in this new office as a U.S. government point person for U.S. security support for the Beijing Games. In addition, the U.S.
Olympic Coordinator and the Minister Counselor for Olympic Coordination, appointed by State in June and December 2005, respectively, serve as the point persons for logistical arrangements of U.S. efforts. Both the U.S. Deputy Olympic Security Coordinator and the U.S. Olympic Coordinator were in Turin to participate in U.S. security support for the 2006 Winter Games and to learn from their counterparts in Turin. To ensure the safety of U.S. athletes, spectators, and commercial investors, State has taken steps to identify and secure logistical support. To help identify necessary housing and mitigate high prices for accommodations, State has begun to identify housing options for U.S. personnel, coordinating through the interagency group for estimates of personnel to be temporarily assigned to Beijing during the Games. As of May 2006, China has not yet requested U.S. security assistance, but State officials have received inquiries from Chinese officials regarding Olympic security. As of April 2006, discussions between the United States and China were under way for an assessment to identify security needs and U.S. security support for the Beijing Games, according to U.S. officials in Beijing and Washington, D.C. In addition, a working group has been established between U.S. and Chinese counterparts to discuss issues related to the operational and intelligence side of security. U.S. officials have stated that the Chinese have recognized the large size of the U.S. team and its associated security risks and concerns. Although recent steps have been taken, U.S. officials have stated that they lack knowledge of China's capabilities to handle security for the 2008 Summer Games. In addition, U.S. officials from State and DOJ have stated that they are uncertain about the extent of assistance China may request or permit from outside sources. Moreover, U.S. and Chinese counterterrorism cooperation is limited, and military relations have only recently resumed. In July 2003, China joined the U.S. Container Security Initiative, and, in November 2005, the United States and China signed an agreement related to the U.S. Megaports Initiative, allowing for the installation of special equipment at Chinese ports to detect hidden shipments of nuclear and other radioactive materials. The United States has recently resumed, under the current administration, military-to-military contacts with China. In planning for the 2008 Summer Games in Beijing, logistical challenges as well as technology transfer and human rights issues present unique challenges for U.S. security support. The location of the Beijing Games presents unique logistical challenges in coordinating U.S. security support. Whereas past Summer Games have been centered in and around the host city, the venues for the Beijing Games will be spread across seven Chinese cities along the country's eastern border, presenting potential communication challenges for interagency operations between the U.S. Embassy in Beijing and U.S. Consulates located near Olympic venue sites. Figure 4 presents the seven venue cities for the Beijing Games (Beijing, Qingdao, Hong Kong, Shanghai, Tianjin, Shenyang, and Qinhuangdao) and the U.S. embassy and three consulates located at Olympic venue cities. U.S. officials have stated that, in contrast to preparations for the 2004 Summer Games in Athens, the Chinese are much further along in planning for the 2008 Summer Games, and these officials anticipate that the venues will be completed on time or ahead of schedule.
To prepare for the 2008 Beijing Games, China is planning to host several events in 2007 to test its preparations for major event operations. In addition, any requests for equipment or technology to support security efforts in China must be addressed under U.S. requirements for protection against sensitive technology transfers, because U.S. sanctions deny the export of defense articles/services, crime control equipment, and satellites to China. A presidential waiver for exports of equipment for the security of the Beijing Olympics may be considered. U.S. policy makers and human rights groups have also expressed concern about several human rights issues in China, including freedom of information, freedom of religion, and protection of ethnic and minority groups. <6. Conclusions> In a climate of increased concerns about international terrorism, ensuring the protection of U.S. interests at future Olympic Games overseas will continue to be a priority for the United States. For such future Games, U.S. agencies are likely to continue providing support to host governments in identifying potential security threats and developing strategies to protect U.S. athletes, spectators, and commercial investors several years in advance of and throughout the Olympics. Although each Olympic Games has its own set of unique security requirements, future coordination of U.S. security support efforts for the Games, under the leadership of State, should efficiently and effectively capture the expertise, knowledge, and resource requirements of all U.S. agencies. However, there is currently no formal framework for guiding the development and implementation of U.S. security support for such Games, particularly the coordinated financing of U.S. security support and operations. <7. Recommendations> To enhance planning and preparations for future overseas Olympic and Paralympic Games, we recommend that the Secretary of State, in consultation with members of the International Athletic Events Security Coordinating Group, take the following two actions: Develop written guidance for providing U.S. government security support to future Games. This guidance should identify key personnel, target dates for their assignment, their roles and responsibilities, and key steps for the U.S. Mission and regional bureau to undertake in preparing for and leading the U.S. efforts at future Games. To formalize the process for providing security support overseas, we also recommend that State, in consultation with members of the International Athletic Events Security Coordinating Group, consider establishing a charter and mission statement for this group that identifies authorities and responsibilities for coordinating and supporting U.S. security efforts at future Games. Develop a finance subgroup as part of the International Athletic Events Security Coordinating Group, which would bring together budgetary personnel from the various agencies or component entities that contribute to security efforts for overseas Games. A formal mechanism, such as a finance subgroup with established responsibilities, would help the agencies plan for anticipated resource needs, coordinate their budget requests, and address potential funding issues for U.S. security support at future Games. <8. Agency Comments> We provided a draft of this report to the Secretaries of Defense, Energy, Homeland Security, and State and to the Attorney General for their review and comment.
The Department of State provided written comments on the draft report, which are reprinted in appendix II. State said that it agreed with our findings and recommendations, and that it is working to develop a more efficient plan for coordinating the planning and implementation of U.S. security support at future major events overseas, including the development of written guidance and identified roles and responsibilities for interagency working group members, through an after-action review of the International Athletic Events Security Coordinating Group and by working with the National Security Council's Counterterrorism Security Group. Furthermore, State said that the interagency working group has expanded its working subgroups and is considering the inclusion of a budget subgroup to address potential funding issues for U.S. security support at future Olympic Games. State also provided technical comments, which we incorporated where appropriate. The Departments of Defense and Justice did not provide written comments on the draft report; however, they provided technical comments, which we incorporated where appropriate. The Departments of Energy and Homeland Security did not provide written or technical comments. We are sending copies of this report to interested congressional committees, the Secretary of State, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of Energy, and the Attorney General. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology To fulfill our objectives in identifying U.S. security strategies in providing security support for the 2006 Winter Games, the various roles and additional costs of the U.S. agencies involved, and the lessons they learned in supporting the Games, we obtained and reviewed several documents, such as available operations and mission plans, security situation reports, and monthly activity reports. In addition, we interviewed officials at the Departments of State (State), Justice (DOJ), Homeland Security (DHS), Defense (DOD), and Energy (DOE) and at certain intelligence agencies. We also conducted fieldwork in Rome, Milan, and Turin, Italy. At State, we interviewed officials in the Bureau of Diplomatic Security; the Office of the Coordinator for Counterterrorism; the Overseas Security Advisory Council; the Bureau of European and Eurasian Affairs; Consular Affairs; and Public Affairs. We also interviewed the U.S. Olympic Security Coordinator and the U.S. Olympic Coordinator. At DOJ, we interviewed officials in the Criminal Division and the Federal Bureau of Investigation (FBI) Counterterrorism Division, including the FBI's Olympic coordinator, who served as its liaison in Turin through the operational period of the Games. At DHS, we met with officials from the Transportation Security Administration, the U.S. Secret Service, and the Federal Air Marshal Service. At DOD, we spoke with officials from the Office of the Secretary of Defense and European Command. At DOE, we spoke with officials from the National Nuclear Security Administration.
Finally, we regularly attended and met with the interagency working group (the International Athletic Events Security Coordinating Group) that includes all agencies involved in providing support to international sporting events overseas. During our fieldwork in Italy in November 2005, we obtained documents and interviewed key U.S. officials from the previously mentioned agencies. We obtained and reviewed key documents, such as operational and mission plans. In Rome, we interviewed U.S. officials, including the Deputy Chief of Mission, Regional Security Officer, Minister Counselor for Management Affairs, Financial Management Officer, Information Management Officer, Legal Attaché, Public Affairs Officer, Immigration and Customs Enforcement Attaché, Transportation Security Administration representative, Defense Attaché, and Consular Officer. Additionally, we attended an interagency operations and capabilities presentation for the Chief of Mission. Also, we met with representatives from the Italian Ministry of Interior to obtain the Italian government's perspective on the security support provided by the United States. During our fieldwork in Milan, we interviewed U.S. officials, including the Consul General, Milan; Management Officer; Vice Consul; Public Affairs Officers; and Consular Officer. In Turin, we interviewed the U.S. Olympic Coordinator and the U.S. Olympic Security Coordinator. We also visited the U.S. Olympic Coordination Center in Turin and observed preparations for outfitting the center as the planned operations and intelligence center. Additionally, to understand the challenges associated with providing security support to the distant Olympic venues, we visited several of the Olympic venue sites, including Palavela, Pragelato, Sestriere, and Bardonecchia, Italy. To determine cost estimates of U.S. security support to the 2006 Winter Games, we developed a data collection instrument (DCI), based on the previous DCI we used to obtain cost estimates for the 2004 Athens Games, to survey agencies identified as contributing to the U.S. effort. A draft DCI was pretested on two U.S. government agencies. In November 2005, we sent a preliminary DCI to agencies identified by State as being involved in the U.S. security support effort and obtained 11 responses. In March 2006, we sent a final DCI to the agencies previously identified and obtained 20 responses. Except for the National Geospatial-Intelligence Agency, the intelligence community entities did not provide a response to our final DCI on costs for the 2006 Winter Games. Our DCI asked agencies to identify how they collected and tracked the data on costs. We conducted follow-ups with the agencies to clarify information in their responses. We observed that not all agency components collect and track data in a consistent manner. Furthermore, the DCI did not attempt to gather information on the costs of personnel salaries, which are presumed to be a significant outlay for the agency components involved. To assess the reliability of the estimates provided, we compared the preliminary results with the final results and compared the 2006 results with those for 2004. In addition, we considered the cost factors cited by the agencies in relation to the sums they reported and conducted follow-ups with the agencies to clarify any questions that arose.
We determined that these data were sufficiently reliable to be reported in aggregated form, rounded to millions, and attributed to the agencies, as estimated cost outlays and by category of expenditure, but not in precise, detailed form. To assess how lessons learned in supporting Greece were applied to the Turin Games, we gathered information from the various agencies previously mentioned; reviewed operations plans; attended meetings of the State-chaired interagency working group in Washington, D.C.; and conducted fieldwork in Rome, Milan, Turin, and mountain areas of Italy. To identify lessons learned from the Turin Games, we gathered information from the various agencies and attended meetings of the interagency working group. Although after-action reports were not completed and available at the time of our audit, to identify lessons learned, we interviewed key officials at State, DOJ, the U.S. Consulate Milan, and the U.S. Coordination Center in Turin and attended the interagency working group meetings. To identify how lessons learned in Turin are being applied to the 2008 Beijing Games, we interviewed U.S. officials from State and DOJ and the U.S. Olympic Coordinator and U.S. Deputy Olympic Security Coordinator at the U.S. Mission in Beijing. To identify efforts under way for providing support to the 2008 Summer Games in Beijing, we gathered information from the various agencies previously mentioned; reviewed China s Mission Performance Plan; attended meetings of the State-chaired interagency working group in Washington, D.C.; and interviewed the Deputy Olympic Security Coordinator and Olympic Coordinator at the U.S. Mission in Beijing. We conducted our work from September 2005 to May 2006 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of State Appendix III: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> Key contributors to this report included Diana Glod, Monica Brym, and Dorian L. Herring. Technical assistance was provided by Jason Bair, Joe Carney, Martin de Alteris, Etana Finkler, Ernie Jackson, Jena Sinkfield, George Taylor, and Mike TenKate.
Why GAO Did This Study The 2006 Winter Games in Turin, Italy, were the second Olympic Games to take place overseas since September 11, 2001. The United States worked with Italy to ensure the security of U.S. citizens, and it expects to continue such support for future Games, including the 2008 Games in Beijing, China. GAO was asked to (1) discuss the U.S. approach for providing security support for the 2006 Winter Games and how such efforts were coordinated, (2) identify the roles of U.S. agencies in providing security support for the Games and how they financed their activities, (3) review lessons learned in providing security support and the application of prior lessons learned, and (4) identify U.S. efforts under way for providing security support to the 2008 Beijing Games. What GAO Found In 2004, the United States began planning to provide a U.S. security presence in Italy and security support to the Italian government, and based much of its security strategy on its understanding of Italy's advanced security capabilities. The United States provided Italy with some security assistance, mostly in the form of crisis management and response support. To coordinate U.S. efforts, the U.S. Mission in Italy established an office in Turin as a central point for security information and logistics, and to provide consular services to U.S. citizens during the Games. The U.S. Ambassador to Italy, through the U.S. Consulate in Milan, coordinated and led U.S. efforts in-country, while the Department of State-chaired interagency working group in Washington, D.C., coordinated domestic efforts. While the interagency working group has been a useful forum for coordinating U.S. security support to overseas athletic events, State and Department of Justice (DOJ) officials have indicated that formal guidance that articulates a charter; a mission; and agencies' authorities, roles, and responsibilities would help in planning for security support to future Games. Nearly 20 entities and offices within several U.S. agencies provided more than $16 million for security support activities for the Turin Games. These agencies--which included the Departments of State, Justice, Homeland Security, Defense, and Energy--provided crisis management and response support through personnel, equipment, and training, as well as security advice and other assistance to U.S. athletes, spectators, and commercial investors. The U.S. Embassy in Rome initially paid for lodging and other administrative support needs, which were reimbursed by the participating agencies, although it struggled to do so. State and DOJ officials indicated that an interagency mechanism for identifying costs and addressing potential funding issues would be useful in providing U.S. security support to future Games. For the Turin Games, agencies applied key lessons learned from the 2004 Athens Games and identified additional lessons for future Games. Key lessons identified from the Turin Games included the importance of establishing an operations center at the location of the Games, establishing clear roles and responsibilities for agencies in event planning and crisis response efforts, and planning early for several years of Olympic-related expenditures. These lessons learned were communicated by Washington, D.C.- and Italy-based personnel to their counterparts who are preparing for the 2008 Summer Olympics in Beijing.
The United States is currently taking steps to identify the types of security support that agencies may provide to support China's security efforts for the 2008 Summer Games and to ensure the safety of U.S. athletes, spectators, and commercial investors.
<1. Background> Following the terrorist attacks on September 11, 2001, the President signed under the authority of the Stafford Act a major disaster declaration for the state of New York. The presidential declaration allowed the state of New York to apply for federal assistance to help recover from the disaster. FEMA was responsible for coordinating the federal response to the September 11 terrorist attacks and providing assistance through a variety of programs, including the CCP. The CCP was authorized in section 416 of the Stafford Act to help alleviate the psychological distress caused or aggravated by disasters declared eligible for federal assistance by the President. Through the CCP, FEMA released federal grant awards to supplement the state of New York s ability to respond to the psychological distress caused by the September 11 terrorist attacks through the provision of short-term crisis counseling services to victims and training for crisis counselors. FEMA relied on SAMHSA to provide expertise related to crisis counseling and public education for Project Liberty. FEMA assigns SAMHSA s responsibilities for the CCP through an annual interagency agreement. These responsibilities included, among other things, providing technical assistance, monitoring the progress of programs conducted under the CCP, and performing program oversight. Within SAMHSA, the Center for Mental Health Services (CMHS) carried out these responsibilities for Project Liberty. CMHS received support from SAMHSA s Division of Grants Management, which provides grant oversight. NYS OMH established Project Liberty under the CCP to offer crisis counseling and public education services throughout the five boroughs of New York City and 10 surrounding counties free of charge to anyone affected by the World Trade Center disaster and its aftermath. The areas served by Project Liberty are shown in the shaded areas of figure 1. The state of New York s primary role was to administer, oversee, and guide Project Liberty s program design, implementation, and evaluation and pay service providers, but not to provide services itself. New York City and the surrounding counties contracted with over 200 service providers and were responsible for overseeing day-to-day activities. Figure 2 shows the organization structure of Project Liberty at the federal, state, and local levels. Under the CCP, Project Liberty s goal was to serve New York City and the 10 surrounding counties by assisting those affected by the September 11 terrorist attacks to recover from their psychological reactions and regain predisaster level of functioning. The CCP supports services that are short- term interventions with individuals and groups experiencing psychological reactions to a presidentially declared disaster and its aftermath. Crisis counseling services were primarily delivered to disaster survivors through outreach, or face-to-face contact with survivors in familiar settings (e.g., neighborhoods, churches, community centers, and schools). Although the CCP does not support long-term, formal mental health services such as medications, office-based therapy, diagnostic services, psychiatric treatment, or substance abuse treatment, FEMA approved an enhanced services program for Project Liberty. This enhanced services program allowed for an expansion of services, including enhanced screening methods; a broader array of brief counseling approaches; and additional training, technical assistance, and supervision to a set of service providers. 
These enhanced services were intended to address the needs of individuals who continued to experience trauma symptoms and functional impairment after initial crisis counseling but did not need long-term mental health services. Project Liberty was funded through two separate, but related, grant programs: the ISP and RSP. The ISP grant was designed to fund Project Liberty for the first 60 days following the disaster declaration. Because there was a continuing need for crisis counseling services, the ISP was extended to last about 9 months, until the RSP began. The RSP grant was awarded and was designed to continue to provide funding for an additional 9 months of crisis counseling services, but was extended to last for 2 years. Figure 3 shows key milestones for Project Liberty. For the approved ISP application, FEMA made funds available directly to the state. Under the RSP, after approval, funds were transferred from FEMA to SAMHSA, which awarded the grant to the state of New York through SAMHSA s grants management process. The state of New York, in turn, disbursed funds to the service providers and local governments through the Research Foundation for Mental Hygiene, Inc. (RFMH), a not- for-profit corporation affiliated with the state of New York that assists with financial management of federal and other grants awarded to NYS OMH. Figure 4 shows the flow of funds for Project Liberty s ISP and RSP. Service providers were required to submit claims and supporting documentation to receive reimbursement for expenses incurred to provide services. As shown in figure 5, these claims were to have multiple levels of review to determine whether the expenses claimed were allowable under the CCP s fiscal guidelines. This review structure, which placed primary responsibility for reviewing claims on the local government units, was based on the state of New York s existing grant management policies. Additional controls for Project Liberty included (1) NYS OMH site visits to service providers in New York City and surrounding counties; (2) closeout audits by independent auditors of certain New York City service providers to test whether claims were documented and allowable; and (3) annual audits of New York City and surrounding counties conducted under the Single Audit Act, which requires independent auditors to provide an opinion on whether the financial statements are fairly presented, a report on internal control related to the major programs, and a report on compliance with key laws, regulations, and the provisions of the grant agreements. Our publication, Standards for Internal Control in the Federal Government, provides a road map for entities to establish control for all aspects of their operations and a basis against which entities can evaluate their control structures. The five components of internal control are as follows: Control environment. Creating a culture of accountability within the entire organization program offices, financial services, and regional offices by establishing a positive and supportive attitude toward the achievement of established program outcomes. Risk assessment. Identifying and analyzing relevant problems that might prevent the program from achieving its objectives. Developing processes that can be used to form a basis for measuring actual or potential effects of these problems and manage their risks. Control activities. 
Establishing and implementing oversight processes to address risk areas and help ensure that management's decisions, especially about how to measure and manage risks, are carried out and program objectives are met. Information and communication. Using and sharing relevant, reliable, and timely information on program-specific and general financial risks. Such information surfaces as a result of the processes or control activities used to measure and address risks. Monitoring. Tracking improvement initiatives over time and identifying additional actions needed to further improve program efficiency and effectiveness. SAMHSA and FEMA were responsible for providing oversight to ensure that the state of New York had a reasonable level of controls in place. Although FEMA retained responsibility for providing leadership and direction for Project Liberty, it assigned primary responsibility to SAMHSA for oversight and monitoring through an interagency agreement. <2. Remaining Grant Funds Primarily Relate to Unresolved Issues at NYC DOEd> Approximately $121 million, more than three-quarters of the $154.9 million in federal funds provided to Project Liberty, was reported as expended as of September 30, 2004, leaving a remaining balance of $33.9 million. About $32 million of the $33.9 million pertains to unresolved NYC DOEd expense claims. According to NYS OMH, NYC DOEd had not been reimbursed for the Project Liberty expenses it incurred throughout the program because NYC DOEd had not been able to provide support for these expenses that met the CCP documentation standards for reimbursement under federal grants. NYS OMH began considering alternative, indirect forms of evidence, including internal control summary memos prepared by NYC DOEd, to begin paying NYC DOEd's expense claims. As of March 31, 2005, NYS OMH had accepted alternative forms of supporting evidence to pay $5.2 million of NYC DOEd expense claims; however, this type of alternative evidence provides only limited assurance of the propriety of the claimed amounts. NYS OMH was not sure when and how the remaining NYC DOEd expense claims would be resolved. <2.1. Timing of Reported Expenditures> For the period September 11, 2001, through September 30, 2004, Project Liberty reported that it had expended all of the $22.8 million ISP grant and about $98.2 million of the $132.1 million RSP grant, for total reported expenditures of approximately $121 million, leaving a remaining balance of $33.9 million. Although crisis counseling services had been phased out as of December 31, 2004, Project Liberty will continue to use the remaining grant funds to process claims for reimbursement of program-related expenses incurred through December 31, 2004, and to cover administrative expenses during the closeout period, which, at the end of our fieldwork, was scheduled to end on May 30, 2005. Table 1 and figure 6 show the timing and amount of expenditures reported by Project Liberty for the ISP and RSP grants by quarter through September 30, 2004, compared to the total CCP grant awards for Project Liberty. According to NYS OMH officials, the expenditures reported by Project Liberty from September 11, 2001, through September 30, 2004, included expenses incurred as well as amounts advanced to service providers. During the RSP, Project Liberty made advances to 109 service providers, for a total of about $25.8 million.
As of September 30, 2004, the outstanding advance balance was $5.8 million; however, according to an NYS OMH official, the balance had been reduced to $1.2 million as of March 31, 2005. <2.2. NYC DOEd Expense Claims Not Fully Resolved> The vast majority of remaining Project Liberty funds related to unresolved expense claims of NYC DOEd. As of March 31, 2005, NYS OMH officials told us that NYC DOEd had submitted claims for a portion of the $32 million that was budgeted to NYC DOEd to provide crisis counseling services to New York City school children, and planned to ultimately submit claims for the full amount. NYS OMH and NYC DOHMH had not approved the majority of NYC DOEd claims for reimbursement incurred during the RSP because NYC DOEd had not provided support for these expenses that met the CCP documentation standards for reimbursement under federal grants. These standards require that the expenditure of grant funds be supported by detailed documentation, such as canceled checks, paid bills, time and attendance records, and contract and subgrant award documents. According to NYC DOEd officials, they could not meet the documentation standards established by NYS OMH because (1) NYC DOEd reorganized on July 1, 2003, which coincided with the delivery of crisis counseling services under the Project Liberty grant, resulting in significant loss of staff, loss of institutional knowledge, and therefore lost or diminished ability to retrieve supporting documentation, and (2) NYC DOEd s complex financial systems cannot produce the type of transaction-specific documentation required by NYS OMH and makes the process of retrieving supporting documentation unwieldy and administratively burdensome. A SAMHSA official told us SAMHSA was aware of issues involving the supporting documentation for the NYC DOEd expense claims; however, because officials viewed it as a grantee issue, they have had limited involvement with NYS OMH s efforts to resolve these issues. NYS OMH decided to consider alternative evidence, including supplemental supporting documentation in the form of internal control summary memos prepared by NYC DOEd that describe the controls over payments for personnel, other-than-personnel, and community-based organization expenses. Personnel expenses include NYC DOEd workers while the other-than-personnel expenses include other costs incurred directly by NYC DOEd. The community-based organization expenses are those incurred by other service providers on behalf of NYC DOEd. Although NYC DOEd s Chief Financial Officer has signed an attestation stating that the controls described in the summary memos for personnel and other-than-personnel expenses were in place and working during Project Liberty, the level of assurance provided by these internal control summary memos is limited for several reasons. First and foremost, the memos do not provide the type of supporting documentation necessary to verify the validity of the claimed expenses as required by the federal documentation standards. Second, the memos are not certified by an external source, such as an independent auditor. Third, the memos were prepared solely to support NYC DOEd s Project Liberty expenses and may not represent written policies and procedures that existed during the time the claimed expenditures were incurred. And lastly, the memos were prepared toward the end of the program by officials who did not, in all cases, have firsthand knowledge of the controls that existed during the program. 
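Those documentation standards lend themselves to a simple screening step before a claim is accepted for reimbursement. The sketch below is illustrative only; the claim fields, document categories, and dispositions are hypothetical and are not drawn from the CCP fiscal guidelines or from any system NYS OMH actually used.

# Illustrative sketch: screen an expense claim for the kinds of transaction-level
# documentation the CCP standards call for (e.g., canceled checks or paid bills,
# time and attendance records, contract or subgrant award documents).
# Field names and document categories here are hypothetical, not the CCP schema.

REQUIRED_DOC_TYPES = {"canceled_check_or_paid_bill", "time_and_attendance", "award_document"}
ALTERNATIVE_ONLY = {"internal_control_memo", "site_visit_observation", "encounter_log"}

def screen_claim(claim: dict) -> str:
    """Return a rough review disposition for a single expense claim."""
    docs = set(claim.get("documentation", []))
    if REQUIRED_DOC_TYPES <= docs:
        return "meets documentation standard"
    if docs and docs <= ALTERNATIVE_ONLY:
        # Alternative evidence alone gives only limited assurance that the
        # claimed amount is a proper charge to the grant.
        return "alternative evidence only - limited assurance"
    return "incomplete documentation - follow up with provider"

if __name__ == "__main__":
    sample = {"provider": "hypothetical provider", "amount": 12500.00,
              "documentation": ["internal_control_memo", "encounter_log"]}
    print(screen_claim(sample))  # alternative evidence only - limited assurance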
As of March 31, 2005, NYS OMH and NYC DOHMH had reviewed and accepted internal control summary memos that describe the controls over payments for personnel and other-than-personnel expenses, and NYS OMH had used these memos and other alternative forms of evidence to reimburse NYC DOEd for $5.2 million in expense claims. These other forms of evidence included observations of services being provided during site visits, the existence of encounter logs evidencing that some services had been provided, and general familiarity with service providers. NYS OMH officials were not sure when they would complete the review of the memo covering the controls over payments for community-based organization expenses and how this memo, along with other alternative forms of evidence, would be used to resolve the remaining $26.8 million in NYC DOEd expense claims. As part of its approval process for expense claims, NYS OMH relied upon NYC DOHMH to certify that the claims submitted were valid and met the CCP documentation requirements. However, because NYC DOEd did not provide the required supporting documentation, NYC DOHMH could not perform the same level of review as it did for the claims of the other Project Liberty service providers. Further, although NYC DOHMH contracted with independent auditors to perform audits of expense claims of certain service providers for Project Liberty, there were no audits performed of NYC DOEd claims, which are expected to total approximately $32 million. At the end of our audit fieldwork, it was not clear when and how the remaining expense claims would be resolved. However, if the internal control summary memos and other alternative evidence continue to be the primary supporting documentation for $32 million in NYC DOEd expense claims, the federal government will have only limited assurance that these payments are an appropriate use of Project Liberty grant funds. <3. Improvements Needed in the Process for Determining Federal Funding> FEMA s process for determining funding is designed to be implemented quickly after a state requests federal assistance to recover from a presidentially declared disaster. The state of New York s grant applications for Project Liberty were developed during the initial dynamic stages of the recovery effort when damage reports and response plans were subject to frequent change. The budgets submitted with the grant applications were revised by the grantee to satisfy certain conditions of grant award. However, we found that although the budgets were developed using estimates established during the initial stages of the disaster, FEMA and SAMHSA never required the state of New York to formally submit revised budget requests to reflect new information and significant changes to the program that occurred as the needs of the affected population became better identified. As a result, FEMA and SAMHSA did not have realistic budget information that could be used to effectively assess how responsible city and state officials planned to spend Project Liberty grant funds. <3.1. Federal Assistance Application Process> The grant applications that the state of New York submitted to FEMA for Project Liberty were prepared with assistance from FEMA and SAMHSA and included a needs assessment, plan of services, and budget. The needs assessment, which was based on a formula developed by SAMHSA, was the state s estimate of the number of people who would need crisis counseling. 
The plan of services described the state s plan for treating the identified population, including segments of the population needing special services or outreach methods such as counseling and training in various languages. The budget was developed based on the estimated cost to treat the population identified in the needs assessment through the program outlined in the plan of services. FEMA and SAMHSA provided the state of New York the flexibility to submit grant applications that reflected its identified and estimated needs, which were based on information available at the time. In preparing the budget, the state of New York relied on SAMHSA s Budget Estimating and Reporting Tool, which was designed to assist states in developing budgets consistent with FEMA guidelines. The state of New York took two different approaches in constructing the ISP and RSP budgets for Project Liberty. The ISP budget used estimates of administrative costs and a simple direct services cost calculation. The direct services costs were based on the estimated number of people needing crisis counseling services, the estimated average length of treatment each person would need, the estimated hourly rate for crisis counselors, and the estimated length of the ISP. The RSP budget, on the other hand, was prepared by the state of New York based on estimates provided by NYS OMH, each of the New York City boroughs, and the 10 surrounding counties eligible for CCP grant funding. Once the state of New York submitted its ISP and RSP grant applications, FEMA had processes in place to review and approve them. Although the processes differed, both shared common elements. The first step for both applications was a technical review conducted by the FEMA regional office with jurisdiction over the state of New York to ensure that the applications had a direct link to the September 11 terrorist attacks. Once this technical review was completed, the applications were sent to FEMA headquarters and to SAMHSA for review and comment. In addition, the RSP was reviewed by a panel of mental health professionals who had experience with CCP grants. The ISP and RSP review processes also differed in that FEMA s regional office had final decision authority for the ISP application while FEMA headquarters had final decision authority for the RSP application. Figure 7 shows the application processes for the ISP and RSP. After the reviews conducted by FEMA and SAMHSA were completed, FEMA awarded the state of New York $22.7 million for the ISP on September 24, 2001, with subsequent amendments bringing the ISP total to $22.8 million. In addition, FEMA awarded the state of New York $132.1 million for the RSP immediately after the ISP ended on June 14, 2002. <3.2. Budgets Were Not Adjusted to Reflect New Information or Program Changes> Because FEMA s process for determining funding is designed to be implemented quickly after presidential disaster declarations and official loss numbers were not known at the time the Project Liberty applications were prepared, the state of New York used estimates of the number of people who would need crisis counseling services, the length of the program, and the services that would be provided. However, FEMA and SAMHSA never required the budgets to be modified to reflect new information or significant changes to the program. 
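The ISP direct services calculation described above is essentially the product of a few estimates. The following Python sketch illustrates that arithmetic; every input value is a placeholder chosen for illustration and is not a figure the state of New York actually used.

# Minimal sketch of the ISP direct-services estimate described above:
# estimated people needing counseling x average hours of service per person
# x counselor hourly rate, scaled for the share of need addressed during the
# immediate services period. All inputs below are placeholders.

def isp_direct_services_estimate(people_served: int,
                                 avg_hours_per_person: float,
                                 counselor_hourly_rate: float,
                                 share_of_need_in_isp: float = 1.0) -> float:
    """Estimated direct-services cost for the immediate services period."""
    return people_served * avg_hours_per_person * counselor_hourly_rate * share_of_need_in_isp

estimate = isp_direct_services_estimate(
    people_served=500_000,        # hypothetical share of the needs assessment
    avg_hours_per_person=1.5,     # hypothetical average length of treatment
    counselor_hourly_rate=30.0,   # hypothetical hourly rate
    share_of_need_in_isp=0.25)    # hypothetical share reached in the 60-day ISP
print(f"${estimate:,.0f}")        # $5,625,000 with these placeholder inputs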
The estimates used by the state of New York to develop its initial needs assessment, or number of people it believed would need crisis counseling services, included several risk factors and loss categories. In keeping with existing CCP policy, FEMA and SAMHSA encouraged the state of New York to modify the needs assessment formula by adjusting the loss categories of affected persons and the risk factors for each of those loss categories to better reflect the situation in New York. The state of New York also estimated the number of direct victims in each loss category because official numbers were not available. For example, the official number of deaths was not known until more than 2 years after the disaster. As a result of these estimates, the needs assessment for the state of New York s ISP application determined that 2.3 million people would need crisis counseling services as a result of the terrorist attacks on September 11, 2001. With the RSP application, the needs assessment formula was modified to estimate the pervasive reactions to the disaster and to update the loss category numbers, such as the number of people dead, missing, or hospitalized. These modifications increased the estimate of the people who would need crisis counseling to 3.4 million. We found that based on the approved budgets for the ISP and RSP, Project Liberty estimated that it would need $154.9 million to provide crisis counseling and public education to the estimated 3.4 million people and training for Project Liberty staff who would be delivering these services, at a cost of approximately $46 per person. In its report for the period ending September 30, 2004, Project Liberty estimated that it had provided crisis counseling to 1.5 million people at a cost of $121 million, or approximately $83 per person. Another estimate used in preparing the grant applications was the length of time needed to carry out the services identified in the plan of services. The state of New York used the maximum length of service provision allowed by FEMA regulations in its ISP and RSP applications, 60 days and 9 months, respectively. However, crisis counseling services were actually provided for approximately 9 months under the ISP and over 30 months under the RSP. In addition, the state of New York initially understood that crisis counseling and public education services offered by Project Liberty would be limited to the services normally allowed by the CCP, such as short-term individual and group crisis counseling, community outreach, and training for crisis counselors. However, in August 2002, Project Liberty was authorized to adjust the program to include enhanced services and began providing these services in May 2003. Other significant changes, which were not reflected in Project Liberty s budget, included a reallocation from New York City s budget to the NYC DOEd, which increased NYC DOEd s budget from $8.9 million to $40 million and subsequent reductions of NYC DOEd s budget to $32 million. Despite these major changes in the program, FEMA and SAMHSA did not require and Project Liberty did not prepare adjusted budgets to reflect their revised plans for meeting the needs of the victims of September 11. Therefore, New York State and City officials did not have realistic budget information to use as a tool to manage program funds, and FEMA and SAMHSA were not in a position to effectively assess the planned use of the funds. <4. 
Federal Oversight of Project Liberty s Financial Information Was Limited> While SAMHSA provided oversight of Project Liberty s delivery of services, it provided only limited oversight of financial information reported by Project Liberty about the cost of those services. SAMHSA received periodic financial reports but did not perform basic analyses of expenditures to obtain a specific understanding of how Project Liberty was using federal funds. In addition, as discussed above, budget information was outdated and therefore an ineffective tool to monitor actual expenditures. SAMHSA s limited level of oversight over Project Liberty s financial information was driven in part by its assessment that the program was not high risk, but this assessment did not fully consider the magnitude, complexity, and unique nature of the program and was not revisited even after significant program changes occurred. As a result, SAMHSA was not in a position to exercise a reasonable level of oversight to ensure that funds were used efficiently and effectively in addressing the needs of those affected by the September 11 terrorist attacks. SAMHSA s oversight for Project Liberty included review of service delivery information and identification of unusual items included in Project Liberty s program reports, eight site visits, and routine communication with NYS OMH and FEMA. These oversight activities helped SAMHSA gain assurance that NYS OMH was delivering appropriate services. However, SAMHSA s oversight of these services did not directly link with, and therefore did not provide assurance related to, financial information reported by Project Liberty. In addition to requiring Project Liberty to submit budgets to show how it planned to use federal funds, FEMA regulations also required Project Liberty to periodically submit financial reports to show how funds were actually spent. Required financial reports included quarterly expenditure reports, a final accounting of funds, and a final voucher. SAMHSA officials told us they did some high-level review of the financial information provided to determine how quickly the program was using grant funds and when the grant funds should be made available to NYS OMH. However, they did not perform basic analyses of expenditures to obtain a specific understanding of how Project Liberty was using federal funds. We found that SAMHSA did not use financial information submitted by Project Liberty to conduct basic analytical reviews of how funds were being spent and whether this spending was consistent with the budgeted program expenditures. Table 2 illustrates a basic analysis we performed of Project Liberty s reported and budgeted expenditures for the period June 14, 2002, through September 30, 2004, which identified significant differences by category between reported expenditures and budget. Notwithstanding the fact that budgets were not updated for major program changes, several of these differences should have raised questions about whether Project Liberty was using federal funds within allowable categories and within its approved budget. For example, the Project Liberty personnel budget was over $93 million; however, as of September 30, 2004, over 26 months into the program that was initially planned for completion in 9 months, it had reported personnel expenditures of only about $26 million, for a difference of about $67 million. 
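The basic analytical review described above can be as simple as a budget-to-actual comparison by expenditure category. The sketch below illustrates such a comparison; the personnel figures reflect the approximate amounts cited in this report ($93 million budgeted, about $26 million reported), while the other categories and the 25 percent variance threshold are hypothetical.

# Illustrative budget-to-actual comparison by expenditure category, the kind of
# basic analysis that could flag questions for follow-up with the grantee.
# Personnel figures approximate those cited in this report; other categories
# and the 25% threshold are hypothetical.

budget = {"personnel": 93_000_000, "contractual": 40_000_000, "evaluation": 5_000_000}
reported = {"personnel": 26_000_000, "contractual": 55_000_000, "evaluation": 2_000_000}

def variances(budget: dict, reported: dict, threshold: float = 0.25):
    """Yield categories where reported spending differs from budget by more than the threshold."""
    for category, budgeted in budget.items():
        actual = reported.get(category, 0)
        diff = actual - budgeted
        if budgeted and abs(diff) / budgeted > threshold:
            yield category, budgeted, actual, diff

for category, budgeted, actual, diff in variances(budget, reported):
    print(f"{category}: budgeted ${budgeted:,}, reported ${actual:,}, difference ${diff:,}")
# With these placeholder figures, personnel shows a $-67,000,000 difference and
# the other two categories also exceed the 25 percent threshold.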
A SAMHSA official said that because of the way Project Liberty reported its expenditures, SAMHSA officials could not track its financial reports to its budget. As a result, we found that SAMHSA was not aware of the significant variations between Project Liberty s reported expenditures and budget and did not make inquiries of Project Liberty officials to obtain an understanding of why these variations were occurring. Some of the differences between reported and budgeted expenditures may have resulted from the fact that Project Liberty was not required to formally adjust the initial program budget to reflect significant changes. However, the differences may also have raised questions about whether SAMHSA s understanding of how the program was planning to spend funds was consistent with actual spending patterns. Comparisons between Project Liberty s reported budget and expenditures could have helped SAMHSA better assess the status of the program to allow it to take effective action to ensure that Project Liberty was using federal funds to provide the most value for victims of the September 11 terrorist attacks. The differences between Project Liberty s reported budget and expenditures may also have been caused by inconsistencies in financial information submitted by Project Liberty. FEMA and SAMHSA did not provide detailed guidance on how to classify CCP expenditures but instead left Project Liberty to interpret how expenditures should be classified. We found that Project Liberty expenditures were not always consistently reported to FEMA and SAMHSA. For example, Project Liberty did not consistently classify evaluation expenditures. If an NYS OMH employee was evaluating the program, the expenditure was classified as personnel, but if the work was contracted to someone outside of NYS OMH, the expenditure was classified as evaluation. As a result, SAMHSA could not reliably use Project Liberty s financial reports to determine how much it cost to evaluate the program. By obtaining a better understanding of how federal funds were spent by Project Liberty, SAMHSA would have improved its ability to determine whether funds were used most efficiently and effectively in carrying out the objectives of the program. SAMHSA s limited oversight of Project Liberty s financial information was driven in part by its own assessment that the program was not high risk. SAMHSA s oversight of Project Liberty included an initial assessment of the risk associated with the grantee. SAMHSA applied risk factors identified in HHS regulations regarding grants, including financial management issues, such as financial stability and experience in handling federal grants, to RFMH, the fiscal agent for NYS OMH responsible for making payments to service providers. For example, SAMHSA reviewed the result of RFMH s fiscal year 2001 financial audit that was required by the Single Audit Act and found that RFMH received an unqualified audit opinion while handling a total of about $62 million in federal funds. SAMHSA concluded that RFMH had a strong track record for handling federal funds and classified RFMH as not high risk. Based in part on this risk assessment, SAMHSA officials told us staff with financial backgrounds were not actively involved in the oversight of Project Liberty. However, we found that SAMHSA s risk assessment only considered risks associated with RFMH and did not consider other potential risks associated with the Project Liberty grant. 
For example, the assessment did not consider all significant interactions in the complex federal, state, and local government environment that existed for Project Liberty; the amount of the RSP grant award, which was the largest RSP grant ever made by FEMA; or the geographic complexities of the program, including the size of the area affected and the diversity of the community being served. In addition, SAMHSA did not revisit its initial risk assessment even after the program encountered significant changes and challenges, including the design of the first-ever enhanced services program and the documentation issues with NYC DOEd expenses, which have yet to be resolved. As a result, SAMHSA s level of oversight was not in line with the challenges and complexities that increased the risks associated with Project Liberty. Based in part on its risk assessment process, SAMHSA s oversight of Project Liberty was primarily carried out by its programmatic staff who focused on activities that did not directly link to the financial information being reported by NYS OMH. Without useful financial information, including updated budgets, and without analyses of the financial information Project Liberty was reporting, SAMHSA was not in a position to exercise a reasonable level of oversight to ensure that grant funds were effectively used to address the needs of those affected by the September 11 terrorist attacks. <5. Assessments of Project Liberty Ongoing> Both the state of New York and the federal government have taken steps to assess how Project Liberty delivered services. NYS OMH is conducting its own assessment of Project Liberty and partnered with NYAM to obtain information from telephone surveys. SAMHSA contracted with NCPTSD, a center within the Department of Veterans Affairs, to conduct a case study of New York s response to the terrorist attacks on September 11, with a primary focus on Project Liberty. Both NYS OMH and NCPTSD s overall assessments of the program were ongoing as of March 2005. FEMA plans to consider lessons learned from NYS OMH and NCPTSD when conducting its own internal review of the CCP. NYS OMH is conducting an evaluation of Project Liberty, which includes designated funding for program evaluation. This nonstatistical evaluation consists of several components, including analysis of data collected by service providers documenting services delivered through encounter data forms, recipient feedback through written questionnaires and telephone surveys, provider feedback through written reports and staff surveys, and other initiatives. The data collected by service providers was the primary method used to assess the services delivered by the program. Based on these data, NYS OMH preliminarily found that Project Liberty had reached a large number of people affected by the September 11 terrorist attacks and that it was successful in reaching many diverse communities. NYS OMH reported that 95 percent of providers who responded to its surveys rated the overall quality of services provided as good or excellent. NYS OMH also reported that the majority of respondents to its recipient surveys indicated that they have returned to their predisaster mental health condition, a goal of Project Liberty. However, according to NYS OMH, the recipient surveys were made available beginning in July 2003 to organizations providing crisis counseling for distribution to individuals receiving services and therefore may not be representative of all Project Liberty recipients. 
NYS OMH did not report the number of providers who received surveys and reported a low response rate for recipients. Because the number of surveys offered to providers was not disclosed and because of low response rates for the recipient surveys, we were unable to determine the level of coverage provided by these surveys. NYS OMH also partnered with NYAM, a not-for-profit organization dedicated to enhancing public health through research, education, public policy, and advocacy. NYAM conducted nonstatistical telephone surveys in 2001 and 2002 of New Yorkers to assess the magnitude and duration of the mental health effects of the terrorist attacks. NYS OMH worked with NYAM to assess the reach and recognition of Project Liberty by adding questions to NYAM's ongoing September 11 telephone surveys. NYAM and NYS OMH reported that 24 percent of the respondents interviewed were aware of Project Liberty and, among respondents who had heard of the program, 67 percent had a good impression of the program. However, because the sampling methodology for the NYAM phone surveys was not disclosed and because of low response rates, we were unable to determine the survey coverage. Based on these evaluation activities, as well as their own experience with Project Liberty, NYS OMH officials have begun to identify lessons to be learned. For example, they found that emergency mental health plans and resources in place prior to September 11 were insufficient to fully respond to the mental health impact of the terrorist attacks. Much of the infrastructure needed to implement Project Liberty, such as data collection procedures and public education materials, had to be developed in the immediate aftermath of the terrorist attacks. In addition, NYS OMH found that the services covered by the CCP were not sufficient to meet the mental health needs of the minority of individuals who developed severe and persistent symptoms that substantially interfered with day-to-day functioning. Although the state of New York was given permission to develop and implement an enhanced services program to meet the needs of the more severely affected individuals, similar intensive interventions are not currently routinely included as part of the FEMA CCP. NYS OMH officials told us that when their evaluation is completed, they expect that they will have comprehensively identified best practices and obstacles encountered and that they will make recommendations to FEMA and SAMHSA for actions needed to better organize a mental health response to future disasters funded by the CCP. SAMHSA entered into an interagency agreement with NCPTSD, a center within the Department of Veterans Affairs, to conduct a case study of New York s response to the September 11 terrorist attacks. The primary purpose of the NCPTSD case study was to identify lessons to be learned from New York s experience that could be useful to other communities that might have to respond to major disasters, including acts of terrorism. As part of its study, NCPTSD interviewed 103 individuals, including service providers and management from 50 public and private provider organizations in New York City and the surrounding counties. NCPTSD used a qualitative methodology to analyze the data to develop findings and recommendations. According to SAMHSA, the NCPTSD report is expected to be issued as soon as all stakeholders comments have been received and considered. 
FEMA officials told us they plan to consider lessons learned from the NCPTSD and NYS OMH assessments of Project Liberty through FEMA's internal review of the CCP that was ongoing as of March 2005. This internal review is being conducted in partnership with SAMHSA and, according to FEMA, will consider whether aspects of FEMA's CCP, including its regulations and guidance, need to be improved. For example, FEMA plans to work with SAMHSA to consider the extent to which the enhanced services should be included as a permanent part of the CCP. FEMA officials told us that the internal review will have to be completed in conjunction with their primary work responding to disasters; therefore, they have not established a timetable to complete this review. Given that Project Liberty was awarded the largest RSP grant in the history of FEMA's CCP and that FEMA provided funding to the state of New York to evaluate Project Liberty, the timely assessment of lessons learned from this program would be beneficial to future CCPs. <6. Conclusions> FEMA and SAMHSA's limited oversight of the planned and actual spending of Project Liberty impeded their ability to monitor whether the grant funds were being used in the most efficient and effective way to meet the needs of those affected by the terrorist attacks of September 11, 2001. Further, until recently, FEMA and SAMHSA had limited involvement in efforts to resolve issues surrounding the outstanding NYC DOEd expense claims; additional oversight in this area could help bring appropriate and timely resolution to these issues. FEMA will have an opportunity to address these oversight issues, as well as lessons learned identified by NYS OMH and NCPTSD, as part of its ongoing internal review of its CCP. <7. Recommendations for Executive Action> In order to address the issues identified in our report, we recommend that the Secretary of Homeland Security direct the Under Secretary of Emergency Preparedness and Response to take the following eight actions: To help ensure proper and timely expenditure of the remaining Project Liberty funds, FEMA should work with SAMHSA to provide assistance to New York City and State officials in appropriately resolving issues surrounding the NYC DOEd expense reimbursements and determine whether an independent review of the propriety of the use of funds for payments to the NYC DOEd is needed. To strengthen federal financial oversight of future CCP grants, FEMA should work with SAMHSA to require the recipients of CCP grants to submit updated budgets to reflect approved changes to the program; revise the current risk assessment process to comprehensively identify and assess risks associated with CCP grants; establish a process to update the risk assessment for significant changes to the program; consider developing formal requirements for consistent classifications of expense data; and develop formal procedures to perform more detailed analyses of financial reports, including comparing actual expenditures and budgets to identify variations and obtain an understanding of the reasons for any unusual variations. To help ensure that the lessons learned from Project Liberty will be used to improve future programs funded by the CCP, FEMA should establish a clear time frame to complete its internal review of the CCP as expeditiously as possible. <8.
Agency Comments and Our Evaluation> We received written comments on a draft of this report from DHS, which generally concurred with our recommendations but expressed reservations regarding our assessment of the adequacy of FEMA and SAMHSA oversight. SAMHSA, in a separate comment letter to FEMA, did not object to our recommendations but did take issue with our assessment of its oversight, particularly given the unprecedented circumstances that led to the establishment of Project Liberty. SAMHSA also provided additional information on the NYC DOEd claim issue. DHS's comment letter (reprinted in appendix II) incorporates by reference SAMHSA's letter (reprinted in appendix III). We also received technical comments from NYS OMH, NYC DOHMH, and NYC DOEd on excerpts of the report, which we incorporated as appropriate. DHS stated that our report should give more weight to the unprecedented conditions that led to Project Liberty, and that it was these unique circumstances that led to our findings and were the basis for our recommendations. It further stated that our recommendations primarily relate to the use of grant funds by NYC DOEd, and that no similar issues were identified with respect to the use of funds by other program subgrantees. Our report clearly acknowledges the unique and unprecedented circumstances that led to the establishment of Project Liberty. These unique circumstances were largely the basis for our conclusion that Project Liberty required a high level of federal oversight. A number of red flags signaled the need for a heightened federal role, including the following: the RSP grant was the largest such grant ever made by FEMA; the program, initially designed to last about a year, is now over 3 years old and still ongoing, with an extension being considered to September 30, 2005; and reimbursements for approximately $32 million, representing over 20 percent of the total federal funds awarded, remained unresolved as of May 2005. The fact that the level of federal oversight was not commensurate with the unprecedented circumstances surrounding Project Liberty was what led us to our findings and recommendations in this area. Five of our eight recommendations relate to strengthening federal financial oversight of future CCP grants; two of the recommendations specifically address the NYC DOEd use of grant funds; and the remaining recommendation calls for ensuring that lessons learned from Project Liberty will be used to improve future programs funded by the CCP. Thus, DHS was incorrect in stating that our recommendations primarily relate to the NYC DOEd issues. As to FEMA's statement that we did not identify any other issues about the use of grant funds, the scope of our work included determining the extent to which Project Liberty expended grant funds and whether the federal government had adequate financial oversight of Project Liberty. Our work did not address whether payments made, including those made by NYC DOEd, were a valid use of federal resources. We have no basis for reaching the conclusion suggested by FEMA. SAMHSA's letter also discussed the unprecedented conditions surrounding Project Liberty and, as discussed below, strongly disagreed with our assessment that SAMHSA's financial oversight was limited. SAMHSA also stated that during a recent site visit, NYC DOEd's Chief Financial Officer indicated that documentation was available to support claims as necessary.
SAMHSA further stated that it will recommend that NYS OMH conduct an independent audit of these claims as one of the conditions of approving an extension of the grant to September 30, 2005. During our fieldwork, we were consistently told by NYC DOHMH officials that NYC DOEd had not been able to produce documentation that met the documentation standards for reimbursement under federal grants for the majority of expenses it incurred on behalf of Project Liberty. Given ongoing questions about the existence of documentation supporting NYC DOEd claims, the sufficiency of this documentation, or both, we agree that an independent audit is appropriate. SAMHSA stated that, overall, the federal oversight of Project Liberty was appropriate, reasonable, and responsive to state and local needs. SAMHSA outlined several factors to support this statement, including that it received financial data from the state of New York on a routine basis and monitored the allocability, allowability, and reasonableness of project expenditures. While SAMHSA acknowledged that project budget data were not updated comprehensively, it stated that it did request and receive updated budget information in several instances, particularly in conjunction with project extensions. SAMHSA further stated that we had not cited any problems with program expenditures but instead seemed to focus on differences between classification of budgeted and reported expenditures. SAMHSA acknowledged that the budget was not prepared in the same format as reported expenditures and stated that inconsistent categorization of expense accounts was largely the reason for the classification discrepancies we highlighted in our report. Our conclusion that SAMHSA's oversight of Project Liberty's financial information was limited was based in large part on the fact that SAMHSA did not have a basis to reliably monitor how Project Liberty was using federal funds, since, as SAMHSA acknowledged, it did not have updated budget information and the reported expenditure data were not accurate due to classification discrepancies. At the time of our review, SAMHSA was not aware of these discrepancies because it had not been conducting basic analyses, such as comparisons between Project Liberty's budgeted and reported expenses. Further, there were no staff with financial backgrounds involved with the oversight of Project Liberty expenditures. SAMHSA's limited oversight was based in part on the fact that it did not deem the project high risk from a financial standpoint, despite the complex federal, state, and local environment; the fact that this was by far the largest RSP grant ever awarded by FEMA; the size and diversity of the community being served; and the overall challenging and changing circumstances of September 11. Overall, we found that SAMHSA's level of oversight was not in line with these challenges and complexities associated with Project Liberty. SAMHSA went on to state that, in its opinion, classification of Project Liberty as high risk simply based on total estimated project expenditures would be inappropriate. It further noted that there is no regulatory mechanism allowing SAMHSA to assess risk based on a complex federal, state, and local government environment, as described in our report. We did not suggest that the risk classification should simply be based on total estimated project expenditures. As discussed above, our report clearly delineates a number of different risk factors that should have been considered in the risk classification.
Further, we disagree that the current regulatory mechanism would not allow SAMHSA to consider these risk factors in making its risk assessment of Project Liberty. While current regulations do not require SAMHSA to consider programmatic factors in its risk assessment, they do not prevent SAMHSA from considering risk factors other than those delineated in its regulations in its overall assessment of the program and its operations. As noted by SAMHSA in its written comments, Project Liberty was by far the largest and most complex effort in the 30-year history of the CCP and presented unique and unprecedented challenges for government authorities at all levels. We believe these should have been key factors in SAMHSA s risk assessment and should have triggered heightened financial oversight of Project Liberty. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to appropriate House and Senate committees; the Secretary of Homeland Security; the Under Secretary of Homeland Security for Emergency Preparedness and Response; the Administrator, Substance Abuse and Mental Health Services Administration; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8341 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Objectives, Scope, and Methodology To determine the extent to which Project Liberty spent the Immediate Services Program (ISP) and Regular Services Program (RSP) grant funds received from the Federal Emergency Management Agency (FEMA), we did the following: Reviewed various documents, including quarterly RSP expenditure reports for the first (June 15, 2002, through September 14, 2002) through the ninth (June 15, 2004, through September 30, 2004) quarters; a detailed listing of the outstanding advance balances as of September 30, 2004, obtained from the Research Foundation for Mental Hygiene, Inc. (RFMH); a summary of expense claims submitted by the New York City Department of Education (NYC DOEd) as of March 2005; internal control summaries prepared by NYC DOEd for its personnel and other- than-personnel expenses; a draft internal control summary prepared by NYC DOEd for its community-based expenses; Crisis Counseling Assistance and Training Program (CCP) guidance on appropriate uses of grant funds; and FEMA and Department of Health and Human Services (HHS) regulations pertaining to the CCP. Interviewed officials from FEMA s headquarters and finance office, the Substance Abuse and Mental Health Services Administration s (SAMHSA) Center for Mental Health Services (CMHS), the New York State Office of Mental Health (NYS OMH), RFMH, the New York City Department of Health and Mental Hygiene (NYC DOHMH), and NYC DOEd. 
Determined that the total expenditures data obtained from RFMH and Project Liberty s quarterly expenditure reports were sufficiently reliable for the purposes of this report by the following: Obtaining and reviewing a copy of the independent auditor s report of RFMH s financial statements for fiscal years ending March 31, 2004 and 2003, and Report on Compliance and on Internal Control Over Financial Reporting Based on an Audit of Financial Statements Performed in Accordance with Government Auditing Standards as of March 31, 2004. We determined that RFMH received a clean opinion on its fiscal year 2004 and 2003 financial statements. In addition, the auditor concluded that its tests of RFMH s compliance with certain provisions of laws, regulations, contracts, and grants did not disclose any instances of noncompliance that are required to be reported under Government Auditing Standards. Finally, the auditor s consideration of internal control over RFMH s financial reporting did not identify any matters involving the internal control over financial reporting and its operation that it considered to be material weaknesses. Analyzing a database obtained from RFMH of the payments made on behalf of Project Liberty for the ISP and RSP from the first payment made on September 25, 2001, through September 30, 2004, including advances made to service providers. Determining that the amount of the payments included in the database was consistent with the total reported expenditures in the ISP final report and the RSP quarterly expenditure reports that were prepared by Project Liberty and submitted to SAMHSA and FEMA. Comparing the Project Liberty expenditures as reported by RFMH to drawdowns reported by SAMHSA on Project Liberty s RSP grant award. Obtaining a written certification of data completeness from the Managing Director of RFMH that the expenditures reported in the database were complete and accurate for all payments made for, or on behalf of, Project Liberty for the ISP and the RSP through September 30, 2004. Reviewing Single Audit Act reports for fiscal years 2003 and 2002 of New York City and surrounding counties. To determine whether the federal government had an effective process in place to determine the amount of funds to provide to Project Liberty, we interviewed officials from FEMA headquarters, SAMHSA s CMHS, NYS OMH, and RFMH; reviewed various documents, including the state of New York s ISP and RSP grant applications, the ISP and RSP grant awards, and federal guidance for the CCP, including the Robert T. Stafford Disaster Relief and Emergency Assistance Act and FEMA and HHS regulations; and reviewed correspondence between officials from FEMA, SAMHSA s CMHS, NYS OMH, and RFMH. 
To assess federal oversight of Project Liberty's expenditures, we obtained an understanding of CCP oversight roles and responsibilities by reviewing FEMA and HHS regulations, FEMA and SAMHSA's fiscal year 2004 interagency agreement, CCP fiscal guidelines, HHS's grants management manual, summary documents of the CCP's oversight structure prepared by SAMHSA, and GAO reports; reviewed available documentation of oversight performed for Project Liberty, including Project Liberty's financial reports and documentation of site visits conducted by FEMA and SAMHSA; analyzed Project Liberty's financial reports and compared them to initial budgets; designed our work to assess the effectiveness of federal oversight and therefore considered, but did not assess, the controls over Project Liberty payments implemented at the state and local levels; interviewed officials from FEMA headquarters and the FEMA regional office that serves New York, SAMHSA's CMHS, SAMHSA's Division of Grants Management, NYS OMH, and RFMH to identify policies and procedures for overseeing Project Liberty; and reviewed and used Standards for Internal Control in the Federal Government as criteria. To identify the steps that have been taken by the federal government in partnership with the state of New York to assess Project Liberty, we reviewed documentation of assessments performed, including a draft of the National Center for Post-Traumatic Stress Disorder case study of Project Liberty, NYS OMH summaries of survey results, an article written by the Deputy Commissioner of NYS OMH on lessons learned about the mental health consequences of the September 11 terrorist attacks, articles published by the New York Academy of Medicine, and documentation from FEMA related to its internal review of the CCP; reviewed various documents related to Project Liberty, including the grant applications and the response to conditions of the grant award set out by FEMA and SAMHSA; reviewed GAO and FEMA Office of Inspector General reports to determine whether the CCP was evaluated; and interviewed officials from FEMA headquarters, SAMHSA's CMHS, and NYS OMH. We requested written comments on a draft of this report from the Secretary of Homeland Security. We received written comments from DHS. The DHS comments (reprinted in app. II) incorporate by reference a letter from SAMHSA to FEMA commenting on the draft (reprinted in app. III). We also provided excerpts of a draft for technical comment to NYS OMH, NYC DOHMH, and NYC DOEd. NYS OMH's technical comments and the coordinated NYC DOHMH and NYC DOEd technical comments are incorporated as appropriate. We performed our work from July 2004 through March 2005 in accordance with generally accepted government auditing standards. <9. Organizations Contacted> <9.1. Federal Agencies> <9.2. State of New York> New York State Office of Mental Health; Research Foundation for Mental Hygiene, Inc. <9.3. New York City> Comments from the Department of Homeland Security Comments from the Substance Abuse and Mental Health Services Administration GAO Contact and Staff Acknowledgments <10. GAO Contact> Linda Calbom, (202) 512-8341. <11. Acknowledgments> Robert Owens (Assistant Director), Donald Neff (Auditor-in-Charge), Lisa Crye, Edward Tanaka, and Brooke Whittaker made key contributions to this report.
Why GAO Did This Study To help alleviate the psychological distress caused by the September 11, 2001, attacks the Federal Emergency Management Agency (FEMA) awarded the state of New York two grants totaling $154.9 million to provide crisis counseling and public education. Because of questions about whether the program, called Project Liberty, had spent all the funds it received from the federal government, GAO was asked to determine (1) the extent to which the program expended the funds awarded from the federal government, (2) whether the federal government had an effective process in place to determine the amount of funds to provide the program, (3) whether the federal government had adequate financial oversight of the program, and (4) steps taken by the federal government and New York State to assess the program's effectiveness. What GAO Found For the period September 11, 2001, through September 30, 2004, Project Liberty reported that it had expended approximately $121 million, or three-quarters of the $154.9 million in grants awarded by FEMA, leaving a remaining balance of $33.9 million. The majority of the remaining balance, approximately $32 million, related to unresolved issues involving the adequacy of supporting documentation for the New York City Department of Education's (NYC DOEd) expense claims. As of March 31, 2005, city and state officials told GAO they had accepted alternative forms of supporting evidence related to $5.2 million in NYC DOEd expenses; however, this alternative evidence provides only limited assurance of the propriety of the claimed amounts. It is unclear whether similar alternative sources of evidence will be accepted for the remaining $26.8 million in NYC DOEd expense claims. FEMA assisted state officials in developing estimated funding needs for Project Liberty immediately after the terrorist attacks. By necessity, these initial budgets were developed using estimates established during the initial stages of the disaster. However, FEMA never required Project Liberty to prepare adjusted budgets to reflect new information or subsequent changes to the program. As a result, FEMA did not have realistic budget information to assess how city and state officials were planning to spend Project Liberty grant funds. FEMA assigned primary responsibility for oversight and monitoring to the Substance Abuse and Mental Health Services Administration (SAMHSA) through an interagency agreement. Although SAMHSA had procedures in place to monitor Project Liberty's delivery of services, it performed only limited monitoring of financial information reported by Project Liberty about the cost of those services. For example, while SAMHSA received periodic financial reports from Project Liberty, it did not perform basic analyses of expenditures in order to obtain a specific understanding of how the grant funds were being used and, as noted above, did not have updated budget information to gauge how actual spending compared to budgets. As a result, SAMHSA was not in a position to exercise a reasonable level of oversight to ensure that funds were being used efficiently and effectively in addressing the needs of those affected by the September 11 attacks. Both the state of New York and the federal government have taken steps to assess how Project Liberty delivered services. These assessments were ongoing as of March 2005. FEMA plans to consider lessons learned from Project Liberty when conducting its own internal review of the crisis counseling program.
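The arithmetic behind these findings can be laid out in a short sketch. The following is a minimal illustration that uses only the rounded dollar amounts reported above; the variable names are ours and do not correspond to any FEMA, SAMHSA, or GAO system.

```python
# Minimal sketch reconciling the rounded Project Liberty figures reported above.
# Variable names are illustrative only.

AWARDED = 154.9               # total FEMA grants, in millions of dollars
EXPENDED = 121.0              # reported expenditures through September 30, 2004
DOED_CLAIMS_AT_ISSUE = 32.0   # NYC DOEd claims with documentation questions
DOED_CLAIMS_ACCEPTED = 5.2    # claims supported so far by alternative evidence

remaining_balance = AWARDED - EXPENDED
claims_still_unresolved = DOED_CLAIMS_AT_ISSUE - DOED_CLAIMS_ACCEPTED

print(f"Remaining balance:       ${remaining_balance:.1f} million")        # about 33.9
print(f"DOEd claims unresolved:  ${claims_still_unresolved:.1f} million")  # about 26.8
print(f"Share of award expended: {EXPENDED / AWARDED:.0%}")                # about 78%, roughly three-quarters
```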
<1. Background> Homeland security is a complex mission that involves a broad range of functions performed throughout government, including law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. Federal, state, and local governments have a shared responsibility in preparing for catastrophic terrorist attacks as well as other disasters. The initial responsibility for planning, preparing, and response falls upon local governments and their organizations such as police, fire departments, emergency medical personnel, and public health agencies which will almost invariably be the first responders to such an occurrence. For its part, the federal government has principally provided leadership, training, and funding assistance. The federal government s role in responding to major disasters has historically been defined by the Stafford Act, which makes most federal assistance contingent on a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively. Once a disaster is declared, the federal government through the Federal Emergency Management Agency (FEMA) may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. In addition to post disaster assistance, there has been an increasing emphasis over the past decade on federal support of state and local governments to enhance national preparedness for terrorist attacks. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including those in FEMA; the Departments of Justice, Health and Human Services, and Energy; and the Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. As emphasis on terrorism prevention and response grew, however, so did concerns over coordination and fragmentation of federal efforts. More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 in bioterrorism alone. Our past work, conducted prior to the establishment of an Office of Homeland Security and a proposal to create a new Department of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. Further, our work found there was an absence of a central focal point that caused a lack of a cohesive effort and the development of similar and potentially duplicative programs. 
Also, as the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds from different programs administered by different agencies and have argued that the application process is burdensome and inconsistent among federal agencies. President Bush took a number of important steps in the aftermath of the terrorist attacks of September 11th to address the concerns of fragmentation and to enhance the country s homeland security efforts, including the creation of the Office of Homeland Security in October 2001. The creation of such a focal point is consistent with a previous GAO recommendation. The Office of Homeland Security achieved some early results in suggesting a budgetary framework and emphasizing homeland security priorities in the President s proposed budget. <2. Proposed Department Will Have A Central Role in Strengthening Homeland Security> The proposal to create a statutorily based Department of Homeland Security holds promise to better establish the leadership necessary in the homeland security area. It can more effectively capture homeland security as a long-term commitment grounded in the institutional framework of the nation s governmental structure. As we have previously noted, the homeland security area must span the terms of various administrations and individuals. Establishing a Department of Homeland Security by statute will ensure legitimacy, authority, sustainability, and the appropriate accountability to Congress and the American people. The President s proposal calls for the creation of a Cabinet department with four divisions, including Chemical, Biological, Radiological, and Nuclear Countermeasures; Information Analysis and Infrastructure Protection; Border and Transportation Security; and Emergency Preparedness and Response. Table 1 shows the major components of the proposed department with associated budgetary estimates. The DHS would be responsible for coordination with other executive branch agencies involved in homeland security, including the Federal Bureau of Investigation and the Central Intelligence Agency. Additionally, the proposal to establish the DHS calls for coordination with nonfederal entities and directs the new Secretary to reach out to state and local governments and the private sector in order to: ensure that adequate and integrated planning, training, and exercises occur, and that first responders have the equipment they need; coordinate and, as appropriate, consolidate the federal government s communications systems relating to homeland security with state and local governments systems; direct and supervise federal grant programs for state and local emergency response providers; and distribute or, as appropriate, coordinate the distribution of warnings and information to state and local government personnel, agencies and authorities, and the public. Many aspects of the proposed consolidation of homeland security programs are in line with previous recommendations and show promise towards reducing fragmentation and improving coordination. For example, the new department would consolidate federal programs for state and local planning and preparedness from several agencies and place them under a single organizational umbrella. 
Based on its prior work, GAO believes that the consolidation of some homeland security functions makes sense and will, if properly organized and implemented, over time lead to more efficient, effective, and coordinated programs, better intelligence sharing, and a more robust protection of our people, borders, and critical infrastructure. However, as the Comptroller General has recently testified, implementation of the new department will be an extremely complex task, and in the short term, the magnitude of the challenges that the new department faces will clearly require substantial time and effort, and will take additional resources to make it effective. Further, some aspects of the new department, as proposed, may result in yet other concerns. As we reported on June 25, 2002, the new department would include public health assistance programs that have both basic public health and homeland security functions. These dual-purpose programs have important synergies that should be maintained but could be disrupted, as the President s proposal was not sufficiently clear on how both the homeland security and public health objectives would be accomplished. In addition, the recent proposal for establishing DHS should not be considered a substitute for, nor should it supplant, the timely issuance of a national homeland security strategy. At this time, a national homeland security strategy does not exist. Once developed, the national strategy should define and guide the roles and responsibilities of federal, state, and local entities, identify national performance goals and measures, and outline the selection and use of appropriate tools as the nation s response to the threat of terrorism unfolds. <3. Challenges Remain in Defining Appropriate Intergovernmental Roles> The new department will be a key player in the daunting challenge of defining the roles of the various actors within the intergovernmental system responsible for homeland security. In areas ranging from fire protection to drinking water to port security, the new threats are prompting a reassessment and shift of longstanding roles and responsibilities. However, proposed shifts in roles and responsibilities are being considered on a piecemeal and ad hoc basis without benefit of an overarching framework and criteria to guide this process. A national strategy could provide such guidance by more systematically identifying the unique capacities and resources of each level of government and matching them to the job at hand. The proposed legislation provides for the new department to reach out to state and local governments and the private sector to coordinate and integrate planning, communications, information, and recovery efforts addressing homeland security. This is an important recognition of the critical role played by nonfederal entities in protecting the nation from terrorist attacks. State and local governments play primary roles in performing functions that will be essential to effectively addressing our new challenges. Much attention has already been paid to their role as first responders in all disasters, whether caused by terrorist attacks or natural hazards. State and local governments also have roles to play in protecting critical infrastructure and providing public health and law enforcement response capability. Achieving national preparedness and response goals hinges on the federal government s ability to form effective partnerships with nonfederal entities. 
Therefore, federal initiatives should be conceived as national, not federal in nature. Decisionmakers have to balance the national interest of prevention and preparedness with the unique needs and interests of local communities. A one-size-fits-all federal approach will not serve to leverage the assets and capabilities that reside within state and local governments and the private sector. By working collectively with state and local governments, the federal government gains the resources and expertise of the people closest to the challenge. For example, protecting infrastructure such as water and transit systems lies first and most often with nonfederal levels of government. Just as partnerships offer opportunities, they also pose risks based upon the different interests reflected by each partner. From the federal perspective, there is the concern that state and local governments may not share the same priorities for use of federal funds. This divergence of priorities can result in state and local governments simply replacing (supplanting) their own previous levels of commitment in these areas with the new federal resources. From the state and local perspective, engagement in federal programs opens them up to potential federal preemption and mandates. From the public s perspective, partnerships, if not clearly defined, risk blurring responsibility for the outcome of public programs. Our fieldwork at federal agencies and at local governments suggests a shift is potentially underway in the definition of roles and responsibilities between federal, state, and local governments, with far-reaching consequences for homeland security and accountability to the public. The challenges posed by the new threats are prompting officials at all levels of government to rethink longstanding divisions of responsibilities for such areas as fire services, local infrastructure protection, and airport security. The proposals on the table recognize that the unique scale and complexity of these threats call for a response that taps the resources and capacities of all levels of government as well as the private sector. In many areas, the proposals would impose a stronger federal presence in the form of new national standards or assistance. For instance, the Congress is debating proposals to mandate new vulnerability assessments and protective measures on local communities for drinking water facilities. Similarly, new federal rules have mandated local airport authorities to provide new levels of protection for security around airport perimeters. The block grant proposal for first responders would mark a dramatic upturn in the magnitude and role of the federal government in providing assistance and standards for fire service training and equipment. Although promising greater levels of protection than before, these shifts in roles and responsibilities have been developed on an ad hoc, piecemeal basis without the benefit of common criteria. An ad hoc process may not capture the real potential each actor in our system offers. Moreover, a piecemeal redefinition of roles risks the further fragmentation of the responsibility for homeland security within local communities, blurring lines of responsibility and accountability for results. While federal, state, and local governments all have roles to play, care must be taken to clarify who is responsible for what so that the public knows whom to contact to address their problems and concerns. 
The development of a national strategy provides a window of opportunity to more systematically identify the unique resources and capacities of each level of government and better match these capabilities to the particular tasks at hand. If developed in a partnerial fashion, such a strategy can also promote the participation, input and buy in of state and local partners whose cooperation is essential for success. Governments at the local level are also moving to rethink roles and responsibilities to address the unique scale and scope of the contemporary threats from terrorism. Numerous local general-purpose governments and special districts co-exist within metropolitan regions and rural areas alike. Many regions are starting to assess how to restructure relationships among contiguous local entities to take advantage of economies of scale, promote resource sharing, and improve coordination of preparedness and response on a regional basis. For example, mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in preparing for and responding to emergencies and disasters. Because individual jurisdictions may not have all the resources they need to acquire equipment and respond to all types of emergencies and disasters, these agreements allow for resources to be regionally distributed and quickly deployed. The terms of mutual aid agreements vary for different services and different localities. These agreements provide opportunities for state and local governments to share services, personnel, supplies, and equipment. We have found in our fieldwork that mutual aid agreements can be both formal and informal and provide for cooperative planning, training, and exercises in preparation for emergencies and disasters. Additionally, some of these agreements involve private companies and local military bases, as well as local entities. <4. Performance Goals and Measures Needed in Homeland Security Programs> The proposed Department, in fulfilling its broad mandate, has the challenge of developing a performance focus. The nation does not have a baseline set of performance goals and measures upon which to assess and improve preparedness. The capability of state and local governments to respond to catastrophic terrorist attacks remains uncertain. The president s fiscal year 2003 budget proposal acknowledged that our capabilities for responding to a terrorist attack vary widely across the country. The proposal also noted that even the best prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face. Given the need for a highly integrated approach to the homeland security challenge, performance measures may best be developed in a collaborative way involving all levels of government and the private sector. Proposed measures have been developed for state and local emergency management programs by a consortium of emergency managers from all levels of government and have been pilot tested in North Carolina and North Dakota. Testing at the local level is planned for fiscal year 2002 through the Emergency Management Accreditation Program (EMAP). EMAP is administered by the National Emergency Management Association an association of directors of state emergency management departments and funded by FEMA. 
Its purpose is to establish minimum acceptable performance criteria, by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. For example, one such standard is the requirement (1) that the program must develop the capability to direct, control, and coordinate response and recovery operations, (2) that an incident management system must be utilized, and (3) that organizational roles and responsibilities shall be identified in the emergency operational plans. In recent meetings, FEMA officials have said that EMAP is a step in the right direction towards establishing much needed national standards for preparedness. FEMA officials have suggested they plan on using EMAP as a building block for a set of much more stringent, quantifiable standards. Standards are being developed in other areas associated with homeland security. For example, the Coast Guard is developing performance standards as part of its port security assessment process. The Coast Guard is planning to assess the security condition of 55 U.S. ports over a 3-year period, and will evaluate the security of these ports against a series of performance criteria dealing with different aspects of port security. According to the Coast Guard s Acting Director of Port Security, it also plans to have port authority or terminal operators develop security plans based on these performance standards. Communications is an example of an area for which standards have not yet been developed, but various emergency managers and other first responders have continuously highlighted that standards are needed. State and local governments often report there are deficiencies in their communications capabilities, including the lack of interoperable systems. Additionally, FEMA s Director has stressed the importance of improving communications nationwide. The establishment of national measures for preparedness will not only go a long way towards assisting state and local entities determine successes and areas where improvement is needed, but could also be used as goals and performance measures as a basis for assessing the effectiveness of federal programs. At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures; (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments; (3) ensuring that goals at lower organizational levels align with and support general goals; and (4) identifying the resources that will be required to achieve the goals. However, FEMA has had difficulty in assessing program performance. As the president s fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. 
The agency performs less well in its oversight role of ensuring the effective use of such assistance. Further, the agency has not been effective in linking resources to performance information. FEMA s Office of Inspector General has found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs. In the area of bioterrorism, the Centers for Disease Control and Prevention (CDC) within the Department of Health and Human Services is requiring state and local entities to meet certain performance criteria in order to qualify for grant funding. The CDC has made available 20 percent of the fiscal year 2002 funds for the cooperative agreement program to upgrade state and local public health jurisdictions preparedness for and response to bioterrorism and other public health threats and emergencies. However, the remaining 80 percent of the available funds is contingent on receipt, review, and approval of a work plan that must contain 14 specific critical benchmarks. These include the preparation of a timeline for assessment of emergency preparedness and response capabilities related to bioterrorism, the development of a state-wide plan for responding to incidents of bioterrorism, and the development of a system to receive and evaluate urgent disease reports from all parts of their state and local public health jurisdictions on a 24-hour-per-day, 7-day-per-week basis. Performance goals and measures should be used to guide the nation s homeland security efforts. For the nation s homeland security programs, however, outcomes of where the nation should be in terms of domestic preparedness have yet to be defined. The national homeland security strategy, when developed, should contain such goals and measures and provide a framework for assessing program results. Given the recent and proposed increases in homeland security funding as well as the need for real and meaningful improvements in preparedness, establishing clear goals and performance measures is critical to ensuring both a successful and fiscally responsible effort. <5. Appropriate Tools Need to Be Selected For Providing Assistance> The choice and design of the policy tools the federal government uses to engage and involve other levels of government and the private sector in enhancing homeland security will have important consequences for performance and accountability. Governments have a variety of policy tools, including grants, regulations, tax incentives, and information-sharing mechanisms, to motivate or mandate other levels of government or the private sector to address security concerns. The choice of policy tools will affect sustainability of efforts, accountability and flexibility, and targeting of resources. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. <5.1. Grants> The federal government often uses grants to state and local governments as a means of delivering federal assistance. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad, national purpose and to provide a great deal of discretion to state and local officials. 
In designing grants, it is important to (1) target the funds to states and localities with the greatest need based on highest risk and lowest capacity to meet these needs from their own resource base, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as supplantation, with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. At their best, grants can stimulate state and local governments to enhance their preparedness to address the unique threats posed by terrorism. Ideally, grants should stimulate higher levels of preparedness and avoid simply subsidizing local functions that are traditionally state or local responsibilities. One approach used in other areas is the seed money model, in which federal grants stimulate initial state and local activity with the intent of transferring responsibility for sustaining support over time to state and local governments. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president s fiscal year 2003 budget, have included some of these provisions. This grant would be used by state and local governments to purchase equipment, train personnel, exercise, and develop or enhance response plans. FEMA officials have told us that the agency is still in the early stages of grant design and is in the process of holding various meetings and conferences to gain input from a wide range of stakeholders, including state and local emergency management directors, local law enforcement responders, fire responders, health officials, and FEMA staff. Once the details of the grant have been finalized, it will be useful to examine the design to assess how well the grant will target funds, discourage supplantation, and provide the appropriate balance between accountability and flexibility, and to determine whether it provides temporary seed money or represents a long-term funding commitment. <5.2. Regulations> Other federal policy tools can also be designed and targeted to elicit a prompt, adequate, and sustainable response. In the area of regulatory authority, federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors. An example of an infrastructure regulation is the new federal mandate requiring that local drinking water systems in cities above a certain size provide a vulnerability assessment and a plan to remedy vulnerabilities as part of ongoing EPA reviews, while the new Transportation Security Act is representative of a national preparedness regulation, as it grants the Department of Transportation authority to order deployment of local law enforcement personnel in order to provide perimeter access security at the nation s airports. In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Several models of shared regulatory authority offer a range of approaches that could be used in designing standards for preparedness. 
Examples of these models range from preemption through fixed federal standards to state and local adoption of voluntary standards formulated by quasi-official or nongovernmental entities. <5.3. Tax Incentives> As the Administration noted, protecting America s infrastructure is a shared responsibility of federal, state, and local government, in active partnership with the private sector, which owns approximately 85 percent of our nation s critical infrastructure. To the extent that private entities will be called upon to improve security over dangerous materials or to protect critical infrastructure, the federal government can use tax incentives to encourage or enforce their activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. <5.4. Information Sharing> Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, and be given the information needed to do so. As the U.S. Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation s overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S. 1615; H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information-sharing needs. For instance, the proposed legislation provides that the Attorney General expeditiously grant security clearances to Governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional task forces. <6. Conclusion> The proposal to establish a new Department of Homeland Security represents an important recognition by the Administration and the Congress that much still needs to be done to improve and enhance the security of the American people. The DHS will clearly have a central role in the success of efforts to strengthen homeland security, but it is a role that will be made stronger within the context of a larger, more comprehensive and integrated national homeland security strategy. Moreover, given the unpredictable characteristics of terrorist threats, it is essential that the strategy be formulated at a national rather than federal level with specific attention given to the important and distinct roles of state and local governments. Accordingly, decisionmakers will have to balance the federal approach to promoting homeland security with the unique needs, capabilities, and interests of state and local governments. Such an approach offers the best promise for sustaining the level of commitment needed to address the serious threats posed by terrorism. This completes my prepared statement. I would be pleased to respond to any questions you or other members of the Subcommittee may have. <7. 
Contacts and Acknowledgments> For further information about this testimony, please contact me at (202) 512-2834 or Paul Posner at (202) 512-9573. Other key contributors to this testimony include Matthew Ebert, Thomas James, Kristen Massey, David Laverny-Rafter, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. Related GAO Products <8. Homeland Security> Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002. Homeland Security: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Homeland Security: Progress Made, More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. <9. Combating Terrorism> Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President s Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. 
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO-NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO-NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. <10. Public Health> Homeland Security: New Department Could Improve Coordination but may Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Bioterrorism: The Centers for Disease Control and Prevention s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. <11. Disaster Assistance> Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001.
What GAO Found The challenges posed by homeland security exceed the capacity and authority of any one level of government. Protecting the nation against these threats calls for a truly integrated approach, bringing together the resources of all levels of government. The proposed Department of Homeland Security will have a central role in efforts to enhance homeland security. The proposed consolidation of homeland security programs has the potential to reduce fragmentation, improve coordination, and clarify roles and responsibilities. However, formation of a department should not be considered a replacement for the timely issuance of a national homeland security strategy to guide implementation of the complex mission of the department. Appropriate roles and responsibilities within and between the government and private sector need to be clarified. New threats are prompting a reassessment and shifting of long-standing roles and responsibilities, but these shifts are being considered on a piecemeal and ad hoc basis without benefit of an overarching framework and criteria. A national strategy could provide guidance by more systematically identifying the unique capacities and resources at each level of government to enhance homeland security and by providing increased accountability within the intergovernmental system. The nation does not yet have performance goals and measures upon which to assess and improve preparedness and develop common criteria that can demonstrate success; promote accountability; and determine areas where resources are needed, such as improving communications and equipment interoperability. A careful choice of the most appropriate tools is critical to achieve and sustain national goals. The choice and design of policy tools, such as grants, regulations, and tax incentives, will enable all levels of government to target areas of highest risk and greatest need, promote shared responsibilities, and track and assess progress toward achieving preparedness goals.
<1. Background> Express Mail, the Service s premium service, was first offered in 1970 and is designed to provide overnight delivery for documents and packages weighing up to 70 pounds, which are to be tracked from the points of acceptance to points of delivery. It is the Service s only guaranteed delivery service, and customers may request and receive a postage refund if an Express Mail package is not delivered on time. As of July 1996, the minimum postage for mailing an Express Mail package was $10.75. Overall, Express Mail represents a relatively small portion of the Service s total mail volume and revenue. For fiscal year 1995, the Service reported Express Mail volume of 56 million pieces, which generated revenue of about $711 million, or about 1 percent of the Service s total mail volume and postage revenue that year. The Postal Service began offering EMCAs in 1984 to make Express Mail more attractive to customers by giving them a more convenient way to pay postage. Around that time, the Postal Service took other steps as well to retain Express Mail customers. For example, the Postal Service s 1986 annual report to Congress shows that after Express Mail volume dropped by 8.7 percent between fiscal years 1985 and 1986, it . . . moved aggressively to stop the decline and to make Express Mail service more competitive. According to the 1986 report, the Postal Service implemented an Express Mail morning-delivery program in 30 cities, placed 10,000 Express Mail collection boxes on the streets, and introduced a new Express Mail letter envelope in 1986. During fiscal year 1995, customers used EMCAs to pay about $139 million in postage on about 8 million Express Mail packages, or 13 percent and 16 percent of the Service s total Express Mail volume and revenue, respectively. About 90 percent of all EMCA transactions were for domestic Express Mail, and the balance for international Express Mail. In addition to EMCAs, Express Mail customers can pay postage with cash, checks, and postage meters. Recently, the Postal Service has begun making debit and credit cards increasingly available for use by Express Mail customers and other postal customers. The Service s Vice President for Marketing Systems, under the Senior Vice President for Marketing, has overall responsibility for Express Mail procedures and management oversight. Employees at post offices and mail-processing plants where Express Mail is accepted from customers and prepared for delivery are responsible for implementing the Service s EMCA policy and procedures. <1.1. Related GAO Reports> In recent years, the House Subcommittee on the Postal Service, the U.S. Postal Service, and we have received allegations of fraudulent schemes to evade payment of postage. In addition, we have reported serious weaknesses in some of the Service s revenue systems. In 1993, we reported weak controls over postage meters after allegations of postage meter fraud and a statement by the Postmaster General, which related that revenue losses could total $100 million annually. More recently, we reported a lack of adequate procedures for accepting bulk mail, for which the Service recorded revenue of about $23 billion in 1994. In response to allegations and our reports, the Service took numerous actions to improve its systems of controls over postage meters and bulk mail acceptance. Since that time, we received the allegation that mailers were abusing EMCAs. <2. 
Objectives, Scope, and Methodology> Our objectives were to determine (1) whether there is any basis for an allegation regarding EMCA abuse and (2), if so, what steps the Service is taking and could take to help avoid or minimize EMCA revenue losses. To review alleged EMCA abuse, we interviewed various Service officials at headquarters offices in Washington, D.C., and reviewed Servicewide EMCA policies, procedures, and internal controls for opening EMCAs, verifying EMCA numbers presented by customers, closing EMCAs with negative balances, and recording all required Express Mail data when packages are accepted. To ascertain whether procedures and controls were adequate to protect EMCA revenue and were being followed, we reviewed pertinent Postal Service policies, procedures, and forms for EMCA operations and discussed Express Mail and EMCA practices with Service officials in three customer service districts (Dallas, TX; New York, NY; and Van Nuys, CA). We selected the New York and Van Nuys districts because they were among those having the largest number of EMCA transactions. We selected the Dallas district to provide broader geographic coverage of the Service s EMCA activities. To help determine if use of EMCAs had resulted in revenue losses, we reviewed, but did not verify, various management reports relating to EMCA activities generated from the Service s Electronic Marketing and Reporting System (EMRS). These reports provided data on (1) invalid EMCAs accepted by the Service, (2) EMCAs with negative fund balances, and (3) Express Mail packages delivered by the Service with no acceptance data recorded. For the three selected districts, we gathered data on the dollar amounts of the EMCA negative balances that existed for at least five consecutive accounting periods. We scanned some Express Mail labels in all three districts to determine if the Postal Service accepted Express Mail packages from EMCA customers and did not record any acceptance data. We reviewed data provided by the Service s collection agency on the amount of EMCA-related postage lost due to invalid EMCAs. We reviewed relevant portions of all 19 Postal Inspection Service reports that addressed EMCA activities in various districts, including two of the three selected districts. To help determine what recent actions, if any, the Service had taken or planned to take relating to EMCAs, we interviewed various headquarters officials responsible for EMCA procedures and controls and for providing employees with equipment that could help to strengthen EMCA-related controls. We also discussed EMCA procedures with officials at the Service s area offices in Dallas, TX and Memphis, TN. At the Memphis office, we inquired about a recently developed EMCA self-audit guide, which was to be used by all districts. To determine what actions the Service might take to reduce EMCA losses, we interviewed various headquarters officials and reviewed various Service reports showing the purpose to be achieved with EMCAs, Express Mail volumes, and related data after the Service introduced EMCAs. We also interviewed account representatives for two of the Service s principal competitors for overnight delivery Federal Express and United Parcel Service. We determined if these competitors offered corporate accounts to customers and, if so, what they required for opening an account. The Postal Service provided written comments on a draft of this report. The Service s comments are summarized and evaluated beginning on page 17 and included in appendix II. 
We did our work from November 1995 through April 1996 in accordance with generally accepted government auditing standards. <3. Revenue Losses From Corporate Accounts Have Grown> EMCA procedures have not adequately protected the Service against postage revenue losses, and EMCA customers have sometimes obtained Express Mail services without valid EMCAs. Postal Service reports showed that the EMCAs were invalid because the EMCA numbers used by customers did not match any of the Service s valid numbers. Also, although EMCAs are to always contain sufficient funds to cover Express Mail postage, EMCA customers sometimes overdrew their accounts and accumulated large negative account balances. The Service lost increasing sums of Express Mail revenue in the past 3 years because of weak internal controls over EMCAs. Nationwide, the Service referred about $966,000 in delinquent EMCAs to its collection agency in fiscal year 1995. Of that amount, the Service recovered about $165,000 (17 percent), and the balance of $801,000 was written off as uncollectible, almost twice (90 percent increase) the amount written off in 1993, as figure 1 shows. <3.1. Invalid Corporate Account Numbers Sometimes Used> Postal Service reports show that its employees accepted and delivered some Express Mail packages with invalid EMCA numbers. After delivering the packages, the Service determined that EMCA numbers provided by customers did not match any of the valid EMCA numbers in the Service s automated system. The Service lost revenue and incurred administrative cost to follow up on these customers because it had not determined that their EMCA numbers were invalid before accepting and delivering Express Mail packages. To help employees detect invalid EMCA numbers before accepting Express Mail, the Service includes, as part of a Fraud Alert in a biweekly Postal Bulletin distributed within the Service, a list of EMCA numbers that it has determined to be invalid after some prior EMCA action (e.g., it had previously closed the account). Employees are instructed to not accept Express Mail packages bearing any of the invalid numbers. When the packages are accepted at a post office or a mail-processing plant, employees are to check EMCA numbers manually against the biweekly list of invalid numbers. Various Service officials told us that employees accepting Express Mail with EMCA payment do not always use the bulletins to check for invalid EMCAs. Employees at mail-processing plants are expected to move huge volumes of mail in a few hours, and Postal Service officials said that, due to time pressures, most of the EMCA problems occur as a result of improper acceptance of Express Mail at processing plants. A manual process of checking for invalid EMCAs can take a considerable amount of time because of the large quantity of invalid numbers to be scanned for each EMCA package (e.g., the Postal Bulletin dated June 20, 1996, contained about 2,900 invalid 6-digit EMCA numbers listed in numeric order). Employees accepting Express Mail packages at post offices and mail-processing plants have access to and are to use only the list of invalid EMCA numbers to verify that customers are presenting valid EMCA numbers. Therefore, if a customer made up a number, it likely would not be on the Service s list of invalid EMCAs. Postal employees at post offices and processing plants do not have automated access to valid EMCA numbers which totaled about 113,000 in February 1996. 
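The acceptance-time check described above amounts to a lookup of the customer-supplied number against the Postal Bulletin list of invalid EMCAs. The sketch below is illustrative only and uses made-up account numbers; the control GAO describes was a manual scan of a printed list, not software. It also shows why the check alone cannot catch a number that was simply made up.

```python
# Illustrative sketch of checking a customer-supplied EMCA number against the
# Postal Bulletin list of invalid numbers. Sample numbers are fabricated.

INVALID_EMCAS = {"104233", "117560", "123987"}  # published invalid numbers (examples)

def passes_invalid_list_check(emca_number: str) -> bool:
    """Return True if the number does NOT appear on the invalid list.

    This mirrors the weakness discussed above: a number the customer simply
    made up was never issued, so it will not appear on the invalid list, and
    the check alone cannot confirm that the account exists or holds funds.
    """
    return emca_number not in INVALID_EMCAS

print(passes_invalid_list_check("104233"))  # False -- listed as invalid; refuse the package
print(passes_invalid_list_check("999999"))  # True  -- passes, even though it may not be a real account
```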
The Postal Service incurred administrative costs to collect postage from some EMCA customers using invalid EMCA numbers after the Postal Service delivered Express Mail packages. Each of the three selected districts we visited had 4 to 13 employees responsible for domestic and international Express Mail and Priority Mail activities. Service officials said that all districts have employees with similar responsibilities. District officials told us that these employees receive reports each workday showing EMCA errors that must be investigated so postage can be collected. These administrative actions can be time consuming and costly because they entail obtaining copies of mailing labels, verifying data, and recording new data when a valid EMCA can be charged. When the EMCA number appears to be invalid, i.e., does not match the Service s records of valid EMCA numbers, the employees must further investigate each case through telephone calls or letters asking for reimbursement and requesting mailers to stop using invalid accounts. <3.2. EMCAs Sometimes Overdrawn and Collection Is Difficult> Some customers continued to use EMCAs although they had insufficient funds in their accounts to cover charges for Express Mail services that they received, a problem that the Inspection Service reported over several years. Under current Service procedures, customers must maintain a minimum EMCA balance of either the customer s estimated Express Mail postage for 1 week or $50, whichever is higher. However, employees do not have the necessary EMCA data access to verify that this requirement is met before accepting Express Mail packages. Some EMCA customers overdrew their EMCA accounts, and the Postal Service continued to accept Express Mail packages from these customers. When EMCA customers overdraw their accounts, Postal Service procedures require that employees contact individuals and businesses to collect the postage due. A letter is to be sent to the EMCA customer when the account is deficient for one postal accounting period (28 days). If the account remains deficient after 3 postal accounting periods (84 days), the Service is to close the account and refer it to a collection agency used by the Service. However, the Service has little information from EMCA applications to use in locating customers and collecting postage. Under current Service procedures, an individual or corporation is to be approved for an EMCA after completing a one-half-page application, which shows the applicant s name, address, and telephone number, and depositing the minimum money required in the account. The Service does not require the applicant to present any identification, such as a driver s license or major credit card, to receive an EMCA. Employees approving EMCA applications are not required to verify any information presented on the applications. Thus, an EMCA applicant could provide false or erroneous information on the application and, in these instances, efforts by the Postal Service and its collection agency to locate the customers and collect postage on the basis of information in the EMCA application likely would be unsuccessful. A Service report on EMCA operations for February 1996 showed that about 97,000 of the approximately 113,000 EMCAs (or 86 percent) had money on deposit with the Service totaling $18.5 million. However, for the remaining 14 percent, or about 16,000 EMCAs, there was no money on deposit; rather, the accounts were overdrawn by $4.3 million. 
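The balance rules described above can be expressed as a simple check. The sketch below is a minimal illustration, not Postal Service code: the thresholds (a minimum balance of $50 or one week's estimated postage, whichever is higher, and closure and referral to collection once an account has been deficient for three 28-day accounting periods) come from the report, while the account data and field names are fabricated.

```python
# Minimal sketch of the EMCA balance rules described above, applied to made-up accounts.

from dataclasses import dataclass

ACCOUNTING_PERIOD_DAYS = 28
REFERRAL_THRESHOLD_DAYS = 3 * ACCOUNTING_PERIOD_DAYS  # 84 days

@dataclass
class Emca:
    number: str
    balance: float                  # dollars on deposit (negative means overdrawn)
    weekly_postage_estimate: float  # customer's estimated weekly Express Mail postage
    days_deficient: int             # consecutive days below the required minimum

    def required_minimum(self) -> float:
        # Minimum balance: one week's estimated postage or $50, whichever is higher.
        return max(50.0, self.weekly_postage_estimate)

    def meets_minimum(self) -> bool:
        return self.balance >= self.required_minimum()

    def should_refer_to_collection(self) -> bool:
        # Deficient beyond three accounting periods: close and refer to collection.
        return (not self.meets_minimum()
                and self.days_deficient >= REFERRAL_THRESHOLD_DAYS)

accounts = [
    Emca("100001", balance=250.0, weekly_postage_estimate=120.0, days_deficient=0),
    Emca("100002", balance=-10000.0, weekly_postage_estimate=500.0, days_deficient=140),
]

for acct in accounts:
    print(acct.number, "refer to collection:", acct.should_refer_to_collection())
```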
According to the Service s management reports on Express Mail operations, many EMCAs had large negative balances for periods exceeding three accounting periods and were not closed or sent to the collection agency. For example, in the New York district, 16 of the 27 EMCAs we reviewed had negative balances for about 5 consecutive accounting periods (about 140 days). Of these 16 EMCAs, 10 had negative balances of more than $2,000 each, at the time of our review; and the negative balance for one account was about $10,000. Similarly, in the Van Nuys district, 10 of the 14 EMCAs we reviewed had negative balances for 5 consecutive accounting periods, and the negative balances for 8 accounts were about $3,000 each. In the Dallas district, 3 of the 12 EMCAs we reviewed had negative balances for 5 consecutive accounting periods, including 1 EMCA with a $8,800 negative balance. The Service s practice of allowing postage to remain unpaid for Express Mail services over long periods of time is inconsistent with the Service policy, which requires that Express Mail must be prepaid or paid at the time of mailing. Further, allowing customers to overdraw EMCAs and maintain active EMCAs with negative balances for periods exceeding three accounting periods violates Postal Service procedures. The Postal Inspection Service has conducted financial audits that included a review of controls over EMCA operations. Postal inspectors in the New York district reported finding overdrawn EMCAs during five audits done since 1987. Some audits revealed that the total negative EMCA balances for the district exceeded $600,000. The inspectors reported that the Van Nuys district had EMCAs with negative balances at various times since 1988. For example, in 1994, the district had EMCA accounts with negative balances totaling about $122,000. As a result of financial audits, the Inspection Service also reported EMCAs with negative balances in many districts that we did not visit. In reports on districts with negative EMCA balances, the Inspection Service recommended that local management take action to eliminate such balances. <3.3. Acceptance Data for Some Express Mail Packages Not Recorded> Along with not verifying some EMCAs, Service employees at times did not make any record of accepting Express Mail packages that the Service processed and delivered. In these instances, the necessary information was not available to respond to customer inquiries about the status of packages and process requests for postage refunds when customers claimed that packages were delivered late. Also, in cases where EMCAs were to be charged, the Service lost some revenue because of the lack of acceptance data. Employees receiving Express Mail packages, whether EMCAs are used or not, are to electronically scan a barcode on the mailing labels to record data for tracking and reporting purposes. When the packages include EMCA numbers, employees are to record those numbers so that the Service can charge postage to the EMCA. Postal Service reports showed that, for the 12-month period ending February 1996, it delivered about 1.9 million domestic Express Mail packages, or 3.4 percent of total domestic Express Mail volume, for which the Service did not record any required acceptance data. Service officials in the three districts we visited said that recording Express Mail acceptance can be a problem when customers drop packages in collection boxes and employees are expected to record acceptance data when the packages arrive at a mail-processing plant. 
According to these officials, pressures to keep the mail moving and meet scheduled deadlines can result in some Express Mail being received, sorted, and delivered without proper acceptance. Service officials at headquarters and in the districts we visited routinely receive exception reports showing that Express Mail was delivered but not properly accepted. They said that generally no attempt is made to correct these errors or collect the postage due in cases where EMCAs are used. Specifically, district officials said that they were instructed by Service headquarters not to take any action in these cases. They also said that they did not have the employees needed to do follow-up, even if it were required. When the Service failed to record acceptance of Express Mail packages, it did not have data needed to respond to customers inquiries about the status of Express Mail packages. Because the packages were not logged in, the Service had no record to show when packages were received. The Service needs such data to verify whether Express Mail customers claims for postage refunds on late deliveries are valid. The Service guarantees that Express Mail packages will be delivered on time. In fiscal year 1995, the Service refunded postage to Express Mail customers totaling about $1.5 million. We did not determine if it had adequate data for determining whether the refund claims were valid. However, if the Service lacks data on when a package was accepted for delivery, it cannot determine whether the package was delivered on time or whether it was delivered late. Further, the Service regularly reports on-time delivery rates for Express Mail on the basis of the data that are to be recorded when packages are accepted and delivered. When acceptance data are not recorded, the Service has incomplete data to report on-time delivery rates for Express Mail. The Service lost unknown amounts of revenue because some customers had included EMCA numbers on Express Mail packages, but Service employees did not record any acceptance data. We scanned some Express Mail labels in the three selected districts and noted that all three had received some Express Mail packages from EMCA customers without recording acceptance data. In all three districts, the practice was to not follow up when customers used EMCAs; therefore, no Express Mail acceptance data were recorded. Postal Service officials in the three districts and at headquarters did not know the extent of EMCA revenue losses associated with the failure to record Express Mail acceptance data. <4. Actions That Could Help Reduce EMCA Revenue Losses> We identified two Postal Service actions under way that could help to improve EMCA controls and thereby reduce related revenue losses and provide needed EMCA data. However, these actions were not fully implemented at the time of our review, and the actions do not address some EMCA control weaknesses that we identified. <4.1. Some Actions Were Being Taken> Recognizing the Postal Service s overall vulnerability to revenue losses, in 1994, the Senior Vice President for Finance established a new revenue assurance unit to help collect revenue owed to the Service. The new unit targeted EMCAs as one of five Postal Service operations for improvement. The unit developed strategies, such as self-audits of EMCA activities, to reduce revenue losses resulting from EMCAs. 
At the time of our review, the strategies had not yet been fully implemented; and no results from the self-audits, or the unit s other EMCA-related efforts, were available for our review. In addition to the above action, the Postal Service was installing point-of-service terminals at post offices to provide employees with improved access to current postage rates and certain other automated data maintained by the Postal Service. According to the headquarters manager responsible for the point-of-service terminal project, eventually, the terminals are to provide access to the EMCA database and thus enable employees to verify EMCA numbers and fund balances before accepting Express Mail packages. He said that the date and additional cost to provide this access are yet to be determined. The Service did not plan to provide the terminals to employees in mail-processing plants who accept Express Mail packages. These employees will still lack access to valid EMCA numbers and current fund balances, and the Service will continue to be vulnerable to revenue losses when customers drop Express Mail packages in collection boxes and include invalid EMCA numbers on the packages. <4.2. Planned Actions Do Not Fully Address EMCA Control Weaknesses> Although completion of Service actions discussed above should help to improve controls over EMCAs and reduce related revenue losses, control weaknesses will remain. Taking additional steps to better ensure compliance with existing controls, as well as adding controls, can help to protect revenue. But, the Postal Service will incur cost to strengthen internal controls over EMCAs. Given this and other factors, such as changes that have occurred in the overnight mail delivery market and new methods of providing customer convenience, a reasonable step would be for the Postal Service to first ensure that it wants to retain EMCAs before incurring substantial, additional costs to improve related controls. The Postal Service introduced EMCAs in 1984 to help stem the decline in the growth of Express Mail business and become more competitive. As we previously reported, since that time, private carriers have dominated the expedited (overnight) delivery market. We reported that Federal Express is the acknowledged leader in this market and that the Postal Service s share of the market declined from 100 percent in 1971 to 12 percent in 1990. Recognizing these market realities, in recent years, the Postal Service has focused marketing efforts more on Priority Mail which generally is to be delivered in 2 or 3 days than overnight Express Mail. Priority Mail accounted for almost 6 percent of total revenue in fiscal year 1995, compared with just over 1 percent for Express Mail. Unlike Express Mail, the Postal Service does not offer a corporate account for Priority Mail, and the annual growth rate of Priority Mail pieces outpaced Express Mail growth over each of the past 5 fiscal years. (See figure 2.) Other factors also suggest that EMCAs may not be the most cost-effective method of offering payment convenience. Specifically, in 1994, the Postal Service began offering customers the use of major debit or credit cards (e.g., MasterCard, Visa, or American Express) to pay for various mail services at post offices. Customers who want to drop Express Mail packages in collection boxes currently have the option of using postage meters to pay postage. 
Thus, as one step toward addressing EMCA control problems, the Postal Service could compare the relative customer convenience, administrative cost, and risk of revenue losses of EMCAs with alternative payment methods currently available to Express Mail customers. The Postal Service could also consider competitors current customer service practices. On the basis of our limited inquiry, we found that some of the Postal Service s competitors (i.e., Federal Express and United Parcel Service) offer corporate accounts to customers. For example, Federal Express offers customers a FedEx account and requires that applicants have a major credit card to qualify for an account. If the Postal Service determines that EMCAs are necessary or desirable, we identified two additional steps, beyond those now planned and under way, to help minimize the risk of EMCA abuse and revenue losses, as discussed below. First, while the self-audits proposed by the revenue assurance unit could help to improve compliance, the audits were just getting started at the time of our review. Express Mail packages can be accepted at about 40,000 post offices and several hundred mail-processing plants, and self-audits covering all of these entities will take some time to complete. Postal Service headquarters officials responsible for Express Mail operations could reinforce the need for managers and employees to comply with existing internal procedures and controls designed to prevent EMCA abuse. These procedures require employees to (1) record all required data from Express Mail labels, (2) verify EMCA numbers presented by customers against lists of invalid EMCA numbers, and (3) close EMCAs with negative balances running for more than three postal accounting periods. Second, the Postal Service could improve EMCA internal controls by imposing more stringent requirements for opening EMCAs, such as requiring that individuals present a valid driver s license, a valid major credit card, or other appropriate identification to receive an EMCA. If Postal Service employees approving EMCAs are required to record information from such sources about EMCA applicants, such information could be useful to the Service and its collection agency to locate and collect postage from customers with overdrawn and closed EMCAs. <5. Conclusions> Internal controls over EMCAs are weak or nonexistent, which has resulted in potential for abuse and increasing revenue losses over the past 3 fiscal years. Establishing adequate control over EMCA operations will require management attention and additional dollar investments. In light of the control problems we identified, overnight mail market developments since 1984, and the increased availability of other payment methods, EMCAs may not be the most cost-effective way of providing a convenient method for paying Express Mail postage. This question requires further evaluation by the Postal Service of all the relevant factors. If EMCAs are necessary or desirable, the Postal Service can take steps beyond those planned and under way to help minimize revenue losses and other problems associated with EMCAs. Some employees did not always comply with existing EMCA procedures for checking EMCA numbers and recording Express Mail data. Although acceptance employees are under pressure to move the mail and some have sidestepped some required tasks, management could emphasize to these employees the importance of following EMCA procedures and collecting the postage due when the Postal Service delivers mail.
Further, the Postal Service violated its procedures by allowing customers to overdraw EMCAs and continue using them for up to 5 months. Currently, few requirements exist for customers to obtain EMCAs; and more stringent requirements for opening EMCAs, similar to those used by the Service s competitors, might also help to avoid Express Mail revenue losses. <6. Recommendations> To help reduce EMCA revenue losses and other related problems discussed in this report, we recommend that the Postmaster General require Service executives to determine if EMCAs are the most cost-effective method for achieving the purpose for which they were intended, in light of all relevant factors. If EMCAs are determined to be a necessary or desirable method, we recommend that the Service (1) establish stronger requirements for opening EMCAs and (2) hold managers and employees accountable for handling EMCA transactions in accordance with the new requirements as well as existing Service policies and procedures for verifying EMCA numbers, closing EMCAs with negative balances, and recording required data for all Express Mail packages accepted. <7. Postal Service Comments and Our Evaluation> In a September 9, 1996, letter, the Postmaster General said that the Postal Service agreed with our overall findings and conclusions. He said that the Service was moving forward with initiatives to cut down on revenue losses from invalid EMCAs. In addition to the two actions discussed previously in our report, he said that the Service will take the following actions to address our recommendations: Establish more stringent requirements for opening and using EMCAs. These requirements will include a $250 deposit (in lieu of a $100 deposit now required) to open an account and weekly reviews at acceptance units of EMCA use to ensure that minimum balance requirements are met. Area and district managers will focus more consistent attention on ensuring that acceptance units follow EMCA procedures. Examine the feasibility and cost of installing terminals at mail-processing plants in addition to the terminals being installed at many post offices to check instantly whether EMCAs are valid and contain sufficient funds for Express Mail postage. Evaluate whether continuing to offer EMCAs as a payment option still makes good business sense. The Service expects these corrective actions to go a long way toward minimizing the use of invalid EMCAs and revenue losses. We agree that, when the Service has fully implemented the actions taken and planned, controls over EMCA are likely to be significantly improved. The Service will need to coordinate these EMCA improvement actions with its evaluation to determine whether to continue offering EMCAs. Otherwise, it could incur unnecessary cost of improving controls over EMCAs if later it determines that EMCAs do not make good business sense and should be discontinued. We are sending copies of this report to the Postmaster General, the Postal Service Board of Governors, the Ranking Minority Member of your Subcommittee, the Chairman and Ranking Minority Member of the Senate Oversight Committee for the Postal Service, and other congressional committees that have responsibilities for Postal Service issues. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix III. If you have any questions about this report, please call me on (202) 512-8387. 
Description of the Postal Service s Express Mail Corporate Account Procedures EMCAs are available to both individual and business customers. Under current Service procedures, anyone can open an EMCA by depositing $100 or the customer s estimated 2-weeks Express Mail postage, whichever is higher. EMCA customers are required to maintain a minimum balance of $50 or 1-week Express Mail postage, whichever is higher, on deposit with the Service. Although Service officials said that the number of active EMCA varies daily, Service records show that, during the month of February 1996, an average of about 113,000 EMCAs existed nationwide. When opening EMCAs, customers are to be given a six-digit EMCA number, and these numbers are to be included on mailing labels affixed to the Express Mail packages. Customers can drop the package in a collection box designated for Express Mail or take the package to a post office, mail-processing plant, or other places where the Service accepts Express Mail. Postmasters, clerks, or other Service employees accepting Express Mail at post offices and mail-processing plants are to electronically scan a preprinted barcode on the Express Mail label, which enters the label s unique identifying number into an automated system for tracking purposes. The employees are to weigh the package, verify that the customer calculated the correct postage, and take steps as required to ensure the correct postage is collected. These steps are to be done for all Express Mail, whether an EMCA is used for payment or not. For those Express Mail packages involving an EMCA, employees accepting the package are to determine if the EMCA number on the package is invalid by manually comparing the number against a list of EMCA numbers that the Service has determined to be invalid. If the number on the package is not found on that list, employees are to manually key in the EMCA number and the postage due so that the amount can be charged to the EMCA. An EMCA is to be charged for the Express Mail package when employees record a valid EMCA number at the acceptance point and scan the Express Mail barcode. A sample Express Mail label follows, showing EMCA numbers and other data to be recorded by employees when they accept a package. <8. Sample Express Mail Label> Postal Service employees are to manually record an Express Mail Corporate Account number supplied by the customer in this block. Employees are to scan a barcode pre-printed by the Service on each Express Mail label. After acceptance is recorded, the Service is to track each Express Mail package until it reaches the delivery station near the home or business receiving the package. At these stations, Service employees are to again electronically scan the barcode on the Express Mail label before the package is delivered. The Service has an Electronic Marketing and Reporting System (EMRS) to record, track, and report on Express Mail transactions. The system is used to receive and compare the Express Mail identification numbers scanned by postal employees at the post offices or mail-processing plants and delivery stations. If the comparisons show no match between the scanned barcodes entered at the points of acceptance and delivery, exception reports are to be prepared and made available to Service officials each workday for follow-up action. 
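The daily exception reports described above amount to a comparison of two sets of scanned label numbers. The sketch below, written in Python with hypothetical label numbers (EMRS internals are not described in this report beyond the matching logic), shows the kind of mismatch the system is meant to surface.

    # Label numbers scanned at acceptance and at delivery (hypothetical values).
    acceptance_scans = {"EB100000001US", "EB100000002US", "EB100000003US"}
    delivery_scans   = {"EB100000002US", "EB100000003US", "EB100000004US"}

    # Delivered with no acceptance record: the situation that leaves the Service
    # unable to charge an EMCA or verify on-time delivery.
    delivered_without_acceptance = delivery_scans - acceptance_scans

    # Accepted but never scanned at delivery: also flagged for follow-up.
    accepted_without_delivery = acceptance_scans - delivery_scans

    for label in sorted(delivered_without_acceptance):
        print(f"Exception: {label} delivered but not recorded at acceptance")
    for label in sorted(accepted_without_delivery):
        print(f"Exception: {label} accepted but no delivery scan on record")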
EMRS also generates reports showing (1) pieces of mail charged to invalid EMCAs; (2) Express Mail packages scanned at either the acceptance point or the delivery point, but not both; and (3) EMCAs with insufficient or negative fund balances. Along with these exception reports, the system generates other reports every 4 weeks for use by Service officials and, in some cases, EMCA customers. Among these reports are those that show Express Mail volume and revenue, on-time delivery rates, and refunds of postage for late delivery. Service officials and each EMCA customer are to receive a report every 4 weeks showing the beginning EMCA balance, number of packages mailed, amount of postage charged during the preceding 4-week period, ending and minimum balances, and any additional deposit required by the customer. Comments From the U.S. Postal Service Major Contributors to This Report <9. General Government Division, Washington, D.C.> <10. Dallas Field Office> Sherrill Johnson, Core Group Manager Raimondo Occhipinti, Evaluator-in-Charge Hugh Reynolds, Evaluator The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Why GAO Did This Study Pursuant to a congressional request, GAO reviewed the U.S. Postal Service's controls over Express Mail Corporate Accounts (EMCA), focusing on: (1) whether there is any basis for the allegation of EMCA abuse; and (2) if so, what steps the Service is taking to help avoid or minimize EMCA revenue losses. What GAO Found GAO found that: (1) some mailers obtained Express Mail services using invalid EMCA in fiscal year 1995; (2) the Service did not collect the postage due or verify EMCA which were later determined to be invalid; (3) some EMCA customers overdrew their accounts and carried negative balances; (4) the Service plans to provide post office employees with automated access to valid EMCA numbers and fund balances, but has no plans to provide similar access to employees at mail-processing plants; (5) although the Service's planned actions to improve controls over EMCA operations will take a considerable amount of money and time to complete, they will not have addressed several other EMCA control weaknesses; (6) to determine whether EMCA continue to be necessary or desirable, the Service could evaluate the relative customer convenience, cost-effectiveness, and other relevant factors; and (7) if EMCA are continued, Service employees need to follow new and existing procedures designed to help prevent EMCA revenue losses.
<1. Background> For over 30 years, TCMP has been IRS primary program for gathering comprehensive and reliable taxpayer compliance data. It has been IRS only program for making statistically reliable estimates of compliance nationwide. It has also been used to identify areas where tax law needs to be changed to improve voluntary compliance and to estimate the tax gap and its components. TCMP data are also used outside IRS, including by Congress to make revenue estimates for new legislation and by the Department of Commerce s Bureau of Economic Analysis to adjust national income accounts such as the gross domestic product. The 1994 TCMP survey, which was to consist of over 150,000 income tax returns, was to be the most comprehensive TCMP effort ever undertaken. By auditing the tax returns of individuals (Form 1040), small corporations with $10 million or less in assets (Form 1120), Partnerships (Form 1065), and S corporations (Form 1120S), IRS planned to obtain comprehensive compliance data. Most sample results were to be sufficiently precise to be reliable at the national level as well as at smaller geographic areas across the country. The 1994 TCMP was designed to fulfill the information needs for several compliance areas expected to be important to IRS functions over the next decade. The more important uses were to include development of audit selection formulas, validation of IRS revised approach to categorizing returns for audit, and development of new approaches to researching compliance across specific geographic areas. Each of these uses is discussed in more detail below. Since 1969, IRS has used TCMP data to update its Discriminant Function (DIF) formulas, which are mathematical formulas used to select tax returns with the greatest probability of change for audit. The current formulas for individuals are based on 1988 tax returns, IRS most recent individual TCMP audits. Formulas for small corporations are based on returns that were processed in 1987. IRS does not use DIF scores for partnerships and S corporations because of the age of the underlying TCMP audits. TCMP data were also to be used to test new compliance strategies. IRS planned to change the way it categorized returns for audit by adopting the market segment approach. Market segments represent groups of taxpayers with similar characteristics, such as those in manufacturing. IRS assumes that because these taxpayers have similar external characteristics, their tax compliance behavior will exhibit similar attributes. Finally, the 1994 TCMP was to provide compliance research data. IRS recently reorganized its compliance research function, establishing a National Office of Research and Analysis (NORA) and 31 District Office of Research and Analysis (DORA) sites. The 1994 TCMP was to be large enough to provide reliable compliance data for field and National Research Offices. IRS researchers planned to use TCMP data to identify national and geographically specific areas of noncompliance and, by focusing on key compliance issues, develop programs to improve voluntary compliance. It is through these research efforts that IRS planned to improve overall voluntary compliance. Noncompliance represents a major source of lost revenue for the nation. IRS most recent tax-gap estimates indicate that over $127 billion was lost to noncompliance in 1992. 
In an attempt to reduce this lost revenue, IRS established an objective of collecting at least 90 percent of the taxes owed through voluntary compliance and enforcement measures by the year 2001. However, this overall compliance rate has remained at about 87 percent since 1973. The 1994 TCMP was intended to provide data from which other programs could be developed to improve this rate and increase revenue. This nation s tax system is based on individuals and businesses voluntarily paying the taxes they owe. To the extent that this system works, it improves the efficiency of tax collection. Measuring the extent to which the tax system works and identifying areas in which it does not is the job of compliance measurement. TCMP has been IRS only tool for measuring voluntary compliance and determining compliance issues. The postponed TCMP for 1994 tax returns was to establish the voluntary compliance benchmark to carry IRS into the next century. <2. Objectives, Scope, and Methodology> The objectives of this assignment were to (1) determine the possible effects on IRS compliance programs of postponing the 1994 TCMP and (2) identify some potential short- and long-term alternatives to the planned TCMP for collecting these data. To determine the possible effects of postponing the 1994 TCMP, we talked to responsible officials in IRS Research Division and the Examination Division. We obtained information on how these officials planned to use TCMP data and what will likely be affected now that TCMP has been postponed. To identify alternatives to the planned TCMP, we talked to IRS officials responsible for planning TCMP. We discussed alternative sampling methodologies with officials from IRS Statistics of Income (SOI) Branch who were responsible for preparing the original TCMP sample and asked them to determine sample sizes on the basis of revised requirements. We developed the revised requirements on the basis of our discussions with IRS Research Division staff as well as officials outside IRS, including congressional staff. Some of the observations in this report are based on the work we have done over the years on IRS compliance programs as well as our specific work on TCMP in recent years. We requested comments from you on a draft of this report. On February 23, 1996, we obtained oral comments from IRS Director of Research and the National Director of Compliance Specialization. We also obtained comments from you in a March 18, 1996, letter. These comments are discussed on page 13 of this report. We did our work in San Francisco, Dallas, and Washington, D.C., between August and December 1995 in accordance with generally accepted government auditing standards. <3. Loss of TCMP Data Could Disrupt IRS Compliance Strategies> The planned TCMP for 1994 tax returns was to establish the voluntary compliance benchmark to carry IRS into the next century. While agency officials said that postponing TCMP will help resolve budget problems, our work suggests that the loss of these or comparable data is also likely to disrupt IRS efforts to increase the total collection percentage to 90 percent by 2001. For example, without these data, IRS will have difficulty updating the formulas it uses to select returns for audit and, thus, it would be more likely that a higher percentage of the returns IRS selects for audit would not result in changes to the amount of tax owed by the taxpayer.
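The return selection formulas mentioned above work, in broad terms, by scoring every return and auditing those with the highest scores; the formulas are periodically re-estimated from TCMP audit results so that high scores keep corresponding to returns likely to change. The Python sketch below uses invented weights and line items purely for illustration, since the actual DIF formulas are not public, to show the score-and-rank idea.

    # Illustrative returns and weights only; real DIF formulas and their inputs
    # are confidential and far more detailed.
    returns = [
        {"id": "A", "wages": 42000, "business_income": 0,     "deductions": 7000},
        {"id": "B", "wages": 0,     "business_income": 88000, "deductions": 39000},
        {"id": "C", "wages": 61000, "business_income": 9000,  "deductions": 27000},
    ]

    WEIGHTS = {"wages": -0.1, "business_income": 0.4, "deductions": 0.6}

    def dif_like_score(ret: dict) -> float:
        # A weighted combination of return line items; TCMP audit results are
        # what allow the weights to be re-estimated as laws and behavior change.
        return sum(WEIGHTS[item] * ret[item] / 1000 for item in WEIGHTS)

    ranked = sorted(returns, key=dif_like_score, reverse=True)
    selected_for_audit = [ret["id"] for ret in ranked[:2]]
    print(selected_for_audit)   # the two highest-scoring returns

If the weights are never updated while tax laws and taxpayer behavior shift, more of the selected returns will turn out to need no adjustment, which is the no-change problem discussed below.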
Additionally, without such data IRS will be unlikely to have sufficient data to validate its market segment approach to audits or to be used by the DORA research functions to identify programs to improve voluntary compliance. It is not clear whether IRS will replace the data it had planned to obtain from TCMP. However, updated compliance data will be needed in the short term if IRS still plans to update the audit selection formulas and in the long term to validate and improve IRS compliance efforts. <3.1. Updated Compliance Data Were Needed to Update Return Selection Formulas> The primary system that will be disrupted by postponing TCMP is the one used by IRS to select returns for audit. Since 1969, IRS has used DIF formulas to select returns for audit. New DIF formulas are developed periodically from TCMP data and applied to all individual and small corporation income tax returns. IRS then selects returns for audit with the highest DIF scores. In 1992, over 55 percent of the audited returns of individuals were selected using the DIF score. The DIF selection system replaced programs that were largely dependent on auditor s judgment. The DIF system has not only improved the efficiency of IRS audit efforts but also the consistency and objectivity of the selection process. The use of the DIF selection process has also resulted in fewer no-change audits, which not only waste IRS resources but unnecessarily burden compliant taxpayers. According to IRS, use of the DIF scoring system reduced the no-change rate from over 46 percent in 1969 to about 15 percent in 1992. IRS officials believe the DIF process is dependent on periodically updating the formulas used to score returns. Formulas are updated so that they will more accurately identify the returns with the greatest probability for change. Until 1988, data from TCMP had been used to update formulas for individual returns every 3 years. However, the most recent TCMP was conducted on 1988 individual returns. For small corporations, partnerships, and S corporations, IRS has updated formulas much less frequently. TCMPs were conducted on corporate returns filed in 1987, and partnership and S corporation returns filed in 1982 and 1985, respectively. IRS is not certain how well the DIF scores will continue to perform if not updated. IRS officials believe that by 1998, the year IRS planned to have TCMP data available, the DIF scores may become less effective at identifying returns with the greatest potential for change. They said this decrease in effectiveness may occur because of changes in tax laws and taxpayer behavior resulting in an increased no-change rate for DIF selected returns and potentially lower revenue yields. This would mean greater burden on compliant taxpayers if more of them are selected for audit. IRS officials indicated that they plan to monitor the performance of DIF over time. <3.2. Compliance Data Needed to Validate Market Segment Approach> The 1994 TCMP was also intended to provide information on IRS new market segment approach for grouping tax returns. IRS initiated the market segment approach on the basis of work done in its Western Region, which indicated that compliance rates and audit issues were likely to be similar for taxpayers with similar characteristics, such as businesses in the same industry (e.g., manufacturing or retail sales). 
Accordingly, IRS concluded that grouping taxpayers by market segments might result in selecting returns for audit that have a higher potential for change and might allow auditors to specialize in market segments. The 1994 TCMP was designed to provide data to test this hypothesis as well as to develop DIF scores by market segment rather than by audit class, as had been done in the past. Without TCMP or some alternative to provide similar information, IRS will not have data to show whether market segments are better for return selection purposes than traditional audit classes or be able to determine the compliance rate or compliance issues of the market segments. Because of these concerns, IRS no longer plans to test a selection of returns for audit by using the market segment approach. Instead, IRS plans to continue selecting returns for audit using the DIF score within audit classes. <3.3. TCMP Was Designed to Support IRS Research Function> Finally, the 1994 TCMP was designed to provide compliance data for IRS National and District Research Offices. IRS established these offices to research taxpayer compliance at the national and local levels. These researchers were to identify programs to improve compliance not only through audits but also through larger scale nonaudit programs, such as improved guidance and assistance to taxpayers and tax-law changes. TCMP also was to be used to develop benchmark compliance data for measuring future progress and determine how effectively managers were meeting their objectives of improving compliance. Without TCMP or an alternative data source, IRS new research function would still be able to analyze noncompliance in filing returns and paying taxes. However, research on reporting compliance, the area where most of IRS compliance dollars are spent, would be very limited. Thus, researchers would have inadequate data to identify emerging trends in reporting compliance, to develop solutions, and to test the effectiveness of these solutions. As a result, IRS would likely continue its reliance on enforcement to improve compliance. However, enforcement has proven to be a costly and ineffective way to increase overall voluntary compliance. <4. Possible Short-Term Approaches to Collect Compliance Data> According to IRS officials, because of criticisms of TCMP and budget concerns, the 1994 TCMP is unlikely to be conducted. Although IRS officials told us they planned to use an alternative method to obtain TCMP data, they currently have no short-term proposal on how to obtain these data. Regardless of how IRS plans to mitigate the loss of 1994 TCMP data, it would have to start soon in order to minimize the adverse effects of not updating its compliance programs. According to IRS officials, a number of alternative sampling strategies could fill the short-term data gap created by postponing TCMP indefinitely. From these strategies, we identified several alternative samples that met three basic objectives we considered important: (1) reducing the sample size to make data collection less costly for IRS and less burdensome to taxpayers, (2) maintaining IRS ability to update the DIF scoring system, and (3) maximizing use of the work already completed to identify returns and collect data for the 1994 TCMP sample. One alternative sample would be for IRS to reduce the planned TCMP sample size and still provide some of the same data, although with less precision. This smaller sample could also be used to update the DIF score with little loss in accuracy. 
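The precision trade-off involved in a smaller sample can be illustrated with the standard sampling-error formula for an estimated rate; nothing in the sketch below is IRS-specific, and the 13-percent rate and the two reduced sample sizes are assumptions chosen only to show the pattern.

    import math

    def margin_of_error(rate: float, sample_size: int) -> float:
        # Approximate 95-percent margin of error for a rate estimated from a
        # simple random sample of audited returns.
        return 1.96 * math.sqrt(rate * (1 - rate) / sample_size)

    estimated_noncompliance_rate = 0.13   # roughly 100 percent minus the 87-percent compliance rate
    for sample_size in (150000, 75000, 37500):
        moe = margin_of_error(estimated_noncompliance_rate, sample_size)
        print(f"sample of {sample_size:>6}: estimate within about +/- {moe:.2%}")

Cutting the sample to a quarter of the roughly 150,000 returns originally planned about doubles the margin of error, which is the sense in which the smaller samples discussed below trade precision for lower cost and burden.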
On the basis of our discussions with SOI officials, it appears IRS could reduce the sample size in any one of several ways, including decreasing the level of acceptable statistical precision for individual and corporate returns; selecting a sample with only businesses (sole proprietors, corporations, partnerships, and S corporations), with reduced precision; classifying TCMP sample returns and eliminating returns that past audit experience indicates are not likely to result in an audit adjustment; and selecting a sample that includes only sole proprietor and corporation returns. Numerous other alternatives to the sampling methodology and characteristics may give slightly different sample sizes. For example, by eliminating the requirement for updating the DIF formula, the sample size for the corporation and individual business option is reduced by about 12 percent, to 28,275. However, such an approach would lessen the value of TCMP because it would limit IRS ability to update the DIF score, a primary purpose of TCMP audits. Reducing the sample size would reduce the cost of TCMP audits. IRS cost estimates for the 1994 TCMP were divided into two types, (1) staffing costs and (2) opportunity costs. Staffing costs reflect IRS cost estimates for auditors to conduct the TCMP audits. Opportunity costs reflect IRS estimates of the difference between revenue generated through the regular audit program and revenue generated by TCMP audits. According to IRS officials, TCMP audits generate less revenue because the returns are randomly selected rather than identified by using the DIF score or as part of a special project and because the returns take longer to audit. Table 1 shows how the variations in sampling methodology and characteristics change the sample size and cost estimates. Changing the sample characteristics not only reduces the size but affects the usefulness of data from the sample. Each of the changes shown in table 1 has its own set of strengths and weaknesses that relate primarily to reliability and coverage. For example, reducing the sample to businesses only and reducing the precision would provide no information on nonbusiness individuals. Also, this sample would be of little use at the DORA level because it would not provide statistically reliable estimates of compliance below the national level. This sample could, however, provide some information on market segment compliance and be used to update the DIF formula for businesses and the return types where voluntary compliance is the lowest. Also, a business-only approach could be combined with a multiyear sample where the compliance of nonbusiness individual returns is evaluated in a future year. Although we did not fully evaluate the alternatives, the table in appendix I summarizes some of the more obvious trade-offs inherent in the alternatives discussed above. Deciding how to change the sampling strategy to reduce the sample size would require careful evaluation of the tradeoffs. It seems reasonable, however, to consider that any new sample should, at a minimum, allow some updating of the DIF formulas, since this was to be the primary purpose of the original TCMP. To the extent that other purposes can also be met through one of these alternative sampling strategies, the sample would be more valuable. <5. Long-Term Compliance Measurement Considerations> Because a significant portion of IRS workload and future revenue depends on compliance programs, it is important that IRS determine how to measure compliance. 
Such measurements are an ongoing need for any tax system that depends on voluntary compliance. It is also important that any long-term solution to obtaining compliance measurement information address the issue of sustainability so that long-term consistent measurement data are available. Sustainability means that the program s costs, in terms of IRS budget and perceived burden on the taxpayer, must be clearly defensible. Additionally, to be efficient and effective, it would be necessary to design a program that provides timely data and clearly identifies the objectives and uses of these compliance data. We identified several alternatives to the traditional TCMP that would meet some of the data needs that were lost when TCMP was postponed, including (1) conducting multiyear TCMP audits on smaller samples and combining the results; (2) using operational audit data; and (3) conducting a mini TCMP to identify compliance issues, with a more focused TCMP audit on the identified issues. We discuss these three options below. <5.1. Using Multiyear TCMP Audits> The multiyear TCMP alternative envisions annual TCMP-type audits on a smaller sample of tax returns which, over the course of several years, could be combined to obtain the required statistical precision. For example, IRS could disaggregate an entity type, such as individual taxpayers, into separate market segments or audit classes and conduct the audits of each segment on a 3-year cycle. Table 2 below shows an example of how such a program might operate. One benefit of such an approach to IRS would be that after the initial 3-year period, new and current data would become available for one of the segments every year, making it easier to fine-tune the compliance system. Such an approach, however, would require considerable effort from IRS statisticians to ensure that the sample design was statistically sound. Also, it would require a long-term commitment from IRS managers to ensure that returns were audited regularly. <5.2. Using Operational Audit Data> A second option is to use data from operational audits already being done. Using data from operational audits would provide a large amount of compliance data. This option is also probably the most sustainable of the three we discuss because it would be less burdensome on compliant taxpayers and have no marginal staffing and opportunity costs. However, there are weaknesses. IRS currently has no system to track operational audit issues. While such a system is currently being developed, it is not yet operational and testing is not planned to begin until later in 1996. According to IRS officials, this database is to identify audit issues as well as provide codes to identify the causes of noncompliance. Also, IRS officials believe that a database of operational audit results could not be used for updating the DIF formulas, determining ways to improve voluntary compliance, or systematically identifying emerging audit issues because the audited returns would not be randomly selected. <5.3. Using a Mini TCMP> A third option is to periodically conduct a very small TCMP that covers all taxpayers and follow up with mini TCMP audits on specific issues identified as concerns. Using this approach, IRS may be able to reduce the sample size and focus the majority of the audits on less compliant taxpayers, thus reducing cost and taxpayer burden. This approach may also provide IRS with insight into the areas of greatest noncompliance because efforts would be more focused.
IRS officials said that this approach, however, would probably not provide sufficient data to update the DIF formulas and may be of little use at DORA sites because too few randomly selected returns would likely be examined. <6. Conclusions> A significant proportion of IRS present and future compliance programs have been predicated on the information obtained from TCMP. Benchmarking current compliance, validating the market segment approach, updating return selection formulas, researching noncompliance issues and developing programs to address them, and estimating the tax gap all depend on TCMP information. Without updated compliance data, increasing voluntary compliance, as envisioned by IRS, is less likely to occur. IRS has options to replace at least some of the data that would have been available from the 1994 TCMP audits. Auditing a smaller sample by eliminating some return types and accepting a decrease in precision is a factor in such options. While each of these alternatives has limitations, they would meet some of the data needs that were lost when TCMP was postponed. It is important for IRS to make a decision soon on how to replace TCMP data because it will take some time to implement a replacement, and IRS projects that the currently available 1988 data will be less effective by 1998. If IRS does not develop a sustainable compliance measurement program, IRS compliance programs may be disrupted as the proportion of no-change audits increases and IRS access to information on emerging compliance issues decreases. In the long term, such disruptions are likely to result in increased burdens on compliant taxpayers as more of them are selected for audit. <7. Recommendations> To provide the data necessary to help meet the objectives of IRS compliance strategies, we recommend that you identify a short-term alternative strategy to minimize the negative effects of the loss of compliance information that is likely to result because TCMP was postponed, and develop a cost-effective, long-term strategy to ensure the continued availability of reliable compliance data. <8. Agency Comments and Our Evaluation> We requested comments from you on a draft of this report. Responsible IRS officials, including the National Director, Compliance Research, and the National Director, Compliance Specialization, provided comments in a February 23, 1996, meeting. These officials agreed with our recommendations and provided some technical comments, which we have incorporated where appropriate. In a March 18, 1996, letter, you restated that agreement and indicated that over the next several months IRS would devote substantial effort to investigating all potential options for capturing reliable compliance information as an alternative to TCMP. We believe the actions that IRS proposes, if properly implemented, will be responsive to our recommendations. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency s first request for appropriations made more than 60 days after the date of this letter.
We are sending copies of this report to pertinent congressional committees with responsibilities related to IRS, the Secretary of the Treasury, and other interested parties. Copies will be made available to others upon request. The major contributors to this report are listed in appendix II. If you have any questions, please contact me on (202) 512-9044.

Trade-Offs With Alternative Sampling Strategies

Strengths: Usable to update DIF scores; provides baseline compliance for market segments; usable at the DORA level for most market segments; very precise compared with other options. Weaknesses: Large sample size requiring significant resource and cost commitment.

Strengths: Usable to update DIF scores; provides baseline data for national market segments; reduces the sample size and burden. Weaknesses: Not usable at the DORA level.

Strengths: Usable to update the DIF formula for businesses, where the most noncompliance occurs; provides baseline data for national market segments; reduces the sample size and burden. Weaknesses: Not usable to update the DIF score for individual returns; not usable at the DORA level.

Strengths: Possibly usable to update DIF formulas; would provide some national market segment information; reduces the sample size and burden. Weaknesses: Not usable at the DORA level; problems identifying no-change returns.

Strengths: Usable to update DIF formulas for selected classes of business returns; provides national market segment compliance data; reduces the burden on individual taxpayers. Weaknesses: Not usable to update the DIF score or identify compliance issues for nonbusiness individuals, partnerships, and S corporations; not usable at the DORA level.

Major Contributors to This Report <9. General Government Division, Washington, D.C.> <10. San Francisco Regional Office> Ralph T. Block, Assistant Director Louis G. Roberts, Evaluator-in-Charge The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Why GAO Did This Study GAO assessed the potential effects on the Internal Revenue Service's (IRS) compliance programs of postponing the 1994 Taxpayer Compliance Measurement Program (TCMP) survey and identified some potential short- and long-term TCMP alternatives. What GAO Found GAO found that: (1) IRS postponed the 1994 TCMP because of criticisms and budget constraints; (2) IRS does not know how it will obtain the taxpayer compliance data it needs; (3) the loss of 1994 TCMP data could increase compliant taxpayers' burden over the long term because audits may become less targeted; (4) to mitigate the data losses over the short term, IRS could employ a number of alternatives, including doing a smaller survey; (5) any alternative should reduce sample size to lessen taxpayer burden and administrative costs, maintain IRS ability to update the discriminant function scoring system, and maximize the use of already completed work; (6) a limited survey would reduce the quantity and quality of the data collected, but still provide national compliance data; (7) IRS must determine how it will measure compliance over the long term, since its workload and future revenues depend on taxpayers' voluntary compliance; (8) long-term alternatives include conducting small multiyear TCMP audits, using data from operational audits to assess compliance changes, and conducting periodic national mini-TCMP audits; (9) IRS must decide on a compliance information-gathering alternative in the near term, since any alternative will take several years to develop and implement; and (10) the alternatives will likely not gather data as comprehensive as the originally planned TCMP data.
<1. Background> VA, Education, and Labor assess education and training programs for various purposes. VA s approval process is meant to ensure that education and training programs meet VA standards for receipt of veteran education assistance benefits, while Education s and Labor s processes are primarily for awarding student aid and providing apprenticeship assistance. VA administers a number of programs designed to assist individuals in gaining access to postsecondary education or training for a specific occupation (see table 1). VA generally provides its assistance in the form of payments to veterans, service persons, reservists, and certain spouses and dependents. Benefits can be used to pursue a degree program, vocational program, apprenticeship, and on-the-job training (see fig. 1). Before an individual entitled to VA education assistance can obtain money for an education or training program, the program must be approved by an SAA, or by VA in those cases in which an SAA has not been contracted to perform the work. VA s administrative structure for the education and training assistance programs includes its national office, which oversees the four regional processing offices (RPO), and the national contract with SAAs. RPOs administer the education assistance programs and process benefits for veterans. SAAs review education and training programs to determine which programs should be approved and ensure schools and training providers are complying with VA standards. SAAs have six core duties: (1) approval of programs, (2) visits to facilities, (3) technical assistance to individuals at facilities, (4) outreach, (5) liaison with other service providers, and (6) contract management. Sixty SAAs exist in the 50 states, the District of Columbia, and Puerto Rico. Eight states have two SAAs. SAAs are usually part of a state s department of education (31 SAAs). In some states, SAAs are organizationally located in other departments such as labor (9 SAAs) or veterans services (19 SAAs). The U.S. Department of Education s approval process is to ensure that schools meet federal Education standards to participate in federal student financial aid programs. In order for students attending a school to receive Title IV financial aid, a school must be (1) licensed or otherwise legally authorized to provide postsecondary education in the state in which it is located, (2) accredited by an entity recognized for that purpose by the Secretary of Education, and (3) certified to participate in federal student aid programs by Education. As such, the state licensing agencies, accrediting agencies, and certain offices within Education are responsible for various approval activities. State licensing agencies grant legal authority to postsecondary institutions to operate in the state in which they are located. Each of the states has its own agency structure, and each state can choose its own set of standards. Accrediting agencies develop evaluation criteria and conduct peer evaluations to assess whether or not those criteria are met by postsecondary institutions. Institutions or programs that meet an agency s criteria are then accredited by that agency. As of November 2005, there were 60 recognized private accrediting agencies of regional or national scope. The U.S. 
Department of Education s Office of Postsecondary Education evaluates and recognizes accrediting agencies based on federal requirements to ensure these agencies are reliable authorities as to the quality of education or training provided by the institutions of higher education and the higher education programs they accredit. The U.S. Department of Education s Office of Federal Student Aid determines the administrative and financial capacity of schools to participate in student financial aid programs, conducts ongoing monitoring of participant schools, and ensures participant schools are accredited and licensed by the states. The purpose of the Department of Labor s approval process is to establish and promote labor standards to safeguard the welfare of apprentices. Labor establishes standards and registers programs that meet the standards. Labor directly registers and oversees programs in 23 states but has granted 27 states, the District of Columbia, and 3 territories authority to register and oversee their own programs, conducted by state apprenticeship councils (SACs). Labor reviews the activities of the SACs. SACs ensure that apprenticeship programs for their respective states comply with federal labor standards, equal opportunity protections, and any additional state standards. Figure 2 shows the agencies responsible for the approval processes for the various types of education and training programs. <2. Legislative Changes Effective in 2001 Created Additional Responsibilities for SAAs> In 2001, SAAs received additional responsibilities as a result of legislative changes. This included responsibility for actively promoting the development of apprenticeship and on-the-job training programs and conducting more outreach activities to eligible persons and veterans to increase awareness of VA education assistance. SAAs were also charged with approving tests used for licensing and certification, such as tests to become a licensed electrician. For those tests that have been approved, veterans can use VA benefits to pay for testing fees. From fiscal years 2003 to 2006, SAA funding increased from $13 million to $19 million to expand services and support the additional responsibilities. Funding is scheduled to begin to decrease in fiscal year 2008. <3. Many Education and Training Programs Approved by SAAs Have Also Been Approved by Education or Labor, and VA Has Taken Few Steps to Coordinate Approval Activities with These Agencies> Many education and training programs approved by SAAs have also been approved by Education and Labor. Sixty-nine percent of all programs approved by SAAs are offered by institutions that have also been certified by Education. Seventy-eight percent of SAA-approved programs in institutions of higher learning (e.g., colleges and universities) have been certified by Education. Also, 64 percent of SAA-approved non-college degree programs are in institutions that have been certified by Education. Although less than 2 percent of all programs approved by SAAs are apprenticeship programs, VA and SAA officials reported that many of these programs have also been approved by Labor. Similar categories of approval standards exist across agencies, but the specific standards within each category vary and the full extent of overlap is unknown. 
For example, while VA and Education's approval standards both have requirements for student achievement, the New England Association of Schools and Colleges, an accrediting agency, requires that students demonstrate competence in various areas such as writing and logical thinking, while VA does not have this requirement. Also among the student achievement standards, VA requires schools to give appropriate credit for prior learning, while Education does not have such a requirement. Table 2 shows the similar categories of standards that exist across agencies. Both VA and Education have policies related to student achievement, such as minimum satisfactory grades, but the requirements differ in level of specificity. Despite the overlap in approved programs and standards, VA and SAAs have made limited efforts to coordinate approval activities with Education and Labor. VA reported that while it has coordinated with Education and Labor on issues related to student financial aid and apprentices' skill requirements, it believes increased coordination is needed for approval activities in order to determine the extent of duplicative efforts. Most of the SAA officials we spoke with reported that they have coordinated with SACs to register apprenticeship programs in their states. Labor reported that it coordinated with VA's national office in several instances, including providing a list of registered apprenticeship programs. Education reported that it does not have formalized coordination with VA but has had some contacts to inform VA of its concerns regarding specific institutions. Information is not available to determine the amount of resources spent on SAA duties and functions, including those that may overlap with those of other agencies. VA does not require SAAs to collect information on the amount of resources they spend on specific approval activities. The SAA officials we spoke with said that their most time-consuming activity is conducting inspection and supervisory visits of schools and training facilities. However, the lack of data on resource allocation prevented us from determining what portions of funds spent by SAAs were for approval activities that may overlap with those of other agencies. <4. SAAs Reportedly Add Value to the Approval Process for Education and Training Programs, but the Lack of Outcome-Oriented Performance Measures Makes It Difficult to Assess the Significance of Their Efforts> SAA and other officials reported that SAA activities add value because they provide enhanced services to veterans and ensure program integrity. According to these officials, SAAs' added value includes a focus on student services for veterans and on VA benefits, more frequent on-site monitoring of education and training programs than Education and Labor, and assessments and approval of a small number of programs that are not reviewed by other agencies, such as programs offered by unaccredited schools, on-the-job training programs, and apprenticeship programs not approved by Labor. SAA approval activities reportedly ensure that (1) veterans are taking courses consistent with occupational goals and program requirements, (2) schools and training programs have evaluated prior learning and work experience and grant credit as appropriate, and (3) school or program officials know, through technical assistance, how to complete paperwork and comply with policies required by VA educational assistance. 
According to officials we interviewed, SAAs generally conduct more frequent on-site monitoring of education and training programs than Education or Labor, possibly preventing fraud, waste, and abuse. Some officials reported that SAAs frequent visits were beneficial because they ensure that schools properly certify veterans for benefits and that benefits are distributed accurately and quickly. States, schools, and apprenticeship officials we spoke with reported that without SAAs, the quality of education for veterans would not change. However, veterans receipt of benefits could be delayed and the time required to complete their education and training programs could increase. Despite areas of apparent added value, it is difficult to fully assess the significance of SAA efforts. VA does measure some outputs, such as the number of supervisory visits SAAs conduct, but it does not have outcome- oriented measures, such as the amount of benefit adjustments resulting from SAAs review of school certification transactions, to evaluate the overall effectiveness and progress of SAAs. (See table 3.) <5. Prior Recommendations and Agency Response> We made several recommendations to the Department of Veterans Affairs to help ensure that federal dollars are spent efficiently and effectively. We recommended that the Secretary of the Department of Veterans Affairs take steps to monitor its spending and identify whether any resources are spent on activities that duplicate the efforts of other agencies. The extent of these actions should be in proportion to the total resources of the program. Specifically: VA should require SAAs to track and report data on resources spent on approval activities such as site visits, catalog review, and outreach in a cost-efficient manner, and VA should collaborate with other agencies to identify any duplicative efforts and use the agency s administrative and regulatory authority to streamline the approval process. In addition, we recommended that the Secretary of the Department of Veterans Affairs establish outcome-oriented performance measures to assess the effectiveness of SAA efforts. VA agreed with the findings and recommendations and stated that it will (1) establish a working group with the SAAs to create a reporting system to track and report data for approval activities with a goal of implementation in fiscal year 2008, (2) initiate contact with appropriate officials at the Departments of Education and Labor to identify any duplicative efforts, and (3) establish a working group with the SAAs to develop outcome-oriented performance measures with a goal of implementation in fiscal year 2008. While VA stated that it will initiate contact with officials at Education and Labor to identify duplicative efforts, it also noted that amending its administrative and regulatory authority to streamline the approval process may be difficult due to specific approval requirements of the law. We acknowledge these challenges and continue to believe that collaboration with other federal agencies could help VA reduce duplicative efforts. We also noted that VA may wish to examine and propose legislative changes needed to further streamline its approval process. Madame Chairwoman, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have. <6. GAO Contacts> For further information regarding this testimony, please contact me at (202) 512-7215. 
Individuals making key contributions to this testimony include Heather McCallum Hahn, Andrea Sykes, Kris Nguyen, Jacqueline Harpp, Cheri Harrington, Lara Laufer, and Susannah Compton. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study In fiscal year 2006, the Department of Veterans Affairs (VA) paid $19 million to state approving agencies (SAA) to assess whether schools and training programs are of sufficient quality for veterans to receive VA education assistance benefits when attending them. The Departments of Education and Labor also assess education and training programs for various purposes. This testimony describes (1) changes that have occurred in state approving agencies' duties and functions since 1995, (2) the extent to which the SAA approval process overlaps with efforts by the Departments of Education and Labor, and (3) the additional value that SAA approval activities bring to VA education benefit programs. This testimony is based on a March 2007 report (GAO-07-384). What GAO Found Since 1995, legislative changes effective in 2001 created additional responsibilities for SAAs, including promoting the development of apprenticeship and on-the-job training programs, providing outreach services, and approving tests for occupational licensing. From fiscal years 2003 to 2006, SAA funding increased from $13 million to $19 million to expand services and support the additional responsibilities. However, funding is scheduled to decrease beginning in fiscal year 2008. Many education and training programs approved by SAAs have also been approved by Education or Labor, and VA has taken few steps to coordinate approval activities with these agencies. More than two-thirds of all programs approved by SAAs are offered by institutions that have been certified by Education. Many apprenticeship programs approved by SAAs have also been approved by Labor, although apprenticeship programs make up less than 2 percent of all programs approved by SAAs. Similar categories of approval standards, such as student achievement, exist across agencies, but the specific standards within each category vary and the full extent of the overlap is unknown. For example, VA requires schools to give appropriate credit for prior learning while Education does not have such a requirement. Despite the overlap in approved programs and standards, VA and SAAs have made limited efforts to coordinate approval activities with other federal agencies. VA does not require SAAs to collect information on the amount of resources they spend on specific approval activities; therefore, information is not available to determine the amount of resources spent on SAA duties and functions, including those that may overlap with those of other agencies. SAAs reportedly add value to the approval process for education and training programs, but the lack of outcome-oriented performance measures makes it difficult to assess the significance of their efforts. Areas of added value include (1) a focus on student services for veterans and on the integrity of VA benefits, (2) more frequent on-site monitoring of education and training programs than provided by Education or Labor, and (3) assessments and approval of a small number of programs that are not reviewed by other agencies. States, schools, and apprenticeship officials we spoke with reported that without SAAs, the quality of education for veterans would not change. However, veterans' receipt of benefits could be delayed and the time required to complete their education and training programs could increase. Despite areas of apparent added value, it is difficult to fully assess the significance of SAA efforts. 
VA measures some outputs, such as the number of supervisory visits SAAs conduct, but it does not have outcome-oriented performance measures, such as the amount of benefit adjustments resulting from SAAs' reviews, to evaluate the overall effectiveness of SAAs.
<1. Background> Since 2004, Congress has authorized over $8 billion for medical countermeasure procurement. The Project BioShield Act of 2004 authorized the appropriation of $5.6 billion from fiscal year 2004 through fiscal year 2013 for the Project BioShield Special Reserve Fund, and funds totaling this amount were appropriated. The act facilitated the creation of a government countermeasure market by authorizing the government to commit to making the Special Reserve Fund available to purchase certain medical countermeasures, including those countermeasures that may not be FDA-approved, cleared, or licensed. In 2013, PAHPRA authorized an additional $2.8 billion to be available from fiscal year 2014 through fiscal year 2018 for these activities, but funding has not yet been appropriated for these years. In addition to the Special Reserve Fund, Congress has also made funding available through annual and supplemental appropriations to respond to influenza pandemics, including developing vaccines and other drugs. <1.1. Federal Roles and Responsibilities Related to Medical Countermeasures> HHS is the primary federal department responsible for public health emergency planning and response, including medical countermeasure development, procurement, and distribution. HHS also coordinates with other federal departments, such as DHS, through PHEMCE. Within HHS, several offices and agencies have specific responsibilities for public health preparedness and response. HHS's ASPR leads PHEMCE and the federal medical and public health response to public health emergencies, including strategic planning, medical countermeasure prioritization, and support for developing, procuring, and planning for the effective use of medical countermeasures. Within ASPR, BARDA, established by the Pandemic and All-Hazards Preparedness Act of 2006, oversees and supports advanced development and procurement of some medical countermeasures into the SNS. NIH conducts and funds basic and applied research and early development needed to develop new or enhanced medical countermeasures and related medical tools for CBRN and infectious disease threats. CDC maintains the SNS, including purchasing commercially available products as necessary, and supports state and local public health departments' efforts to detect and respond to public health emergencies, including providing guidance and recommendations for the mass dispensing and use of medical countermeasures from the SNS. FDA assesses the safety and effectiveness of medical countermeasures; regulates their development; approves, clears, or licenses them; and conducts postmarket surveillance as part of its overall role to assess the safety and effectiveness of medical products. FDA also provides technical assistance to help ensure that product development meets FDA's regulatory requirements and provides technical support for the development of regulatory science tools. FDA may authorize the emergency use of medical products that have not yet been approved, cleared, or licensed or were approved, cleared, or licensed only for other uses. DHS develops material threat assessments (MTA), in coordination with HHS, to assess the threat posed by given CBRN agents or classes of agents and the potential number of human exposures in plausible, high-consequence scenarios. DHS uses the MTAs to determine which CBRN agents pose a material threat sufficient to affect national security and to provide HHS with a basis for determining needed countermeasures for those agents. 
DHS also develops terrorism risk assessments (TRA) to assess the relative risks posed by CBRN agents based on variable threats, vulnerabilities, and consequences. HHS's PHEMCE is responsible for establishing civilian medical countermeasure priorities for CBRN and emerging infectious disease threats, including influenza; coordinating federal efforts to research, develop, and procure medical countermeasures to enhance preparedness and response for public health threats; and developing policies, plans, and guidance for the use of countermeasure products in a public health emergency. PHEMCE is composed of officials from ASPR, including BARDA; CDC; FDA; NIH; and other federal departments, including the Departments of Agriculture, Defense, Homeland Security, and Veterans Affairs. HHS and PHEMCE establish federal medical countermeasure development and procurement priorities through a multistep process. This process includes assessing the threat posed by CBRN agents and the potential consequences they pose to public health; determining medical countermeasure requirements, that is, the type of countermeasure (vaccines, drugs, or medical devices such as diagnostics), the amount needed, and characteristics of the countermeasures (such as formulations, dosing, and packaging) for these agents; evaluating public health response capability; and developing and procuring countermeasures against these CBRN agents. (See fig. 1.) <1.2. PHEMCE Strategy, Implementation Plan, and Priorities> The 2012 PHEMCE Strategy lays out the four PHEMCE strategic goals and their underlying objectives for building HHS's countermeasure capabilities to respond to a public health emergency. The 2012 PHEMCE Implementation Plan updates the 2007 implementation plan and describes the activities that HHS and its interagency partners plan to conduct to achieve the four strategic goals and their associated objectives, the medical countermeasures HHS wants to develop and procure, and the capabilities HHS wants to build to support countermeasure development and procurement. The plan also includes 72 items that HHS selected as key priorities for fulfilling PHEMCE's strategic goals within the next 5 years, which the agency placed into three categories. For the purposes of this report we refer to the items in these categories as priority activities, priority threat-based approaches, and priority capabilities. The 33 priority activities reflect activities that support PHEMCE's overall mission and include pursuits such as developing systems to track countermeasure activities across all PHEMCE partners, enhancing national laboratory capabilities, and developing guidance documents and information for the public on using medical countermeasures in an emergency. (See table 1 for examples of PHEMCE priority activities by strategic goal.) In addition to the 33 priority activities, the 25 items identified as priorities for threat-based approaches are intended to directly address threats such as anthrax or smallpox. These priorities include pursuits such as publishing updated clinical guidance for anthrax countermeasures; developing and qualifying with FDA animal models to test the safety and efficacy of medical countermeasures for certain biological, radiological, and nuclear threats; and developing new plans for the distribution and dispensing of pandemic influenza antivirals. The remaining 14 items identified as priority capabilities reflect what HHS refers to as crosscutting capabilities. 
The priority capabilities are a mix of programs or technological applications that may, for example, support the development of countermeasures for a range of existing CBRN threats or for any new threats that may emerge in the future, or build infrastructure to provide countermeasure developers assistance with advanced development and manufacturing services. The priority capabilities include such pursuits as initiating a research program to fill gaps in knowledge in the area of patient decontamination in a chemical incident and establishing a network of facilities to support the filling and finishing of vaccines and other countermeasures. In addition to the 72 items HHS selected as key priorities for fulfilling PHEMCE's strategic goals, the implementation plan also identifies the medical countermeasures that constitute HHS's priorities for development and procurement to fulfill strategic goal 1, which we refer to as priority countermeasures for the purposes of this report. (See table 2.) Many of the threat-specific countermeasures for which PHEMCE set procurement priorities in 2007 continue to be priorities for development and procurement in the 2012 plan, such as anthrax vaccine, smallpox antivirals, chemical agent antidotes, and diagnostic devices for radiological and nuclear agents. The 2012 plan also includes pandemic influenza countermeasures and nonpharmaceutical countermeasures, such as ventilators, as priorities, whereas the 2007 plan focused on CBRN medical countermeasures only. <2. HHS Has Established Timelines and Milestones for Key PHEMCE Priorities but Has Not Yet Provided Previously Recommended Anticipated Spending Estimates for Priority Countermeasures> HHS has established timelines and milestones for the 72 priority activities, threat-based approaches, and capabilities identified in the 2012 PHEMCE Implementation Plan as key to fulfilling PHEMCE's strategic goals. However, while HHS has developed spending estimates for its priority medical countermeasures for internal planning purposes, it has not made these estimates publicly available, as we previously recommended in 2011. <2.1. HHS Has Established Timelines and Milestones for Key PHEMCE Priorities> HHS has established timelines and milestones for the 72 items it selected as key priorities for fulfilling PHEMCE's strategic goals. Leading practices for program management call for establishing time frames and milestones as part of a plan to ensure that organizations achieve intended results. In the implementation plan, HHS has assigned each of the 33 priority activities, the 25 priority threat-based approaches, and the 14 priority capabilities to one of three time frames for completion: near-term (fiscal years 2012 through 2014), midterm (fiscal years 2015 through 2017), and long-term (fiscal year 2018 and beyond). In addition, HHS has placed PHEMCE's priority countermeasures into these time frames. All but 2 of the 33 priority activities, and all of the priority threat-based approaches and capabilities, are slated for completion in either the near term or the midterm. HHS has also identified deliverables and milestones for some of the priority activities, threat-based approaches, and capabilities, and assigned them more specific timelines. For 21 of the 33 priority activities, 10 of the 25 priority threat-based approaches, and 8 of the 14 priority capabilities, HHS and the PHEMCE agency or office responsible for carrying out the activity have identified specific deliverables intended to complete them. 
PHEMCE partners have tied each deliverable to a specific milestone or set of milestones, which delineate the steps necessary to complete the deliverable. In addition, the deliverables and milestones may have more specific timelines, such as an actual month or year of expected completion within the broader multiyear near- or midterm time frame. Examples of deliverables, milestones, and more specific timelines for PHEMCE priorities include the following: For the priority activity that states that ASPR is to lead PHEMCE in developing or updating medical countermeasure requirements for certain CBRN threats by the end of fiscal year 2014, ASPR has identified the requirements for each specific threat such as requirements for countermeasures for mustard gas and other blister agents as the individual deliverables for this activity. The blister agents requirement deliverable has four associated milestones that reflect the various activities of a PHEMCE working group to develop the requirements and the levels of PHEMCE and HHS approval needed, culminating in the approval by the ASPR Assistant Secretary by September 2013. For the priority threat-based approach of qualifying animal models for biological threats, the deliverable is FDA qualification of the animal model, and the three milestones are the development of animal models for anthrax, plague, and tularemia in fiscal year 2015. For the priority capability of initiating funding for the development of diagnostic systems for biological and chemical threat agents, and systems to identify and characterize unknown threats, the deliverable is NIH s awarding of funds to eligible applicants; the set of milestones for this deliverable are obtaining NIH approval to publish a solicitation for proposals for development of the diagnostics, publishing the solicitation in July 2013, and making awards in fiscal year 2014. NIH also plans to award additional funds in fiscal year 2015 for the development of multiplex diagnostic platforms for multiple threats. For the priority countermeasures, HHS officials told us that the department includes specific milestones in the contracts it awards to developers; these milestones reflect the expected course for research and development, such as holding and completing clinical trials to test the efficacy of a countermeasure or submitting inventory and storage plans, and have associated completion dates. For the remaining 12 priority activities, 15 priority threat-based approaches, and 6 priority capabilities, HHS has not established specific deliverables with milestones and timelines other than the overall completion of the priority within the specified near- or midterm time frame. HHS officials told us that some activities do not have specific timelines because HHS considers them to be ongoing activities that PHEMCE conducts regularly. For example, at least every 18 months, ASPR conducts formal reviews across participating PHEMCE agencies of medical countermeasure portfolios for specific threats in order to monitor progress in developing and procuring medical countermeasures for those threats, identify remaining gaps and challenges to developing and procuring countermeasures, and develop potential solutions. For activities in the implementation plan that are slated for completion in the long term, HHS officials said that they intend to develop more specific timelines as the near- and midterm activities are completed. 
ASPR tracks the progress of participating PHEMCE partners in implementing the priority activities, threat-based approaches, and capabilities by holding monthly meetings to collect information on progress. According to HHS officials, during these monthly meetings, PHEMCE participants discuss their progress in completing deliverables, potential barriers to completion, and any options to help mitigate these barriers. ASPR officials told us they rely on the PHEMCE partner responsible for the activity to have adequate project management controls in place to determine the amount of progress that the partner agency has made. If an agency anticipates delays in or barriers to completing and meeting certain milestones, ASPR officials may assist in identifying additional support within PHEMCE partner agencies or within other federal agencies. For example, HHS officials told us that for one priority activity's deliverable, developing requirements for anthrax antitoxins, CDC and FDA officials differed in their professional opinions on guidance for clinicians to administer the drug. PHEMCE senior management worked with the agencies to develop consensus wording for the guidance document to complete that deliverable. ASPR officials told us that they enter information collected in the meetings into a spreadsheet that contains descriptions of the PHEMCE priority activities, threat-based approaches, and capabilities; their associated deliverables, milestones, and timelines; and information on current progress, barriers to completion, and mitigation options. ASPR follows up with PHEMCE partners after the meetings to obtain any additional information, if necessary. ASPR distributes the finalized spreadsheet to PHEMCE partners about 1 week in advance of the next monthly meeting for them to use as a reference for that meeting. ASPR officials told us they developed the tracking spreadsheet in response to the recommendation in our 2011 report that HHS develop a written strategy to monitor the implementation of recommendations from HHS's 2010 PHEMCE review and incorporated the PHEMCE priorities into the spreadsheet when HHS updated the implementation plan. At the completion of our review, PHEMCE was halfway through its near-term period of fiscal year 2012 through fiscal year 2014. As of September 2013 (the most recent information available): PHEMCE partners reported completing five deliverables for the 21 priority activities. For example, for the priority activity that specifies that HHS, DHS, and other federal partners are to formalize roles, responsibilities, policies, and procedures for conducting the next generation of MTAs and TRAs, HHS and DHS completed one of two deliverables by developing and cosigning a strategic implementation plan to conduct MTAs. PHEMCE partners reported completing three deliverables for the 10 priority threat-based approaches. For example, for one of the threat-based approaches, PHEMCE partners report completing the sole deliverable of developing guidance that establishes the order in which different groups of affected individuals would receive anthrax vaccination in a public health emergency. The completion of the three deliverables resulted in the completion of three priority threat-based approaches. PHEMCE partners reported completing two deliverables for the eight priority capabilities. 
For example, for one of the priority capabilities, PHEMCE partners have reported completing the sole deliverable that specifies that BARDA will initiate a research program to address knowledge gaps in chemical decontamination of exposed individuals by awarding a contract to a university to gather data and develop decontamination procedures. The completion of the two deliverables resulted in the completion of two priority capabilities. <2.2. HHS Has Not Provided Previously Recommended Spending Estimates for Priority Countermeasures> HHS has not provided publicly available spending estimates for research, development, or procurement for the countermeasures it identified as priorities in the 2012 implementation plan. We previously recommended that HHS provide more specific information on anticipated countermeasure spending when it updated its 2007 plan. Additionally, PAHPRA directs HHS to include anticipated funding allocations for each countermeasure priority in the PHEMCE strategy and implementation plan. The implementation plan contains information on the source of the funds for research, development, and procurement, such as the Special Reserve Fund. However, the plan does not include any estimates of how much of these funds HHS may spend to develop or procure specific priority countermeasures. HHS officials told us that while PHEMCE has developed spending estimates for internal planning, they are hesitant to provide these estimates to manufacturers because they do not want to create the expectation that the estimates would reflect any final contract amounts. In addition, anticipated spending estimates for future years may be unreliable because, according to HHS officials, the Special Reserve Fund will be appropriated annually after fiscal year 2014, as opposed to the fiscal year 2004 appropriation, which appropriated funds for a 10-year period. Additionally, officials stated that because HHS published the PHEMCE Implementation Plan prior to the passage of PAHPRA, the department did not include any spending estimates in the plan because it was unaware that PAHPRA would include that requirement. HHS officials said that they plan to include estimates in the next iteration of the plan, which they anticipate publishing in September 2014, based on the time frames laid out in PAHPRA. However, the nature and format of the spending estimates that would be included in the plan had not been determined. As we stated in our previous recommendation, information on anticipated spending would allow HHS s industry partners to suitably target research and development to fulfill PHEMCE s countermeasure priorities, especially in tighter budget climates. While HHS officials expressed concerns regarding sharing internal spending estimates and the short-term nature of annual appropriations, these concerns could be addressed by agency communications with manufacturers when providing the spending estimates to make clear that spending estimates may not reflect final contract amounts, which depend on enacted appropriations levels, among other factors. <3. Concluding Observations> Developing and procuring medical countermeasures is a complex process that requires engagement across the federal government and with countermeasure developers in private industry. HHS has strengthened PHEMCE planning and oversight and has made progress in developing and procuring some medical countermeasures. 
However, given its almost 10-year efforts and the continuing lack of available countermeasures to fulfill PHEMCE s many priorities, HHS would benefit from sharing information on its anticipated spending estimates with industry, to assist countermeasure developers with long-term business planning. PAHPRA s requirement for HHS to include spending estimates for each medical countermeasure priority in future PHEMCE implementation plans is consistent with our 2011 recommendation. HHS s plans to include more specific spending estimates in future plan updates could help implement both this requirement and our 2011 recommendation, provided the department makes meaningful estimates of spending for countermeasure research, development, and procurement available to industry. These estimates or ranges of estimates will provide HHS s industry partners with more transparency on anticipated returns on investment in the face of competing priorities for developing other drugs with a commercial market. We believe the value of making this information available outweighs HHS s concerns, especially those related to uncertainty over future appropriations; anticipated countermeasure spending would provide industry with the information it needs to determine whether and how to suitably target their research and development programs in tight budget climates. <4. Agency Comments> We provided a draft of this report to HHS, and its comments are reprinted in appendix II. In its comments, HHS acknowledged the effort we have taken to document HHS s tracking processes for the activities in the 2012 PHEMCE Implementation Plan. HHS commented that the 72 activities we focused on in this review which were described in the implementation plan as key to HHS s efforts in the near and midterm were a subset of 255 near- and midterm activities delineated in the implementation plan and that these 72 items were meant to be an illustrative but not comprehensive list of priorities. Further, HHS stated that it considered all 255 near- and midterm activities as priorities. HHS provided information on its efforts to track its progress on the remainder of these items that we did not discuss in the report and to establish deliverables and interim milestones for the activities slated for the midterm (fiscal years 2015 through 2017) as that period approaches. Finally, HHS provided information on its efforts to quantify its resource needs and provide more transparent anticipated spending information for its medical countermeasure development efforts while maintaining the integrity of the federal contracting process. HHS stated that it is working to find a compromise solution that will provide this transparency in light of statutory requirements and GAO s 2011 recommendation. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: HHS Spending on Advanced Research, Development, and Procurement of Medical Countermeasures The Department of Health and Human Services (HHS) spent approximately $3.6 billion in advanced research, development, and procurement of chemical, biological, radiological, and nuclear (CBRN) and pandemic influenza medical countermeasures from fiscal year 2010 through fiscal year 2013. Of this amount, HHS spent 30 percent for countermeasures against influenza, 20 percent for smallpox countermeasures, and 19 percent for anthrax countermeasures. (See fig. 2.) The spending on influenza countermeasures reflects, in part, HHS s response to the 2009 H1N1 influenza pandemic using annual and supplemental funds appropriated for that response. Of HHS s total medical countermeasure spending of $3.6 billion, from fiscal year 2010 through fiscal year 2013, HHS spent almost $2.1 billion on contracts dedicated to advanced research and development, of which HHS s Biomedical Advanced Research and Development Authority (BARDA) spent nearly $700 million (almost 34 percent) for influenza antivirals, diagnostics, and vaccines. (See table 3.) Of the remaining $1.5 billion, HHS spent nearly $403 million on contracts dedicated to the procurement of pandemic influenza antivirals and vaccines. (See table 4.) BARDA also spent almost $1.2 billion on contracts dedicated to both advanced research and development and procurement of CBRN medical countermeasures. (See table 5.) In addition to the contracts that have already been awarded, HHS issues annual announcements for additional funding opportunities in the areas of advanced research and development of CBRN medical countermeasures; advanced development of medical countermeasures for pandemic influenza; and innovative science and technology platforms for medical countermeasure development. The announcements state anticipated funding for the overall program. For example, the announcement for CBRN countermeasure advanced research and development states that anticipated funding for the overall effort not per award ranges from an estimated $2 million to an estimated $415 million, subject to congressional appropriations, and does not reflect a contractual obligation for funding. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments <5. GAO Contact> <6. Staff Acknowledgments> In addition to the contact named above, Karen Doran, Assistant Director; Shana R. Deitch; Carolyn Feis Korman; Tracey King; and Roseanne Price made significant contributions to this report. Related GAO Products National Preparedness: Efforts to Address the Medical Needs of Children in a Chemical, Biological, Radiological, or Nuclear Incident. GAO-13-438. Washington, D.C.: April 30, 2013. National Preparedness: Improvements Needed for Measuring Awardee Performance in Meeting Medical and Public Health Preparedness Goals. GAO-13-278. Washington, D.C.: March 22, 2013. High-Containment Laboratories: Assessment of the Nation s Need Is Missing. GAO-13-466R. Washington, D.C.: February 25, 2013. National Preparedness: Countermeasures for Thermal Burns. GAO-12-304R. Washington, D.C.: February 22, 2012. Chemical, Biological, Radiological, and Nuclear Risk Assessments: DHS Should Establish More Specific Guidance for Their Use. GAO-12-272. Washington, D.C.: January 25, 2012. National Preparedness: Improvements Needed for Acquiring Medical Countermeasures to Threats from Terrorism and Other Sources. GAO-12-121. Washington, D.C.: October 26, 2011. 
Influenza Pandemic: Lessons from the H1N1 Pandemic Should Be Incorporated into Future Planning. GAO-11-632. Washington, D.C.: June 27, 2011. Influenza Vaccine: Federal Investments in Alternative Technologies and Challenges to Development and Licensure. GAO-11-435. Washington, D.C.: June 27, 2011. National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011. Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011. Combating Nuclear Terrorism: Actions Needed to Better Prepare to Recover from Possible Attacks Using Radiological or Nuclear Materials. GAO-10-204. Washington, D.C.: January 29, 2010.
Why GAO Did This Study Public health emergencies--the 2001 anthrax attacks, the 2009 H1N1 influenza pandemic, and others--have raised concerns about national vulnerability to threats from chemical, biological, radiological, and nuclear agents and new infectious diseases. There are some medical countermeasures--drugs, vaccines, and medical devices such as diagnostics--available to prevent, diagnose, or mitigate the public health impact of these agents and diseases, and development continues. HHS leads federal efforts to develop and procure countermeasures through the interagency PHEMCE. The Pandemic and All-Hazards Preparedness Reauthorization Act of 2013 mandated GAO to examine HHS's and PHEMCE's planning documents for medical countermeasure development and procurement needs and priorities. This report examines the extent to which HHS developed timelines, milestones, and spending estimates for PHEMCE priorities. GAO reviewed relevant laws; analyzed HHS's 2012 PHEMCE Strategy and Implementation Plan, HHS's tools for tracking the implementation of PHEMCE activities, and data on countermeasure spending from fiscal years 2010 through 2013; and interviewed HHS officials. What GAO Found The Department of Health and Human Services (HHS) has established timelines and milestones for the 72 Public Health Emergency Medical Countermeasures Enterprise (PHEMCE) priorities--33 activities, 25 threat-based approaches, and 14 capabilities--that HHS selected as key to fulfilling PHEMCE strategic goals. However, HHS has not made spending estimates for its medical countermeasure development or procurement priorities (priority countermeasures) publicly available. In the PHEMCE implementation plan, HHS has grouped the 72 PHEMCE priorities into three time frames for completion--near-term (fiscal years 2012 through 2014), midterm (fiscal years 2015 through 2017), and long-term (fiscal year 2018 and beyond). For 21 priority activities, 10 priority threat-based approaches, and 8 priority capabilities, HHS and PHEMCE have identified specific deliverables, each tied to a milestone or set of milestones that delineate the steps necessary to complete deliverables, and established more specific timelines for completion of deliverables and milestones. For example, HHS's Office of the Assistant Secretary for Preparedness and Response (ASPR) is to lead the development of medical countermeasure requirements, which outline countermeasure quantity, type, and desired characteristics. Deliverables are the threat-specific requirements, such as for antidotes for mustard gas and other blister agents. Milestones for mustard gas antidote requirements reflect the PHEMCE activities to develop the requirements and the necessary approvals; the milestones are tied to interim timelines and culminate in approval by the ASPR Assistant Secretary by September 2013. HHS has not established specific deliverables, milestones, or timelines for the remaining 12 priority activities, 15 priority threat-based approaches, and 6 priority capabilities other than their overall completion within the specified near- or midterm time frame. HHS monitors progress in completing deliverables and milestones for the priorities monthly, with PHEMCE partners meeting to discuss potential barriers to completing deliverables or meeting milestones and possible options to mitigate these barriers. 
As of September 2013 (the most recent information available), HHS reported that PHEMCE partners have completed 10 deliverables for the 72 priorities, resulting in completion of 5 priorities. GAO did not examine the status of the priorities that did not have specific deliverables, timelines, and milestones. HHS has developed spending estimates for priority countermeasures for internal planning purposes but has not made them publicly available. In 2011, GAO recommended that HHS provide more specific anticipated spending information in an updated plan to assist with long-term planning. HHS's 2012 plan contains information on how countermeasures may be funded, such as through advanced development funds, but does not include estimates of how much PHEMCE may spend to develop specific countermeasures. HHS officials said they are hesitant to provide estimates because they do not want to create the expectation that estimates would reflect final contract amounts. However, consistent with our prior recommendation and Pandemic and All-Hazards Preparedness Reauthorization Act requirements, HHS plans to include spending estimates in the next iteration of the plan, anticipated in September 2014, but has not determined the nature and format of the estimates that would be included. Providing estimates would allow HHS's industry partners to suitably target research and development to fulfill countermeasure priorities, especially in tighter budget climates. What GAO Recommends Although GAO is not making any new recommendations, based on prior work GAO is continuing to emphasize its 2011 recommendation that HHS make more specific anticipated spending information available to countermeasure developers. In its comments, HHS discussed its efforts to develop spending estimates.
<1. Background> The United States is the world's largest net importer of oil. In 2006, the United States had net imports of 12.2 million barrels of oil per day, more than twice as much as Japan and over three times as much as China, the world's next largest importers. The transport of oil into the United States occurs primarily by sea, with ports throughout the United States receiving over 40,000 shipments of oil in 2005. In addition, vessels not transporting oil, such as cargo and freight vessels, fishing vessels, and passenger ships, often carry tens of thousands of gallons of fuel oil to power their engines. With over 100,000 commercial vessels navigating U.S. waters, oil spills are inevitable. Fortunately, however, they are relatively infrequent and are decreasing. While oil transport and maritime traffic have continued to increase, the total number of reported spills has generally declined each year since 1990. OPA forms the foundation of U.S. maritime policy as it pertains to oil pollution. OPA was passed in 1990, following the 1989 Exxon Valdez spill in Alaska, which highlighted the need for greater federal oversight of maritime oil transport. OPA places the primary burden of liability and the costs of oil spills on the vessel owner and operator who was responsible for the spill. This "polluter pays" system provides a deterrent for vessel owners and operators who spill oil by requiring that they assume the burden of spill response, natural resource restoration, and compensation to those damaged by the spill, up to a specified limit of liability, which is the amount above which responsible parties are no longer financially liable under certain conditions. For example, if a vessel's limit of liability is $10 million and a spill resulted in $12 million in costs, the responsible party only has to pay up to $10 million; the Fund will pay for the remaining $2 million. Current limits of liability, which vary by type of vessel and are determined by a vessel's gross tonnage, were set by the Congress in 2006. The Coast Guard is responsible for adjusting limits for significant increases in inflation and for making recommendations to the Congress on whether adjustments are necessary to help protect the Fund. OPA also requires that vessel owners and operators demonstrate their ability to pay for oil spill response up to their limit of liability. Specifically, by regulation, with few exceptions, owners and operators of vessels over 300 gross tons and any vessels that transship or transfer oil in the Exclusive Economic Zone are required to have a certificate of financial responsibility that demonstrates their ability to pay for oil spill response up to their limit of liability. OPA consolidated the liability and compensation provisions of four prior federal oil pollution initiatives and their respective trust funds into the Oil Spill Liability Trust Fund and authorized the collection of revenue and the use of the money, with certain limitations, with regard to expenditures. The Fund has two major components: the Principal Fund and the Emergency Fund. The Emergency Fund consists of $50 million apportioned each year to fund spill response and the initiation of natural resource damage assessments, which provide the basis for determining the natural resource restoration needs that address the public's loss and use of natural resources as a result of a spill. 
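The limit-of-liability arrangement described above reduces to a simple calculation: the responsible party pays spill costs up to its limit, and the Fund covers any remainder, subject to the conditions noted. The following minimal Python sketch is offered only as an illustration of that split using the $10 million limit and $12 million cost figures from the example; the function and variable names are hypothetical and are not drawn from OPA, NPFC, or any agency system, and the sketch ignores the statutory exceptions under which the limit does not apply.

def split_spill_costs(total_cost, liability_limit):
    # Responsible party pays costs up to its limit of liability;
    # the Oil Spill Liability Trust Fund covers costs above the limit.
    responsible_party_share = min(total_cost, liability_limit)
    fund_share = max(total_cost - liability_limit, 0)
    return responsible_party_share, fund_share

# Example from the text: $12 million in costs against a $10 million limit.
rp_share, fund_share = split_spill_costs(12_000_000, 10_000_000)
print(rp_share, fund_share)  # 10000000 2000000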
The Principal Fund provides the funds for third-party and natural resource damage claims, limit of liability claims, reimbursement of government agencies removal costs, and provides for oil spill related appropriations. A number of agencies including the Coast Guard, EPA, and DOI receive an annual appropriation from the Fund to cover administrative, operational, personnel, and enforcement costs. From 1990 to 2006, these appropriations amounted to the Fund s largest expense (see fig. 2). The Fund s balance has generally declined from 1995 through 2006, and since fiscal year 2003, its balance has been less than the authorized limit on federal expenditures for the response to a single spill, which is currently set at $1 billion (see fig. 3). The balance has declined, in part, because the Fund s main source of revenue a $0.05 per barrel tax on U.S. produced and imported oil was not collected for most of the time between 1993 and 2006. As a result, the Fund balance was $604.4 million at the end of fiscal year 2006. The Energy Policy Act of 2005 reinstated the barrel tax beginning in April 2006. With the barrel tax once again in place, NPFC anticipates that the Fund will be able to cover its projected noncatastrophic liabilities. OPA also defines the costs for which responsible parties are liable and for the costs for which the Fund is made available for compensation in the event that the responsible party does not pay or is not identified. These costs, or OPA compensable costs, are of two main types: Removal costs: Removal costs are incurred by the federal government or any other entity taking approved action to respond to, contain, and clean up the spill. For example, removal costs include the equipment used in the response skimmers to pull oil from the water, booms to contain the oil, planes for aerial observation as well as salaries and travel and lodging costs for responders. Damages caused by the oil spill: OPA-compensable damages cover a wide range of both actual and potential adverse impacts from an oil spill, for which a claim may be made to either the responsible party or the Fund. (Table 1 provides a brief definition of OPA-compensable removal costs and damages.) Claims include natural resource damage claims filed by trustees, claims for uncompensated removal costs and third-party damage claims for lost or damaged property and lost profits, among other things. The Fund also covers costs when responsible parties cannot be located or do not pay their liabilities. NPFC encounters cases where the source of the spill, and therefore the responsible party is unknown, or where the responsible party does not have the ability to pay. In other cases, since the cost recovery can take a period of years, the responsible party may be bankrupt or dissolved. Based on our analysis of NPFC records, excluding spills with limit of liability claims, the recovery rate for costs from the 51 major oil spills since 1990 is 65 percent, which means that responsible parties have paid 65 percent of costs. The 35 percent of nonreimbursed costs to the Fund for these major spills have amounted to $53.9 million. Response to large oil spills is typically a cooperative effort between the public and private sector, and there are numerous players who participate in responding to and paying for oil spills. To manage the response effort, the responsible party, the Coast Guard, EPA, and the pertinent state and local agencies form the unified command, which implements and manages the spill response. 
Beyond the response operations, there are other stakeholders, such as accountants who are involved in documenting and accounting for costs, and receiving and processing claims. In addition, insurers and underwriters provide financial backing to the responsible party. The players involved in responding to and/or paying for major spill response are as follows: Government agencies: The lead federal authority, or Federal On-Scene Coordinator, in conducting a spill response is usually the nearest Coast Guard Sector and is headed by the Coast Guard Captain of the Port. The Federal On-Scene Coordinator directs response efforts and coordinates all other efforts at the scene of an oil spill. Additionally, the on-scene coordinator issues pollution removal funding authorizations guarantees that the agency will receive reimbursement for performing response activities to obtain services and assistance from other government agencies. Other federal agencies may also be involved. NOAA provides scientific support, monitoring and predicting the movement of oil, and conducting environmental assessments of the impacted area. The federal, state, and tribal trustees join together to perform a natural resource damage assessment, if necessary. Within the Coast Guard, the NPFC is responsible for disbursing funds to the Federal On-Scene Coordinator for oil spill removal activities and seeking reimbursement from responsible parties for federal costs. Additionally, regional governmental entities that are affected by the spill both state and local as well as tribal government officials or representatives may participate in the unified command and contribute to the response effort, which is paid for by the responsible party or are reimbursed by the responsible party or the Fund. Responsible parties: OPA stipulates that both the vessel owner and operator are ultimately liable for the costs of the spill and the cleanup effort. The Coast Guard has final determination on what actions must be taken in a spill response, and the responsible party may form part of the unified command along with the Federal On-Scene Coordinator and pertinent state and local agencies to manage the spill response. The responsible parties rely on other entities to evaluate the spill effects and the resulting compensation. Responsible parties hire environmental and scientific support staff, specialized claims adjustors to adjudicate third- party claims, public relations firms, and legal representation to file and defend limit of liability claims on the Fund, as well as serve as counsel throughout the spill response. Qualified individuals: Federal regulations require that vessels carrying oil as cargo have an incident response plan and, as part of the plan, they appoint a qualified individual who acts with full authority to obligate funds required to carry out response activities. The qualified individual acts as a liaison with the Federal On-Scene Coordinator and is responsible for activating the incident response plan. Oil spill response organizations: These organizations are private companies that perform oil spill cleanup, such as skimming and disposal of oil. Many of the companies have contractual agreements with responsible parties and the Coast Guard. The agreements, called basic ordering agreements, provide for prearranged pricing, response personnel, and equipment in the event of an oil spill. Insurers: Responsible parties often have multiple layers of primary and excess insurance coverage, which pays oil spill costs and claims. 
Pollution liability coverage for large vessels is often underwritten by not-for-profit mutual insurance organizations. The organizations act as a collective of ship owners, who insure themselves, at-cost. The primary insurers of commercial vessels in U.S. waters are the Water Quality Insurance Syndicate, an organization providing pollution liability insurance to over 40,000 vessels, and the International Group of P & I Clubs, 13 protection and indemnity organizations that provide insurance primarily to foreign- flagged large vessels. <2. Oil Spills Costing More than $1 Million Occurred Infrequently Since 1990, but Estimated Costs Total $860 Million to $1.1 Billion> On the basis of information we were able to assemble about responsible parties expenditures and payments from the Fund, we estimate that 51 oil spills involving removal costs and damage claims totaling $1 million or more have occurred since 1990. In all, the Fund spent $240 million on these spills, and the responsible parties themselves spent about $620 million to $840 million, for a total of $860 million to $1.1 billion. The number of spills and their costs varied from year to year and showed no discernable trends in either frequency or cost. <2.1. Less Than 2 Percent of Oil Spills Occurring Since 1990 Were Major Spills> Less than 2 percent of oil spills from vessels, since 1990, had removal costs and damage claims of $1 million or greater. Each year, there are thousands of incident reports called into the National Response Center that claim oil or oil-like substances have been spilled from vessels sailing in coastal or inland waters in the United States -but only a small percentage of these reported incidents are oil spills from vessels that received federal reimbursement for response efforts. Specifically, there have been 3,389 oil spills from vessels that sought reimbursement from the Fund for response efforts. Of these spills, we estimate that 51 were major oil spills. As figure 4 shows, there are no discernable trends in the number of major oil spills that occur each year. The highest number of spills was seven in 1996; the lowest number was zero in 2006. These 51 spills occurred in a variety of locations. As figure 5 shows, the spills occurred on the Atlantic, Gulf, and Pacific coasts and include spills both in open coastal waters and more confined waterways. <2.2. Total Cost of Major Spills Ranges from $860 Million to $1.1 Billion, and Responsible Parties Pay the Majority of Costs> The total cost of the 51 spills cannot be precisely determined, for several reasons: Private-sector expenditures are not tracked: The NPFC tracks federal removal costs expended by the Fund for Coast Guard and other federal agencies spill response efforts, but it does not oversee costs incurred by the private sector. There is also no legal requirement in place that requires responsible parties to disclose costs incurred for responding to a spill. The various parties involved in covering these costs do not categorize them uniformly: For example, one vessel insurer we spoke with separates total spill costs by removal costs (for immediate spill cleanup) and loss adjustment expenses, which contain all other expenses, including legal fees. In contrast, the NPFC tracks removal costs and damage claims in terms of the statutory definitions delineated in OPA. Spill costs are somewhat fluid and accrue over time: In particular, the natural resource and third-party damage claims adjudication processes can take many years to complete. 
Moreover, it can take many months or years to determine the full effect of a spill on natural resources, the costs and extent of the natural resource injury, and the appropriate restoration needed to repair the damage. For example, natural resource damage claims were recently paid for a spill that occurred near Puerto Rico in 1991, over 16 years ago. Because spill cost data are somewhat imprecise and the data we collected vary somewhat by source, the results described below are reported in ranges that combine various data sources. The lower and upper bounds of each range represent the low and high ends of the cost information we obtained. Our analysis of these 51 spills shows their total cost was approximately $1 billion, ranging from $860 million to $1.1 billion. This amount breaks down by source as follows: Amount paid out of the Fund: Because the NPFC tracks and reports all Fund expenditures, the amount paid from the Fund can be reported as an actual amount, not an estimate. For these 51 spills, the Fund paid a total of $239.5 million. Amount paid by responsible parties: Because of the lack of precise information about amounts paid by responsible parties and the differences in how they categorize their costs, this portion of the expenditures must be presented as an estimate. Based on the data we were able to obtain and analyze, responsible parties spent between $620 million and $840 million. Even at the low end of the range, this amount is nearly triple the expenditure from the Fund. <2.3. Costs Vary Widely by Spill and Year> Costs of these 51 spills varied widely by spill and, therefore, by year (see fig. 6). For example, 1994 and 2004 each had four spills, but the average cost per spill in 1994 was about $30 million, while the average cost per spill in 2004 was between $71 million and $96 million. Just as there was no discernible trend in the frequency of these major spills, there is no discernible trend in their cost. Although the substantial increase in 2004 may look like an upward trend, 2004 may be an anomaly that reflects the unique character of two of the four spills that occurred that year. These two spills accounted for 98 percent of the year's costs. <3. Key Factors Affect Oil Spill Costs in Unique Ways> Location, time of year, and type of oil are key factors affecting oil spill costs, according to industry experts, agency officials, and our analysis of spills. Data on the 51 major spills show that spills occurred on all U.S. coasts, across all seasons, and for all oil types. In ways that are unique to each spill, however, each of these factors can affect the breadth and difficulty of the response effort or the extent of damage that requires mitigation. For example, spills in remote areas can make it difficult to mobilize responders and equipment and can complicate the logistics of removing oil, all of which can increase costs. Officials also identified two other factors that may influence oil spill costs to a lesser extent: the effectiveness of the spill response and the level of public interest in a spill. <3.1. Location Impacts Costs in Different Ways> The location of a spill can have a large bearing on spill costs because it will determine the extent of response needed, as well as the degree of damage to the environment and local economies.
According to state officials with whom we spoke and industry experts, there are three primary characteristics of location that affect costs: Remoteness: For spills that occur in remote areas, spill response can be particularly difficult in terms of mobilizing responders and equipment, and such spills can complicate the logistics of removing oil from the water, all of which can increase the costs of a spill. For example, a 2001 spill in Alaska's Prince William Sound, which occurred approximately 40 miles from Valdez, Alaska, resulted in considerable removal costs after a fishing vessel hit a rock and sank to a depth of approximately 1,000 feet. Responders spent many days and several million dollars attempting to contain the oil that was still in the vessel, but the effort was eventually abandoned because recovery from that depth proved too difficult. Proximity to shore: There are also significant costs associated with spills that occur close to shore. Contamination of shoreline areas has a considerable bearing on the costs of spills, as such spills can require manual labor to remove oil from the shoreline and sensitive habitats. The extent of damage is also affected by the specific shoreline location. For example, spills that occur in marshes and swamps with little water movement are likely to have more severe impacts than spills in flowing water. A September 2002 spill from a cargo vessel in the Cooper River near the harbor in Charleston, SC, spread oil across 30 miles of a variety of shoreline types. The spill resulted in the oiling of a number of shorebirds and a temporary disruption to recreational shrimp-baiting in area waters, among other things. As of July 2007, a settlement for natural resource damages associated with the spill was still pending. Proximity to economic centers: Spills that occur near economic centers can also result in increased costs when local services are disrupted. A spill near a port can interrupt the flow of goods, necessitating an expeditious response in order to resume business activities, which can increase removal costs. Additionally, spills that disrupt economic activities can result in expensive third-party damage claims. For example, after approximately 250,000 gallons of oil spilled from a tanker in the Delaware River in 2004, a large nuclear plant in the vicinity was forced to suspend activity for more than a week. The plant is seeking reimbursement for $57 million in lost profits. Overall, for the 51 major oil spills, location had the greatest effect on costs for spills that occurred in the waters of the Caribbean, followed by the East Coast, Alaska, and the Gulf states (see fig. 7). Average per-spill costs for spills that occurred in East Coast locations ranged from about $27 million to over $37 million, higher than the average costs in any other region besides the two spills in the Caribbean. The high average for East Coast locations was driven by several particularly expensive spills in that geographic area; specifically, four of the eight most expensive spills occurred in waters off the East Coast. <3.2. Time of Year Has Impact on Local Economies and Response Efforts> The time of year in which a spill occurs can also affect spill costs, in particular by affecting local economies and response efforts. According to several state and private-sector officials with whom we spoke, spills that disrupt seasonal events that are critical for local economies can result in considerable expenses.
For example, spills in the spring months in areas of the country that rely on revenue from tourism may incur additional removal costs, either to expedite spill cleanup or to meet stricter cleanup standards, both of which increase costs. This situation occurred in March 1996, when a tank barge spilled approximately 176,000 gallons of fuel oil along the coast of Texas. Because the spill occurred during the annual spring break tourist season, the time frames for cleaning up the spill were truncated, and the standards of cleanliness were elevated. Both of these factors contributed to higher removal costs, according to state officials we interviewed. The time of year in which a spill occurs also affects response efforts because of possible inclement weather conditions. For example, spills that occur during the winter months in areas of the country that experience harsh winter conditions can result in higher removal costs because of the increased difficulty in mobilizing equipment and personnel to respond to a spill in inclement weather. According to a state official knowledgeable about a January 1996 spill along the coast of Rhode Island, extremely cold and stormy weather made response efforts very difficult. Although the 51 spills occurred during all seasons of the year, they were most prevalent in the fall and winter months, with 20 spills occurring in the fall and 13 during the winter, compared with 9 in the spring and 9 in the summer. On a per-spill basis, the cost range for the 51 spills was highest in the fall (see fig. 8). <3.3. Type of Oil Spilled Impacts the Extent of the Response Effort and the Amount of Damage> The type of oil spilled affects the degree to which oil can be cleaned up and removed, as well as the nature of the natural resource damage caused by the spill, both of which can significantly affect the costs associated with an oil spill. The different types of oil can be grouped into four categories, each with its own set of impacts on spill response and the environment (see table 2). For example, lighter oils such as jet fuels, gasoline, and diesel dissipate quickly but are highly toxic, whereas heavier oils such as crude oils and other heavy petroleum products do not dissipate much and, while less toxic, can have severe environmental impacts. Very light and light oils naturally dissipate and evaporate quickly and, as such, often require minimal cleanup. However, light oils that are highly toxic can result in severe impacts to the environment, particularly if conditions for evaporation are unfavorable. For instance, in 1996, a tank barge that was carrying home-heating oil grounded in the middle of a storm near Point Judith, Rhode Island, spilling approximately 828,000 gallons of heating oil (a light oil). Although this oil might dissipate quickly under normal circumstances, heavy wave conditions caused an estimated 80 percent of the release to mix with water, with only about 12 percent evaporating and 10 percent staying on the surface of the water. The natural resource damages alone were estimated at $18 million, due to the death of approximately 9 million lobsters, 27 million clams and crabs, and over 4 million fish. Medium and heavy oils do not evaporate much, even during favorable weather conditions, and thus can result in significant contamination of shoreline areas.
Medium and heavy oils have a high density and can blanket structures they come in contact with (boats and fishing gear, for example), as well as the shoreline, creating severe environmental impacts to these areas and harming waterfowl and fur-bearing mammals through coating and ingestion. Additionally, heavy oils can sink, creating prolonged contamination of the seabed and forming tar balls that settle on the ocean floor and scatter along beaches. These spills can require intensive shoreline and structural cleanup, which is time consuming and expensive. For example, in 1995, a tanker spilled approximately 38,000 gallons of heavy fuel oil into the Gulf of Mexico when it collided with another tanker as it prepared to lighter its oil to another ship. Less than 1 percent (210 gallons) of the oil was recovered from the sea, and as a result, recovery efforts on the beaches of Matagorda and South Padre Islands were labor intensive, as hundreds of workers had to manually pick up tar balls with shovels. The total removal costs for the spill were estimated at $7 million. Spills involving heavy oil were the most prevalent among the 51 spills; 21 of the 51 major oil spills were from heavy oils. On a per-spill basis, costs among the 51 spills varied by type of oil, but the cost ranges for medium and heavy oils were higher than those for light and very light oils (see fig. 9). <3.4. Other Factors Also Affect Spill Costs> Although available evidence points to location, time of year, and type of oil spilled as key factors affecting spill costs, some industry experts reported that the effectiveness of the spill response and the level of public interest can also affect the costs incurred during a spill. Effectiveness of spill response: Some private-sector officials stated that the effectiveness of spill response can affect the cost of cleanup. The longer it takes to assemble and conduct the spill response, the more likely it is that the oil will move with changing tides and currents and affect a greater area, which can increase costs. Some officials also stated that the level of experience of those involved in the incident command is critical to the effectiveness of spill response and can greatly affect spill costs. For example, poor decision making during a spill response could lead to the deployment of unnecessary response equipment or, worse, not enough equipment to respond to a spill. In particular, several private-sector officials with whom we spoke expressed concern that Coast Guard officials are increasingly inexperienced in handling spill response, in part because the Coast Guard's mission has been expanded to include homeland security initiatives. Additionally, another official noted that response companies, in general, have less experience in dealing with spill response and less familiarity with the local geography of the area affected by the spill, which can be critical to determining which spill response techniques are most effective in a given area. These officials attributed the limited experience to the overall decline in the number of spills in recent years. Further, one private-sector official noted that response companies can no longer afford to specialize in cleaning up spills alone, given the relatively low number of spills, and thus the quality, effectiveness, and level of expertise and experience diminish over time. Public interest: Several officials with whom we spoke stated that the level of public attention placed on a spill creates pressure on parties to take action and can increase costs.
They also noted that the level of public interest can increase the standards of cleanliness expected, which may increase removal costs. For example, several officials noted that a spill along the Texas coast in February 1995 resulted in increased public attention because it occurred close to peak tourist season. Standards of cleanliness at the beaches were raised to a much higher level than normal because of the tourist season, and certain response activities were completed primarily for aesthetic reasons, both of which increased the removal costs, according to state officials. <4. Fund Has Been Able to Cover Costs Not Paid by Responsible Parties, but Risks Remain> The Fund has been able to cover costs from major spills that responsible parties have not paid, but risks remain. Although liability limits were increased in 2006, the liability limits for certain vessel types, notably tank barges, may be disproportionately low relative to costs associated with such spills. There is also no assurance that vessel owners and operators are able to financially cover these new limits, because the Coast Guard has not yet issued regulations for satisfying financial responsibility requirements. In addition, although OPA calls for periodic increases in liability limits to account for significant increases in inflation, such increases have never been made. We estimate that not making such adjustments in the past potentially cost the Fund $39 million between 1990 and 2006. Besides issues related to limits of liability, the Fund faces other potential drains on its resources, including ongoing claims from existing spills, claims related to already-sunken vessels that may begin to leak oil, and the threat of a catastrophic spill such as occurred with the Exxon Valdez in 1989. <4.1. Further Attention to Limits of Liability Is Needed> Major oil spills that exceed the vessel's limit of liability are infrequent, but their impact on the Fund could be significant. Limits of liability are the amounts above which, under certain circumstances, responsible parties are no longer financially liable for spill removal costs and damage claims. If the responsible party's costs exceed the limit of liability, it can make a claim against the Fund for the amount above the limit. Of the 51 major oil spills that occurred since 1990, 10 resulted in limit of liability claims on the Fund. The limit of liability claims for these 10 spills ranged from less than $1 million to over $100 million and totaled over $252 million in claims on the Fund. Limit of liability claims will continue to have a pronounced effect on the Fund. NPFC estimates that 74 percent of claims under adjudication that were outstanding as of January 2007 were for spills in which the limit of liability had been exceeded. The amount of these claims under adjudication was $217 million. We identified three areas in which further attention to these liability limits appears warranted: the appropriateness of some current liability limits, the need to adjust limits periodically in the future to account for significant increases in inflation, and the need for updated regulations to ensure that vessel owners and operators are able to financially cover their new limits. <4.1.1. Some Recent Adjustments to Liability Limits Do Not Reflect the Cost of Major Spills> The Coast Guard and Maritime Transportation Act of 2006 significantly increased the limits of liability from the limits set by OPA in 1990.
Both laws base the liability on a specified amount per gross ton of vessel volume, with different amounts for vessels that transport oil commodities (tankers and tank barges) and for vessels that carry oil as a fuel (such as cargo vessels, fishing vessels, and passenger ships). The 2006 act raised both the per-ton and the required minimum amounts, differentiating between vessels with a double hull, which helps prevent oil spills resulting from collision or grounding, and vessels without a double hull (see table 3 for a comparison of amounts by vessel category). For example, the liability limit for single-hull vessels larger than 3,000 gross tons was increased from the greater of $1,200 per gross ton or $10 million to the greater of $3,000 per gross ton or $22 million. Our analysis of the 51 spills showed that the average spill cost for some types of vessels, particularly tank barges, was higher than the limit of liability, including the new limits established in 2006. We separated the vessels involved in the 51 spills into four types (tankers, tank barges, cargo and freight ships, and other vessels such as fishing boats); determined the average spill costs for each type of vessel; and compared the costs with the average limit of liability for these same vessels under both the 1990 and 2006 limits. As figure 10 shows, the 15 tank barge spills and the 12 fishing/other vessel spills had average costs greater than both the 1990 and 2006 limits of liability. For example, for tank barges, the average cost of $23 million was higher than the average limit of liability of $4.1 million under the 1990 limits and $10.3 million under the new 2006 limits. The nine spills involving tankers, by comparison, had average spill costs of $34 million, which was considerably lower than the average limit of liability of $77 million under the 1990 limits and $187 million under the new 2006 limits. In a January 2007 report examining spills in which the limits of liability had been exceeded, the Coast Guard had similar findings on the adequacy of some of the new limits. Based on an analysis of 40 spills in which costs had exceeded the responsible party's liability limit since 1991, the Coast Guard found that the Fund's responsibility would be greatest for spills involving tank barges, where the Fund would be responsible for paying 69 percent of costs. The Coast Guard concluded that increasing liability limits for tank barges and nontank vessels (cargo, freight, and fishing vessels over 300 gross tons) would positively affect the Fund balance. With regard to making specific adjustments, the Coast Guard said dividing costs equally between the responsible parties and the Fund was a reasonable standard to apply in determining the adequacy of liability limits. However, the Coast Guard did not recommend explicit changes to achieve either that 50/50 standard or some other division of responsibility. <4.1.2. Liability Limits Have Not Been Adjusted for Inflation> Although OPA requires adjusting liability limits to account for significant increases in inflation, no adjustments to the limits were made between 1990 and 2006, when the Congress raised the limits in the Coast Guard and Maritime Transportation Act. During those years, the Consumer Price Index rose approximately 54 percent.
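To make the liability-limit structure described above concrete, the following is a minimal illustrative sketch, not an official calculation: it encodes only the per-ton rates and statutory floors quoted above for single-hull vessels larger than 3,000 gross tons, applies the "greater of" rule, and scales a 1990 limit by the approximately 54 percent rise in the Consumer Price Index noted above. The gross tonnage used in the example is hypothetical.

```python
# Illustrative sketch only; rates and floors are those quoted in the text for
# single-hull vessels larger than 3,000 gross tons, and the tonnage is hypothetical.

def liability_limit(gross_tons, per_ton_rate, statutory_floor):
    """Limit of liability: the greater of the per-ton amount or the statutory floor."""
    return max(gross_tons * per_ton_rate, statutory_floor)

def cpi_adjusted(limit_1990, cpi_increase=0.54):
    """Scale a 1990 limit by the approximate 54 percent CPI increase from 1990 to 2006."""
    return limit_1990 * (1 + cpi_increase)

tons = 5_000  # hypothetical single-hull tank vessel
limit_1990 = liability_limit(tons, per_ton_rate=1_200, statutory_floor=10_000_000)
limit_2006 = liability_limit(tons, per_ton_rate=3_000, statutory_floor=22_000_000)

print(f"1990 limit: ${limit_1990:,.0f}")                              # $10,000,000 (floor governs)
print(f"2006 limit: ${limit_2006:,.0f}")                              # $22,000,000 (floor governs)
print(f"1990 limit, CPI-adjusted: ${cpi_adjusted(limit_1990):,.0f}")  # $15,400,000
```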
OPA requires the President, who has delegated this responsibility to the Coast Guard through the Secretary of Homeland Security, to issue regulations not less often than every 3 years to adjust the limits of liability to reflect significant increases in the Consumer Price Index. We asked Coast Guard officials why no adjustments were made between 1990 and 2006. Coast Guard officials stated that they could not speculate on behalf of other agencies as to why no adjustments had been made prior to 2005, when the delegation to the Coast Guard was made. The decision to leave limits unchanged had financial implications for the Fund. Raising the liability limits to account for inflation would have the effect of reducing payments from the Fund, because responsible parties would pay costs up to the higher liability limit. Not making adjustments during this 16-year period thus had the effect of increasing the Fund's financial liability. Our analysis showed that if the 1990 liability limits had been adjusted for inflation during the 16-year period, claims against the Fund for the 51 major oil spills would have been reduced 16 percent, from $252 million to $213 million. This would have meant a savings of $39 million for the Fund. <4.1.3. Certification of Compliance with the New Liability Limits Is Not in Place> Certificates of Financial Responsibility have not been adjusted to reflect the new liability limits. The Coast Guard requires Certificates of Financial Responsibility, with few exceptions, for vessels over 300 gross tons or any vessels that are lightering or transshipping oil in the Exclusive Economic Zone, as a legal certification that vessel owners and operators have the financial resources to fund spill response up to the vessel's limit of liability. Currently, Certificate of Financial Responsibility requirements are consistent with the 1990 limits of liability; therefore, there is no assurance that responsible parties have the financial resources to cover their increased liability. The Coast Guard is working to make Certificates of Financial Responsibility consistent with the current limits of liability and plans to initiate a rulemaking to issue new Certificate of Financial Responsibility requirements. Coast Guard officials indicated their goal is to publish a Notice of Proposed Rulemaking by the end of 2007, but the officials said they could not be certain they would meet this goal. <4.2. Other Challenges Could Also Affect the Fund's Condition> The Fund also faces several other potential challenges that could affect its financial condition: Additional claims could be made on spills that have already been cleaned up: Natural resource damage claims can be made on the Fund for years after a spill has been cleaned up. The official natural resource damage assessment conducted by trustees can take years to complete, and once it is completed, claims can be submitted to the NPFC for up to 3 years thereafter. For example, the NPFC recently received and paid a natural resource damage claim for a spill in U.S. waters in the Caribbean that occurred in 1991. Costs and claims may occur on spills from previously sunken vessels that discharge oil in the future: Sunken vessels that remain submerged and threaten to discharge oil represent an ongoing liability to the Fund. There are over 1,000 sunken vessels that pose a threat of oil discharge.
These potential spills are particularly problematic because, in many cases, there is no viable responsible party that would be liable for removal costs. Therefore, the full cost burden of oil spilled from these vessels would likely be paid by the Fund. Spills may occur without an identifiable source and, therefore, no responsible party: Such mystery spills also have a sustained impact on the Fund, because their costs may be paid out of the Fund. Although mystery spills are a concern, the total cost to the Fund from mystery spills was lower than the costs of known vessel spills in 2001 through 2004. Additionally, none of the 51 major oil spills was the result of a discharge from an unknown source. A catastrophic spill could strain the Fund's resources: Since the 1989 Exxon Valdez spill, which was the impetus for authorizing the Fund's usage, no oil spill has come close to matching its costs. Cleanup costs for the Exxon Valdez alone totaled about $2.2 billion, according to the vessel's owner. By comparison, the 51 major oil spills since 1990 cost, in total, between $860 million and $1.1 billion. The Fund is currently authorized to pay out a maximum of $1 billion on a single spill. Although the Fund has been successful thus far in covering costs that responsible parties did not pay, it may not be sufficient to pay such costs for a spill that has catastrophic consequences. <5. Conclusions> The polluter pays system established under OPA has been generally effective in ensuring that responsible parties pay the costs of responding to spills and compensating those affected. Given that responsible parties' liability is not unlimited, the Fund remains an important source of funding for both response and damage compensation, and its viability is important. The Fund has been able to meet all of its obligations, helped in part by the absence of any spills of catastrophic size. This favorable result, however, is no guarantee of similar success in the future. Even moderate spills can be very expensive, especially if they occur in sensitive locations or at certain times of the year. Increases in some liability limits appear warranted to help ensure that the polluter pays principle is carried out in practice. For certain vessel types, such as tank barges, current liability limits appear disproportionately low relative to their historic spill costs. The Coast Guard has reached a similar conclusion but so far has stopped short of making explicit recommendations to the Congress about what the limits should be. Absent such recommendations, the Fund may continue to pay tens of millions of dollars for spills that exceed the responsible parties' limits of liability. As the agency responsible for the Fund, it is important that the Coast Guard regularly assess whether and how the limits of liability for all vessel types should be adjusted and recommend a course of action to the Congress on the adjustments that are warranted. Further, to date, liability limits have not been adjusted for significant changes in inflation. Consequently, the Fund was exposed to about $39 million in liability claims for the 51 major spills between 1990 and 2006 that could have been avoided if the limits had been adjusted for inflation. Authority to make such adjustments was specifically delegated to the Coast Guard in 2005, and with this clear authority, it is important for the Coast Guard to periodically adjust the limits of liability for inflation as well.
Without such actions, oil spills with costs exceeding the responsible parties' limits of liability will continue to place the Fund at risk. <6. Recommendations for Executive Action> To improve and sustain the balance of the Oil Spill Liability Trust Fund, we recommend that the Commandant of the Coast Guard take the following two actions: Determine whether and how liability limits should be changed, by vessel type, and make specific recommendations about these changes to the Congress. Adjust the limits of liability for vessels every 3 years to reflect significant changes in inflation, as appropriate. <7. Agency Comments and Our Evaluation> We provided a draft of this report to the Department of Homeland Security (DHS), including the Coast Guard and NPFC, for review and comment. DHS provided written comments, which are reprinted in appendix II. In its letter, DHS agreed with both recommendations. Regarding our recommendation that the Coast Guard review limits of liability by vessel type and make recommendations to the Congress, DHS stated that it has met the intent of the recommendation by issuing the first of its annual reports, in January 2007, on limits of liability. As stated in our report, however, our concern is that the current annual report made no specific recommendations to the Congress regarding liability limit adjustments. Therefore, we continue to recommend that in its next annual report to the Congress on limits of liability, the Coast Guard make explicit recommendations, by vessel type, on how such limits should be adjusted. Regarding our recommendation that the Coast Guard adjust the limits of liability for vessels every 3 years to reflect significant changes in inflation, DHS stated that the Coast Guard will make adjustments to limits as appropriate. In response to other concerns that DHS expressed, we modified the report to clarify the Coast Guard's responsibility for adjusting liability limits in response to Consumer Price Index increases and to address the Coast Guard's concern that the report not imply that responsible parties' liability is unlimited. In addition, we provided a draft report to several other agencies (the Departments of Commerce and Transportation, DOI, and EPA) for review and comment, because some of the information in the report was obtained from these agencies and related to their responsibilities. The agencies provided technical clarifications, which we have incorporated in this report, as appropriate. We are sending copies of this report to the Department of Homeland Security, including the Coast Guard; the Departments of Transportation and Commerce; DOI; EPA; and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or (202) 512-4431. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Scope and Methodology To address our objectives, we analyzed oil spill removal cost and claims data from the National Pollution Funds Center (NPFC); the National Oceanic and Atmospheric Administration's (NOAA) Damage Assessment, Remediation, and Restoration Program; the Department of the Interior's (DOI) Natural Resource Damage Assessment and Restoration Program; and the U.S. Fish and Wildlife Service (FWS).
We also analyzed data obtained from vessel insurers and, under contract, from Environmental Research Consulting. We interviewed NPFC and NOAA officials and state officials responsible for oil spill response, as well as industry experts and representatives from key industry associations and a vessel operator. In addition, we selected five oil spills that represented a variety of factors, such as geography, oil type, and spill volume, for an in-depth review. During this review, we interviewed NPFC officials involved in spill response for all five spills, as well as representatives of private-sector companies involved in the spill and spill response; we also conducted a file review of NPFC records of the federal response activities and costs associated with spill cleanup. We also reviewed documentation from the NPFC regarding the Fund balance and vessels' limits of liability. Based on reviews of data documentation, interviews with relevant officials, and tests for reasonableness, we determined that the data were sufficiently reliable for the purposes of our study. This report focuses on oil spills that have occurred since the enactment of OPA on August 18, 1990, for which removal costs and damage claims exceeded $1 million; we refer to such spills as major oil spills. We conducted our review from July 2006 through August 2007 in accordance with generally accepted government auditing standards. <8. Our Categorization of Oil Spill Costs> For the purposes of this review, we included removal (or response) costs and damage claims that are considered OPA compensable; that is, the OPA-stipulated reimbursable costs that are incurred for oil pollution removal activities when oil is discharged into the navigable waters, adjoining shorelines, and the Exclusive Economic Zone of the United States, as well as costs incurred to prevent or mitigate the substantial threat of such an oil discharge. OPA-compensable removal costs include containment and removal of oil from water and shorelines; prevention or minimization of a substantial threat of discharge; contract services (e.g., cleanup contractors, incident management support, and wildlife rehabilitation); equipment used in removals; chemical testing required to identify the type and source of oil; proper disposal of recovered oil and oily debris; costs for government personnel and temporary government employees hired for the duration of the spill response, including costs for monitoring the activities of responsible parties; completion of documentation; and identification of responsible parties. OPA-compensable damage claims include uncompensated removal costs, damages to natural resources, damages to real or personal property, loss of subsistence use of natural resources, loss of profits or earning capacity, loss of government revenues, and increased cost of public services. <9. Available Data> In order to present the best available data on spill costs, we gathered cost information from a number of sources, including federal agencies; vessel insurance companies and other private-sector companies involved in oil spill response; and Environmental Research Consulting, a private consultant. Federal agencies: We gathered federal data on OPA-compensable oil spill removal costs from the NPFC. Additionally, we gathered federal data on OPA-compensable third-party damage claims from the NPFC, and natural resource damage claims from NOAA's Damage Assessment, Remediation, and Restoration Program, DOI's Natural Resource Damage Assessment and Restoration Program, and FWS.
Insurers and other private-sector companies: We collected the best available data for OPA-compensable removal costs and damage claims from private-sector sources, including vessel insurers such as the Water Quality Insurance Syndicate and the International Group of Protection and Indemnity Clubs; oil spill response organizations, including the Alaska Chadux Corporation and Moran Environmental Recovery; and a vessel operator. We made many attempts to contact and interview the responsible parties involved in the five spills we reviewed in depth. One was willing to speak to GAO directly. Environmental Research Consulting: Environmental Research Consulting is a consulting firm that specializes in data analysis, environmental risk assessment, cost analyses, and the development of comprehensive databases on oil/chemical spills and spill costs. Environmental Research Consulting supplied cost estimates based on reviews of court documents, published reports, and interviews with responsible parties and other parties involved with major oil spills. In addition, Environmental Research Consulting verified its data collection by relying exclusively on known documented costs, as opposed to estimated costs. Environmental Research Consulting, therefore, did not include general estimates of spill costs, which can be inaccurate. A complete and accurate accounting of total oil spill costs for all oil spills is unknown, primarily because there is no uniform mechanism to track responsible party spill costs, and there are no requirements that the private sector keep or maintain cost records. The NPFC tracks federal costs to the Coast Guard and other federal agencies, which are later reimbursed by the Fund, but does not oversee costs incurred by the private sector. There is also no legal requirement that responsible parties disclose costs incurred in responding to a spill. We cannot be certain that all private-sector cost information we gathered included only OPA-compensable costs. However, we explicitly outline which costs are included in our review. Furthermore, private-sector data were obtained primarily from insurance companies, and one official told us that insurance coverage for pollution liability usually defines compensable losses in the same manner as OPA. For instance, while responsible parties incur costs ancillary to the spill response, such as public relations and legal fees, these costs are not generally paid by oil spill insurance policies. In addition, spill costs are somewhat fluid and accrue over time, making it sometimes difficult to account for the entire cost of a spill at a given time. In particular, the natural resource and third-party damage claims adjudication processes can take many years to complete. Based on consultation with committee staff, we agreed to present the best available data for major oil spills between 1990 and 2006, and we determined that the data gathered were sufficiently reliable for the purposes of our study. Because of the imprecise nature of oil spill cost data and the use of multiple sources of data, the data described in this report were combined and grouped into cost ranges. Using ranges of costs to provide upper and lower estimates of total costs and damage claims allows us to report data on major oil spills from all reliable sources. <10.
Universe of Major Oil Spills> To establish the universe of vessel spills that have exceeded $1 million in total removal costs and damage claims since 1990, we used, in consultation with oil spill experts, a combination of readily available data and reasoned estimation. Since federal government cost data are available, we first established an estimate of the probable share of spill costs between the federal government and the private sector to determine what amount of federal costs might roughly indicate that total costs were over $1 million. We interviewed Environmental Research Consulting, as well as agency officials from the NPFC and NOAA, to determine a reasonable estimated share of costs between the private and public sectors. The officials with whom we spoke estimated that, in general, at least 90 percent of all spill costs are typically paid by the private sector. Based on that estimation, any spill with at least $100,000 in federal oil spill removal costs and damage claims probably cost at least $1 million in total, that is, with 90 percent of the total costs paid by the private sector and the remaining 10 percent paid by the public sector. Therefore, we initially examined all spills with at least $100,000 in federal oil spill removal costs and damage claims. We obtained these data on federal oil spill removal costs and damage claim payments from the NPFC. Of 3,389 federally managed spills since 1990, there were approximately 184 spills where the federal costs exceeded $100,000. From this group of spills, we limited our review to spills that occurred after the enactment of OPA on August 18, 1990. Additionally, we omitted (1) spill events in which costs were incurred by the federal government for measures to prevent a spill although no oil was actually spilled and (2) spills of fewer than 100 gallons, where, according to the NPFC, the likelihood of costs exceeding $1 million was minimal. Lastly, in consultation with Environmental Research Consulting, we used estimated spill costs and additional research to determine which spills were unlikely to have had total costs and claims above $1 million. Through this process, we concluded that since the enactment of OPA, 51 spills have had costs and claims that exceeded $1 million. <11. Data Analysis and Case Studies> To assess the costs of oil spills based on various factors, we collected data from the federal government, the private sector, and a consultant, and combined the data into ranges. In addition to collecting data on removal costs and damage claims, we collected additional information on major oil spills. We categorized and grouped spill costs based on vessel type, time of year, location, and oil type to look for discernible trends in costs based on these characteristics. We collected information on the limits of liability of the vessels at the time of the spill and the limits of liability for vessels after changes in liability limits in the Coast Guard and Maritime Transportation Act of 2006. In addition, to analyze the effects of inflation on the Fund and liability limits, we used the Consumer Price Index to calculate what the limits of liability would have been at the time of each spill if the OPA-stipulated limits had been adjusted for inflation. We used the Consumer Price Index as the basis for inflationary measures because OPA states that limits should be adjusted for significant increases in the Consumer Price Index.
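The screening rule described earlier in this appendix lends itself to a simple calculation. The following is a minimal sketch, not GAO's actual analysis; the 90 percent private-sector share and the $100,000 federal-cost threshold come from the estimation described above, and the example spill is hypothetical.

```python
# Illustrative sketch of the screening heuristic described above (not GAO's code).

FEDERAL_SHARE = 0.10            # assumed public-sector share of total spill costs
SCREENING_THRESHOLD = 100_000   # federal cost level that suggests total costs of $1 million or more

def estimated_total_cost(federal_cost, federal_share=FEDERAL_SHARE):
    """Scale the known federal cost up to an estimated total spill cost."""
    return federal_cost / federal_share

def possible_major_spill(federal_cost):
    """Flag spills whose federal costs suggest total costs of at least $1 million."""
    return federal_cost >= SCREENING_THRESHOLD

# Hypothetical example: a spill with $150,000 in federal costs is flagged for
# further review, with an estimated total cost of roughly $1.5 million.
print(possible_major_spill(150_000))              # True
print(f"${estimated_total_cost(150_000):,.0f}")   # $1,500,000
```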
In reporting spill cost data by year and by certain categories, we use ranges that reflect the best available data. For certain statistics, such as the public-sector/private-sector cost share, where costs are aggregated for all spills, we calculated percentages based on the mid-point of the cost ranges. To test the reliability of using the mid-point of the ranges, we performed a sensitivity test, analyzing the effects of using the mid-point versus the top and bottom of the cost range. We determined that presenting certain figures based on the mid-point of the ranges is reliable and provides the clearest representation of the data. To supplement our data analysis and determine the factors that affect the costs of major oil spills, we interviewed officials from the NPFC, NOAA, and EPA regarding the factors that affect major oil spill costs. We also interviewed state officials responsible for oil spill response from Alaska, California, New York, Rhode Island, Texas, and Washington to determine the types of costs incurred by states when responding to oil spills and the factors that affect major oil spill costs. Additionally, we interviewed industry experts and a vessel insurer about the factors that affect major oil spill costs. To determine the implications of major oil spills for the Fund, we interviewed agency officials from the NPFC and the Coast Guard, as well as vessel insurers and industry experts, to get the private sector's perspective on major oil spills' impact on the Fund. In addition, we reviewed recent Coast Guard reports to Congress on the status of the Fund and limits of liability. Lastly, we conducted in-depth reviews of five oil spills. The spills were selected to represent a variety of factors that potentially affect the costs of spills: geography, oil type, and spill volume. During this review, we interviewed the NPFC case officers who were involved with each spill; state agency officials; insurance companies; and private-sector companies, such as oil spill response organizations, that were involved in the spill and the spill response. To the best of our ability, we attempted to interview the responsible parties involved in each spill. We were able to speak with one vessel operator. Our interviews were designed to gain perspectives on the response effort for each spill, the factors that contributed to the cost of the spill, and what actual costs were incurred by the responsible party. Finally, we also conducted a file review of NPFC records of federal response activities, removal costs, and damage claims made to the Fund for each of the five spills we reviewed in depth. We conducted our review from July 2006 through August 2007 in accordance with generally accepted government auditing standards, including standards for data reliability. Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments <12. GAO Contact> <13. Staff Acknowledgments> In addition to the contact named above, Nikki Clowers, Assistant Director; Michele Fejfar; Simon Galed; H. Brandon Haller; David Hooper; Anne Stevens; Stan Stenersen; and Susan Zimmerman made key contributions to this report.
Why GAO Did This Study When oil spills occur in U.S. waters, federal law places primary liability on the vessel owner or operator--that is, the responsible party--up to a statutory limit. As a supplement to this "polluter pays" approach, a federal Oil Spill Liability Trust Fund administered by the Coast Guard pays for costs when a responsible party does not or cannot pay. The Coast Guard and Maritime Transportation Act of 2006 directed GAO to examine spills that cost the responsible party and the Fund at least $1 million. This report answers three questions: (1) How many major spills (i.e., $1 million or more) have occurred since 1990, and what is their total cost? (2) What factors affect the cost of spills? and (3) What are the implications of major oil spills for the Oil Spill Liability Trust Fund? GAO's work to address these objectives included analyzing oil spill cost data, interviewing federal, state, and private-sector officials, and reviewing Coast Guard files from selected spills. What GAO Found On the basis of cost information collected from a variety of sources, GAO estimates that 51 spills with costs above $1 million have occurred since 1990 and that responsible parties and the federal Oil Spill Liability Trust Fund (Fund) have spent between about $860 million and $1.1 billion for oil spill removal costs and compensation for damages (e.g., lost profits and natural resource damages). Responsible parties paid between about 72 percent and 78 percent of these costs; the Fund has paid the remainder. Since removal costs and damage claims may stretch out over many years, the costs of the spills could rise. The 51 spills, which constitute about 2 percent of all vessel spills since 1990, varied greatly from year to year in number and cost. Three main factors affect the cost of spills: a spill's location, the time of year, and the type of oil spilled. Spills that occur in remote areas, for example, can increase costs involved in mobilizing responders and equipment. Similarly, a spill occurring during tourist or fishing season might produce substantial compensation claims, while a spill occurring during another time of year may not be as costly. The type of oil affects costs in various ways: fuels like gasoline or diesel fuel may dissipate quickly but are extremely toxic to fish and plants, while crude oil is less toxic but harder to clean up. Each spill's cost reflects a unique mix of these factors. To date, the Fund has been able to cover costs from major spills that responsible parties have not paid, but risks remain. Specifically, the Coast Guard and Maritime Transportation Act of 2006 increased liability limits, but GAO's analysis shows the new limit for tank barges remains low relative to the average cost of such spills. Since 1990, the Oil Pollution Act has required that liability limits be adjusted above the limits set forth in statute for significant increases in inflation, but such changes have never been made. Not making such adjustments between 1990 and 2006 potentially shifted an estimated $39 million in costs from responsible parties to the Fund.
<1. Background> During World War II, the U.S. government partnered with academic scientists in ad hoc laboratories and research groups to meet unique research and development (R&D) needs of the war effort. These efforts resulted in technologies such as the proximity fuse, advanced radar and sonar, and the atomic bomb. Those relationships were later restructured into federal research centers to retain academic scientists in U.S. efforts to continue advancements in technology, and by the mid-1960s the term federally funded research and development centers was applied to these entities. Since that time, the U.S. government has continued to rely on FFRDCs to develop technologies in areas such as combating terrorism and cancer, addressing energy challenges, and tackling evolving challenges in air travel. For example, one of DOE's laboratories was used to invent and develop the cyclotron, a particle accelerator that produces high-energy beams and has been critical to the field of nuclear physics for the past several decades. Today, FFRDCs support their sponsoring federal agencies in diverse fields of study. For example, DOE sponsors the most FFRDCs, 16 in total, all of which are research laboratories that conduct work in such areas as nuclear weapons, renewable energy sources, and environmental management. DHS recently established two FFRDCs: one to develop countermeasures for biological warfare agents and the other to provide decision makers with advice and assistance in such areas as analysis of the vulnerabilities of the nation's critical infrastructures, standards for interoperability for field operators and first responders, and evaluation of developing technologies for homeland security purposes. FFRDCs are privately owned but government-funded entities that have long-term relationships with one or more federal agencies to perform research and development and related tasks. Even though they may be funded entirely, or nearly so, from the federal treasury, FFRDCs are regarded as contractors, not federal agencies. In some cases, Congress has specifically authorized agencies to establish FFRDCs. For example, the 1991 appropriation for the Internal Revenue Service authorized the IRS to spend up to $15 million to establish an FFRDC as part of its tax systems modernization program. According to the Federal Acquisition Regulation (FAR), FFRDCs are intended to meet special long-term research or development needs that cannot be met as effectively by existing in-house or contractor resources. In sponsoring an FFRDC, agencies draw on academic and private-sector resources to accomplish tasks that are integral to the mission and operation of the sponsoring agency. In order to discharge responsibilities to their sponsoring agencies, the FAR notes, FFRDCs have special access, beyond that which is common for normal contractual relationships, to government and supplier data, including sensitive and proprietary data, and to other government resources. Furthermore, the FAR requires FFRDCs to operate in the public interest with objectivity and independence, to be free of organizational conflicts of interest, and to fully disclose their affairs to the sponsoring agencies. FFRDCs may be operated by a university or consortium of universities; other nonprofit organizations; or a private industry contractor as an autonomous organization or a separate unit of a parent organization.
Agencies develop sponsoring agreements with FFRDCs to establish their research and development missions and prescribe how they will interact with the agency; the agencies then contract with organizations to operate the FFRDCs to accomplish those missions. At some agencies the sponsoring agreement is a separate document that is incorporated into the contract, and at other agencies the contract itself constitutes the sponsoring agreement. The sponsoring agreement and contract together identify the scope, purpose, and mission of the FFRDC and the responsibilities of the contractor in ensuring they are accomplished by the FFRDC. Although the contract or sponsoring agreement may take various forms, the FAR requires FFRDC sponsoring agreements to contain certain key terms and conditions. For example, the agreement term may not exceed 5 years, but can be periodically renewed in increments not to exceed 5 years. Sponsoring agreements must also contain prohibitions against the FFRDCs competing with non-FFRDCs in response to a federal agency request for proposals for other than the operation of an FFRDC. The agreement also must delineate whether and under what circumstances the FFRDC may accept work from other agencies. In addition, these agreements may identify cost elements requiring advance agreement if cost-type contracts are used and include considerations affecting negotiation of fees where fees are determined appropriate by sponsors. The National Science Foundation (NSF), which keeps general statistics on FFRDCs, identifies the following types of FFRDCs: Research and development (R&D) laboratories: fill voids where in-house and private sector R&D centers are unable to meet core agency needs. These FFRDCs are used to maintain long-term competency in sophisticated technology areas and develop and transfer important new technology to the private sector. Study and analysis centers: used to provide independent analyses and advice in core areas important to their sponsors, including policy development, support for decision making, and identifying alternative approaches and new ideas on significant issues. Systems engineering and integration centers: provide support for complex systems by assisting with the creation and choice of system concepts and architectures, the specification of technical system and subsystem requirements and interfaces, the development and acquisition of system hardware and software, the testing and verification of performance, the integration of new capabilities, and continuous improvement of system operations and logistics. The NSF maintains a master list of the current FFRDCs and collects funding data from their agency sponsors on an annual basis. According to NSF data, R&D funding for FFRDCs has risen steadily across the federal government, increasing 40 percent from fiscal year 1996 to 2005, from $6.9 billion to $9.7 billion. (See fig. 1 below.) This does not represent the full amount of funding provided to FFRDCs by federal agencies, however, since it does not include non-R&D funding. Nevertheless, it is the only centrally reported information on federal funding for FFRDCs. For a list of the 38 FFRDCs currently sponsored by the U.S. government, see appendix II. <2. 
Most Agencies Compete Cost-Reimbursement Contracts for Operating Their FFRDCs, but Some Do Not Have Specific Personal Conflict-of-Interest Requirements> The four agencies we reviewed use cost-reimbursement contracts with the organizations that operate their FFRDCs, and three of these agencies generally use full and open competition in awarding these contracts. While the agencies require that their FFRDCs be free from organizational conflicts of interest in accordance with federal regulations, only DOD and DOE have agencywide requirements that prescribe specific areas that FFRDC contractors must address to ensure their employees are free from personal conflicts of interest. DHS and HHS policies do not specifically prescribe areas that contractors must include to address these conflicts. <2.1. Three Agencies Generally Compete FFRDC Contracts, While DOD Does Not> Federal law and regulations require federal contracts to be competed unless they fall under specific exceptions to full and open competition. One such exception is awarding contracts to establish or maintain an essential engineering, research, or development capability to be provided by an FFRDC. While some agencies we reviewed awarded FFRDC contracts through other than full and open competition in the past, including sole-source contracts, three have generally used full and open competition in recent years. Starting in the mid-1990s, DOE took steps to improve FFRDC laboratory contractors' performance with a series of contracting reforms, including increasing the use of competition in selecting contractors for its labs. Subsequent legislation required DOE to compete the award and extension of contracts used at its labs, singling out the Ames Laboratory, Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory for mandatory competition because their contracts in effect at the time had been awarded more than 50 years earlier. In addition, according to DOE officials, the Los Alamos contract was competed due to performance concerns with the contractor, and Argonne West's contract was competed to combine its research mission with that of the Idaho National Engineering and Environmental Laboratory to form the Idaho National Laboratory. DOE now routinely uses competitive procedures on contracts for its FFRDC laboratories unless a justification for the use of other than competitive procedures is approved by the Secretary of Energy. Of DOE's 16 FFRDC contracts, DOE has used full and open competition in the award of 13, is in the process of competing one, and plans to compete the remaining two when their terms have been completed. For the 13 contracts that have been competed, in 2 cases the incumbent contractor received the new contract award, in 8 cases a new consortium or limited liability corporation was formed that included the incumbent contractor, and in 3 cases a different contractor was awarded the contract. Other agencies also have used competitive procedures to award FFRDC contracts: HHS has conducted full and open competition on the contract for its cancer research lab since its establishment in 1972, resulting in some change in contractors over the years. Recently, however, HHS noncompetitively renewed the contract with the incumbent contractor. The last time it was competed, in 2001, HHS received no offers other than SAIC-Frederick, which has performed the contract satisfactorily since then.
HHS publicly posted in FedBizOpps its intention to noncompetitively renew the operations and technical support contract with SAIC-Frederick for a potential 10-year period. Interested parties were allowed to submit capability statements, but despite some initial interest none were submitted. DHS competed the initial contract awards for the start-up of its two FFRDCs, with the award of the first contract in 2004. DHS plans to compete the award of the next studies and analyses FFRDC contract this year. In contrast, DOD continues to award its FFRDC contracts on a sole-source basis under statutory exemptions to competition. In the early 1990s, a Senate subcommittee report and a Defense Science Board task force both criticized DOD's management and use of its FFRDCs, including a lack of competition in contract award. This criticism mirrored an earlier GAO observation. GAO subsequently noted in a 1996 report, however, that DOD had begun to strengthen its process for justifying its use of FFRDCs under sole-source contracts for specific purposes. DOD plans to continue its sole-source contracting for the three FFRDC contracts that are due for renewal in 2008 and the six contracts to be renewed in 2010. <2.2. Agencies Use Cost-Reimbursement Contracts with Varying Types of Fee Structures, Primarily Funded through Program Offices> All of the FFRDC contracts we reviewed were cost-reimbursement contracts, most of which provided for payments of fixed, award, or incentive fees to the contractor in addition to reimbursement of incurred costs. Fixed fees often are used when, according to the agencies we reviewed, the FFRDC will have working capital or other miscellaneous expense requirements that cannot be covered through reimbursing direct and indirect costs. Fixed fees generally account for a small percentage of the overall contract costs; for fiscal year 2007, fixed fees paid to the FFRDCs we reviewed varied from a low of about 0.1 percent to a high of 3 percent. Award or incentive fees, on the other hand, are intended to motivate contractors toward such areas as excellent technical performance and cost-effective management. These types of performance-based fees ranged from 1 to 7 percent at the agencies we reviewed. Among the agencies we reviewed, contract provisions on fees varied significantly:

Most DOD contracts are cost-plus-fixed-fee, and DOD, as a general rule, does not provide award or incentive fees to its FFRDCs. DOD's FFRDC management plan, its internal guidance document for DOD entities that sponsor FFRDCs, limits fees to amounts needed to fund ordinary and necessary business expenses that may not be otherwise recoverable under the reimbursement rules that apply to these types of contracts. For example, the FFRDC operator may incur a one-time expense to buy an expensive piece of needed equipment, but the government's reimbursement rules require that this expense be recovered over several future years in accordance with an amortization schedule. DOD's management plan indicates that fees are necessary in such instances to enable the contractor to service the debt incurred to buy the equipment and maintain the cash flow needed for the contractor's business operations. DOD officials told us they scrutinize these fees carefully and do not always pay them. For example, the contract between DOD and the Massachusetts Institute of Technology (MIT), which operates the Lincoln Laboratory FFRDC, specifies that MIT will not receive such fees.
DOE and DHS use fixed fees, performance-based fees, and award terms, which can extend the length of the contract as a reward for good performance. For example, Sandia Corporation, a private company that operates Sandia National Laboratories, receives both a fixed fee and an incentive fee, which for fiscal year 2007 together amounted to about $23.2 million, an additional 1 percent beyond its estimated contract cost. In addition, Sandia Corporation has received award terms that have lengthened its contract by 10 years.

HHS provides only performance-based fees to the private company that operates its one FFRDC.

Rather than receiving direct appropriations, most FFRDCs are funded on a project-by-project basis by the customers, within or outside of the sponsoring agency, that wish to use their services, using funds allocated to a program or office. FFRDC contracts generally specify a total estimated cost for work to be performed and provide for the issuance of modifications or orders for the performance of specific projects and tasks during the period of the contract. Congressional appropriations conferees have sometimes directed specific funding for some DHS and DOD FFRDCs in conference reports accompanying sponsoring agencies' appropriations. For example, although, according to DOD officials, 97 percent of DOD's FFRDC funding comes from program or office allocations to fund specific projects, half of its FFRDCs receive some directed amounts specified in connection with DOD's annual appropriations process. Specifically, for fiscal year 2008, the following DOD FFRDCs received conferee-directed funding in the DOD appropriations conference report: the MIT Lincoln Laboratory Research Program, $30 million; the Software Engineering Institute, $26 million; the Center for Naval Analyses, $49 million; RAND Project Air Force, $31 million; and the Arroyo Center, $20 million. In addition, DOD officials noted that the congressional defense committees sometimes direct DOD's FFRDCs to perform specific studies for these committees through legislation or in committee reports. In fiscal year 2008, two DOD FFRDCs conducted 16 congressionally requested studies. <2.3. All Four Agencies Address Organizational Conflicts of Interest but Vary in Addressing Personal Conflicts of Interest of FFRDC Employees> As FFRDCs may have access to sensitive and proprietary information and because of the special relationship between sponsoring agencies and their FFRDCs, the FAR requires that FFRDC contractors be free from organizational conflicts of interest. In addition, we recently reported that, given the expanding roles that contractor employees play, government officials from the Office of Government Ethics and DOD believe that current requirements are inadequate to address potential personal conflicts of interest of contractor employees in positions to influence agency decisions. While each agency we reviewed requires FFRDC operators to be free of organizational conflicts of interest, only DOD and DOE prescribe specific areas that FFRDC contractors must address to ensure their employees are free from personal conflicts of interest. The FAR states that an organizational conflict of interest exists when, because of other interests or relationships, an entity is unable or potentially unable to render impartial assistance or advice to the government, or the entity might have an unfair competitive advantage.
Because sponsors rely on FFRDCs to give impartial, technically sound, objective assistance or advice, FFRDCs are required to conduct their business in a manner befitting their special relationship with the government, to operate in the public interest with objectivity and independence, to be free from organizational conflicts of interest, and to fully disclose their affairs to the sponsoring agency. Each sponsoring agency we reviewed included conflict-of-interest clauses in its sponsoring agreements with the contractors operating its FFRDCs. For example, a DHS FFRDC contract includes a clause that specifically prohibits contractors that have developed specifications or statements of work for solicitations from performing the work as either a prime or first-tier subcontractor. In addition to organizational conflict-of-interest requirements, DOD and DOE have specific requirements for their FFRDC contractors to guard against personal conflicts of interest of their employees. For purposes of this report, a personal conflict of interest may occur when an individual employed by an organization is in a position to materially influence an agency's recommendations and/or decisions and, because of his or her personal activities, relationships, or financial interests, may either lack or appear to lack objectivity or appear to be unduly influenced by personal financial interests. In January 2007, the Under Secretary of Defense (Acquisition, Technology, and Logistics) implemented an updated standard conflict-of-interest policy for all of DOD's FFRDCs that requires FFRDC contractors to establish policies to address major areas of personal conflicts of interest such as gifts, outside activities, and financial interests. The updated policy and implementing procedures now are included in all DOD FFRDC sponsoring agreements and incorporated into the DOD FFRDC operating contracts. This action was prompted by public and congressional scrutiny of a perceived conflict of interest involving the president of a DOD FFRDC, who then voluntarily resigned. As a result, DOD's Deputy General Counsel (Acquisition and Logistics) reviewed the conflict-of-interest policies and procedures in place at each of its FFRDCs and determined that although sponsoring agreements, contracts, and internal policies were adequate, they should be revised to better protect DOD from employee-related conflicts. DOD's revised policy states that conflicts of interest could diminish an FFRDC's objectivity and capacity to give impartial, technically sound, objective assistance or advice, which is essential to the research, particularly with regard to FFRDCs' access to sensitive information. Therefore, the policy provides that FFRDC conflict-of-interest policies address such issues as gifts and outside activities and requires an annual submission of statements of financial interests from all FFRDC personnel in a position to make or materially influence research findings or recommendations that might affect outside interests. DOE's FFRDCs, which operate under management and operating (M&O) contracts, a special FAR designation for government-owned, contractor-operated facilities such as DOE's, have additional provisions for addressing personal conflicts of interest. The provisions address such areas as reporting any outside employment that may constitute a personal conflict of interest.
In addition, the National Nuclear Security Administration (NNSA), which sponsors three of DOE's FFRDCs, is planning to implement additional requirements in its laboratory contracts later this year requiring contractors to disclose all employee personal conflicts of interest, not just outside employment as is currently required. An NNSA procurement official noted that other personal conflicts of interest may include any relationship of an employee, subcontractor employee, or consultant that may impair objectivity in performing contract work. NNSA officials stated that the agency plans to share the policy with the DOE policy office for potential application across the department. Currently, DHS and HHS policies do not specifically prescribe areas that contractors must include to address employees' personal conflicts. However, DHS officials stated that they provided guidance to the two contractors that operate DHS's FFRDCs to implement requirements to address some of their employees' personal conflicts with DHS's interests. In addition, both DHS and HHS FFRDC contractors provide that their staff avoid or disclose financial interests or outside activities that may conflict with the interests of the company. For example, the contractor operating the FFRDC for HHS requires about 20 percent of its employees to report activities that may constitute a conflict with the company's interests, but allows the bulk of its staff to self-determine when they need to report. In May 2008, we reported that officials from the Office of Government Ethics expressed concerns that current federal requirements and policies are inadequate to prevent certain kinds of ethical violations on the part of contractor employees, particularly with regard to financial conflicts of interest, impaired impartiality, and misuse of information and authority. The acting director identified particular concerns with such conflicts of interest in the management and operations of large research facilities and laboratories. Our report noted that DOD ethics officials had generally the same concerns. Therefore, we recommended that DOD implement personal conflict-of-interest safeguards similar to those for federal employees for certain contractor employees. <3. Agencies Vary in FFRDC Oversight Approaches and Do Not Regularly Share Best Practices> Sponsoring agencies take various approaches in exercising oversight of their FFRDCs. The agencies determine the appropriateness of work conducted by their FFRDCs; perform ongoing and annual assessments of performance, costs, and internal controls; and conduct comprehensive reviews prior to renewing sponsoring agreements. Each agency develops its own processes in these areas, and no formal interagency mechanisms exist to facilitate the sharing of FFRDC oversight best practices. <3.1. Agencies Approve Research Plans and Work Conducted at Their FFRDCs> To ensure work remains within each FFRDC's purpose, mission, scope of effort, and special competency, sponsoring agencies develop and approve annual research plans for the FFRDCs and review and approve FFRDC work assigned on a project-by-project basis. While the majority of each FFRDC's work is done for its sponsoring agency, FFRDCs may perform work for other institutions, subject to sponsoring agency approval. Officials at DOD, DOE, and DHS identified the processes they use to develop annual research plans that describe each FFRDC's research agenda. For example, DHS designates an executive agent to ensure that its FFRDC is used for the agency's intended purposes.
Each year DHS develops a research plan that is reviewed and approved by the executive agent, including any subsequent changes. DHS also uses an Advisory Group to ensure that its FFRDCs produce work consistent with the sponsoring agreement. DOD has a similar mechanism for approving the annual research plan for its Lincoln Laboratory FFRDC. This FFRDC has a Joint Advisory Committee that annually reviews and approves the proposed research plan. Members of this committee include representatives from the various DOD services (e.g., Air Force, Army, and Navy), who are the users of the laboratory's R&D capabilities. Of the four agencies included in our review, only HHS does not create a separate annual research plan for its FFRDC. Instead, the work at the HHS FFRDC is guided by the National Cancer Institute's overall mission, which is described in its annual budgetary and periodic strategic planning documents. In determining the proposed research plan, DOD must abide by congressionally set workload caps. These caps were imposed in the 1990s in response to concerns that DOD was inefficiently using its FFRDCs, and therefore, each fiscal year Congress sets an annual limitation on the staff years of technical effort (STE) that DOD FFRDCs can use to conduct work for the agency. The STE limitations aim to ensure that (1) work is appropriate and (2) limited resources are used for DOD's highest priorities. Congress also sets an additional workload cap for DOD's FFRDCs for certain intelligence programs. Once DOD receives from Congress the annual total for STEs, DOD's Office of the Under Secretary of Defense for Acquisition, Technology and Logistics allocates them across DOD's FFRDCs based on priorities set forth in the annual research plan developed by each FFRDC. DOD officials observed that while the overall DOD budget has increased about 40 percent since the early 1990s, the STE caps have remained steady, and therefore, DOD must turn aside or defer some FFRDC-appropriate work to subsequent years. Although the majority of work that DOD's FFRDCs conduct is subject to these limitations, the work that DOD FFRDCs conduct for non-DOD entities is not subject to these caps. Each sponsoring agency also reviews and approves tasks for individual FFRDC projects to make sure that those tasks (1) are consistent with the core statement of the FFRDC and (2) would not constitute a personal service or inherently governmental function. Listed below are examples of procedures used by agencies included in our review to approve tasks for individual projects:

DOD sponsors generally incorporate in their sponsoring agreements guidelines for performance of work by the FFRDC. The work is screened at various levels for appropriateness, beginning with the FFRDC clients who request the work, then program and contract managers, and then it is reviewed and approved by the primary sponsor. In some cases, projects are entered into a computer-based tool, which the Air Force has developed to determine and develop its overall requirements for that year. The tool is intended to assist the Air Force in prioritizing requests for its FFRDC and in ensuring that work requested is in accordance with guidelines and that potential alternative sources have been considered.

DOE FFRDCs must document all DOE-funded projects using work authorizations to help ensure that the projects are consistent with DOE's budget execution and program evaluation requirements.
In addition, DOE uses an independent scientific peer-review approach, including faculty members and executives from other laboratories, at several of its FFRDC laboratories to ensure the work performed is appropriate for the FFRDC and scientifically sound. In some cases, DOE's Office of Science holds scientific merit competitions between national laboratories (including FFRDCs), universities, and other research organizations for some R&D funding for specific projects.

HHS uses an automated yellow task system to determine if work is appropriate for its FFRDC, and several officials must approve requests for work, including the government contracting officer and the overseeing project officer for the FFRDC, with reference to a set of criteria. This agency requires a concept review by advisory boards for the various HHS institutes to ensure the concept is appropriate for the FFRDC and meets its mission or special competency.

DHS requires certain officials at its sponsoring office to conduct a suitability review using established procedures for reviewing and approving DHS-sponsored tasks. This review is required under DHS's Management Directive for FFRDCs.

FFRDCs are required to have their sponsors review and approve any work they conduct for others, and the four agencies included in our review have policies and procedures to do so. FFRDCs may conduct work for others when required capabilities are not otherwise available from the private sector. This work for others can be done for federal agencies, private sector companies, and local and state governments. The sponsoring agency of an FFRDC offers the work for others, with full costs charged to the requesting entity, to provide research and technical assistance to solve problems. At laboratory FFRDCs, work for others can include creating working models or prototypes. All work placed with the FFRDC must be within the purpose, mission, general scope of effort, or special competency of the FFRDC. Work for others is considered a technology transfer mechanism, which helps in sharing knowledge and skills between the government and the private sector. Under work for others, according to DOD officials and federal regulation, the title to intellectual property generally belongs to the FFRDC conducting the work, and the government may obtain a nonexclusive, royalty-free license to such intellectual property or may choose to obtain the exclusive rights. As required by the FAR, the sponsoring agreements at the agencies we reviewed identified the extent to which their FFRDCs may perform work for entities other than the sponsors (other federal agencies, state or local governments, nonprofit or for-profit organizations, etc.) and the procedures that must be followed by the sponsoring agency and the FFRDC. In addition, according to agency officials, FFRDCs have a responsibility to steer inquiries about potential research for other entities to their primary sponsor's attention for approval. Agency officials stated that they work with their FFRDCs when such situations arise. DOE's Office of Science established a Work for Others Program for all of its FFRDC laboratories. Under this program, the contractor of the FFRDC must draft, implement, and maintain formal policies, practices, and procedures, which must be submitted to the contracting officer for review and approval. In addition, DOE may conduct periodic appraisals of the contractor's compliance with its Work for Others Program policies, practices, and procedures.
For DOE's National Nuclear Security Administration, officials reported that the work for others process at Sandia National Laboratories requires DOE approval before Sandia Corporation develops the proposed statement of work, which is then sent to DOE's site office for review and approval. For DHS, each FFRDC includes the work for others policy in its management plan. For example, one management plan states that the FFRDC may perform work for others and that such work is subject to review by the sponsoring agency for compliance with criteria mutually agreed upon by the sponsor and the FFRDC contractor. The DHS FFRDC laboratory director said he routinely approves any work-for-others requests but gives first priority to the DHS-sponsored work. The sponsor for this FFRDC also periodically assesses whether its work for others impairs its ability to perform work for its sponsor. HHS and DOD also have work-for-others programs for the FFRDCs they sponsor. For example, at HHS's FFRDC the program is conducted under a bilateral contract between the entity that is requesting the work and the FFRDC to perform a defined scope of work for a defined cost. This agency developed a standard Work for Others Agreement for its FFRDC, the terms and conditions of which help ensure that the FFRDC complies with applicable laws, regulations, policies, and directives specified in its contract with HHS. Some agency sponsors report that work for others at their FFRDCs has grown in the past few years. For example, DOE officials said work for others at Sandia National Laboratories related to nanotechnologies and cognitive sciences has grown in the last 3 years. As shown in table 1, the amount of work for others by FFRDCs since fiscal year 2001 has increased for many of the FFRDCs included in our review. While funding for work for others has increased, some agencies in our review reported limiting the amount of work for others their FFRDCs conduct. For example, DOE's Office of Science annually approves overall work-for-others funding levels at its laboratories based on a request from the laboratory and a recommendation from the responsible site office. Any work-for-others program that is above 20 percent of the laboratory's operating budget, or any request that represents a significant change from the previous year's work-for-others program, will be reviewed in depth before approval is provided. Similarly, DOE officials limit commitments to conduct work for others at the National Renewable Energy Laboratory to about 10 percent of the laboratory's total workload. <3.2. Agencies Assess FFRDCs' Performance, Costs, and Internal Controls> In addition to ensuring work is appropriate for their FFRDCs, the four sponsoring agencies in our case study regularly review the contractors' performance in operating the FFRDCs, including reviewing and approving costs incurred in operations and internal control mechanisms. Agency performance evaluations for FFRDC contractors vary, particularly between those that incorporate performance elements into their contracts and those that do not. Furthermore, contracting officers at each agency regularly review costs to ensure that they are appropriate, in some cases relying on audits of costs and internal controls to highlight any potential issues. <3.2.1. Agencies Review Performance of FFRDC and Operating Contractor> All four agencies conduct at least annual reviews of the performance of their FFRDCs and contractors.
At three agencies, the outcomes of these reviews provide the basis for contractors to earn performance-based incentives or awards. Specifically, DOE, HHS, and DHS provide for award fees to motivate contractors toward excellent performance, and contractors operating FFRDCs for DOE and DHS may earn additional contract extensions by exceeding performance expectations. DOE uses a performance-based contracting approach with its FFRDCs, which includes several mechanisms to assess performance. First, DOE requires contractors to conduct annual self-assessments of their management and operational performance. Also, contracting officers conduct annual assessments of the performance of the FFRDC contractor, relying in part on user satisfaction surveys. All of this input contributes to each lab's annual assessment rating. For example, Sandia National Laboratories, operated by Sandia Corporation (a subsidiary of Lockheed Martin), received an overall rating of outstanding for fiscal year 2007 and was awarded 91 percent of its available award fee ($7.6 million of a possible total fee of $8.4 million). DOE noted that Sandia National Laboratories' scientific and engineering support of U.S. national security was an exceptional performance area. DOE publishes such report cards for its laboratories on the Internet. DOE includes detailed performance requirements in each contract in a Performance Evaluation and Measurement Plan that is organized by goals, objectives, measures, and targets. The DOE Office of Science mandates that each of its ten FFRDC laboratories establish the same eight goals in each FFRDC's contractual plan. For example, the Ernest Orlando Lawrence Berkeley National Laboratory, operated by the University of California, received high ratings in providing efficient and effective mission accomplishment and science and technology program management. These ratings resulted in an award of 94 percent, or $4.2 million, of the total available fee of $4.5 million. HHS, which also uses performance-based contracting, has identified certain designated government personnel to be responsible for evaluation of the FFRDC contractor. This review process includes different levels of reviews, from coordinators who review performance evaluations to an FFRDC Performance Evaluation Board, which is responsible for assessing the contractor's overall performance. The board rates each area of evaluation based on an established Performance Rating System to determine the amount of the contractor's award fee. In fiscal year 2007, the National Cancer Institute at Frederick, operated by Science Applications International Corporation-Frederick (a subsidiary of Science Applications International Corporation), received 92 percent of its available award fee, or $6.9 million of a possible $7.4 million. Similar to the other agencies, DHS regularly conducts performance reviews throughout the life cycle of its FFRDC contract. This includes program reviews as described in the sponsoring agreement, midyear status reviews, technical progress reports, monthly and quarterly reports, and annual stakeholder surveys to ensure the FFRDC is meeting customer needs. DHS also drafts a multiyear improvement plan and collects performance metrics as evidence of the FFRDC's performance. For fiscal year 2007, Battelle National Biodefense Institute, operating the National Biodefense Analysis and Countermeasures Center, received 82 percent of its performance-based award fee, amounting to $1.4 million.
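The award-fee figures above reflect a simple relationship between the fee pool available under a contract, the evaluated performance, and the fee actually earned. The Python sketch below is a minimal illustration of that arithmetic only, not any agency's actual fee-determination model; the dollar amounts are the rounded figures cited in the text, so the computed percentages can differ by a point or two from the reported ones.

```python
# Minimal illustration of award-fee arithmetic using rounded figures cited above.
# Not an agency fee-determination model; amounts are in millions of dollars.
examples = {
    "Sandia National Laboratories (FY 2007)": (7.6, 8.4),            # reported as 91 percent
    "Lawrence Berkeley National Laboratory (FY 2007)": (4.2, 4.5),   # reported as 94 percent
    "National Cancer Institute at Frederick (FY 2007)": (6.9, 7.4),  # reported as 92 percent
}

for name, (earned, available) in examples.items():
    share = earned / available
    print(f"{name}: ${earned}M of ${available}M available fee = {share:.0%}")
# Small differences from the reported percentages reflect rounding of the
# dollar amounts in the text, which are given to the nearest $0.1 million.
```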
According to DHS officials, Analytic Services, Inc., which operates the Homeland Security Institute, received a fixed fee of about 2 percent, or approximately $0.68 million, for fiscal year 2007. DOD conducts annual performance reviews and other internal reviews, such as periodic program management reviews and annual customer surveys, to monitor the performance of its FFRDCs in meeting their customers' expectations. As part of this review process, major users are asked to provide their perspectives on such factors as the use of and continuing need for the FFRDC, and how these users distinguish work to be performed by the FFRDC from work to be performed by others. According to DOD, these performance evaluations provide essential input to help it assess the effectiveness and efficiency of the FFRDC's operations. Typically, the performance reviews obtain ratings from FFRDC users and sponsors on a variety of factors, including the quality and value of the work conducted by the FFRDC, as well as its ability to meet technical needs, provide timely and responsive service, and manage costs. <3.2.2. Agencies Review Costs and Internal Controls> Federal regulations, policies, and contracts establish various cost, accounting, and auditing controls that agencies use to assess the adequacy of FFRDC management in ensuring cost-effective operations and to ensure that the costs of services being provided to the government are reasonable. Sponsors of the FFRDCs we reviewed employ a variety of financial and auditing oversight mechanisms to review contractors' management controls, including incurred cost audits, general financial and operational audits, annual organizational audits, and audited financial statements. These mechanisms differ, depending on the agencies involved and the type of organization operating the FFRDCs. Under cost-reimbursement contracts, the costs incurred are subject to cost principles applicable to the type of entity operating the FFRDC. Most FFRDC contracts we examined include a standard clause on allowable costs that limits contract costs to amounts that are reasonable and in compliance with applicable provisions of the FAR. Under the FAR, contracting officers are responsible for authorizing cost-reimbursement payments and may request audits at their discretion before a payment is made. In addition, when an allowable cost clause is included in a contract, the FAR requires that an indirect cost rate proposal be submitted annually for audit. At DOD, the Defense Contract Audit Agency (DCAA) generally performs both annual incurred cost audits and close-out audits for completed contracts and task orders at the end of an FFRDC's 5-year contract term. The audit results are included in the comprehensive review of DOD's continued need for its FFRDCs. DCAA also performs these types of audits for DHS's FFRDCs. At DOE, the Office of the Inspector General is responsible for incurred cost audits for major facilities contractors. At HHS, officials stated that while the contracting officer for its FFRDC regularly reviews the incurred costs, no audits of these costs have been performed. Agencies and FFRDC contractors also conduct financial and operational audits in addition to incurred cost audits. DOE relies primarily upon FFRDC contractors' annual internal audits rather than on third-party monitoring through external audits.
These internal audits are designed to implement DOE's Cooperative Audit Strategy, a program that partners DOE's Inspector General with contractors' internal audit groups to maximize the overall audit coverage of M&O contractors' operations and to fulfill the Inspector General's responsibility for auditing the costs incurred by major facilities contractors. This cooperative audit strategy permits the Inspector General to make use of the work of contractors' internal audit organizations to perform operational and financial audits, including incurred cost audits, and to assess the adequacy of contractors' management control systems. DHS and DOD generally rely on audits performed by those agencies, a designated audit agency, or an accounting firm, though their FFRDC contractors usually perform some degree of internal audit or review function as part of their overall management activity. In addition, all nonprofits and educational institutions that annually expend more than $500,000 in federal awards, including those that operate FFRDCs, are subject to the Single Audit Act, which requires annual audits of (1) financial statements, (2) internal controls, and (3) compliance with laws and regulations. We have previously reported that these audits constitute a key accountability mechanism for federal awards and generally are performed by independent auditors. At DOD, for example, DCAA participates in single audits, normally on a coordinated basis with the audited organization's independent public accountant, at the election of the organization being audited. The financial statements, schedules, corrective action plan, and audit reports make up the single audit package, which the audited organization is responsible for submitting to a federal clearinghouse designated by OMB to receive, distribute, and retain. DOD's Office of Inspector General, for example, as a responsible federal agency, receives all single audit submissions for nonprofits and educational institutions that operate DOD's FFRDCs. These audit results are employed by DOD as partial evidence of its FFRDCs' cost-effectiveness and incorporated in the 5-year comprehensive reviews. These annual single audits for nonprofit and educational FFRDC contractors are a useful adjunct to the other cost, accounting, and auditing controls discussed previously, designed to help determine contractor effectiveness, efficiency, and accountability in the management and operation of their FFRDCs. Private contractors that publicly trade their securities on the exchanges, including those that operate FFRDCs, are registered with the Securities and Exchange Commission (SEC) and are required to file audited financial statements with the SEC. These audited statements must be prepared in conformity with generally accepted accounting principles (GAAP) and securities laws and regulations, including the Sarbanes-Oxley Act, that address governance, auditing, and financial reporting. These financial statements are designed to disclose information for the benefit of the investing public, not to meet government agencies' information needs. Accordingly, SAIC and Lockheed Martin, the private contractors that manage the National Cancer Institute at Frederick and Sandia National Laboratories, respectively, prepare audited financial statements for their corporate entities but do not separately report information on their individual FFRDCs' operations.
Finally, even though separate financial statements are not required of university- and nonprofit-operated FFRDCs, some of the FFRDCs at agencies we reviewed have audited financial statements prepared solely for their own operations. DOD's Aerospace FFRDC and DHS's Homeland Security Institute and National Biodefense Analysis and Countermeasures Center are examples. Most other FFRDCs' financial operations, however, are included in the audited financial statements of their parent organizations or operating contractors. Some contractors, like MITRE, which manages not only DOD's C3I FFRDC but also two others (one for the Federal Aviation Administration and one for the Internal Revenue Service), provide supplemental schedules, with balance sheets, revenues and expenses, and sources and uses of funds for all three FFRDCs. Others, like the Institute for Defense Analyses, which also operates two other FFRDCs in addition to the Studies and Analyses Center for DOD, provide only a consolidated corporate statement with no information on specific FFRDCs. <3.3. Agencies Periodically Rejustify Their Sponsorship of FFRDCs> The FAR requires that a comprehensive review be undertaken prior to extending a sponsoring agreement for an FFRDC. We found that the four agencies in our case study were conducting and documenting these reviews, but noted that implementation of this requirement by each agency is based on its own distinct management policies, procedures, and practices. During the reviews prior to agreement renewal, sponsoring agencies should include the following five areas identified by the FAR: examination of the continued need for the FFRDC to address its sponsor's technical needs and mission requirements; consideration of alternative sources, if any, to meet those needs; assessment of the FFRDC's efficiency and effectiveness in meeting the sponsor's needs, including objectivity, independence, quick response capability, currency in its field(s) of expertise, and familiarity with the sponsor; assessment of the adequacy of FFRDC management in ensuring a cost-effective operation; and determination that the original reason for establishing the FFRDC still exists and that the sponsoring agreement is in compliance with FAR requirements for such agreements. DOD sponsoring offices begin conducting detailed analyses for each of the five FAR review criteria approximately 1 to 2 years in advance of the renewal date. As DOD has received criticism in the past for its lack of competition in awarding FFRDC contracts, it now conducts detailed and lengthy comprehensive reviews prior to renewing FFRDC sponsoring agreements and contracts with incumbent providers. DOD's FFRDC Management Plan lays out procedures to help provide consistency and thoroughness in meeting FAR provisions for the comprehensive review process. DOD procedures require, and the comprehensive reviews we examined generally provided, detailed examinations of the mission and technical requirements for each FFRDC user, and explanations of why capabilities cannot be provided as effectively by other alternative sources. For example, DOD convened a high-level, independent Technical Review Panel to review whether Lincoln Laboratory's research programs were within its mission as well as whether the research was effective, of high technical quality, and of critical importance to DOD. The panel, composed of a former Assistant Secretary of the Air Force, a former president of another FFRDC, former senior military officers, and a high-level industry representative, found that no other organizations had the capacity to conduct a comparable research program.
In addition, DOD sponsors use information from annual surveys of FFRDC users that address such performance areas as cost-effectiveness and technical expertise. Determinations to continue or terminate the FFRDC agreement are made by the heads of sponsoring DOD components (e.g., the Secretary of the Army or Air Force) with review and concurrence by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. DOE has a documented comprehensive review process that explicitly requires DOE sponsors to assess the use of and continued need for the FFRDC before the term of the agreement has expired. DOE's process requires that the review be conducted at the same time as the review regarding the decision to extend (by option) or compete its FFRDC operating contract. According to DOE's regulation, the option period for these contracts may not exceed 5 years, and the total term of the contract, including any options exercised, may not exceed 10 years. DOE relies on information developed as part of its annual performance review assessments as well as information developed through the contractor's internal audit process to make this determination. The comprehensive review conducted prior to the most recent award of the contract to operate Sandia National Laboratories concluded that the FFRDC's overall performance for the preceding 6 years had been outstanding. The Secretary of Energy determined that the criteria for establishing the FFRDC continued to be satisfied and that the sponsoring agreement was in compliance with FAR provisions. At DHS, we found that its guidance and process for the comprehensive review mirror many aspects of the DOD process. DHS has undertaken only one such review to date, which was completed in May 2008. As of the time we completed our work, DHS officials told us that the documentation supporting the agency's review had not yet been approved for release. HHS, in contrast to the structured review processes of the other agencies, relies on the judgment of the sponsoring office's senior management team, which reviews the need for the continued sponsorship of the FFRDC and determines whether it meets the FAR requirements. Agency officials stated that this review relies on a discussion of the FFRDC's ability to meet the agency's needs within the FAR criteria, but noted there are no formal procedures laid out for this process. The final determination is approved by the director of the National Cancer Institute and then the director of the National Institutes of Health. <3.4. No Formal Interagency Mechanisms Exist for Sharing of Best Practices for Overseeing FFRDCs> Some agencies have used the experiences of other agencies as a model for their own oversight of their FFRDCs. There is no formal mechanism, however, for sharing best practices and lessons learned among sponsoring agencies. DHS officials have adopted several of DOD's and DOE's policies and procedures for managing FFRDCs to help their newly created FFRDCs gain efficiencies. DHS mirrored most of DOD's FFRDC Management Plan, and officials have stated that the STE limitations for DOD could be a potentially useful tool for focusing FFRDCs on the most strategic and critical work for the agency. Also, DHS officials stated they have made use of DOE's experience in contracting for and overseeing the operation of its laboratories, such as by including a DOE official in the DHS process to select a contractor to operate its laboratory FFRDC.
In addition, HHS officials said they are incorporating the DOE Blue Ribbon Report recommendation to set aside a portion of the incentive fee paid on their FFRDC contract to reward scientific innovations or research. The idea for the new contract is to base 80 percent of the available award fee in a performance period on operations and use the final 20 percent to reward innovation. HHS also may adopt the technique used by DOE of providing for contract extensions on the basis of demonstrated exceptional performance. To take advantage of others' experiences, some FFRDCs sponsored by particular agencies have formed informal groups to share information. For example, DOD's FFRDCs have formed informal groups at the functional level (Chief Financial Officers, Chief Technology Officers, and General Counsels) that meet periodically to share information on issues of common concern. In addition, the security personnel from the DOD FFRDC contractors meet once a year to discuss security and export control-related issues. The contractor officials at Sandia National Laboratories said they share best practices for operating DOE's laboratory FFRDCs at forums such as the National Laboratory Improvement Council. This Council was also mentioned in a DOE review of management best practices for the national laboratories as one of the few groups that deliberate a broader and more integrated agenda among laboratories. Despite these instances of information sharing within agencies and the acknowledgment by some officials of potential benefits in such knowledge sharing, no formal mechanisms exist for sharing information across agencies that sponsor and oversee FFRDCs. We reported in 2005 that federal agencies often carry out related programs in a fragmented, uncoordinated way, resulting in a patchwork of programs that can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. The report suggested that frequent communication across agency boundaries can prevent misunderstandings, promote compatibility of standards, policies, and procedures, and enhance collaboration. For example, the Federal Laboratory Consortium for Technology Transfer was created to share information across national laboratories. This includes the FFRDC laboratories, but not the other types of FFRDCs. Some agency officials stated that there would be benefits to sharing such best practices. <4. Conclusions> All federal agencies that sponsor FFRDCs are subject to the same federal regulations, and each agency included in our review has developed its own processes and procedures to ensure compliance and conduct oversight of its FFRDCs. For the most part, the differences in approaches are not of great consequence. In at least one key area, however, the different approaches have the potential to produce significantly different results. Specifically, while all FFRDCs are required to address organizational conflicts of interest, only DOD and DOE have requirements that their FFRDC contractors address specific areas of personal conflicts of interest of their employees. In light of the special relationship that FFRDCs have with their sponsoring agencies, which often involves access to sensitive or confidential information, it is critical that not only the FFRDC as an entity but also employees of the entity in positions to make or influence research findings or agency decision making be free from conflicts.
Lacking such safeguards, an FFRDC's objectivity and ability to provide impartial, technically sound, objective assistance or advice may be diminished. The two agencies with the most experience sponsoring FFRDCs have recognized this gap and have taken steps to address personal conflicts of interest. These steps are consistent with our recent recommendation to DOD that highlighted the need for personal conflict-of-interest safeguards for certain contractor employees. The other agencies included in our review of FFRDCs could benefit from additional protections in the area of personal conflicts of interest. Currently, although DHS and HHS have policies that generally require their FFRDC contractors to implement such safeguards, these policies lack the specificity needed to ensure their FFRDC contractors will consistently address employees' personal conflicts of interest. Conflict-of-interest requirements are only one of several areas in which agencies that sponsor FFRDCs can learn from each other. Other areas include the use of effective and efficient oversight mechanisms such as incentive and award fees, obtaining competition, and conducting comprehensive reviews. In the absence of established knowledge-sharing mechanisms, however, agencies may be missing opportunities to enhance their management and oversight practices. Sharing knowledge among agencies that sponsor FFRDCs, as has been done informally in some instances, could help to ensure that agencies are aware of all the various tools available to enhance their ability to effectively oversee their FFRDCs. <5. Recommendations for Executive Action> To ensure that FFRDC employees operate in the government's best interest, we recommend that the Secretary of Homeland Security revise agency policies to address specific areas for potential personal conflicts of interest for FFRDC personnel in a position to make or materially influence research findings or agency decision making; and that the Secretary of Health and Human Services review agency policy regarding personal conflicts of interest for its sponsored FFRDC and revise it as appropriate to ensure that this policy addresses all personnel in a position to make or materially influence research findings or agency decision making. To improve the sharing of oversight best practices among agencies that sponsor FFRDCs, we recommend that the Secretaries of Energy, Defense, Homeland Security, and Health and Human Services, which together sponsor the vast majority of the government's FFRDCs, take the lead in establishing an ongoing forum for government personnel from these and other agencies that sponsor FFRDCs to discuss their agencies' FFRDC policies and practices. Areas for knowledge sharing could include, for example, implementing personal conflict-of-interest safeguards and processes for completing the justification reviews prior to renewing sponsoring agreements, among others. <6. Agency Comments and Our Evaluation> The Departments of Health and Human Services and Homeland Security concurred with our recommendation that they revise their conflict-of-interest policies. In addition, the departments of Defense, Energy, and Homeland Security all concurred with our recommendation to establish a forum to share best practices, while HHS is considering participation in such a forum. We received letters from Defense, Energy, and Health and Human Services, which are reprinted in appendixes III, IV, and V, respectively.
In addition, the departments of Health and Human Services and Homeland Security provided technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We then will provide copies of this report to the Secretaries of Defense, Energy, Health and Human Services, and Homeland Security and other interested parties. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-4841 or [email protected] or (202) 512-9846 or [email protected]. Key contributors to this report are acknowledged in appendix VI. Appendix I: Objectives, Scope, and Methodology To conduct this review, we chose a nongeneralizable sample of four of the nine federal agencies that sponsor FFRDCs: the departments of Energy (DOE) and Defense (DOD) have the longest histories in sponsoring federally funded research and development centers (FFRDCs) and sponsor the most (16 and 10, respectively); the Department of Homeland Security (DHS) has the 2 most recently established FFRDCs; and the Department of Health and Human Services (HHS) has 1 FFRDC laboratory. From the collective 29 FFRDCs that those four agencies sponsor, we selected a nongeneralizable sample of 8 FFRDCs that represented variation among the types of operating contractor, including some operated by universities, some by nonprofits, and some by private industry. Within DOD and DHS, we chose FFRDCs that represent the variation among types these two agencies sponsor, while DOE and HHS sponsor only laboratory-type FFRDCs. See appendix II for the FFRDCs included in our case study. To identify sponsors' contracting and oversight methods at the four agencies in our case study, we interviewed federal department officials at each office that sponsors FFRDCs as well as offices that have contractor management and audit roles: (1) DOE's Office of Science, National Nuclear Security Administration, Office of Energy Efficiency and Renewable Energy, Office of Environmental Management, Office of Nuclear Energy, and Office of Inspector General; (2) DOD's departments of the Navy, Air Force, and Army; Office of the Secretary of Defense; Office of Acquisition, Technology, and Logistics; Defense Contract Audit Agency; and the Defense Contract Management Agency; (3) HHS's National Institutes of Health, National Cancer Institute, and National Institute of Allergy and Infectious Diseases; and (4) DHS's Directorate for Science and Technology. In addition, we obtained and analyzed federal and agency policies and guidance, contracts for the FFRDCs in our case studies, and other supporting documentation such as performance and award fee plans, sponsoring agreements (when separate from contracts), and a variety of audits and reviews. While we did not assess the effectiveness of or deficiencies in specific agencies' controls, we reviewed agency documentation on incurred cost audits, general auditing controls, single audits, and audited financial statements. We also obtained and analyzed funding data from sponsoring agencies as well as from the National Science Foundation (NSF), which periodically collects and reports statistical information regarding FFRDCs, such as their sponsors, category types, contractors, and funding.
While we did not independently verify the data for reliability, we reviewed the NSF's methodology and noted that it reports a 100 percent response rate, no item nonresponse, and no associated sampling errors. For FFRDCs in our case study, we conducted on-site visits, interviewed key contractor administrative personnel, and obtained information and documentation on how they meet sponsoring agencies' research needs and adhere to policy guidance. We observed examples of the types of research the FFRDCs conduct for their sponsors and obtained and analyzed documentation such as contractor ethics guidance and policies, performance plans, and annual reports. To obtain the perspective of the government contracting community, we met with high-level representatives of the Professional Services Council, a membership association for companies that provide services to the U.S. federal government. Appendix II: List of 38 Federally Funded Research and Development Centers [Table listing each of the 38 FFRDCs with its operating contractor (e.g., RAND Corp., MITRE Corp., the Institute for Defense Analyses, Brookhaven Science Associates, Inc., Universities Research Association, Inc., Westinghouse Savannah River Co., and Analytic Services, Inc., among others) and location.] Appendix III: Comments from the Department of Defense Appendix IV: Comments from the Department of Energy Appendix V: Comments from the Department of Health and Human Services Appendix VI: GAO Contact and Staff Acknowledgments <8. Acknowledgments> In addition to the individuals named above, key contributors to this report were John Neumann, Assistant Director; Cheryl Williams, Assistant Director; Sharron Candon; Suzanne Sterling; Jacqueline Wade; and Peter Zwanzig.
Why GAO Did This Study

In 2006, the federal government spent $13 billion--14 percent of its research and development (R&D) expenditures--to enable 38 federally funded R&D centers (FFRDCs) to meet special research needs. FFRDCs--including laboratories, studies and analyses centers, and systems engineering centers--conduct research in military space programs, nanotechnology, microelectronics, nuclear warfare, and biodefense countermeasures, among other areas. GAO was asked to identify (1) how federal agencies contract with organizations operating FFRDCs and (2) agency oversight processes used to ensure that FFRDCs are well-managed. GAO's work is based on a review of documents and interviews with officials from eight FFRDCs sponsored by the departments of Defense (DOD), Energy (DOE), Health and Human Services (HHS), and Homeland Security (DHS).

What GAO Recommends

What GAO Found

Federal agencies GAO reviewed use cost-reimbursement contracts with the organizations that operate FFRDCs, and three of the agencies generally use full and open competition to award the contracts. Only DOD consistently awards its FFRDC contracts on a sole-source basis, as permitted by law and regulation when properly justified. FFRDCs receive funding for individual projects from customers that require the FFRDCs' specialized research capabilities. Because FFRDCs have a special relationship with their sponsoring agencies and may be given access to sensitive or proprietary data, regulations require that FFRDCs be free from organizational conflicts of interest. DOD and DOE also have policies that prescribe specific areas that FFRDC contractors must address to ensure their employees are free from personal conflicts of interest. In a May 2008 report, GAO recognized the importance of implementing such safeguards for contractor employees. Currently, although DHS and HHS have policies that require their FFRDC contractors to implement conflicts-of-interest safeguards, these policies lack the specificity needed to ensure their FFRDC contractors will consistently address employees' personal conflicts of interest.

Sponsoring agencies use various approaches in their oversight of FFRDC contractors, including: (1) review and approval of work assigned to FFRDCs, or conducted for other agencies or entities, to determine consistency with the FFRDC's purpose, capacity, and special competency (in this process, only DOD must abide by congressionally imposed annual workload limits for its FFRDCs); (2) conduct of performance reviews and audits of contractor costs, finances, and internal controls; and (3) conduct of a comprehensive review before a contract is renewed to assess the continuing need for the FFRDC and whether the contractor can meet that need, based on annual assessments of contractor performance.

Some agencies have adopted other agencies' FFRDC oversight and management practices. For example, DHS mirrored most of DOD's FFRDC Management Plan--an internal DOD guidance document--in developing an approach to FFRDC oversight, and DHS officials told us they learned from DOE's experience in selecting and overseeing contractors for laboratory FFRDCs. In addition, HHS plans to implement certain DOE practices, including rewarding innovation and excellence in performance through various contract incentives. While agency officials have acknowledged the potential benefits from sharing best practices, there is currently no formal cross-agency forum or other established mechanism for doing so.
<1. Almost Half of MUAs Lacked a Health Center Site in 2006, and Types of Services Provided by Each Site Could Not Be Determined> In August 2008, we reported that almost half of MUAs nationwide (47 percent, or 1,600 of 3,421) lacked a health center site in 2006, and there was wide variation among the four census regions and across states in the percentage of MUAs that lacked health center sites. (See fig. 1.) The Midwest census region had the most MUAs that lacked a health center site (62 percent), while the West census region had the fewest MUAs that lacked a health center site (32 percent). More than three-quarters of the MUAs in 4 states lacked a health center site: Nebraska (91 percent), Iowa (82 percent), Minnesota (77 percent), and Montana (77 percent). (See app. I for more detail on the percentage of MUAs in each state and the U.S. territories that lacked a health center site in 2006.) In 2006, among all MUAs, 32 percent contained more than one health center site; among MUAs with at least one health center site, 60 percent contained multiple health center sites, with about half of those containing two or three sites. Almost half of all MUAs in the West census region contained more than one health center site, while less than one-quarter of MUAs in the Midwest contained more than one site. The states with three-quarters or more of their MUAs containing more than one health center site were Alaska, Connecticut, the District of Columbia, Hawaii, New Hampshire, and Rhode Island. In contrast, Nebraska, Iowa, and North Dakota were the states where less than 10 percent of MUAs contained more than one site. We could not determine the types of primary care services provided at individual health center sites because HRSA did not collect and maintain readily available data on the types of services provided at individual sites. While HRSA requests information from applicants in their grant applications on the services each site provides, HRSA would have to retrieve this information from the grant applications manually in order to access and analyze it for individual health center sites. HRSA separately collects data through the UDS from each grantee on the types of services it provides across all of its health center sites, but HRSA does not collect data on services provided at each site. Although each grantee with community health center funding is required to provide the full range of comprehensive primary care services, HRSA does not require each grantee to provide all services at each health center site it operates. HRSA officials told us that some sites provide limited services such as dental or mental health services. Because HRSA lacks readily available data on the types of services provided at individual sites, it cannot determine the extent to which individuals residing in MUAs have access to the full range of comprehensive primary care services provided by health center grantees. This lack of basic information can limit HRSA's ability to assess the full range of primary care services available in needy areas when considering the placement of new access points and can also limit the agency's ability to evaluate service area overlap in MUAs. <2. 2007 Awards Reduced the Number of MUAs That Lacked a Health Center Site, but Wide Geographic Variation Remained> In August 2008, we reported that our analysis of new access point grants awarded in 2007 showed that these awards reduced the number of MUAs that lacked a health center site by about 7 percent.
Specifically, 113 fewer MUAs in 2007 (1,487 MUAs in all) lacked a health center site when compared with the 1,600 MUAs that lacked a health center site in 2006. (See app. I.) As a result, 43 percent of MUAs nationwide lacked a health center site in 2007. Despite the overall reduction in the percentage of MUAs nationwide that lacked health center sites in 2007, regional variation remained. The West and Midwest census regions continued to show the lowest and highest percentages of MUAs that lacked health center sites, respectively. (See fig. 2.) Three of the four census regions showed a 1 or 2 percentage point decrease since 2006 in the percentage of MUAs that lacked a health center site, while the South census region showed a 5 percentage point decrease. We found that the minimal impact of the 2007 awards on regional variation was due, in large part, to the fact that more than two-thirds of the nationwide decline in the number of MUAs that lacked a health center site (77 out of the 113 MUAs) occurred in the South census region. In contrast, only 24 of the 113 MUAs were located in the Midwest census region, even though the Midwest had nearly as many MUAs that lacked a health center site in 2006 as the South census region. While the number of MUAs that lacked a health center site declined by 12 percent in the South census region, the other census regions experienced declines of about 4 percent. The South census region experienced the greatest decline in the number of MUAs lacking a health center site in 2007 in large part because it was awarded more new access point grants that year than any other region. Specifically, half of all new access point awards made in 2007 from the two separate new access point competitions went to applicants from the South census region. For example, when we examined the High Poverty County new access point competition, in which 200 counties were targeted by HRSA for new health center sites, we found that 69 percent of those awards were granted to applicants from the South census region. The greater number of awards made to the South census region may be explained by the fact that nearly two-thirds of the 200 counties targeted were located in the South census region. When we examined the open new access point competition, which did not target specific areas, we found that the South census region also received a greater number of awards than any other region under that competition. Specifically, the South census region was granted nearly 40 percent of awards; in contrast, the Midwest received only 17 percent of awards. <3. Concluding Observations> In our August 2008 report, we noted that awarding new access point grants is central to HRSA's ongoing efforts to increase access to primary health care services in MUAs. From 2006 to 2007, HRSA's new access point awards achieved modest success in reducing the percentage of MUAs that lacked a health center site nationwide. However, in 2007, 43 percent of MUAs continued to lack a health center site, and the new access point awards made in 2007 had little impact on the wide variation among census regions and states in the percentage of MUAs lacking a health center site. The relatively small effect of the 2007 awards on geographic variation may be explained, in part, by the fact that the South census region received a greater number of awards than other regions, even though the South was not the region with the highest percentage of MUAs lacking a health center site in 2006.
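The nationwide figures above follow directly from the MUA counts reported in this testimony. The short calculation below is a minimal illustrative sketch using only those reported counts (3,421 MUAs in total; 1,600 lacking a site in 2006; 1,487 in 2007; and 77 of the newly served MUAs in the South census region); it is not GAO's analysis method.

```python
# Illustrative arithmetic only; the counts are those reported in the testimony.
total_muas = 3421            # MUAs nationwide
lacking_2006 = 1600          # MUAs without a health center site in 2006
lacking_2007 = 1487          # MUAs without a health center site in 2007
south_share_of_decline = 77  # newly served MUAs located in the South census region

decline = lacking_2006 - lacking_2007               # 113 fewer MUAs
pct_lacking_2006 = 100 * lacking_2006 / total_muas  # ~46.8%, reported as 47 percent
pct_lacking_2007 = 100 * lacking_2007 / total_muas  # ~43.5%, reported as 43 percent
pct_decline = 100 * decline / lacking_2006          # ~7.1%, reported as about 7 percent
south_fraction = south_share_of_decline / decline   # ~0.68, "more than two-thirds"

print(f"{decline} fewer MUAs; {pct_lacking_2006:.0f}% -> {pct_lacking_2007:.0f}% "
      f"({pct_decline:.0f}% decline); South share of decline {south_fraction:.0%}")
```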
We reported that HRSA awards new access point grants to open new health center sites, which increase access to primary health care services for underserved populations in needy areas, including MUAs. However, HRSA's ability to target these awards and place new health center sites in locations where they are most needed is limited because HRSA does not collect and maintain readily available information on the services provided at individual health center sites. Having readily available information on the services provided at each site is important for HRSA's effective consideration of need when distributing federal resources for new health center sites, because each health center site may not provide the full range of comprehensive primary care services. This information could also help HRSA assess any potential overlap of services provided by health center sites in MUAs. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or Members of the Committee may have. <4. GAO Contacts and Staff Acknowledgments> For further information about this statement, please contact Cynthia A. Bascetta at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were Helene Toiv, Assistant Director; Stella Chiang; Karen Doran; and Karen Howard. Appendix I: Number and Percentage of Medically Underserved Areas (MUA) Lacking a Health Center Site, 2006 and 2007 This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Health centers funded through grants under the Health Center Program--managed by the Health Resources and Services Administration (HRSA) of the U.S. Department of Health and Human Services (HHS)--provide comprehensive primary care services for the medically underserved. The statement GAO is issuing today summarizes an August 2008 report, Health Resources and Services Administration: Many Underserved Areas Lack a Health Center Site, and the Health Center Program Needs More Oversight (GAO-08-723). In that report, GAO examined to what extent medically underserved areas (MUA) lacked health center sites in 2006 and 2007. To do this, GAO obtained and analyzed HRSA data and grant application What GAO Found In its August 2008 report, which is summarized in this statement, GAO found the following: (1) Grant awards for new health center sites in 2007 reduced the overall percentage of MUAs lacking a health center site from 47 percent in 2006 to 43 percent in 2007. (2) There was wide geographic variation in the percentage of MUAs that lacked a health center site in both years. (3) Most of the 2007 nationwide decline in the number of MUAs that lacked a health center site occurred in the South census region, in large part because half of all awards made in 2007 for new health center sites were granted to the South census region. (4) HRSA lacked readily available data on the services provided at individual health center sites. GAO concluded that from 2006 to 2007, HRSA's grant awards to open new health center sites reduced the number of MUAs that lacked a site by about 7 percent. However, in 2007, 43 percent of MUAs continued to lack a health center site, and the grants for new sites awarded that year had little impact on the wide variation among census regions and states in the percentage of MUAs lacking a health center site. GAO reported that HRSA's grants to open new health center sites increased access to primary health care services for underserved populations in needy areas, including MUAs. However, HRSA's ability to place new health center sites in locations where they are most needed was limited because HRSA does not collect and maintain readily available information on the services provided at individual health center sites. Because each health center site may not provide the full range of comprehensive primary care services, having readily available information on the services provided at each site is important for HRSA's effective consideration of need when distributing federal resources for new health center sites.
<1. Background> As demonstrated by the terrorist attacks of September 11, 2001, the United States and other nations face increasingly diffuse threats. Potential adversaries are more likely to strike vulnerable civilian or military targets in nontraditional ways to avoid direct confrontation with our military forces on the battlefield, to try to coerce our government to take some action terrorists desire, or simply to make a statement. Moreover, according to the President's December 2000 national security strategy, such threats are more viable today because of porous borders, rapid technological change, greater information flow, and the destructive power of weapons now within the reach of states, groups, and individuals who may aim to endanger our values, way of life, and the personal security of our citizens. Hostile nations, terrorist groups, and even individuals may target Americans, our institutions, and our infrastructure with weapons of mass destruction, including biological, chemical, radiological, nuclear, or high explosive weapons. Although they would have to overcome significant technical and operational challenges to make and release many chemical or biological agents of a sufficient quality and quantity to kill large numbers of people, such attacks have been tried, as demonstrated by the current incidents of anthrax-laced letters. Previous attempts have been made, such as in 1995, when the Aum Shinrikyo group succeeded in killing 12 people and injuring thousands by releasing the nerve agent sarin in the Tokyo subway. Prior to the Aum Shinrikyo attack, in 1984, the Rajneeshee religious cult in Oregon contaminated salad bars in local restaurants with salmonella bacteria to prevent people from voting in a local election. Although no one died, hundreds of people were diagnosed with food-borne illness. A fundamental role of the government under our Constitution is to protect America from both foreign and domestic threats. The government must be able to prevent and deter attacks on our homeland as well as detect impending danger before attacks or incidents occur. Although it may not be possible to detect, prevent, or deter every attack, steps can be taken to manage the risk posed by the threats to homeland security. <2. Risk Management Efforts by Individual Agencies Have Been Inconclusive> We have conducted numerous cross-agency reviews of programs to combat terrorism and have made recommendations that the federal government adopt a risk management approach that could be used at the national as well as the state and local level. Efforts related to risk management are underway at individual federal agencies, including the Department of Justice (in conjunction with state governments), the FBI, and DOD. However, the results to date have been inconclusive. <2.1. National Level Threat Assessments Approaching Completion> In September 1999, we recommended that the Department of Justice, specifically the FBI, conduct threat and other assessments at the national level as part of a risk management approach that could be useful nationwide. In response to our report, the FBI agreed to lead two assessments. The first assessment is a report on those chemical and biological agents that may be more likely to be used in the United States by a terrorist group that was not state sponsored (e.g., terrorist groups without access to foreign government stockpiles, production capabilities, or funding). Because of limitations on intelligence, the FBI decided to focus on chemical and biological agents.
While not identifying specific terrorist groups, this assessment would still be useful in determining requirements for programs to combat terrorism. The FBI is sponsoring this assessment in conjunction with the Department of Justice's National Institute of Justice and the Technical Support Working Group. This assessment will be provided to state and local governments to help them conduct their own risk management assessments. The Department of Justice had estimated that the final assessment would be published in December 2001. The second assessment is a national-level threat assessment of the terrorist threat in the United States. According to the Department of Justice, the FBI is in the process of conducting this assessment, which will encompass domestic terrorism, international terrorism, weapons-of-mass-destruction terrorism, cyber-terrorism, and proliferation of weapons of mass destruction. The report will assess the current threat, the projected threat, emerging threats, and related FBI initiatives. The Department had estimated that this classified assessment would be completed in October 2001. Department of Justice and FBI officials told us that the September 11 terrorist attacks may dictate revisions to these assessments and delay their completion. While we view both of these assessments as positive, the FBI noted that these would be limited to threat assessments only and will not include other important aspects of risk management that we discuss below. <2.2. State and Local Threat Assessments Underway> In April 1998, we asked the Congress to consider requiring the domestic preparedness program then run by the DOD to use a risk management approach in its efforts to prepare state and local governments for terrorist attacks involving weapons of mass destruction. The Department of Justice took over that program in fiscal year 2001, and has worked with the FBI to create a risk management tool for state and local governments. This tool includes a step-by-step methodology for assessing threats, risks, and requirements. It also includes information on how to prioritize programs and to project spending amounts. The information from the assessments will be used to develop statewide domestic preparedness strategic plans. The statewide assessment process includes an initial risk assessment and identification of the most likely scenarios. This risk assessment is the culmination of three other assessments: threat, vulnerabilities, and public health assessments. This design feature enables the program to focus resources on preparing for the most likely scenarios. The Department of Justice plans to use the results of these assessments to drive the allocation of its resources for equipment, training, and exercise programs, consistent with our recommendation. According to Department of Justice officials, these assessments have been completed by four states: Rhode Island, South Carolina, Hawaii, and Utah. <2.3. DOD Uses a Risk Management Approach in Antiterrorism Efforts> In September 2001, we recommended that the DOD take steps to improve its risk management approach in its force protection efforts through better assessments of threats, vulnerabilities, and criticality. Regarding DOD's threat assessments, we recommended that the Department expand its methodology to increase the awareness of the consequences of changing business practices at installations that may create workplace violence situations or new opportunities for individuals not affiliated with the DOD to gain access to installations.
We also recommended that installation commanders form threat working groups and personally and actively engage state, local, and federal law enforcement officials to provide threat information from these sources on a regular basis. The Department agreed with these recommendations and stated it would review its methodology to ensure that no threat indicators are overlooked and that it would require installation commanders to establish threat working groups. To improve its vulnerability assessments, we recommended that DOD identify those installations that serve a critical role in support of our national military strategy, and ensure that they receive a vulnerability assessment. We further recommended that the Department develop a strategy to conduct vulnerability assessments at National Guard installations and develop a mechanism to record and track all vulnerability assessments conducted. DOD agreed with these recommendations and is changing its program standards and procedures to implement these recommendations. Regarding criticality assessments, we recommended that DOD require criticality assessments be done at all installations. DOD agreed with this recommendation and has revised its program standards to require this assessment. <3. A Risk Management Approach Can Guide Preparedness Efforts> Risk management is a systematic, analytical process to consider the likelihood that a threat will harm an asset or individuals and to identify actions that reduce the risk and mitigate the consequences of an attack or event. Risk management principles acknowledge that while risk generally cannot be eliminated, enhancing protection from known or potential threats can reduce it. As described in detail below, a risk management approach can have three elements: assessments of threat, vulnerabilities, and criticality (or relative importance). This general approach is used or endorsed by federal agencies, government commissions, and multi-national corporations. Figure 1 below is a graphical representation of the risk management approach we discuss. <3.1. Threat Assessments Are an Important First Step> A threat assessment is used to evaluate the likelihood of terrorist activity against a given asset. It is a decision support tool that helps to establish and prioritize security-program requirements, planning, and resource allocations. A threat assessment identifies and evaluates each threat on the basis of various factors, including capability, intention, and impact of an attack. Intelligence and law enforcement agencies assess the foreign and domestic terrorist threats to the United States. The U.S. intelligence community, which includes the Central Intelligence Agency, the Defense Intelligence Agency, and the State Department's Bureau of Intelligence and Research, among others, monitors the foreign-origin terrorist threat to the United States. The FBI gathers information and assesses the threat posed by domestic sources of terrorism. Threat information gathered by both the intelligence and law enforcement communities can produce threat assessments for use in national security strategy planning. Several federal government organizations as well as companies in the private sector apply some formal threat assessment process in their programs, or such assessments have been recommended for implementation. For example, DOD uses threat assessments for its antiterrorism program designed to protect military installations.
DOD evaluates threats on the basis of several factors, including a terrorist group's intentions, capabilities, and past activities. The assessments provide installation commanders with a list of credible threats that can be used in conjunction with other information (such as the state of the installation's preparedness) to prepare against attack, to recover from the effects of an attack, and to adequately target resources. Similarly, the Interagency Commission on Crime and Security in U.S. Seaports reported that threat assessments would assist seaports in preparing for terrorist threats. The Commission recommended that the federal government establish baseline threat assessments for terrorism at U.S. seaports and, thereafter, conduct these assessments every 3 years. Additionally, a leading multi-national oil company attempts to identify threats in order to decide how to manage risk in a cost-effective manner. Because the company operates overseas, its facilities and operations are exposed to a multitude of threats, including terrorism, political instability, and religious or tribal conflict. In characterizing the threat, the company examines the historical record of security and safety breaches and obtains location-specific threat information from government organizations and other sources. It then evaluates these threats in terms of company assets that represent likely targets. While threat assessments are key decision support tools, it should be recognized that, even if updated often, threat assessments might not adequately capture emerging threats posed by some terrorist groups. No matter how much we know about potential threats, we will never know that we have identified every threat or that we have complete information even about the threats of which we are aware. Consequently, we believe that a risk management approach to prepare for terrorism with its two additional assessments, discussed below, can provide better assurance of preparedness for a terrorist attack. <3.2. Vulnerability Assessments Are a Way to Identify Weaknesses> A vulnerability assessment is a process that identifies weaknesses in physical structures, personnel protection systems, processes, or other areas that may be exploited by terrorists and may suggest options to eliminate or mitigate those weaknesses. For example, a vulnerability assessment might reveal weaknesses in an organization's security systems, financial management processes, computer networks, or unprotected key infrastructure such as water supplies, bridges, and tunnels. In general, these assessments are conducted by teams of experts skilled in such areas as engineering, intelligence, security, information systems, finance, and other disciplines. For example, at many military bases, experts have identified security concerns including the distance from parking lots to important buildings as being so close that a car bomb detonation would damage or destroy the buildings and the people working in them. To mitigate this threat, experts have advised that the distance between parking lots and some buildings be increased. Another security enhancement might be to reinforce the windows in buildings to prevent glass from flying into the building if an explosion occurs. The Seaport Commission recommended similar vulnerability assessments be conducted. 
It identified factors to be considered that include the accessibility of vessels or facilities, avenues of ingress and egress, and the ease of access to valuable or sensitive items such as hazardous materials, arms, ammunition, and explosives. For private sector companies, such assessments can identify vulnerabilities in the company's operations, personnel security, and physical and technical security. With information on both vulnerabilities and threats, planners and decision-makers are in a better position to manage the risk of a terrorist attack by more effectively targeting resources. However, risk and vulnerability assessments need to be bolstered by a criticality assessment, which is the final major element of the risk management approach. Because we may not be able to afford the same level of protection for all vulnerable assets, it is necessary to prioritize which are most important and thus would get the highest level of protection. <3.3. Criticality Assessments Are Necessary to Prioritize Assets for Protection> A criticality assessment is a process designed to systematically identify and evaluate important assets and infrastructure in terms of various factors, such as the mission and significance of a target. For example, nuclear power plants, key bridges, and major computer networks might be identified as "critical" in terms of their importance to national security, economic activity, and public safety. In addition, facilities might be critical at certain times, but not others. For example, large sports stadiums, shopping malls, or office towers when in use by large numbers of people may represent an important target, but are less important when they are empty. Criticality assessments are important because they provide a basis for identifying which assets and structures are relatively more important to protect from an attack. The assessments provide information to prioritize assets and allocate resources to special protective actions. These assessments have considered such factors as the importance of a structure to accomplish a mission, the ability to reconstitute this capability, and the potential cost to repair or replace the asset. The Seaports Commission has identified potential high-value assets (such as production, supply, and repair facilities; transfer, loading, or storage facilities; transportation modes; and transportation support systems) that need to be included in a criticality analysis, but it reported that no attempt has been made to identify the adverse effect from the loss of such assets. To evaluate the risk to an asset, the Seaports Commission advised that consideration be given to the mission and the military or economic impact of its loss or damage. The multi-national company we reviewed uses descriptive values to categorize the loss of a structure as catastrophic, critical, marginal, or negligible. It then assigns values to its key assets. This process results in a matrix that ranks as highest risk, the most important assets with the threat scenarios it believes are most likely to occur. <4. Conclusion> Some federal agencies have taken steps related to risk management, but the results have been inconclusive. We continue to believe that risk management is the best approach to guide programs and responses to better prepare against terrorism and other threats. 
After threat, vulnerability, and criticality assessments have been completed and evaluated in this risk-based decision process, key actions can be taken to better prepare ourselves against potential attacks or events. Threat assessments alone are insufficient to support the key judgments and decisions that must be made. However, in conjunction with vulnerability and criticality assessments, leaders and managers can make better decisions based on this risk management approach. If the federal government were to apply this approach universally and if similar approaches were adopted by other segments of society, we could more effectively and efficiently prepare in-depth defenses against acts of terrorism and other threats directed against our country. Without a risk management approach, there is little assurance that programs to combat terrorism are prioritized and properly focused. This concludes my prepared statement. I will be pleased to respond to any questions you or other members of the Committee may have. <5. Contacts and Acknowledgements> For further information about this testimony, please contact me at (202) 512-6020. Stephen L. Caldwell, Brian J. Lepore, Mark A. Pross, Lorelei St. James, and Lee Purdy also made key contributions to this statement. Related GAO Products Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers (GAO-02-199T, Oct. 24, 2001). Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers (GAO-02-175T, Oct. 24, 2001). Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness (GAO-02-162T, Oct. 17, 2001). Homeland Security: Need to Consider VA's Role in Strengthening Federal Preparedness (GAO-02-145T, Oct. 15, 2001). Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, Oct. 12, 2001). Bioterrorism: Review of Public Health Preparedness (GAO-02-149T, Oct. 10, 2001). Bioterrorism: Public health and Medical Preparedness (GAO-02-141T, Oct. 9, 2001). Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001). Bioterrorism: Federal Research and Preparedness Activities (GAO-01- 915, Sept. 28, 2001). Homeland Security: A Framework for Addressing the Nation's Issues (GAO-01-1158T, Sept. 21, 2001). Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001). Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management (GAO-01-909, Sept. 19, 2001). Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Preparedness (GAO-01-555T, May 9, 2001). Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001). Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001). Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001). Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001). Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01- 14, Nov. 30, 2000). Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000). Combating Terrorism: Action Taken but Considerable Risks Remain for Forces Overseas (GAO/NSIAD-00-181, July 19, 2000). 
Weapons of Mass Destruction: DOD's Actions to Combat Weapons Use Should Be More Integrated and Focused (GAO/NSIAD-00-97, May 26, 2000). Combating Terrorism: Comments on Bill H.R. 4210 to Manage Selected Counterterrorist Programs (GAO/T-NSIAD-00-172, May 4, 2000). Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000). Combating Terrorism: Issues in Managing Counterterrorist Programs (GAO/T-NSIAD-00-145, Apr. 6, 2000). Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000). Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed (GAO/HEHS/AIMD-00-36, Oct. 29, 1999). Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism (GAO/T-NSIAD-00-50, Oct. 20, 1999). Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, Sept. 7, 1999). Combating Terrorism: Analysis of Federal Counterterrorist Exercises (GAO/NSIAD-99-157BR, June 25, 1999). Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999). Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs (GAO/NSIAD-99-151, June 9, 1999). Combating Terrorism: Use of National Guard Response Teams Is Unclear (GAO/NSIAD-99-110, May 21, 1999). Combating Terrorism: Issues to Be Resolved to Improve Counterterrorist Operations (GAO/NSIAD-99-135, May 13, 1999). Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives (GAO/T-NSIAD-99-112, Mar. 16, 1999). Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, Mar. 11, 1999). Combating Terrorism: FBI's Use of Federal Funds for Counterterrorism- Related Activities (FYs 1995-98) (GAO/GGD-99-7, Nov. 20, 1998). Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, Nov. 12, 1998). Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program (GAO/T-NSIAD-99-16, Oct. 2, 1998). Combating Terrorism: Observations on Crosscutting Issues (GAO/T- NSIAD-98-164, Apr. 23, 1998). Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments (GAO/NSIAD-98-74, Apr. 9, 1998).
What GAO Found Because of the terrorist attacks against the World Trade Center and the Pentagon on September 11 and the subsequent appearance of letters containing anthrax, terrorism rose to the top of the national agenda. The Attorney General has indicated that the country needs to be prepared for still more terrorist incidents. The Department of Justice is working with state and local governments to complete risk management tools for the domestic preparedness program. However, the FBI told GAO that these will be limited to threat assessments only and will not include other aspects of risk management that GAO advocates. Despite these inconclusive results, the federal government can benefit from risk management. Risk management is a systematic and analytic process to consider the likelihood that a threat will endanger an asset and to identify actions that reduce the risk and mitigate the consequences of an attack. An effective risk management approach includes a threat assessment, a vulnerability assessment, and a criticality assessment. Such an approach could help the nation prepare against threats it faces and help better target finite resources to areas of highest priority.
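The criticality discussion above describes how one multi-national company combines its assessments into a matrix that ranks its most important assets against the threat scenarios it believes are most likely to occur. The sketch below illustrates one common way such a ranking can be computed, scoring risk as the product of threat likelihood, vulnerability, and criticality on 1-to-5 scales. The asset names, scenarios, scales, and scoring rule are illustrative assumptions, not the company's or GAO's actual method.

```python
# A minimal sketch of a risk-ranking matrix: risk = threat likelihood x
# vulnerability x criticality. All names and scores below are hypothetical.
from itertools import product

# Criticality of each asset (1 = negligible consequence of loss, 5 = catastrophic)
assets = {"headquarters building": 5, "regional data center": 4, "warehouse": 2}

# Threat scenarios with an assessed likelihood (1 = remote, 5 = likely)
threats = {"vehicle bomb": 3, "cyber intrusion": 4, "insider theft": 2}

# Assessed vulnerability of each asset to each scenario (1 = well protected, 5 = exposed)
vulnerability = {
    ("headquarters building", "vehicle bomb"): 4,
    ("headquarters building", "cyber intrusion"): 2,
    ("headquarters building", "insider theft"): 2,
    ("regional data center", "vehicle bomb"): 2,
    ("regional data center", "cyber intrusion"): 5,
    ("regional data center", "insider theft"): 3,
    ("warehouse", "vehicle bomb"): 3,
    ("warehouse", "cyber intrusion"): 1,
    ("warehouse", "insider theft"): 4,
}

# Score every asset/scenario pairing and rank from highest to lowest risk.
rankings = sorted(
    ((assets[a] * threats[t] * vulnerability[(a, t)], a, t)
     for a, t in product(assets, threats)),
    reverse=True,
)

for score, asset, threat in rankings:
    print(f"risk {score:3d}  {asset:22s}  vs  {threat}")
```

In this formulation, scarce protective resources would be directed first to the highest-scoring asset-scenario pairings, which is the prioritization role the criticality assessment serves in the approach described above.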
<1. Background> Billions of fasteners are used in safety-critical applications such as buildings, nuclear power plants, bridges, motor vehicles, airplanes, and other products or equipment each year. For example, an automobile may have as many as 3,000 fasteners. In 1988, the House Committee on Energy and Commerce's Subcommittee on Oversight and Investigations issued a report on counterfeit and substandard fasteners that, along with hearings held by the House Science Committee, led to the enactment of FQA on November 16, 1990. The subcommittee reported that failures of substandard and often counterfeit fasteners may have been responsible for deaths and injuries and reduced defense readiness, and that they potentially threatened the safety of every American. According to the subcommittee report, the Defense Industrial Supply Center, which supplies fasteners to the armed services, found that its inventory contained over 30 million counterfeit fasteners and that Army depots contained another 2.6 million. Similarly, the National Aeronautics and Space Administration (NASA) reported that it found substandard fasteners in space shuttle equipment, and six of its fastener vendors were found to have inadequate quality-control systems. The Air Force likewise discovered substandard flight safety-critical aerospace fasteners in its inventory. FQA covers certain threaded, metallic, heat-treated fasteners of one-quarter inch diameter or greater for use in safety-critical applications. As originally enacted in 1990, FQA required manufacturers and importers to submit all lots of fasteners with significant safety applications to accredited laboratories for testing; established a laboratory accreditation program at the Commerce Department's National Institute of Standards and Technology (NIST); required original test certificates to accompany the fasteners throughout the sale process; established requirements for manufacturers' insignias to ensure traceability of fasteners to manufacturers and distributors; and provided for civil and criminal penalties for violations of the act. Since its passage, FQA has been amended several times. Concerns over the regulatory burden of FQA on aviation manufacturers led Congress, in August 1998, to amend the act to exempt certain fasteners approved by the Federal Aviation Administration for use in aircraft. The 1998 amendments also delayed implementation of NIST's regulations for accrediting testing laboratories. FQA was amended again on June 8, 1999, to make it less burdensome: Fasteners that are part of an assembly or that are ordered for use as a spare, substitute, service, or replacement part in a package containing 75 or fewer parts at the time of sale or are contained in an assembly kit (i.e., the small-lot exemption) were exempted from coverage. Fasteners manufactured in a facility using quality-assurance systems were exempted from coverage. The amendment required accredited laboratory testing only of fasteners manufactured to consensus standards requiring testing, and postponed that requirement until June 2001. Companies were allowed to transmit and store electronically all records on fastener quality provided that reasonable means of authentication of the source of the document existed. The Commerce Department was required to establish and maintain a hotline for reporting alleged violations of the law. All credible allegations would then be forwarded to the Attorney General.
The amendment also made it unlawful to knowingly misrepresent or falsify the fastener's record of conformance or identification, characteristics, properties, mechanical or performance marks, chemistry, or strength. Although FQA does not mention Customs, Customs is authorized by 15 U.S.C. 1125(b) to identify and detain imported goods marked or labeled with a false description or representation. Under this authority, Customs has conducted spot checks of imported fasteners since 1987 to determine if fasteners' descriptions or representations are accurate. It has seven laboratories located around the country that provide scientific support to all Customs officers, other government agencies, and foreign governments as part of international assistance programs. Customs laboratories tested samples from randomly selected shipments of graded bolts imported from January through April 1998 in various sized lots and again in March and April 2001. These included one or more of the following tests: carbon, sulfur, phosphorus, alloying elements (chemical tests); or tensile strength and hardness (mechanical tests). <2. Customs Limited Tests of Imported Fasteners Found No Evidence of Substandard Fasteners Imported After December 1999> Customs Chicago laboratory tested 66 randomly selected shipments of graded bolts (12 in small lots) imported during March and April 2001 and found that none were substandard. As discussed below, this is a decrease from the rate found in tests that Customs conducted before December 1999. Customs laboratories also tested a random sample of 77 shipments of graded bolts imported in various sized lots from January 12 to April 12, 1998, and found three (not in small lots) to be substandard. The bolts failed either the tensile or hardness test and were imported through Chicago from Korea or Taiwan. On the basis of these sample results, the Customs study estimated that 5 percent of the 3,097 shipments of the same type of bolts that entered U.S. ports during the 3-month test period were substandard. <3. Customs 2001 Test of Defense Fasteners Found None Were Defective> In addition to testing graded fasteners imported in March and April 2001, Customs Chicago laboratory tested, at our request, samples of graded bolts from 15 small lots that DSCP had purchased between January 1998 and February 2001, and found that none were defective. Three lots were from contracts for purchases after December 1999 and the remainder were before this time. According to a DSCP official, there is no way to determine if the contractors used foreign or domestic materials. Because of the small number of lots tested, the results, by themselves, cannot be used to make any conclusions about industry changes in manufacturing small lots. These results are, however, the best data available on fasteners that DSCP purchased in small lots. <4. Responses to Federal Register Notice Uncovered No Evidence of Changes in Industry Practices> None of the 14 responses to our Federal Register notice stated that the fastener industry had changed any practices as a result of the small-lot exemption, as shown in the examples below. The Industrial Fasteners Institute and the National Fastener Distributors Association said they believe that there will be no evidence of significant changes in industry practice because most fasteners sold under the small-lot exemption are produced under quality-assurance systems and are therefore not subject to the act.
They further stated that since fastener manufacturers can comply with the test requirements in the amended act in a cost-efficient manner, it is doubtful that industry members would attempt to avoid these costs by marketing fasteners in small-lot packages. The Canadian Fasteners Institute said that in the last decade, the fastener industry has made great advances and investments in product quality control and assurance. It said that the concern with the small-lot exemption stems from its potential for creating a public safety hazard and that the opportunity for the emergence of substandard products in commerce is too great a risk with the small-lot exemption in place. It suggested that, in lieu of any exemptions, FQA be amended to say that the manufacturer, distributor, or importer that sells fasteners as having certain mechanical and physical properties must be capable of substantiating those properties. That is, promises a seller makes to a buyer must be verifiable with objective evidence. The Alliance of Automobile Manufacturers and the Association of International Automobile Manufacturers (AIAM) said that their members produce virtually all the passenger cars and light trucks sold in the United States and use 300 billion fasteners annually. They reported that Congress exempted most automotive fasteners from FQA because strong incentives exist to enhance fastener quality, given the potential impact of faulty fasteners on customer satisfaction, product liability, and regulatory liability. They said that manufacturers have developed various measures, as follows, to assure the quality of the fasteners that they purchase: Proprietary standards: Vehicle manufacturers have developed their own fastener standards to assure that their fasteners are appropriate for specific applications. Quality-assurance systems: Vehicle manufacturers generally require that their fastener suppliers be certified under fastener quality-assurance systems to minimize the occurrence of nonconforming fasteners. Closed-loop acquisition: Vehicle manufacturers generally purchase their fasteners from approved suppliers to assure quality and accountability, and rarely purchase generic fasteners on the open market. The Alliance and AIAM said that they surveyed their members to obtain responses to the questions contained in our Federal Register notice. They said that the responses they received represented over 90 percent of U.S. light vehicle sales in calendar year 1999. None of the respondents reported any significant changes in procurement and packaging practices that involved a reduction in units per package to below 75 units, or an increase in the use of assembly kits as a means of complying with the FQA requirements through the small-lot exemption. The Alliance and AIAM said that on the basis of these survey results, virtually all of the fasteners produced to assemble or service members' products are either manufactured to internal company proprietary standards or are produced under a qualifying fastener quality-assurance system, or both. As a result, they said much less than 1 percent of fasteners purchased are exempt from FQA solely through the small-lot exemption. These groups reported that the small-lot exemption still serves a very important purpose: to allow the continued availability, at an affordable price, of many spare-part fasteners required to service their members' products in a safe manner.
The majority of these small package/assembly kit fasteners are used to service older models that typically have very low annual sales of spare parts. Without this vital exemption, they report, the costs of such parts would become prohibitive, forcing their members to remove many of these products from the market. In such a case, they believe, the customer desiring to service his or her car would typically be forced to substitute the correct-specification fastener with a generic hardware store look-alike fastener, one that in all likelihood was manufactured to different specifications and uncertain quality standards. The Equipment Manufacturers Institute, an association of companies that manufacture agricultural, construction, forestry, materials-handling, and utility equipment, reported that its members want the small-lot exemption to remain in law. They are concerned that altering or removing it could result in burdensome paperwork and wasteful and unnecessary quality tests for fasteners that are commonly used for the off-road equipment industry. They said this would result in large nonvalue-added costs that would ultimately be borne by the consumer and reduce America's global competitiveness and cost jobs. Additionally, they stated, fastener quality has not been a problem for its industry, and remains that way today. Other comments received included the following: The director of quality assurance at Huck Fasteners, Inc., said that he had surveyed his eight manufacturing facilities and found no changes in how fasteners are packaged as a result of FQA. A fastener manufacturer's representative said that he had not seen any changes in industry practices as a result of the small-lot exemption, and that all the manufacturers and distributors he knows are in compliance. The president of Edward W. Daniel Co., a manufacturer of industrial lifting hardware and a member of the National Fastener Distributors Association, said that most manufacturers/importers of fasteners have developed quality programs and maintain the appropriate records for tracing the manufacturing materials used. <5. Federal Agencies Reported No Evidence of Changes in Industry Practices> None of the officials that we spoke with in DSCP or NASA reported any evidence of changes in fastener industry practices resulting from, or apparently resulting from, the small-lot exemption. DSCP officials reported that their agency requires prospective suppliers of fasteners to have a quality-assurance system. Likewise, officials from the Departments of Commerce and Justice, agencies that have specific responsibilities under FQA, stated that they did not have any evidence of changes in fastener industry practices. DSCP did not report any changes in industry practices. It operates a program that requires both manufacturers and distributors who want to sell to it to be prequalified. According to the agency Web site, applicants for the program must demonstrate their controls and established criteria to provide maximum assurance that the products procured conform to specifications. In addition, DSCP tests certain product lines, such as aerospace products, and randomly selects products for testing on a regular basis from its inventory. DSCP officials said that they manage approximately 1.2 million items, of which about 300,000 are fastener products and about 10 percent are covered under FQA. None of NASA's nine centers reported any changes in industry practices as a result of the small-lot exemption.
NIST officials responsible for FQA said that, as of March 31, 2001, they had not received any reports that the fastener industry has changed any practices as a result of the small-lot exemption. Similarly, officials from the Bureau of Export Administration reported that, as of March 30, 2001, their fraud hotline, which became operational on June 27, 2000, had not received any allegations that relate to the small-lot exemption. Officials at the Department of Justice said that the 1999 amendments to FQA were so new that neither its criminal nor civil divisions had any activity involving fasteners. Additionally, they said, they were not aware of any prosecutions or convictions involving fasteners sold in packages of 75 or fewer or in assembly kits since December 1999. <6. Conclusion> We found no evidence that the fastener industry has changed any practices resulting from, or apparently resulting from, the small-lot exemption. <7. Agency Comments and Our Evaluation> We provided a draft of this report to the Secretary of Commerce, the Secretary of the Treasury, and the Secretary of Defense for review and comment. In a June 4, 2001, letter, the Secretary of Commerce stated that the relevant bureaus of the Department of Commerce had reviewed the report and had no substantive comments (see app. III). Other Commerce staff provided technical comments on the draft report, which we incorporated as appropriate. In a May 23, 2001, memorandum, the Director, Office of Planning, U.S. Customs Service stated that he had no substantive comments to make (see app. IV). Other U.S. Customs staff provided technical comments on the draft report, which we also incorporated as appropriate. The Department of Defense provided comments, concurring in the report's findings and providing technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce; the Secretary of the Treasury; the Secretary of Defense; and the Administrator, National Aeronautics and Space Administration. Copies will also be available at our Web site at www.gao.gov. Should you have any questions on matters contained in this report, please contact me at (202) 512-6240 or Alan Stapleton, Assistant Director, at (202) 512-3418. We can also be reached by e-mail at [email protected] or [email protected], respectively. Other key contributors to this report included David Plocher and Theresa Roberson. Appendix I: Objective, Scope, and Methodology As stated in FQA, our objective was to determine if there had been any changes in fastener industry practice resulting from or apparently resulting from the small-lot exemption in FQA. To achieve this objective, we compared the results of Customs mechanical and chemical tests of bolts imported during March and April 2001 with the results of similar testing performed by Customs for bolts imported from January through April 1998. These tests had several limitations. According to Customs officials, the document that an importer provides for each shipment of fasteners does not have to identify that the shipment contains packages of 75 or fewer fasteners (i.e., small lots) or that the fasteners are of a particular grade. Therefore, for both the 1998 and 2001 tests, Customs could not randomly select just those shipments containing small lots of grade 5 and grade 8 fasteners.
Rather, the selection also included ungraded fasteners that were not sent to the laboratory for testing because, without the grade marking, Customs could not identify the test standards. For the 2001 test, Customs recorded when the package selected contained 75 or fewer graded bolts so we could compare their test results with those for packages containing more than 75 bolts. We observed Customs inspection of imported fasteners at Chicago's O'Hare International Airport; we also visited Customs Chicago laboratory and observed its testing of some of the selected fasteners. Another limitation was that Customs designed both its 1998 and 2001 studies to only randomly select shipments valued at $2,500 or more so that resources were not spent on small, inconsequential shipments. However, problems during the 1998 study caused over 28 percent of the shipments selected to be valued at less than $2,500. These included 80 shipments valued at less than $500 and at least one valued at $1. Based on the price of grade 5 and grade 8 bolts, it is likely that some of the 80 shipments valued at less than $500 included in the 1998 test were in small lots. To address our objective, we also compared the results of Customs mechanical and chemical tests of fasteners DSCP purchased in small lots from January 1998 to December 1999 with the results of Customs mechanical and chemical tests of fasteners DSCP purchased from January 2000 to January 2001. We selected DSCP because of its problems in the 1980s with counterfeit fasteners. We asked DSCP to send the samples directly to Customs for testing. There were limitations in DSCP's selection of the samples. DSCP officials initially identified 56 different contracts for small-lot purchases for potential testing, yet only 15 lots were ultimately tested. DSCP officials decided that 15 of the 56 contracts were ineligible for testing because the lot size was fewer than 25 bolts; thus, taking several bolts for testing could result in DSCP's not being able to fill a customer's order. Officials further said that 25 small-lot purchases were not tested because no inventory remained at the time the depots were asked to ship the bolts to Customs laboratory. Finally, one sample sent to Customs for testing was not traceable to a contract number, and so it was eliminated from the test results. To give the public an opportunity to report any changes in industry practices, we published a notice in the Federal Register on August 9, 2000 (F.R. 48714), and on our Web site, asking for comments no later than November 30, 2000. We also notified nearly 60 journals, newsletters, associations, and manufacturers of our Federal Register notice. As a result, several journals (e.g., Fastener Industry News and Wire Journal International) wrote articles about our study that often referred readers who wanted more information to our Federal Register notice or Web site. We also asked associations representing the fastener industry and the automobile industry to notify their memberships about our Federal Register notice and Web site notice. We asked officials at agencies that had experienced problems with fasteners in the past (DSCP and NASA) and NIST (with responsibilities under FQA) if they were aware of any changes in industry practices resulting from, or apparently resulting from, the FQA small-lot exemption.
In addition, we asked officials at Commerce's Bureau of Export Administration whether they had received any FQA allegations involving small lots of fasteners and officials in the Department of Justice about any allegations, investigations, prosecutions, or convictions involving fasteners sold in small lots or in assembly kits. We also attempted to compare the results of NASA's tests of grade 8 fasteners purchased by its Langley Research Center before and after December 1999. However, there were too few mechanical and chemical tests completed to make this comparison possible. We conducted our review from January 2000 to May 2001, in accordance with generally accepted government auditing standards. We performed our work in Washington, D.C., and Chicago, Illinois. Appendix II: Federal Register Notice Appendix III: Comments From the Department of Commerce Appendix IV: Comments From the U.S. Customs Service
Why GAO Did This Study This report reviews changes in fastener industry practice "resulting from or apparently resulting from" the small-lot exemption of the Fastener Quality Act. What GAO Found GAO found no evidence that the fastener industry changed any practices resulting from, or apparently resulting from, the small-lot exemption. The Customs Service's limited tests of imported fasteners in 2001 found no evidence of substandard fasteners and no evidence of any decline in the quality of fasteners from the results of tests Customs conducted in 1998.
<1. Background> USPS has a universal service obligation, part of which requires it to provide access to retail services. Several statutory provisions govern USPS when considering changes to its retail network, such as the following. Section 101 of Title 39 of the U.S. Code states, "The Postal Service shall have as its basic function the obligation to provide postal services to bind the Nation together through the personal, educational, literary, and business correspondence of the people." USPS is required to serve the public and provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining. USPS is authorized to determine the need for post offices and to provide such offices as it determines are needed. Regarding post offices, the law requires that no small post office shall be closed solely for operating at a deficit, and language in annual appropriations has provided that none of the appropriated funds shall be used to consolidate or close small rural and other small post offices. 39 U.S.C. 404(a)(3). USPS guidance uses the term discontinuance to describe ending operations at a USPS-operated retail facility, such as a post office, station, or branch. Stations and branches are subordinate units of a main post office and generally offer the same products and services as post offices. Throughout this report, we use the term closure, except when referring to USPS guidance. Additional information on closing postal facilities is provided in a recent report by the Congressional Research Service. The size of USPS's retail network has remained largely unchanged over the past 5 years, although customer visits and transactions have declined, as shown in table 1. In 2002, USPS released a transformation plan that described challenges it faced with its retail network and optimization strategy. It also described plans to address these challenges, for example, by introducing retail alternatives in concert with reducing its retail network footprint and operating costs. USPS stated that it would provide customers with easier and more convenient retail access. According to the plan, postal services will be available where customers need them: at home, at work, where they shop, or at the post office. The Postal Service will promote the convenience of existing, underutilized alternatives and develop new low-cost solutions using technology, partnerships, and product simplification. In 2003, the President's Commission on the Postal Service issued a report that noted many of the nation's post offices were no longer necessary to fulfill USPS's universal service obligation, given the proliferation of alternative retail access points in grocery stores, drug stores, ATMs, and other more convenient locales in communities across the country. The commission recommended that USPS maximize the potential of low-activity post offices by operating those necessary for fulfilling the universal service obligation, even if they operate at a substantial economic loss. However, where low-activity post offices are not necessary, it should have flexibility to dispose of them, with appropriate local community involvement, and existing statutes and appropriations that limit flexibility should be repealed. We have said that network restructuring is a key action to help USPS reduce its costs and improve efficiency. In 2009, we suggested that USPS restructure its retail network to eliminate growing excess capacity, reduce costs, and improve efficiency. (See GAO, High Risk Series: Restructuring the U.S. Postal Service to Achieve Sustainable Financial Viability, GAO-09-937SP (Washington, D.C.: July 2009).)
Additionally, in 2011, we recommended that it develop a plan for optimizing its retail network that addresses both traditional post offices and retail alternatives. A senior USPS official told us in January 2012 that USPS was in the process of developing a retail optimization plan. In February 2012, USPS released a 5-year business plan, with an organization-wide goal to achieve $22.5 billion in annual cost savings through a combination of legislative and operational changes, including $2 billion in savings from optimizing the retail network. According to USPS, it plans to reduce its total workforce of 557,000 employees by 155,000 within the next 5 years through attrition, as over half of its career employees are now retirement-eligible. The plan did not indicate how many of these proposed employee reductions would occur as a result of changes to its retail network. <2. USPS Has Taken Several Actions to Restructure Its Retail Network> Over the past 5 years, USPS has taken several actions to change its retail network through reducing its workforce and retail footprint, while expanding retail alternatives. It estimated that it saved about $800 million by reducing retail work hours during this period. It also closed 631 of its post offices, but it did not have cost-savings estimates related to these closures. Further, most of the facilities closed (500) were post offices where operations had first been suspended due to emergencies or a postmaster vacancy. Fewer closures (131) have resulted from the nationwide reviews that USPS initiated in 2009 and 2011. <2.1. Retail Workforce Reductions> Over the past 5 years, USPS reduced the number of retail clerks by 26 percent and the number of postmasters by 7.4 percent at USPS-operated facilities, as shown in table 2. It also created a new noncareer postal support employee position, whose wages will be approximately one-third of a clerk's average wage. An agreement reached with the American Postal Workers Union in May 2011 allows USPS to increase its use of noncareer employees by up to 20 percent of clerk positions covered by the agreement. In related efforts to cut costs, USPS has reduced total retail work hours of clerks and postmasters by about 20 percent since fiscal year 2006 through employee attrition and schedule adaptation. During the past 5 years, USPS designed two nationwide initiatives, known as the Station and Branch Optimization Initiative (2009 Retail Initiative) and the Retail Access Optimization Initiative (2011 Retail Initiative), to review (1) over 3,000 USPS-operated retail facilities in urban and suburban areas and (2) about 3,650 primarily rural facilities for possible closure. Table 3 describes key information about these initiatives. In addition, over the last 5 years, USPS district offices have identified and closed around 500 USPS-operated retail facilities on an individual, ad hoc basis as they determined the need. These individual closures were in response to a postmaster vacancy or the suspension of operations due to an expired lease or irreparable damage to the facility following a natural disaster. Many of these closures were for facilities that had suspended operations years ago, but USPS did not formally close the facility until recently. <2.2.
Alternative Retail Expansion> USPS has continued to expand the number and type of alternatives at which customers can access retail postal products and services outside of USPS-operated postal facilities. These alternatives include self-service options as well as partnerships with retailers, which could help it contain facility and labor costs while still providing access for customers. Examples of retail alternatives include its website, self-service kiosks, contract postal units, rural carrier services, approved shippers, Village Post Offices, stamp retailers, orders of supplies by telephone, and package pickup at the door. The percentage of retail revenue from these alternatives increased from 24 percent in 2007 to 35 percent in 2011, as shown in table 4. USPS has projected that by 2020, alternatives to USPS- operated retail facilities may account for 60 percent of its retail revenue. <3. Stakeholders Have Expressed Concerns about USPS Retail Initiatives> Postal stakeholders, including Members of Congress, PRC, USPS OIG, customers, employee associations, and some community residents have raised concerns about USPS s retail restructuring initiatives. These concerns include access to postal services, including community residents ability to obtain retail services, the adequacy of retail alternatives, and changes to delivery services; the impact of facility closures on communities; the adequacy of data analysis of facilities facing closure and the reliability of data, particularly with regard to the accuracy of cost- savings estimates; the transparency and equity of closure decisions; the fairness of facility closure procedures; and changes in who can manage a post office. <3.1. Access to Postal Services> USPS regulations provide that local management host a community meeting to obtain public input when it proposes to close a facility. One of the major concerns of community residents at meetings we attended was that the communities access to postal services would decline if USPS closed the facility and the next closest postal facility was too far. For example, at a meeting we attended in Colorado, one resident described the community as isolated and expressed concern that the distance to the next closest post office (about 25 miles) was unreasonable. Another resident stated that should the post office close, driving about 50 miles round-trip to mail packages was not a viable option. At another site we visited in Arkansas, postal officials told us that proximity of all post offices is one of the major factors they consider when reviewing facilities on the 2011 Retail Initiative study list. At meetings we attended, community residents also raised questions about the adequacy of other available alternatives. For example, one resident said that he could not access retail services on USPS s website because Internet service was not available. Additional concerns about retail alternatives were raised by PRC. In the 2011 Retail Initiative, PRC questioned whether USPS had adequate alternative retail access options available for retail facilities that it proposed closing. PRC said that alternative access should be considered concurrently with closure studies and be presently available. Postal officials at the meetings we attended encouraged customers to provide feedback on proposed retail alternatives, and USPS officials told us they have attempted to coordinate expanding alternatives with closures. 
They gave the example of introducing the concept of partnering with local businesses to create a Village Post Office along with the announcement of the 2011 Retail Initiative. However, Village Post Offices may not offer a realistic alternative for customers in some rural areas because there may not be businesses in the community to host a post office. As of January 2012, nine Village Post Offices were in operation, and 13 others were under contract, according to USPS. Citizens in the rural communities we visited also had concerns about how mail delivery, including post office box locations and addresses, would change if USPS closed the community's postal facility. USPS officials presented rural route service as an alternative, acknowledging that it could require an address change. In addition, according to officials, because some customers may not meet the requirements to receive rural route delivery, they may have to go to another post office to pick up their mail. Some community members who used post office boxes as their only mode of delivery were worried about the inconvenience of having to travel further to another postal facility to pick up their mail. Several customers stated that if their post office were to close, they would prefer having some physical postal presence in their town, such as cluster boxes. <3.2. Impact on Communities> We observed that USPS and its customers sometimes had varied expectations about its role in the community. For example, community residents in one small town stated that they were concerned about loss of community identity if the post office were to close. Appeals filed with the PRC highlight issues similar to those brought up at community meetings. For example, various appeals that we examined included concerns that losing the local post office would have a negative impact on the community, including loss of identity and inhibiting economic growth; that, because USPS did not have accurate information about the community, customers believed USPS did not have complete information about the community's needs; that USPS had not allowed for adequate community input; that, at public meetings, residents perceived that a decision had already been made about the closure; and that residents believed the available alternatives were not adequate or were too inconvenient. In written responses to customer concerns in appeals cases, USPS has indicated that community identity comes from the interest and vitality of its residents and that it would still help to preserve that identity by maintaining a community's ZIP Code. At one community meeting we attended, a postal official told those in attendance that the community's identity is not dependent on the post office. USPS has responded to concerns about the economic effects of closures by stating that businesses require regular and effective service, which would be provided to them by the alternative offered to replace the closing facility. It also indicated that its analysis of customer questionnaires helps to determine if the potential change would have a negative impact on local businesses by asking whether customers would continue to use those businesses in the event of a facility closure. Postal officials we spoke with noted that they used the responses to customer questionnaires to see where customers obtained other services, such as buying groceries, to figure out which alternative locations could offer community residents convenient access to postal services. <3.3.
Adequacy of USPS Analysis and Data> Another issue the PRC and other stakeholders raised was related to the adequacy of USPS's analysis and data. For example, in the 2009 Retail Initiative, the PRC recommended that USPS improve its financial analysis to better reflect potential revenue declines and operational expenses that may result from closing a post office. In the 2011 Retail Initiative proceeding, USPS reported that it had improved its financial analysis model, generating data that it determined would provide a better assessment of cost savings than the previous model. Despite the change, PRC's 2011 Retail Initiative advisory opinion stated that it was unable to develop a reasonable estimate of the financial impact of the initiative because USPS did not collect facility-specific revenue and cost data or separate retail costs from other operational costs. In addition to questions about the overall financial impact of initiatives, the adequacy of USPS's cost-savings estimates for individual facilities has also been questioned by the PRC during the appeals process. When USPS makes a decision to close or consolidate a post office, customers of the post office may appeal the decision to PRC (39 U.S.C. 404(d)(5)); PRC may affirm the decision or remand it, that is, send the case back to USPS for further consideration. (USPS and PRC do not agree on whether PRC has jurisdiction over appeals for stations and branches.) In fiscal year 2011, in six of the nine cases that PRC remanded, PRC either found that USPS did not adequately consider economic savings and asked USPS to clarify aspects of its estimate upon remand, or cited related concerns such as overestimating savings from postmaster salaries or leases. PRC, USPS OIG, and we have raised concerns about USPS's retail network data. PRC outlined concerns and recommendations about USPS's data pertaining to its retail network in its advisory opinion on the 2011 Retail Initiative, including the following examples. USPS does not collect cost and revenue data separately for post offices, stations, and branches. Additionally, operating costs for retail activities cannot be separated from nonretail operating costs, restricting USPS's ability to estimate the potential cost savings from closures because it is difficult to determine the actual costs associated with individual retail facilities. USPS does not collect the data to measure revenue lost due to closures, restricting the ability to perform a post-implementation analysis of the net financial impact of closures. Postal officials told us they were in the process of creating a tool that would estimate total demand for retail postal services by geographic area and account for the revenue and cost implications of alternatives. PRC found that USPS should develop a method to measure how successfully it met its goals for the 2011 Retail Initiative and that it should attempt to coordinate and estimate the impact of all relevant initiatives that could affect customer access to services. This analysis would also help ensure that the right data are collected to measure stated goals. USPS officials told us that although they have looked at the overall effect on customers, they are unable to predict which initiatives will eventually be implemented, as some require statutory change, and they have therefore examined the impact of each initiative independently.
Additional data concerns that we and the USPS OIG have raised include the following: In November 2011, we reported that USPS lacks performance measures and data needed to know the extent to which customers are aware of and willing to use its various retail alternatives. We also spoke with USPS officials about customer data, and they stated that transaction data and customer visits are not tracked in electronic databases for some small post offices approximately 10 percent of the total retail facilities it operates. Lack of tracking makes it difficult to match alternatives to the services that customers are demanding or using at small post offices. A USPS official commented that it is developing a strategic retail plan that includes a charter designed to provide greater convenience, lower-cost service, and improve the customer experience, but it is unclear how this plan will address the lack of data at small post offices. In December 2011, the USPS OIG recommended that USPS improve the reliability and usefulness of retail facilities data by validating, correcting, and updating information in its retail facility database. Moreover, we have also recommended improvements to USPS s retail facilities data. In both cases, USPS agreed to implement the recommendations related to the facilities data. <3.4. Transparency and Equity Concerns Raised about USPS Closure Decisions> Recent USPS OIG analyses of the 2009 and 2011 Retail Initiatives found that USPS could make improvements in establishing clear criteria for evaluating closure decisions and implementing an integrated retail network strategy that includes short- and long-term plans, milestones, and goals. USPS OIG found that these improvements could raise stakeholders confidence that USPS will make transparent, equitable, and fact-based decisions. USPS agreed with the findings in these reports but noted that a one-size-fits-all approach might not take factors about the local community into account and that its retail operations will never be uniform across the entire network. Other customers raised equity issues with USPS s decisions. For example, some people in a small community at a public meeting we attended viewed rural post offices as bearing the brunt of closures and viewed urban areas as not being equally affected. Customers also wanted clarification on the criteria used to decide which facilities would be studied and whether other closure initiatives would affect their service. Similarly, in an appeals case, customers expressed concerns that the nearest post office was also being studied for closure and that if both facilities were closed, they would have to travel even further to obtain services. <3.5. Fairness in Facility Closure Procedures> PRC raised a concern in its 2009 Retail Initiative advisory opinion that USPS was not providing customers of stations and branches with the same rights as customers of post offices in a closure proceeding. The advisory opinion also noted that the public does not really understand the distinction between various facility types and it is confusing to have procedures for stations and branches that are different for post offices. Further, PRC found that stations and branches fulfill the same operational purposes as post offices and recommended that USPS provide similar treatment to customers if their local station, branch, or post office were closed. 
In another PRC proceeding in 2010, PRC raised a similar concern about the practice of suspending operations at offices for extended periods without giving the public the right to comment as would be afforded in a formal closure study. At some facilities, USPS suspended services and took no further action to restore service or proceed with closure for, at times, many years. In response to PRC concerns, USPS made several changes, including as part of the 2011 Retail Initiative and in district office-initiated post office closures that began after July 2011. These changes included the following: Implementing uniform closure procedures for all USPS-operated retail facilities. USPS developed standards that were finalized in July 2011 to address internal and public confusion over different discontinuance procedures. Clarifying circumstances that can prompt a closure study. New regulations allow USPS headquarters to identify USPS-operated retail facilities for studies and also provide details on the particular circumstances that can prompt a study. These circumstances include: a postmaster vacancy, an emergency suspension, low- workload levels, insufficient customer demand, and the availability of reasonable alternative access to postal services. Creating a web-based data program to guide closure studies. USPS created this program and incorporated it into its closure processes as of December 2010. The program is used to collect information, such as all community comments during the closure process, and to guide USPS along a series of required steps. According to USPS, the web- based program has helped streamline the overall closure process and improved the internal tracking of facility closures, including customer comments and community statistics, such as the number of businesses in the community and the nearest retail alternatives. Clarifying procedures for reviewing facilities where operations have undergone emergency suspensions. USPS issued revised guidance in July 2011, affirming that customers of facilities that have undergone emergency suspensions must be allowed an opportunity to comment on the proposed discontinuance. This provision addressed long-standing concerns of stakeholders regarding postal facilities where USPS suspends services for long periods of time. In addition, there has been a sharp increase recently in the number of appeals filed with PRC related to USPS decisions to close or consolidate a post office. In fiscal year 2010, 6 appeals were filed with PRC. In fiscal year 2011, PRC received more than 100 post office closing appeals and 100 were filed in the first quarter of fiscal year 2012. To expedite the appeals process, PRC streamlined and simplified its procedures for reviewing appeals and simplified the process to make it easier for the public to participate in and to understand PRC s decision-making process. <3.6. Changes in Who Can Manage Post Offices> USPS also changed its regulations related to the staffing of post offices. Previously, a postmaster was the only employee who could manage operations at a post office. New regulations now allow post offices to be operated or staffed by other types of postal employees, who would be paid less than postmasters and would report to a postmaster. USPS expects that this change would give it more staffing flexibility, reduce the number of postmasters, and reduce costs. 
A USPS official explained that changes in operations, such as the removal of delivery operations from some retail facilities, have resulted in a decreased level of responsibility for some postmasters over time. Postmasters filed a complaint with PRC about these proposed changes; however, the complaint was filed before this provision of the proposed rule was finalized. <4. Challenges Restrict USPS from Changing Its Retail Network> USPS faces challenges, such as legal restrictions and resistance from some Members of Congress and the public, that have limited its ability to restructure its network. Also, certain policy issues are unresolved, and pending legislation takes differing approaches to resolving USPS's challenges. <4.1. Legal Restrictions and Resistance> Some legal restrictions have presented challenges to USPS's plans to restructure its retail network. As described in the background section of this report, the law states that no small post office shall be closed solely for operating at a deficit. Further, language in annual appropriations acts has provided that none of the funds appropriated in the acts (about $100 million for fiscal year 2011) shall be used to consolidate or close small rural and other small post offices. On one hand, USPS is supposed to act like a business and be self-financing, but on the other hand, it is restricted by law from making decisions that businesses would commonly make, such as closing unprofitable units. In addition to these statutory restrictions, USPS faces resistance from some Members of Congress and the public who oppose some facility closures. For example, at some public meetings we attended, staff from some congressional offices spoke to community members about actions they could take to challenge potential closures, encouraging residents to write letters to their Members of Congress and cite specific, negative impacts a potential closing might have. At one community meeting, congressional staffers said they had received several letters from community members. In response to such actions by constituents, many Members of Congress have written letters to USPS requesting that it not close post offices in their districts. In December 2011, 20 Senators signed a letter to Senate leaders requesting that they consider including language in an appropriations bill that would prevent USPS from closing any rural post offices until Congress has passed reform legislation. USPS then placed a moratorium on all facility closures until May 15, 2012, while Congress considers postal reform bills. USPS has encouraged Congress to enact postal reform legislation that would provide USPS with more flexibility to make retail closure decisions by eliminating statutory restrictions. Further, USPS officials told us that in response to resistance to closures, they are considering reducing post office operating hours rather than closing some facilities. <4.2. Certain Policy Issues and Pending Legislation> Pending postal reform legislation provides an opportunity for Congress to address certain unresolved policy issues related to USPS's retail restructuring plans.
These policy issues include what level or type of retail services should USPS provide to meet customers changing use of postal services; how should the cost of these services be paid; how should USPS restructure its operations, networks, and workforce to support changes in services; and how should Congress provide USPS with flexibility to restructure its networks and workforce while still holding USPS accountable to Congress and the public? Several bills related to postal reform have been introduced in the 112th Congress, and two have been approved by the Senate and House oversight committees S. 1789 and H.R. 2309. As seen in the following two examples, these bills provide different approaches to addressing the legal restrictions and resistance USPS faces to closing facilities and the unresolved policy issues. S. 1789 requires USPS to establish retail service standards and consider several factors before making a closure decision, including consolidating with another facility, reducing hours of operation, and procuring a contract to provide retail services within the community. The bill also allows USPS to provide retail alternatives to dedicated post offices but also puts in place considerations before closing post offices. H.R. 2309 removes the statutory restriction on post office closures solely for operating at a deficit and establishes a commission similar to the Base Realignment and Closure Commission. USPS would submit a plan to the commission, which would then make closure recommendations to Congress that would be implemented unless Congress passed a joint resolution of disapproval. Table 5 summarizes several challenges to restructuring the retail network and some options to address these challenges that are included in these bills. <5. Concluding Observations> USPS must carefully work to ensure a viable strategy to effectively size its retail network to reduce costs to match declining mail volume while maintaining access to retail services. It is clear that USPS cannot support its current level of services and operations from its current revenues. USPS s ability to continue providing its current level of services is in jeopardy, and it is up to both Congress and USPS to construct solutions that will either reduce the cost of services or increase revenues from other sources. But it appears that USPS cannot restructure its retail network unless Congress addresses USPS s financial instability and the long-standing challenges that hinder its ability to change its retail network. If Congress prefers to retain the current level of service and associated network, decisions will need to be made about how USPS s costs for providing these services will be paid, including additional cost reductions or revenue sources. Because USPS is in the process of responding to several retail restructuring recommendations that its OIG, the PRC, and we have made, we are not making any additional recommendations. <6. Agency Comments> USPS provided written comments on a draft of this report by a letter dated April 11, 2012. USPS agreed with our findings, noting limitations management faces to restructuring. Further, it observed that its operating model is unsustainable and that maintaining the same level of retail services will require solutions to cover the costs of those services either through cost reductions or revenue enhancements. USPS also provided us with technical comments that were incorporated into the final version of this report as appropriate. USPS s comments are reprinted in appendix II. 
We are sending copies of this report to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to the report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology To help inform your consideration of actions needed to restructure U.S. Postal Service (USPS) operations and help it achieve financial viability, you asked us to examine USPS's retail network. This report discusses (1) key actions USPS has taken to restructure its retail network over the past 5 years; (2) concerns raised by postal stakeholders, including Congress, the Postal Regulatory Commission (PRC), USPS Office of Inspector General (OIG), and postal business and residential customers, and USPS's response to these concerns; and (3) the challenges that USPS faces in changing its retail network. To determine what key actions USPS has taken to restructure its retail network, we reviewed documents related to the Station and Branch Optimization and Consolidation Initiative (2009 Retail Initiative) and the Retail Access Optimization Initiative (2011 Retail Initiative), retail alternatives, and individual district-initiated discontinuances. We examined criteria and goals of the initiatives, the number of facilities studied, and the number of facilities closed. In describing trends in the number of USPS-operated facilities, retail transactions, and other retail operating statistics, we reviewed reports from USPS OIG and documents filed in several dockets from PRC related to the 2009 and 2011 Retail Initiatives and to other discontinuance procedures in general. We also reviewed past GAO work on the development of retail alternatives. In addition, we reviewed USPS documents, including guidance for discontinuance processes, background documents on major initiatives, and overall goals for the retail network; our review included examining rule changes between old and new discontinuance procedures. We interviewed USPS officials who oversee retail network restructuring to discuss the background of the initiatives, criteria used for closures, discontinuance processes, and relevant data. We also reviewed USPS data from fiscal years 2006-2011 on facilities, costs of the retail network, employees, and customer statistics to show trends in the number of retail facilities, retail revenues, and operating costs over the past 5 years. We also requested estimates and projections for fiscal year 2015. To understand the context of data provided by USPS, we spoke with knowledgeable officials to get a more-detailed understanding of how databases are used by officials and USPS's methodology for collecting information. We observed a demonstration of USPS's Change Suspension Discontinuance Center program, which contains all of the information used during the discontinuance process. This demonstration gave us an idea of how district officials would use the program in support of discontinuance activities. We also interviewed USPS officials to discuss data we requested, including how variables were collected, and the methodology for cost-savings estimates and future projections.
For customer visits and retail transaction data, we used an extrapolation provided by USPS since it does not collect data for some small post offices. We also reviewed variables used to estimate cost savings for individual facilities and the overall cost savings USPS provided us for the retail facility closures from fiscal years 2007 through 2011. We assessed the reliability of USPS data and noted, where appropriate, the limitations of certain data. For example, we requested annual cost-savings data related to USPS retail facility closures for fiscal years 2007 through 2011. USPS initially provided us with aggregate annual cost savings, but because it did not provide disaggregated data, we were not able to assess the reliability of these data. We also discuss in this report the problems with USPS data and analysis as reported by PRC and the USPS OIG. To identify concerns raised by postal stakeholders and to determine what challenges USPS faces in restructuring its retail network, we analyzed past work by GAO, USPS OIG, and PRC as well as statutory requirements regarding facility closures and access to retail services. We also identified stakeholder concerns, both from communities and Members of Congress, that contributed to resistance to closures and reviewed proposed legislation to identify potential options for addressing retail network restructuring. We also discussed challenges to retail network restructuring with USPS and PRC officials. For example, when PRC attempted to estimate the costs and savings of the 2011 Retail Initiative, it reported it was unable to develop a reasonable estimate of the financial impact of the 2011 Retail Initiative because USPS did not collect facility-specific revenue and cost data or separate retail costs from other operational costs. In addition to questions about the overall financial impact of initiatives, the adequacy of USPS s cost-savings estimates for individual facilities has also been questioned by PRC during the appeals process. To obtain information on stakeholder issues raised by customers during past facility closures and USPS s communication, we conducted an analysis of the PRC appeals docket for fiscal year 2011. We examined reasons why facility closures were appealed, alternatives given by USPS to replace services provided at closed facilities, customer concerns and USPS s responses to the concerns contained in the administrative record, and PRC s analyses of the cases. To obtain information on stakeholder concerns for the ongoing 2011 Retail Initiative and recommendations for improving the initiative, we conducted an analysis of the Nature of Service docket on the 2011 Retail Initiative and the resulting PRC advisory opinion. We examined USPS testimony, briefs, and responses to interrogatories, summarized major issues brought up by stakeholders (including unions, postmaster groups, the National Newspaper Association, and the Public Representative), and examined the PRC advisory opinion to inform GAO findings on challenges to making progress in optimizing the retail network. To obtain information on resistance to closing facilities, we observed congressional hearings and community meetings, reviewed relevant news articles about congressional resistance to closures, and interviewed USPS and PRC officials. 
To observe stakeholder concerns firsthand, we conducted site visits to USPS districts to attend public meetings and to obtain detailed information on discontinuance procedures, including criteria for closures and the 2009 and 2011 Retail Initiatives. We chose two sites to visit, the Arkansas and Colorado/Wyoming districts, based on the following criteria: number of upcoming public meetings, 2011 Retail Initiative study category, time range of meetings, proximity of meetings to district offices or cities, and cost and convenience of travel. After applying these criteria to choose site visit locations, we attended two to three community meetings per location and met with various district officials to discuss the district-level discontinuance review process and challenges to closing retail facilities. District officials we met with included: district discontinuance coordinators, managers of marketing, managers of customer and industry, and managers of post office operations. When possible, we met with other relevant stakeholders during the site visits to further our understanding of issues to facility closures. In Arkansas, we spoke with a small business owner who had filed a petition for appeal of a closure in a suburban area. In Colorado, we spoke with senior postal officials in USPS s Western area office. In addition to conducting site visits to areas that had predominantly small post offices, we also attended community meetings in urban areas for stations and branches. In total, we attended 10 community meetings at the following locations: Ivan, AR, Post Office. Jacksonport, AR, Post Office. Conejos, CO, Post Office. Chama, CO, Post Office. Jaroso, CO, Post Office. Theological Seminary Station in Alexandria, VA. Leisure World Station in Aspen Hill, MD. Market Center Station in Baltimore, MD. T Street and Kalorama Stations in Washington, D.C. (2 meetings). Five meetings were for small post offices we observed during site visits, and 5 were for suburban or urban stations and branches in the Washington, D.C., area. To analyze which concerns were raised most frequently at the meetings we attended, we recorded all of the questions and comments made by customers during all of the meetings we attended. To examine options for addressing challenges to restructuring the retail network, we compared provisions in several pieces of proposed postal reform legislation. We also spoke with USPS officials to discuss how to get an update on their ongoing initiatives, current options to achieve cost savings in the retail network, and their strategy for the retail network, including the integration of retail alternatives with facility closure initiatives. We conducted this performance audit from April 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the U.S. Postal Service Appendix III: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact named above, Teresa Anderson (Assistant Director), Amy Abramowitz, Shelby Kain, Margaret McDavid, SaraAnn Moessbauer, Amrita Sen, and Crystal Wesco made key contributions to this report.
Why GAO Did This Study Since 2006, USPS has accumulated losses of $25 billion and projects a $14.1 billion net loss for fiscal year 2012. In September 2011, the Postmaster General testified that USPS needed to reduce its annual costs by $20 billion, or 27 percent of its projected expenses. One effort to reduce costs includes restructuring, or optimizing, the size of USPS’s retail network and workforce. The network includes approximately 32,000 USPS-operated facilities, such as traditional post offices, as well as alternative non-USPS-operated locations that sell its products and services. To optimize this network, USPS plans to evaluate and locate its retail facilities to maximize revenue and minimize costs while still providing access to services. As requested, this report discusses (1) key actions USPS has taken over the past 5 years to restructure its retail network, (2) concerns raised by stakeholders, and (3) the challenges USPS faces in changing its retail network. GAO analyzed USPS documents, interviewed USPS officials and stakeholders, and observed public meetings on retail facility closures. What GAO Found Over the past 5 years, the U.S. Postal Service (USPS) has taken several actions to restructure its retail network through reducing its workforce and its footprint while expanding retail alternatives. USPS officials estimated that it had saved about $800 million from reducing the number of work hours dedicated to retail operations. USPS also closed 631 of its post offices, but it did not have cost-savings estimates for these closures. Most of the facilities closed (500) were in response to a postmaster vacancy or the suspension of operations due to an expired lease or irreparable damage following a natural disaster. Fewer closures (131) have resulted from nationwide reviews that USPS initiated in 2009 and 2011. USPS has also restructured its retail network by expanding alternatives through self-service options as well as partnerships with other retailers. Members of Congress, the Postal Regulatory Commission (PRC), the USPS Office of Inspector General (OIG), customers, employee associations, and some community residents have raised concerns about USPS’s retail restructuring initiatives. The concerns include access to postal services, including community residents’ ability to obtain retail services, the adequacy of retail alternatives, and changes to delivery services; the impact of facility closures on communities; the adequacy of USPS analysis of facilities facing closure and the reliability of USPS data, particularly the accuracy of USPS cost-savings estimates; the transparency and equity of USPS closure decisions; the fairness of USPS’s facility closure procedures; and changes in who can manage a post office. PRC, USPS OIG, and GAO have recommended improvements to address some of these issues. In particular, GAO has recommended that USPS develop a plan that addresses both traditional post offices and retail alternatives and ensures that USPS has a viable strategy for effectively adapting its networks to changing mail use and maintaining adequate service as it reduces costs. USPS officials have said they are in the process of addressing these recommendations. USPS faces challenges, such as legal restrictions and resistance from some Members of Congress and the public, that have limited its ability to change its retail network.
For example, USPS is supposed to be self-financing, but it is also restricted by law from making decisions that businesses would commonly make, such as closing unprofitable units. Additionally, some Members of Congress and the public have challenged USPS’s plans to close retail facilities in their districts or communities. Certain policy issues remain unresolved, including what level of retail services USPS should provide, how the cost of these services should be paid, and how USPS should optimize its retail network. Pending legislation takes differing approaches to addressing these policy issues. If Congress prefers to retain the current level of retail service and associated network, decisions will need to be made about how USPS will pay for these services, including through additional cost reductions or revenue sources. What GAO Recommends GAO makes no recommendations in this report, as it has previously reported on the urgency for Congress to allow USPS to adapt its retail network to changing customer behavior and reduce costs. USPS agreed with GAO’s draft report, noting limitations it faces to retail restructuring. It also observed that maintaining the same level of retail services will require solutions to cover the costs of those services.
<1. Background> <1.1. Governance and Structure of the Military Health System> DOD established the Defense Health Agency on October 1, 2013, to provide administrative support for the services respective medical programs, combine common shared services, and coordinate the work of the services military treatment facilities with care purchased from the private sector. The Defense Health Agency supports the delivery of services to Military Health System beneficiaries and is responsible for integrating clinical and business processes across the Military Health System. The Military Health System, which serves all of the military services including the Coast Guard, has two missions: (1) supporting wartime and other deployments and (2) providing peacetime health care. The Military Health System is a complex organization that provides health services to almost 10 million DOD and Coast Guard servicemembers and their dependents across a range of care venues, including the battlefield, traditional hospitals and clinics at stationary locations, and authorized civilian providers. The Military Health System employs more than 150,000 military, civilian, and contract personnel working in military treatment facilities. In the Military Health System, care is provided through TRICARE, DOD s regionally structured health-care system. Under TRICARE, DOD and Coast Guard active-duty servicemembers typically receive most of their care in what is known as the direct-care component that is, in military hospitals and clinics referred to as military treatment facilities. The care provided in military treatment facilities is supplemented by services offered through TRICARE s purchased-care networks of civilian providers. The Defense Health Agency oversees the TRICARE health plan and military treatment facilities and subordinate clinics, but does not have direct command and control of the military services military treatment facilities outside of the National Capital Region. Each military service, including the Coast Guard, operates its own military treatment facilities and their subordinate clinics. In addition, the military services, including the Coast Guard, administer medical programs and provide medical and mental-health services to servicemembers. Military medical personnel providing mental-health services include psychiatrists, psychologists, mental-health nurse practitioners, licensed social workers, and alcohol and drug counselors. Although it is part of the Military Health System, the Coast Guard has adopted some, but not all, of DOD s health-related guidance. According to DOD officials, the Navy, the Marine Corps, and the Air Force provide nonmedical education and counseling services for individuals with problem gambling who do not meet the criteria for gambling disorder diagnosis. These educational and supportive services are provided through the Fleet and Family Service Centers, the Marine Corps Community Services Behavioral Health Clinics, and the Airman and Family Readiness Centers, respectively, as well as through the Military OneSource program a program that provides confidential, short-term, nonmedical counseling services and information both face-to-face and remotely. Coast Guard servicemembers can receive counseling services through CG SUPRT, a confidential program similar to Military OneSource. Army officials told us that gambling disorder is not currently addressed within the Army Substance Abuse Program and that Army regulation does not require the program to cover gambling disorder. 
In addition, service chaplains offer nonmedical counseling services for DOD and Coast Guard servicemembers. DOD officials stated that individuals with a diagnosable gambling disorder would be referred to a military treatment facility. <1.2. Behavioral Health Definitions Regarding Gambling> The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, defines gambling disorder as the persistent and recurrent problematic gambling behavior leading to clinically significant impairment or distress. The term gambling disorder replaced pathological gambling as the gambling-related diagnosis in the most recent (2013) edition of the Diagnostic and Statistical Manual of Mental Disorders. The primary difference between the two diagnostic terms is that pathological gambling was considered an impulse-control disorder whereas gambling disorder is in the diagnostic category of substance-related and addictive disorders. Table 1 presents the diagnostic criteria in the fourth and fifth editions of the Diagnostic and Statistical Manual of Mental Disorders. In addition, the American Society of Addiction Medicine published criteria that evaluate the appropriate venue for an individual to be treated based on a multidimensional assessment and that are designed to define one national set of criteria for providing outcome-oriented and results-based care in the treatment of addiction. According to the American Society of Addiction Medicine, addiction which it defines as pathologically pursuing reward and/or relief by substance use and other behaviors can be associated with various substances and behaviors, such as alcohol and gambling. In explaining its decision to recategorize gambling disorder as an addiction in the 2013 revision of the Diagnostic and Statistical Manual of Mental Disorders, the American Psychiatric Association observed that gambling disorder and substance-related disorders display commonalities in symptoms as well as treatment. In addition, the Diagnostic and Statistical Manual of Mental Disorders states that individuals with gambling disorder have high rates of other mental disorders, such as substance-use disorders, depressive disorders, anxiety disorders, and personality disorders. Table 2 presents the similarities and differences between substance use and gambling disorder. <1.3. DOD and the Coast Guard Determined That the Prevalence of Gambling Disorder Is Low, Based on Military Health System Data DOD and the Coast Guard Determined That the Prevalence of Problem Gambling and Gambling Disorder Is Low, Based on Servicemembers Use of the Military Health System> Based on DOD data that show 514 DOD and Coast Guard active-duty servicemembers and 72 Reserve Component servicemembers less than 0.03 percent of the average number of servicemembers in each year were diagnosed with gambling disorder or seen for problem gambling in fiscal years 2011 through 2015, DOD officials stated that the prevalence of gambling disorder in the military is low. DOD bases its determination of prevalence of gambling disorder and problem gambling on Military Health System data and does not include other sources of information, such as DOD-wide surveys and records of treatment provided outside of the Military Health System. The active-duty components of the DOD military services and the Coast Guard averaged about 1.4 million servicemembers each year, and the Reserve Component averaged about 0.8 million servicemembers each year. 
Table 3 shows the number of DOD and Coast Guard servicemembers who were seen through the Military Health System for pathological gambling, gambling disorder, and problem gambling during fiscal years 2011 through 2015. The Defense Health Agency compiles these problem gambling and gambling disorder data in the Military Health System Data Repository, which includes data on clinical interactions between DOD and Coast Guard servicemembers and health-care professionals in military treatment facilities and in civilian facilities through the TRICARE system. DOD health officials told us that they use these data to determine the prevalence of gambling disorder. The prevalence of alcohol-related disorders is higher by comparison, according to data from the Military Health System Data Repository. For example, in fiscal years 2011 through 2015, 107,702 DOD and Coast Guard active-duty servicemembers and 10,896 Reserve Component servicemembers were seen through the Military Health System for alcohol-related disorders. The Military Health System Data Repository does not include data on DOD and Coast Guard servicemembers who received treatment or counseling for gambling disorder or problem gambling outside of the Military Health System. For example, Marine Corps officials stated that Marines may receive short-term, nonmedical counseling at Marine Corps Community Services Behavioral Health Substance Abuse Counseling Centers, and these interactions will be reflected in the Military Health System Data Repository only when the patient also visits a military treatment facility for this issue. Additionally, the Military Health System Data Repository does not capture data regarding treatment received by servicemembers of the Reserve Component unless they are on active orders for more than 30 days, need treatment for a line-of-duty injury or condition, or are enrolled in TRICARE Reserve Select. According to DOD officials, Reserve Component medical personnel also are unlikely to learn about an individual's gambling problem because of the short periods that nonactivated members of the Reserve Component spend in uniform, which limits the ability of the medical personnel to refer them for treatment and counseling. DOD officials also told us that servicemembers who call Military OneSource (a telephonic resource for servicemembers) for referrals for gambling problems may be referred to a military treatment facility or a TRICARE provider, but they also may be referred to local, civilian treatment programs or support groups. For example, servicemembers may seek treatment through state programs or from a nearby Gamblers Anonymous chapter. DOD officials told us that they do not collect data on the number of servicemembers who call Military OneSource for problem gambling or gambling disorder, nor do they collect data on gambling-related referrals. <1.4. Problem Gambling and Gambling Disorder Prevalence Data from Other Sources Provide Contextual Information, but Are Not Directly Comparable to Military Health System Data> DOD surveys of servicemembers, data on medical care provided by the Department of Veterans Affairs, and reviews of literature provide contextual information on the prevalence of problem gambling and gambling disorder in the military, but these data are not directly comparable to DOD clinical data.
DOD s 2002 Health Related Behaviors Survey of the DOD active-duty population and the 2010 2011 survey of the DOD Reserve Component populations asked respondents about their problematic gambling behaviors, but the results of these surveys are not directly comparable to the DOD data, as shown in table 3. An estimated 1.2 percent (with a standard error of 0.2 percent) of active-duty military personnel met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria for lifetime prevalence of probable pathological gambling based on the self-administered survey of health-related behaviors. Similarly, an estimated 1.3 percent (with a standard error of 0.1 percent) of Reserve Component respondents to the 2010 2011 DOD Health Related Behaviors Reserve Component Survey met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria for lifetime prevalence of probable pathological gambling. These survey data are not comparable to DOD data presented in table 3 for three reasons. First, the surveys were based on the Diagnostic and Statistical Manual of Mental Disorders criteria that were applicable at the time of the surveys, which, as discussed above, differed from the current version in the disorder s categorization and the number of diagnostic criteria. Second, the surveys were based on anonymous self- administered questionnaires, while gambling disorder and problem gambling clinical data were based on clinicians interactions with servicemembers. Third, the estimate of lifetime prevalence does not indicate current prevalence. Further, according to the most-recent edition of the Diagnostic and Statistical Manual of Mental Disorders, a majority of individuals with gambling disorder do not seek out treatment, which implies that servicemembers may report problematic gambling behaviors on an anonymous survey but not seek treatment. DOD stated that since 2002 more-recent versions of the active-duty Health Related Behavior Survey, which occurs approximately every 3 years, have not asked about gambling behaviors because previous iterations of the survey in 1992 and 1998 showed similar low rates of gambling behaviors for the active-duty force, and DOD officials stated that they had sought to shorten the length of the survey. However, the Health Related Behavior Surveys have regularly included at least one question regarding financial difficulties, which may indicate a gambling problem. The Department of Veterans Affairs also collects data on the number of former servicemembers who visit the Veterans Health Administration for treatment for problem gambling or gambling disorder. However, Veterans Health Administration officials stated that they do not have data on how many individuals developed gambling problems during their military service. Data from the Veterans Health Administration showed that 10,012 veterans were seen for problem gambling or gambling disorder through the Veterans Health Administration in fiscal years 2011 through 2015. On average, 8.9 million veterans were enrolled in the Veterans Affairs health-care system each year. However, Department of Veterans Affairs officials stated that they do not systematically collect information on which facilities provide services for treating gambling disorder, and that there is not a required screening program for veterans with problem gambling or gambling disorder. In addition, we conducted two reviews of literature on the prevalence of problem gambling and gambling disorder: one on the U.S. 
In addition, we conducted two reviews of literature on the prevalence of problem gambling and gambling disorder: one on the U.S. general adult population and one on the U.S. military population. For the U.S. general population review, we identified only one report within our scope; that study estimated the prevalence of past-year problem gambling to be 4.6 or 5.0 percent of the population and the prevalence of past-year pathological gambling to be 2.4 or 1.0 percent of the population (using the South Oaks Gambling Screen Revised and the Diagnostic Interview Schedule-IV instruments, respectively). This study is not directly comparable to Military Health System data because it analyzed the results of phone-based surveys of the U.S. general population, whereas Military Health System data are based on clinical interactions of the U.S. military population. In a separate literature search on the prevalence of problem gambling and gambling disorder in the U.S. military, three of the four results were the previously discussed DOD Health Related Behavior Surveys. The fourth result was a small, nonrepresentative, 6-month self-report questionnaire study of incoming patients at a Navy outpatient psychiatric clinic that indicated that 1.9 percent of 360 military personnel were diagnosed with a lifetime prevalence of pathological gambling using the South Oaks Gambling Screen. However, this is the prevalence among those patients voluntarily presenting to a psychiatric clinic who agreed to complete the questionnaire and is not a valid indicator of prevalence among all clinic patients, the base population, the military service, or the military as a whole. <2. DOD and the Coast Guard Do Not Systematically Screen for Gambling Disorder, but DOD and Coast Guard Medical Personnel State That They Address Gambling Disorder in Line with Current Medical Practices> <2.1. DOD and the Coast Guard Do Not Systematically Screen for Gambling Disorder> DOD and the Coast Guard do not systematically screen for gambling disorder through DOD's annual health assessment or any other type of periodic health screening of servicemembers. DOD Instruction 6025.19, Individual Medical Readiness, which is applicable to the military services, including the Coast Guard, requires military departments to screen servicemembers for physical and mental health conditions using an annual screening tool called the Periodic Health Assessment. DOD and the Coast Guard use the annual Periodic Health Assessment to assess each servicemember's overall health and medical readiness and to initiate preventive services, as warranted. The Periodic Health Assessment assesses health conditions that may limit or prevent a servicemember from deploying, and the behavioral-health section asks specific questions on prescription drugs, alcohol consumption, and post-traumatic stress. However, no questions on the assessment explicitly mention gambling, so medical personnel cannot use it to screen for gambling disorder, an addictive disorder that, as previously discussed, is medically similar to substance abuse. Furthermore, DOD Instruction 6490.07, Deployment-Limiting Medical Conditions for Service Members and DoD Civilian Employees, which a 2013 DOD memorandum supplements, describes mental-health conditions that would limit or prevent personnel from deploying. These conditions may include, but are not limited to, individuals with substance-use disorders undergoing active treatment as well as those at risk for suicide. 
As part of the Department of Health and Human Services, the Substance Abuse and Mental Health Services Administration aims to reduce the effect of substance abuse and mental illness and to focus the nation's public-health agenda on these issues as well as addiction. The administration provides examples of screening-question sets that health-care personnel use to identify potential gambling disorders in the general population and that could be used on the DOD Periodic Health Assessment. One example is the South Oaks Gambling Screen, which consists of questions in categories such as frequency of gambling, type of gambling, and patient perceptions. Another example is the Lie/Bet screen, which requires just two questions: 1. Have you ever had to lie to people important to you about how much you gambled? 2. Have you ever felt the need to bet more and more money? DOD and Coast Guard officials stated that they do not screen for gambling disorder because they focus military health surveillance on mental-health disorders that are high risk to overall readiness, high volume, and have validated measures for assessment. However, while gambling disorder is a comparatively low-volume disorder, the preoccupation with gambling, financial hardship, and increased risk of suicide associated with it can pose a risk to individual readiness, and the recent Diagnostic and Statistical Manual of Mental Disorders identifies the disorder as sharing similar symptoms and treatment methods with substance-use disorders. According to DOD Instruction 6490.07, individuals with clinical psychiatric disorders with residual symptoms that impair duty performance are precluded from contingency deployment unless a waiver is granted. In addition, a 2013 Assistant Secretary of Defense for Health Affairs memorandum that supplements this instruction also states that individuals with mental disorders should demonstrate a pattern of stability without significant symptoms or impairment for at least 3 months prior to deployment, unless a waiver is granted. It also states that servicemembers diagnosed with substance-use disorders should not deploy if doing so would interrupt active treatment. Therefore, for servicemembers with gambling disorder, it may be difficult to maintain individual deployment readiness and perform duties effectively. For example, according to the 2013 Diagnostic and Statistical Manual of Mental Disorders, gambling disorder is a risk factor for suicide attempts, with roughly 17 percent of those in treatment attempting suicide at some point. In addition, data from the DOD Suicide Event Report show there were 8 suicides and 13 suicide attempts related to gambling behavior by servicemembers in fiscal years 2011–2015. Additionally, gambling disorder is difficult to detect because there are no objective laboratory tests, such as urinalysis for substance use, to identify individuals with potential gambling disorder, according to the American Society of Addiction Medicine. For this reason, screening is even more important for identifying servicemembers who may need assistance. In addition, servicemembers may be reluctant to seek mental-health treatment because of perceived stigma. According to the 2013 Diagnostic and Statistical Manual of Mental Disorders, less than 10 percent of individuals with gambling disorder seek treatment. 
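To illustrate how brief such a screen can be, the following is a minimal sketch of scoring the two Lie/Bet items quoted above; treating a "yes" to either item as a positive screen that warrants a fuller assessment reflects common usage of the instrument, and the function and parameter names are ours, not DOD's.

```python
def lie_bet_positive(lied_about_gambling: bool, needed_to_bet_more: bool) -> bool:
    """Return True if either Lie/Bet item is endorsed.

    A positive screen is an indication for a fuller clinical assessment,
    not a diagnosis of gambling disorder.
    """
    return lied_about_gambling or needed_to_bet_more

# A respondent who denies lying about gambling but reports needing to bet
# more and more money would be flagged for follow-up.
print(lie_bet_positive(False, True))  # True
```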
The Substance Abuse and Mental Health Services Administration has similarly indicated that screening is important because few people seek treatment directly for gambling disorder; instead, they seek treatment for other problems, such as depression. Gambling disorder can also be easier to hide than other addictions, according to the American Society of Addiction Medicine. Without proactively asking gambling-related questions as part of screening, DOD and the Coast Guard risk failing to identify affected servicemembers and to provide assistance for the disorder. When coupled with higher suicide rates, a high rate of co-occurrence of other mental disorders, and the potential for critical financial situations, the effect that gambling disorder can have on individual readiness and the military family could be significant. <2.2. DOD and Coast Guard Medical Personnel State That They Use Current Diagnostic Standards to Diagnose Gambling Disorder and They Treat It Using Established Substance-Use Addiction Treatment Programs> According to Defense Health Agency and service medical officials, both DOD and Coast Guard medical personnel use the Diagnostic and Statistical Manual of Mental Disorders criteria to diagnose servicemembers with gambling disorders, and they employ the same evidence-based practices, such as cognitive behavioral therapy, to treat the disorder. DOD and Coast Guard medical providers diagnose and treat servicemembers on a case-by-case basis in behavioral- and mental-health departments of military treatment facilities. DOD also has dedicated outpatient programs for a variety of addiction and mental-health disorders. For example, the Navy has developed the Centering Heroes on Integrating Changes and Enhancing Strength program, in which medical professionals treat individuals with gambling disorder as well as other addictions. This program also includes treatment for post-traumatic stress and other mental-health disorders. In addition, each service provides counseling services on most bases where servicemembers can seek help and receive limited nonmedical treatment in the form of counseling sessions or other forms of support that do not require a medical diagnosis. Servicemembers may also be referred to civilian treatment facilities through TRICARE if inpatient treatment is required, as there are few residential DOD addiction treatment programs and none operated by the Coast Guard. Much like DOD, the competencies and resources of civilian facilities vary by location, but all locations that have state-certified mental-health providers are able to treat those with gambling disorder. In addition to mental-health treatment, clinicians with whom we spoke stated that financial counseling is an important part of gambling disorder treatment, and servicemembers with gambling disorder would receive financial counseling services alongside that treatment. DOD and Coast Guard officials stated that financial counseling programs are available to all servicemembers and their families. For example, the Navy operates the Fleet and Family Support Program, which provides financial counseling services on all naval installations and is free to all servicemembers and their families. According to DOD officials, military and civilian clinicians may have certifications or training specific to the treatment of gambling disorder, but such certifications or training are not required to provide treatment. 
As long as a clinician maintains the state-required licenses to treat individuals with mental-health disorders, the clinician can assess, diagnose, and treat conditions within the scope of practice determined by the state license, including gambling disorder, with the training the clinician already has. DOD mental-health providers include psychologists, psychiatrists, mental-health nurse practitioners, licensed social workers, and alcohol and drug counselors. Additional or supplemental training specific to gambling is made available to clinicians. For example, the Naval Medical Center San Diego offers continuing medical education, which includes a session on the clinical management of gambling disorder. <3. DOD and Coast Guard Guidance Does Not Address Gambling Disorder in a Similar Manner to Other Addictive Disorders> DOD and the Coast Guard do not have guidance that addresses gambling disorder in a manner similar to other addictive disorders, such as substance abuse. DOD Instruction 1010.04, Problematic Substance Use by DOD Personnel, outlines education and awareness policies for all DOD personnel, including commanders and nonmedical personnel, for substance-use disorders, but not for gambling disorder, and officials were not aware of any other guidance that explicitly addressed gambling disorder. DOD health officials stated that this instruction implicitly covers gambling disorder; however, it refers only to problematic substance use and does not reference gambling disorder. DOD defines problematic substance use as the use of any substance in a manner that puts users at risk of failing in their responsibilities to mission or family, or that is considered unlawful by regulation, policy, or law. This definition includes substance use that results in negative consequences to the health or well-being of the user or others, or that meets the criteria for a substance-use disorder. However, DOD servicemembers who lack detailed knowledge of the 2013 Diagnostic and Statistical Manual of Mental Disorders, which recategorized gambling disorder as an addiction rather than a behavioral issue, may not recognize that the instruction applies to gambling disorder. In addition, the instruction establishes guidelines for problematic substance use, such as its incompatibility with readiness and military discipline, the goal of substance treatment programs to maintain force health and readiness, and implications for eligibility for access to classified information. As previously discussed, gambling disorder is the only non-substance-related condition categorized as an addiction in the Diagnostic and Statistical Manual of Mental Disorders. The absence of non-substance-use terminology throughout the DOD instruction is significant, because problematic substance use is a recurring theme. Although medical personnel with whom we spoke are aware of the change in the treatment of gambling disorder between the manual's editions, this change is not reflected in the guidance for DOD's nonmedical personnel that would help ensure that servicemembers are referred to medical providers for gambling problems. We also found that the DOD military services (Army, Marine Corps, Air Force, and Navy) do not include gambling disorder in their substance-abuse policy and guidance documents: Army: Army Regulation 600-85, The Army Substance Abuse Program, provides alcohol- and drug-abuse prevention and control policies, as well as individual responsibilities. 
Officials from the Army Substance Abuse Program told us that the program does not provide services to servicemembers with gambling disorder. However, Army Regulation 600-85 derives its program authority from DOD Instruction 1010.04, which, according to Defense Health Agency officials, implicitly applies to gambling disorder and, therefore, requires the Army substance-abuse program to include gambling disorder as well. This example indicates that there is not a clear understanding of whether DOD Instruction 1010.04 covers gambling disorder within the Army. Navy: The Navy provides its policy for alcohol- and drug-abuse prevention in Chief of Naval Operations Instruction 5350.4D, Navy Alcohol and Drug Abuse Prevention and Control. The Navy also uses Bureau of Medicine and Surgery Instruction 5353.4B, Standards for Provision of Substance Related Disorder Treatment Services, to update a uniform set of standards for the provision of substance-related disorder treatment services within the Department of the Navy. Both of these Navy documents list the current version of the Diagnostic and Statistical Manual of Mental Disorders as a reference, but neither discusses any specific information on gambling disorder. Air Force: Air Force Instruction 44-121, Alcohol and Drug Abuse Prevention and Treatment (ADAPT) Program, outlines the Air Force's policies for its alcohol- and drug-abuse prevention and treatment program. The Air Force guidance does list the 2013 Diagnostic and Statistical Manual of Mental Disorders as a reference, but does not specifically mention gambling disorder. Marine Corps: Marine Corps Order 5300.17, Marine Corps Substance Abuse Program, provides policy and procedural guidance to commanders, substance-abuse personnel, and Marines to effectively use and carry out the Marine Corps substance-abuse program, and so that commanders may improve their capability to treat and prevent alcohol- and drug-abuse problems. This guidance does reference the 2013 Diagnostic and Statistical Manual of Mental Disorders, but does not include any policy or guidance information on gambling disorder. The Coast Guard, which DOD Instruction 1010.04 does not cover, has three documents that provide guidance and policy to both medical and nonmedical personnel on substance abuse, but Coast Guard officials stated that they do not have any policy that specifically discusses gambling disorder. However, they did indicate that, from a medical perspective, gambling disorder has multiple similarities with substance abuse and is treated in accordance with Commandant Instruction M6200.1C, Coast Guard Health Promotion Manual. While this document does not list the Diagnostic and Statistical Manual of Mental Disorders as a reference, it does mention the manual in several sections, primarily in regard to diagnostic codes. Commandant Instruction M1000.10, Coast Guard Drug and Alcohol Abuse Program, details the Coast Guard's general administrative policies on the substance-use program; although it does not mention gambling disorder, it is currently under revision to remove all references to medical issues. Although Commandant Instruction M6000.1F, Coast Guard Medical Manual, does refer to pathological gambling (the former name of gambling disorder), the manual classifies the condition as an impulse-control disorder, not as an addiction, as the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders prescribes. 
According to Commandant Instruction M6000.1F, a diagnosis of an impulse-control disorder may warrant separation from the Coast Guard, whereas certain types of substance-use disorders are addressed in the Coast Guard Drug and Alcohol Abuse Program. Thus, gambling disorder is not being treated in the same manner as other addictive disorders, such as alcohol-use disorder. According to GAO Standards for Internal Control in the Federal Government, management must communicate high-quality information internally to enable personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. In addition, these standards require that DOD communicate high-quality information throughout the entity using established reporting lines. High-quality information is to be communicated down, across, up, and around reporting lines to all levels of the entity. DOD and the Coast Guard do not explicitly communicate to unit commanders and other nonmedical personnel that gambling disorder should be addressed in the same manner as other addictive disorders, such as substance-use disorders; that is, through each service's substance-use programs or through civilian providers under TRICARE. DOD took an initial step toward communicating high-quality information regarding the diagnostic classification of gambling disorder as an addiction to its medical personnel through a December 2013 memorandum directing the adoption of the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders. Given this action, DOD health officials stated that DOD Instruction 1010.04 implicitly covers gambling disorder, but this implied coverage would likely be apparent only to those with a detailed awareness of the Diagnostic and Statistical Manual of Mental Disorders and its diagnostic classifications. Coast Guard officials told us they do not have a formal instruction on gambling disorder. Additionally, because the Coast Guard's medical manual, Commandant Instruction M6000.1F, is based on the previous edition of the Diagnostic and Statistical Manual of Mental Disorders, pathological gambling is not classified as an addiction. When the issue of gambling disorder was raised with DOD and Coast Guard health-care officials, the officials agreed that, while their guidance on substance use technically covers problem gambling, the guidance would be clearer if it were revised to explicitly include problem gambling. However, the officials currently have no plans to update the guidance accordingly. By not explicitly mentioning gambling disorder in their guidance for problematic substance use, the Office of the Secretary of Defense (OSD) and the military services, including the Coast Guard, are not communicating necessary policy, education, and awareness information to nonmedical personnel. As the guidance is currently written, OSD, military service, and Coast Guard personnel, such as unit commanders, are not instructed to refer personnel with gambling problems for medical evaluation of a potentially addictive disorder, which may prevent those personnel from receiving necessary and appropriate medical assistance. This could lead to administrative or disciplinary actions that address only the misconduct associated with the behavior. 
Gambling disorder is also one of the factors that can lead to an unfavorable background investigation and the revocation of a security clearance, thus affecting individual readiness and the capacity of the organization to meet its mission. While gambling disorder is a comparatively low-volume disorder, DOD instructions acknowledge that mental health issues may affect individual readiness. Gambling disorder may also be a risk to national security. A 2006 memorandum from the Under Secretary of Defense for Intelligence stated that compulsive gambling is a concern because it may lead to financial crimes, including espionage. Absent explicit guidance, OSD, the DOD military services, and the Coast Guard risk being unable to identify servicemembers afflicted by gambling disorder, provide them appropriate treatment and counseling, and mitigate or prevent individual readiness issues. <4. Conclusions> Gambling disorder has been identified within the medical community as an addiction similar to drug or alcohol use. Gambling disorder can also develop in conjunction with other addictions. Gambling disorder is a risk factor for suicide; according to the Diagnostic and Statistical Manual of Mental Disorders, about 17 percent of individuals in treatment for gambling disorder attempt suicide at some point in their lives. A person with gambling disorder may also have financial or legal issues that, combined with other addictions, could spiral out of control. According to the American Psychiatric Association, only 10 percent of individuals with gambling disorder seek treatment. However, DOD and the Coast Guard do not include gambling disorder questions as part of a systematic screening process for identifying servicemembers who may have a gambling disorder. Implementing systematic screening for gambling disorder may help to identify servicemembers with problem gambling or gambling disorder. Without incorporating medical screening questions specific to gambling disorder, gambling problems may not be identified until they reach a critical point affecting the individual's readiness in addition to harming the financial situation of the servicemember and, potentially, national security. In addition, guidance for nonmedical personnel does not discuss gambling disorder as an addiction; therefore, DOD and service guidance do not direct nonmedical personnel to treat gambling disorder as a medical issue. Explicitly including gambling disorder in guidance would identify it as a medical issue for nonmedical personnel. Communicating this change throughout DOD would make clear the proper steps to be taken to address this addiction before it becomes an administrative or disciplinary issue. Given the importance of maintaining individual readiness among servicemembers, without updated guidance for nonmedical personnel, DOD and the Coast Guard may not be able to increase awareness that gambling disorder is a medical condition and that individuals with a potential gambling problem should be referred to appropriate medical officials. <5. Recommendations for Executive Action> We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following two actions: Incorporate medical screening questions specific to gambling disorder as part of a systematic screening process across DOD, such as DOD's annual Periodic Health Assessment, for behavioral and mental-health issues. 
Update DOD Instruction 1010.04, Problematic Substance Use by DOD Personnel, to explicitly include gambling disorder as defined in the 2013 Diagnostic and Statistical Manual of Mental Disorders. We recommend that the Secretary of Defense direct the Secretary of the Army to take the following action: Update Army Regulation 600-85, The Army Substance Abuse Program, to explicitly include gambling disorder. We recommend that the Secretary of Defense direct the Secretary of the Navy to take the following action: Update Chief of Naval Operations Instruction 5350.4D, Navy Alcohol and Drug Abuse Prevention and Control, to explicitly include gambling disorder. We recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following action: Update Air Force Instruction 44-121, Alcohol and Drug Abuse Prevention and Treatment (ADAPT) Program, to explicitly include gambling disorder. We recommend that the Secretary of Defense direct the Commandant of the Marine Corps to take the following action: Update Marine Corps Order 5300.17, Marine Corps Substance Abuse Program, to explicitly include gambling disorder. We recommend that the Commandant of the Coast Guard take the following two actions: Update Commandant Instruction M6000.1F, Coast Guard Medical Manual, to classify gambling disorder as an addiction and not as an impulse-control issue. Update Commandant Instruction M1000.10, Coast Guard Drug and Alcohol Abuse Program, to explicitly include gambling disorder. <6. Agency Comments and Our Evaluation> We provided a draft of this report to DOD and the Department of Homeland Security for review and comment. In written comments, reproduced in appendix III, DOD concurred with five recommendations, did not concur with one recommendation, and provided two substantive technical comments for our consideration. In written comments, reproduced in appendix IV, the Department of Homeland Security concurred with both of the recommendations directed to the Coast Guard and, separately, provided technical comments, which we incorporated as appropriate. DOD concurred with the five recommendations to update DOD and military service policies to explicitly include gambling disorder as defined in the 2013 Diagnostic and Statistical Manual of Mental Disorders. However, DOD did not concur with our recommendation to incorporate medical screening questions specific to gambling disorder as part of a systematic screening process across DOD, such as in DOD's annual Periodic Health Assessment for behavioral and mental-health issues. In its written comments, DOD stated that there is no evidence to suggest that gambling disorder is a high-prevalence disorder in DOD and that it is impractical to screen for every low-prevalence disorder. DOD noted that there are numerous mental health disorders with similar or higher prevalence (e.g., bipolar disorder, psychotic disorders, and obsessive-compulsive disorder) for which DOD does not routinely screen. DOD stated that screening for additional conditions in the Periodic Health Assessment requires additional time and resources and would place an additional burden on the servicemember and provider. DOD noted that priority for screening is given to high-risk, high-volume, and problem-prone disorders with validated measures for assessment. We disagree that DOD can definitively conclude that gambling disorder and problem gambling among DOD and Coast Guard servicemembers are low prevalence and that related screening questions therefore should not be part of a systematic screening. 
First, as we noted in our report, DOD prevalence data are limited to those compiled in the Military Health System Data Repository, which reflects only DOD and Coast Guard servicemembers who seek care for gambling-related issues through the TRICARE system, and excludes information from sources outside of the Military Health System. These sources include, for example, nonmedical counseling that a Marine might receive from the Marine Corps Community Counseling Program, Behavioral Health Program, or Consolidated Substance Abuse Counseling Center; treatment or counseling provided at civilian facilities outside the TRICARE system; and counseling received at local support groups, such as Gamblers Anonymous. Second, the data do not reflect care received by Reserve Component members unless they are on active orders for more than 30 days. Third, the data do not account for those who do not seek care inside or outside the TRICARE system. Screening specifically for gambling disorder takes on particular importance because, as noted in the 2013 Diagnostic and Statistical Manual of Mental Disorders (i.e., the primary source used by civilian and military mental-health-care providers to diagnose mental disorders), less than 10 percent of individuals with gambling disorder seek help and because, according to the American Society of Addiction Medicine, gambling disorder can be easier to hide than other addictions. According to the 2013 Diagnostic and Statistical Manual of Mental Disorders, persons diagnosed with gambling disorder exhibit a preoccupation with gambling, are at higher risk of co-occurring mental disorders, are at increased risk of suicide, and are at risk of critical financial situations. We note that these issues can pose a significant risk to individual readiness and, potentially, to national security. Furthermore, DOD stated that the two examples of screening-question sets that we cite in our report, the Lie/Bet screening and the South Oaks Gambling Screen, are not appropriate diagnostic tools because the Lie/Bet screening has scored poorly as a diagnostic screening tool and the South Oaks Gambling Screen has a high rate of false positives and would result in an additional burden to servicemembers and the provider. To clarify, our recommendation does not limit DOD to these two example screening-question sets as the specific questions to incorporate as part of a systematic screening process across DOD. DOD further stated that it is actively engaged in screening servicemembers for financial difficulties and other symptoms often associated with gambling through the Health Related Behavior Survey and the Periodic Health Assessment. As we note in our report, limiting screening to questions about financial difficulties will likely not result in the identification of individuals with gambling disorder before it affects individual readiness. For these reasons, we continue to believe that our recommendation to incorporate medical screening questions specific to gambling disorder as part of a systematic screening process across DOD is valid. DOD also included two substantive technical comments as part of its written response. First, DOD stated that it is unclear what is within our scope for inclusion of epidemiological studies on the prevalence of gambling disorder. DOD believed that the prevalence percentages in the Kessler study (2008) should have been included as part of our results. 
We did identify this study in our literature search and, although it was published after 2006, the data were collected between 2001 and 2003, which is outside the time frame of our review, as stated in appendix II. To address DOD's comment, we included more specific language related to the scope of the literature search in the abbreviated scope and methodology section earlier in the report. Second, DOD noted that our statement regarding 8 suicides and 13 suicide attempts related to gambling behavior should be put into context. Specifically, DOD stated that, according to DOD Suicide Event Reporting data, only 0.6 percent of all suicides and 0.3 percent of all attempts had a history of problem gambling. We did not address this comment in the report because the contextualization does not apply to the entire scope of the population included in this review. DOD Suicide Event Reporting data do not include the entire Reserve Component population; as we stated in the report, they include only those in an active-duty status. As a result, including these percentages would not be appropriate in this case. <6.1. Department of Homeland Security> The Department of Homeland Security concurred with the recommendations to update Commandant Instruction M6000.1F and Commandant Instruction M1000.10 to explicitly include gambling disorder. With respect to Commandant Instruction M1000.10, the department stated that it is currently under revision and is being updated to remove all references to medical issues and associated terminology, including those related to gambling disorder. Once revised, the manual will strictly focus on administrative separation policy based on misconduct associated with alcohol and drug abuse and will meet the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, the Secretaries of the military departments, the Secretary of Homeland Security, and the Commandant of the Coast Guard. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Number and Location of Slot Machines by Military Service on Department of Defense Military Installations Overseas On certain overseas U.S. military installations, the Department of Defense (DOD) has slot machines both to generate revenue to fund other recreational activities and to serve as a recreational opportunity for DOD servicemembers, their adult family members, and local civilians with access to the installations. The slot machines are generally located in recreational centers, such as bowling alleys and clubs for officers and enlisted personnel. As of July 31, 2016, DOD had 3,141 slot machines located primarily on installations in Japan, the Republic of Korea, and Germany, as shown in table 4. The Army Recreation Machine Program, under the Army Installation Management Command, operates slot machines on Army, Navy, and Marine Corps installations. The Army operates most of the slot machines on Navy installations and all of the slot machines on Marine Corps installations in accordance with two memorandums of agreement. 
The Army s revenue generated from slot machines comprises the revenue from slot machines on Army installations and a share of the revenues generated from slot machines on Navy and Marine Corps installations. The Navy and Marine Corps receive a share of the revenues generated from slot machines on their respective installations by mutual agreement with the Army on a site-by-site basis. The Navy s proceeds also include revenue generated from Navy-operated machines on installations on Diego Garcia. The Air Force Gaming Program, part of the Air Force Personnel Center, operates all of the Air Force s slot machines, and the Air Force retains all of the revenue from these slot machines. The Army and the Air Force pay for nonappropriated personnel, operation, maintenance, and other overhead expenses related to the slot machines out of their respective proceeds. The Navy pays only for personnel, operation, maintenance, and other overhead expenses for the Navy- operated machines on Diego Garcia. Coast Guard installations overseas do not have any slot machines. According to data provided by DOD, in fiscal years 2011 through 2015, DOD-run slot machines generated a total of $538.9 million in revenue. DOD calculates this revenue, listed by service in table 5, by subtracting payouts to gamblers from the amounts they paid to play. Appendix II: Scope and Methodology The scope of our review included all Department of Defense (DOD) and Coast Guard offices responsible for oversight or administration of gambling activities and medical commands or offices responsible for diagnosing and treating gambling disorder. We included both the active and reserve components, including the federal components of the National Guard. Table 6 contains a list of the agencies and offices we contacted during the course of our review. To describe what is known about the prevalence of gambling disorder among servicemembers in DOD and the Coast Guard (objective 1), we analyzed the most-recent data from the Military Health System Data Repository for fiscal years 2011 2015 for active duty servicemembers from the military services, including the Coast Guard, and Reserve Component servicemembers diagnosed with gambling disorder or seen for problem gambling. To assess the reliability of these data, we sent a questionnaire to officials from the Defense Health Agency, which oversees the Military Health System Data Repository system, and interviewed knowledgeable Defense Health Agency and service officials about how the data are entered, collected, stored, and processed. We also met with service mental-health providers to learn, among other things, how they diagnose patients and enter their information into the electronic health records. We also reviewed Military Health System Data Repository documentation including the user s guide and data dictionary. We analyzed the summary data for accuracy and obvious errors, and we found none. We found these data to be sufficiently reliable to show the number of servicemembers seen for gambling disorder, pathological gambling, or problem gambling in the Military Health System in fiscal years 2011 through 2015. In addition, we reviewed DOD s most-recent health-related behaviors surveys that included specific questions on gambling conducted in 2002 for the active component and in 2010 and 2011 for the Reserve and National Guard to identify what is known about the prevalence of problematic gambling behaviors among servicemembers. 
We analyzed Department of Veterans Affairs data on the prevalence of problem gambling and gambling disorder, interviewed cognizant officials, and reviewed Department of Veterans Affairs medical record documentation. We reviewed summary data for accuracy and obvious errors and determined that the data were sufficiently reliable to report on the total number of individuals diagnosed with gambling disorder or seen for problem gambling in the Department of Veterans Affairs system. We also conducted literature searches regarding the prevalence of problem gambling and gambling disorder in the general population as well as the military population. Specifically, for the military prevalence review, we searched for studies reporting the prevalence of problem gambling within the active and reserve (but not veteran) U.S. military population, including the Coast Guard, in English-language professional journals, government reports, and other published and unpublished papers published between 2001 and 2016. We searched Proquest, Proquest professional, SCOPUS, Homeland Security Digital Library, ECO, ArticleFirst, WorldCat, PolicyFile, and CQ hearings databases using search terms including variations and Boolean combinations of the following terms: DOD, defense, armed forces, military, army, navy, marines, air force, coast guard, service members, gambling, betting, wagering, gaming, casino, prevalence, risks, problem, diagnosed, treated, treatment, and financial counseling. This search resulted in identifying 94 potentially relevant sources. We screened these sources by reviewing the titles and abstracts and other necessary bibliographic information and excluded sources that were out of scope. We also reviewed bibliographies to identify additional sources, but no new sources meeting the review criteria were identified. This process resulted in identifying four sources for a complete review, of which two were found not to contain relevant data. DOD also brought two other DOD survey studies to our attention, making one of the search- based studies obsolete. Therefore, a total of three studies on military prevalence were fully reviewed by two specialists and are described in this report. For the U.S. general-population prevalence review, we searched for studies reporting the prevalence of problem gambling in the adult U.S. general population in English-language professional journals, government reports, and other published and unpublished papers published with data collected in 2006 or later. We searched Proquest, Proquest professional, SCOPUS, Homeland Security Digital Library, ECO, ArticleFirst, WorldCat, PolicyFile, and CQ hearings databases using search terms including variations and Boolean combinations of the following terms: Gambling, gambler, problem, disorder, pathological, personality traits, prevalence, risk, population, statistics, epidemiology, risk factors, frequency, occurrence, rate, amount, occasion, incident, US, USA, and United States. This search resulted in identifying 278 potentially relevant sources. We then modified the criteria to include only studies with data collected in or after 2006 because this would more closely match data provided by DOD. We screened the sources by reviewing the titles and abstracts and other necessary bibliographic information and excluded sources that were out of scope. We also reviewed bibliographies to identify additional sources, but no new sources meeting the review criteria were identified. 
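The database searches described above combine variations of the listed terms with Boolean operators. As a minimal, hypothetical sketch of how such a query string might be assembled (the term groups below are an abbreviated subset of the terms quoted in this appendix, and no particular database's syntax is implied):

```python
def boolean_query(*term_groups):
    """Join terms within each group with OR and join the groups with AND."""
    return " AND ".join("(" + " OR ".join(group) + ")" for group in term_groups)

population_terms = ["military", "armed forces", "service members", "coast guard"]
gambling_terms = ["gambling", "betting", "wagering", "gaming"]
outcome_terms = ["prevalence", "problem", "diagnosed", "treatment"]

print(boolean_query(population_terms, gambling_terms, outcome_terms))
# Expected output (wrapped here for readability):
# (military OR armed forces OR service members OR coast guard) AND
# (gambling OR betting OR wagering OR gaming) AND
# (prevalence OR problem OR diagnosed OR treatment)
```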
Of the 278 sources identified by this search, 30 were excluded because they were redundant with other sources, and another 224 were excluded because they were out of scope. Three of the 278 sources were reviewed to identify new sources from their bibliographies, but this review did not identify any new in-scope sources. This process resulted in selecting 21 sources for a full review. However, upon reviewing them, 2 were found to be redundant with the previously discussed sources, and 18 were found to be out of scope or were review papers that did not contain any new in-scope sources. Therefore, only one study was found to meet the review criteria. That study was fully reviewed by two specialists and is described in this report. We also interviewed DOD and Coast Guard officials regarding the prevalence and risk associated with gambling disorder. We also conducted a literature search to identify any studies testing the hypothesis that increased availability of gambling opportunities leads to higher prevalence of problem gambling within the military population. We initially searched for any studies reporting the correlation between gambling availability and problem gambling in any adult general population around the world, published in English-language professional journals, government reports, and other published and unpublished papers between 1996 and 2016. We conducted the initial search through Proquest, SCOPUS, Web of Science, ECO, ArticleFirst, WorldCat, and PolicyFile databases using search terms including variations and Boolean combinations of the following terms: Gambling, gambler, problem, disorder, pathological, availability, proximity, accessibility, near, correlated, correlation, associated, association, causation, causal, and related. This search identified 62 potentially relevant sources. Two were excluded because they were redundant with another source or were updated by another source already in our list, and another 39 were excluded because they were found to be outside the original scope. Based on the original search criteria, 21 were identified as potentially relevant to be reviewed in full or for a bibliography review. We then narrowed the criteria to any studies that reported estimates of the causal effect of gambling availability on problem gambling in the U.S. military population. We scanned the titles and abstracts and other necessary bibliographic information, or fully reviewed sources, and determined that no studies estimated the causal effect of gambling availability on problem gambling in the U.S. military population. To assess DOD's and the Coast Guard's approaches to screening, diagnosing, and treating servicemembers for gambling disorder (objective 2), we reviewed the primary source of criteria that civilian and military mental-health professionals use for diagnosing patients with gambling disorder: the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. We reviewed screening tools, such as DOD's Periodic Health Assessment, to identify whether they contained gambling disorder screening questions. We also reviewed the American Society of Addiction Medicine treatment criteria for addictive conditions, which assess the appropriate treatment venue for each patient based on a multidimensional assessment; these criteria were designed to define one national set of criteria for providing outcome-oriented and results-based care in the treatment of addiction. 
We interviewed mental-health officials, counselors, chaplains, and personnel-security officials regarding the screening, diagnosis, counseling, and treatment of individuals with gambling disorder. We also conducted interviews with officials from both domestic and overseas bases to determine practices and methods for diagnosing and treating gambling disorder. We selected a nongeneralizable sample of four military installations that represented all four DOD military services as well as the Coast Guard and had reported at least one diagnosed case of gambling disorder. We also selected these installations because they were in close proximity to each other and close to GAO facilities in the Southern California area, for cost reasons. In addition, the California National Guard provided written responses to questions on the diagnosis and treatment of gambling disorder. We selected and met with Army, Navy, and Marine Corps representatives from four overseas bases in the Republic of Korea and Japan that had reported more than one case of gambling disorder or problem gambling in fiscal years 2011–2015 or had DOD-run slot machines. Only two combatant commands have bases with slot machines: U.S. Pacific Command and U.S. European Command. We chose bases in U.S. Pacific Command, which had an outpatient program for servicemembers with gambling disorder in Okinawa, Japan. We were unable to arrange any meetings with Air Force bases in the Republic of Korea and Japan. The overseas interviews included chaplains stationed in this theater. We also reviewed DOD data on suicide and suicide attempts that were related to gambling. At our request, DOD conducted a search of its Suicide Event Report using the following search terms: gamble, debt, bookie, casino, roulette, cards, poker. The resulting numbers are underestimates of the number of suicides and attempted suicides due to the way the data are collected; the magnitude of the underestimation is unknown, according to DOD officials. The officials reviewed the search results to ensure that the content was indicative of monetary gambling (i.e., to ensure that "debt" referenced gambling debts and not other sources of debt, such as credit card debt or child support). Death-risk gambling (e.g., Russian roulette) was excluded from this analysis. The search identified 8 deaths due to suicide and 13 suicide attempts between fiscal years 2011 and 2015 where the behavioral-health professional completing the DOD Suicide Event Report indicated any gambling behavior as a relevant antecedent factor. DOD stated that the occurrence of gambling behavior should not be interpreted as causally related to the occurrence of the suicide behavior. To evaluate the extent to which DOD and Coast Guard guidance address gambling disorder in a manner similar to substance-use disorder, we compared DOD's and the Coast Guard's respective policies on substance use against GAO's Standards for Internal Control in the Federal Government. According to Standards for Internal Control in the Federal Government, management must communicate high-quality information internally to enable personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. We also reviewed DOD (including service-level) and Coast Guard guidance pertaining to the screening, diagnosis, and treatment of gambling disorder. 
We compared mental-health criteria documents such as the 2013 Diagnostic and Statistical Manual of Mental Disorders to DOD and service-level guidance to assess congruence. Our review included the following documents: DOD Instruction 1010.04, Problematic Substance Use by DOD Personnel (Feb. 20, 2014); DOD Instruction 6025.19, Individual Medical Readiness (June 9, Army Regulation 600-85, The Army Substance Abuse Program (Dec. 28, 2012); Chief of Naval Operations Instruction 5350.4D, Navy Alcohol and Drug Abuse Prevention and Control (June 4, 2009); Bureau of Medicine and Surgery Instruction 5353.4B, Standards for Provision of Substance Related Disorder Treatment Services (July 6, 2015); Air Force Instruction 44-121, Alcohol and Drug Abuse Prevention and Treatment (ADAPT) Program (July 8, 2014); Marine Corps Order 5300.17, Marine Corps Substance Abuse Program (April 11, 2011); Commandant Instruction M6200.1C, Coast Guard Health Promotion Manual (July 9, 2015); Commandant Instruction M1000.10, Coast Guard Drug and Alcohol Abuse Program (Sept. 29, 2011); Commandant Instruction M6000.1F, Coast Guard Medical Manual (Aug. 22, 2014); American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition; and American Society of Addiction Medicine, The ASAM Criteria: Treatment Criteria for Addictive, Substance-Related, and Co- Occurring Conditions, Third Edition. We conducted this performance audit from December 2015 to January 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Comments from the Department of Defense Appendix IV: Comments from the Department of Homeland Security Appendix V: GAO Contact and Staff Acknowledgments <7. GAO Contact> <8. Staff Acknowledgments> In addition to the contact named above, Kimberly C. Seay (Assistant Director), Mae Jones, Shari Nikoo, Matthew Sakrekoff, Paul Seely, Michael Silver, and Eric Warren made major contributions to this report.
Why GAO Did This Study The American Psychiatric Association's 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders defines gambling disorder as persistent and recurrent problematic gambling behavior leading to clinically significant impairment or distress. Public Law 114-92 included a provision for GAO to review gambling among members of the armed forces. This report (1) describes what is known about the prevalence of gambling disorder among servicemembers in DOD and the CG; (2) assesses DOD's and the CG's approaches to screening, diagnosing, and treating servicemembers for gambling disorder; and (3) evaluates the extent to which DOD and CG guidance address gambling disorder in a manner similar to substance-use disorders. GAO analyzed DOD's most recent data related to gambling disorder prevalence (fiscal years 2011–2015) and DOD and CG policies. What GAO Found Department of Defense (DOD) data show 514 DOD and Coast Guard (CG) active-duty servicemembers and 72 Reserve Component servicemembers—less than 0.03 percent of the average number of servicemembers in each year—were diagnosed with gambling disorder or were seen for problem gambling in fiscal years 2011 through 2015 in the Military Health System (MHS). The MHS provides health services to beneficiaries across a range of care venues, such as military treatment facilities and civilian facilities through TRICARE. DOD bases this prevalence of gambling disorder and problem gambling on MHS data and does not include other sources of information, such as DOD-wide surveys and records of treatment provided outside of the MHS. The Defense Health Agency compiles these data in the MHS Data Repository, which includes data on clinical interactions between servicemembers and health-care professionals. The MHS Data Repository does not include data on DOD and CG servicemembers who received treatment or counseling for gambling disorder or problem gambling outside of the MHS. DOD and the CG do not systematically screen for gambling disorder and, according to medical officials, both DOD and the CG use the 2013 Diagnostic and Statistical Manual of Mental Disorders criteria to diagnose servicemembers with gambling disorders, and they employ the same evidence-based treatments. Clinicians who GAO interviewed stated that financial counseling is also an important part of gambling disorder treatment. However, DOD's and CG's medical professionals do not incorporate medical screening questions specific to gambling disorder as they do for other similar medically determined addictive disorders, such as substance use. DOD officials stated they do not screen for gambling disorder because they focus on mental-health disorders that are high risk to overall readiness, high volume, and have validated measures for assessment. While gambling disorder is not a frequently diagnosed condition, the preoccupation with gambling, financial hardship, and increased risk of suicide can pose a risk to individual readiness. In addition, the Substance Abuse and Mental Health Services Administration has indicated that screening is important because few seek treatment directly for gambling disorder. Without proactively asking gambling disorder questions as part of screening to help detect gambling disorder, DOD and the CG risk not identifying affected servicemembers and providing treatment or counseling. DOD and CG nonmedical personnel do not have clear guidance addressing gambling disorder. 
Neither DOD's nor CG's guidance for substance-use disorders explicitly includes gambling disorder. DOD health officials stated that their substance-use instruction “implicitly” covers gambling disorder; however, it refers only to problematic substance use. The Coast Guard has three documents that provide guidance and policy to both medical and nonmedical personnel on substance abuse, but these documents do not specifically discuss gambling disorder as an addiction. Without explicitly including gambling disorder in DOD and CG guidance on substance use, DOD and the CG may not be able to identify and provide appropriate treatment and counseling to DOD and CG servicemembers afflicted by gambling disorder and to mitigate or prevent individual readiness issues. What GAO Recommends GAO makes eight recommendations, including that DOD incorporate gambling disorder questions in a systematic screening process and that DOD and the CG update guidance to include gambling disorder. DOD concurred with five recommendations focused on updating guidance, but did not concur with incorporating gambling questions into a screening process due to the disorder's low prevalence. GAO maintains that this recommendation is still valid because, among other things, DOD's prevalence data are limited. The CG concurred with the two recommendations focused on updating guidance.
<1. Background> Federal employees, including postal workers, are protected by a variety of laws against discrimination based on race, color, sex, religion, national origin, age, or disability. In addition, federal employees are protected from retaliation for filing a complaint, participating in an investigation of a complaint, or opposing a prohibited personnel practice. Federal employee EEO complaints are to be processed in accordance with regulations (29 C.F.R. part 1614) promulgated by EEOC. These regulations also establish processing time requirements for each stage of the complaint process. Under these regulations, federal agencies decide whether to dismiss or accept complaints employees file with them and investigate accepted complaints. After the investigation, a complainant can request a hearing before an EEOC administrative judge who may issue a recommended decision that the agency is to consider in making its final decision. An employee who is dissatisfied with a final agency decision or its decision to dismiss a complaint may file an appeal with EEOC.Generally, federal employees must exhaust the administrative process before pursuing their complaints in court. EEOC will be implementing changes to the complaint process beginning in November 1999. One of the most significant changes involves decisions issued by administrative judges. Under the regulations, these decisions would no longer be recommendations that agencies could modify. Rather, as its final action (as final decisions will be called), an agency would issue a final order indicating whether or not it would fully implement the administrative judge s decision. If the agency chooses not to fully implement the decision, it will be required to file an appeal of the decision with EEOC. Complainants would retain their right to appeal an agency s final order. For a further discussion of the complaint process and upcoming changes, see app. II. In July 1998, we reported on our analysis of inventories of unresolved EEO complaints at federal agencies and EEOC and how trends in the number of complaints filed and the time taken to process them had contributed to inventory levels. We found that agencies complaint inventories, and even more so, EEOC s hearings and appeals inventories, had increased since fiscal year 1991; as the size of inventories grew, so did the average length of time that cases had been in inventory as well as the proportion of cases remaining in inventory longer than allowed by regulations; the size of the inventories and the age of cases in them increased as agencies and EEOC did not keep up with the influx of new cases; with the increased caseloads, EEOC and, to some extent, agencies, took longer on average to process complaints, contributing to the size and age of inventories; and the implications of these trends were that inventories of cases pending would grow even larger in the future, particularly at EEOC, and that cases would take even longer to process. <2. Scope and Methodology> In updating our analysis, we used preliminary data for fiscal year 1998 provided by EEOC and reviewed the agency s budget request for fiscal year 2000 and its Annual Performance Plans for fiscal years 1999 and 2000. We also examined EEOC s planned changes to the complaint process. In addition, because postal workers have accounted for about half of the complaints filed in recent years, we separately analyzed data reported by the U.S. 
Postal Service in order to compare statistics for the postal workforce with the nonpostal workforce (see app. III). Appendix I contains details about our scope and methodology. We requested comments on a draft of this report from the Chairwoman, EEOC, and the Postmaster General. Their comments are discussed near the end of this letter. We performed our work from March through May 1999 in accordance with generally accepted government auditing standards. <3. Complaint Inventories Continued to Rise> Since we last reported in July 1998, agencies complaint inventories and, even more so, EEOC s hearings and appeals inventories were, once again, higher. Table 1 shows the trends in the inventories of complaints at agencies and of hearing requests and appeals at EEOC for fiscal years 1991 to 1998. At agencies, the inventory of unresolved complaints had risen from 16,964 at the end of fiscal year 1991 to 34,286 by the end of fiscal year 1997. One year later, agencies inventories of unresolved complaints had increased by an additional 6 percent, to 36,333. Inventory levels increased at the Postal Service and nonpostal agencies in fiscal year 1998, but growth was more rapid in the nonpostal agencies. Compared with fiscal year 1997, the Postal Service inventory increased by 3.3 percent, from 13,549 to 13,996 (see app. III, table III.1), while the inventories at nonpostal agencies rose by 7.7 percent, from 20,737 to 22,337. Overall, from fiscal year 1991 to fiscal year 1998, complaint inventories at federal agencies rose by about 114 percent. The increase in agencies inventories was accounted for mainly by the growing number of the agencies cases pending a hearing before an EEOC administrative judge. An agency s inventory of unresolved complaints is affected by EEOC s handling of hearing requests because EEOC must resolve a hearing request before an agency can make a final decision on the complaint. Of the 36,333 cases in agencies inventories at the end of fiscal year 1998, 13,357 (about 37 percent) were awaiting a hearing before an EEOC administrative judge. The 13,357 cases awaiting a hearing before an EEOC administrative judge represented a 3,755 case (39 percent) increase over the fiscal year 1997 level of 9,602. The increase in the number of cases in the hearing stage more than offset reductions in the number of cases in agencies inventories at the initial acceptance/dismissal and final agency decision stages of the complaint process. At EEOC, the inventory of hearing requests, which had increased from 3,147 at the end of fiscal year 1991 to 10,016 at the end of fiscal year 1997, increased by an additional 19.5 percent, to 11,967, by the end of fiscal year 1998. Overall, from fiscal year 1991 to fiscal year 1998, EEOC s hearing request inventory rose by about 280 percent. EEOC s inventory of appeals, which had increased from 1,466 to 9,980 during fiscal years 1991 to 1997, increased by an additional 9.9 percent, to 10,966, by the end of fiscal year 1998. Overall, from fiscal year 1991 to fiscal year 1998, EEOC s appeals inventory rose by 648 percent. (See app. IV, figure IV.2). <3.1. Age of Complaints in Inventories Continued to Grow> As the size of the inventories continued to grow, so did the average length of time that cases, and the conflict underlying these complaints, remained unresolved. Table 2 shows the trends in the average age of complaints in agencies inventories and of hearing requests and appeals in EEOC s inventories for fiscal years 1991 to 1998. 
The overall average age of unresolved complaints in agencies inventories, after declining through fiscal year 1994, reached a new high of 446 days at the end of fiscal year 1998. The age of cases varied by the stage of the complaint process. Table 3 shows the average age of complaints in inventory, from the time a complaint was filed, at various stages of the complaint process, both overall and at the Postal Service and nonpostal agencies at the end of fiscal year 1998. (Also see app. IV, figure IV.3 for trends in the average age of complaints in inventory at the various stages of the complaint process for fiscal years 1991 to 1998.) As table 3 shows, the complaints that were in agencies inventories the longest at the end of fiscal year 1998 were those awaiting a hearing before an EEOC administrative judge. The average age of cases awaiting a hearing had a significant impact on the overall average age of unresolved complaints in inventory, particularly at the Postal Service. Because cases remained in inventory for lengthy periods, agencies frequently did not meet the regulatory requirement that they dismiss or accept a complaint, investigate an accepted complaint, and report the investigation results to the complainant within 180 days from the filing of a complaint (see app. IV, figure IV.4). The proportion of cases pending the initial acceptance or dismissal decision for more than 180 days stood at 32.5 percent in fiscal year 1998. At the Postal Service, 65.5 percent of cases in the acceptance/dismissal stage had been in inventory more than 180 days at the end of fiscal year 1998 (see app. III, table III.3); the figure for nonpostal agencies was 26.2 percent. Of the complaints pending investigation, 48.3 percent had been in inventory more than 180 days. At the Postal Service, 36.5 percent of cases in the investigation stage had been in inventory more than 180 days at the end of fiscal year 1998 (see app. III, table III.3); the figure for nonpostal agencies was 52 percent. <3.1.1. The Situation at EEOC> At EEOC, the average age of cases in both the agency s inventory of hearing requests and its inventory of appeals was higher in fiscal year 1998 than in fiscal year 1997 (see table 2). The average age of hearing requests in inventory increased sharply, from 243 days in fiscal year 1997 to 320 days in fiscal year 1998. The figure for fiscal year 1998 is about 3 times what it was in fiscal year 1993, when the average age of a hearing request in inventory had reached a low of 105 days. As a result of the rising age of hearing requests in inventory, a greater proportion of these cases did not meet the requirement in EEOC s regulations that administrative judges issue a recommended decision within 180 days of a request for a hearing. In fiscal year 1998, 56.2 percent of the hearing requests had been in inventory longer than the 180-day time limit, up from 50.3 percent the previous year. EEOC has had increasing difficulty meeting the 180-day requirement since fiscal year 1993, when 13.3 percent of hearing requests had been in inventory longer than 180 days. (See app. IV, figure IV.6.) The increasing age of EEOC s hearing request inventory has been a major factor in the size and age of cases in agencies inventories awaiting a hearing before an administrative judge. In contrast to hearing requests, table 2 shows a smaller increase in the average age of appeals in EEOC s inventory, from 285 days in fiscal year 1997 to 293 days in fiscal year 1998 (see app. IV, figure IV.5). 
Nonetheless, the figure for fiscal year 1998 is more than 3 times what it was in fiscal year 1992, when the average age of appeals in inventory was 87 days. Although EEOC regulations prescribe time limits for processing hearing requests, they do not prescribe time limits for processing appeals. However, one indicator of the time it takes EEOC to process appeals is the percentage of cases remaining in inventory more than 200 days. EEOC s data show that in fiscal year 1998, 58.5 percent of the appeals cases remained in inventory longer than 200 days, a slight increase from fiscal year 1997, when this figure was 58 percent. However, the figures for fiscal years 1997 and 1998 represent a substantial increase compared with fiscal year 1991, when only about 3 percent of appeals had been in inventory longer than 200 days. (See app. IV, figure IV.7.) <4. Agencies and EEOC Unable to Keep Up With Influx of New Cases> The size of the inventories and the age of the cases in them continued their upward trend as agencies and EEOC did not keep up with the influx of new cases. As discussed later in this report, the increase in the number of complaints did not necessarily signify an equivalent increase in the actual number of individuals filing complaints. Table 4 shows the trends in the number of complaints filed with agencies and the number of hearing requests and appeals filed with EEOC for fiscal years 1991 through 1998. At agencies, the overall number of complaints, which had increased from 17,696 in fiscal year 1991 to 28,947 in fiscal year 1997, declined by 2.8 percent, to 28,147 in fiscal year 1998. At the nonpostal agencies, the number of new cases declined, from 14,621 in fiscal year 1997 to 13,750 in fiscal year 1998. During this period, however, the number of new complaints at the Postal Service increased slightly, from 14,326 to 14,397 (see app. III, table III.5). Overall, the number of complaints filed with federal agencies in fiscal year 1998 was 59.1 percent higher than in fiscal year 1991. At EEOC, requests for hearings, which increased from 5,773 to 11,198 during fiscal years 1991 to 1997, rose again, by 9.1 percent, to 12,218, in fiscal year 1998. Appeals to EEOC of agency decisions, however, which increased from 5,266 to 8,453 during fiscal years 1991 to 1997, increased only slightly, by three-tenths of 1 percent, to 8,480, in fiscal year 1998. Historically, the rate of growth in the number of hearing requests filed has outpaced that of appeals. Compared with fiscal year 1991, the number of hearing requests filed in 1998 was 111.6 percent higher; the comparable figure for appeals was 61 percent. More recently, since fiscal year 1995, the number of hearing requests filed increased by about 16 percent, while the number of appeals filed increased by about 4 percent. Postal workers continue to account for a large and disproportionate share of complaints, hearing requests, and appeals. In fiscal year 1998, postal workers represented about 32 percent of the federal workforce and accounted for about 51 percent of complaints, about 47 percent of hearing requests, and about 47 percent of appeals. (See app. III , tables III.4 and III.5.) <4.1. Processing Times Rose With the Influx of New Cases> With increasing caseloads since fiscal year 1991, agencies and EEOC have been taking longer on average to process complaints, contributing to the size and age of the inventories. 
Table 5 shows the average processing time for complaints at agencies and for hearing requests and appeals at EEOC for fiscal years 1991 to 1998. The overall average number of days agencies took to close a case, which had reached a low of 305 days in fiscal year 1995, was 384 days in fiscal year 1998. This represented a slight improvement over fiscal year 1997 s 391-day average. Average closure time varied according to the type of closure action. In addition to closing cases by dismissing them or by issuing final decisions on their merits (with and without a hearing before an EEOC administrative judge), an agency may settle a case with a complainant or a complainant may withdraw his or her complaint. Table 6 shows average closure time for each type of closure overall and at the Postal Service and nonpostal agencies in fiscal year 1998 (see app. IV, figure IV.10 for average closure time by type of case closure for all agencies for fiscal years 1991 to 1998). Table 6 shows that, in general, the Postal Service processed cases more quickly than nonpostal agencies in fiscal year 1998. One factor may have been that the Postal Service investigated complaints more quickly compared with nonpostal agencies. In fiscal year 1998, a complaint investigation at the Postal Service took an average of 174 days from the time a case was assigned to an investigator to when the investigation was completed. The comparable figure at nonpostal agencies was 283 days. Table 6 also shows that complaints with final agency decisions involving a hearing took the longest to close. This figure is affected by EEOC s performance because a hearing precedes an agency s final decision; the longer EEOC takes to process a hearing request, the longer it will take an agency to make its final decision. As will be discussed below, EEOC has been taking longer to process hearing requests. <4.1.1. The Situation at EEOC> The increases in the amount of time to process cases were most apparent at EEOC. The average amount of time EEOC took to process a hearing request, which had increased from 173 days in fiscal year 1991 to 277 days in fiscal year 1997, increased further, to 320 days, in fiscal year 1998, well in excess of the 180-day requirement in regulations. Also, the time EEOC took to adjudicate an appeal, which had increased from 109 days in fiscal year 1991 to 375 days in fiscal year 1997, rose substantially in fiscal year 1998 to 473 days or by 26 percent. Because of the length of time taken by agencies and EEOC to process cases, parties to a case traveling the entire complaint process from complaint filing through hearing and appeal could expect the case to take 1,186 days, based on fiscal year 1998 data. In fiscal year 1997, this figure was 1,095. <5. Implications of the Trends in Inventories, New Cases, and Processing Times> The implications of these trends, at least in the short run, are that inventories of unresolved cases may grow even larger, particularly at EEOC, and that cases, as well as the conflicts underlying these cases, may take even longer to resolve than they currently do. The long-term outlook is uncertain. Only when EEOC and agencies are able to process and close more cases than they receive will progress be made toward reducing backlogs. The size of the caseloads will be influenced by the effect of revisions to the complaint process regulations and procedures, while agencies and EEOC s capacity to process cases will be affected by available resources. 
EEOC projects that the number of new cases will continue to rise and exceed its capacity to process them, resulting in yet higher inventories and case processing times. EEOC s projections, however, do not take into account how complaint process revisions may affect caseload trends and resource needs. <5.1. Factors Affecting the Size of the Complaint Caseload> In our July 1998 report about rising trends in EEO complaint caseloads, we reported that the increase in the number of discrimination complaints could be attributed to several factors, according to EEOC, dispute resolution experts, and officials of federal and private-sector organizations. One factor that experts and officials cited for the increase in complaints was downsizing, which resulted in appeals of job losses and reassignments. A second factor was the Civil Rights Act of 1991, which motivated some employees to file complaints by allowing compensatory damage awards of up to $300,000 in cases involving unlawful, intentional discrimination. A third factor was the Americans With Disabilities Act of 1990, which expanded discrimination protection. EEOC and Postal Service officials also said that the current regulations governing the EEO complaint process, implemented in October 1992, were a factor because they provided improved access to the complaint process. In a report we issued in May 1999, however, we said that there were several factors indicating that an increase in the number of complaints did not necessarily signify an equivalent increase in the actual number of individuals filing complaints. First, an undetermined number of federal employees have filed multiple complaints. EEOC officials and representatives of the Council of Federal EEO and Civil Rights Executives said that, while they could not readily provide figures, it has been their experience that a small number of employees, often referred to as repeat filers, account for a disproportionate share of complaints. A Postal Service official said that between 60 and 70 employees account for every 100 complaints filed. Additionally, an EEOC workgroup that reviewed the federal employee discrimination complaint process reported that the number of cases in the system was swollen by employees filing spin-off complaints, that is, new complaints challenging the processing of existing complaints. Further, the work group found that the number of complaints was unnecessarily multiplied by agencies fragmenting some claims involving a number of different allegations by the same employee into separate complaints, rather than consolidating these claims into one complaint. In addition, there has been an increase in the number of complaints alleging reprisal, which, for the most part, involve claims of retaliation by employees who have previously participated in the complaint process. Further, in past reports and testimonies, we noted, among other things, that the discrimination complaint process was burdened by a number of cases that were not legitimate discrimination complaints; some were frivolous complaints or attempts by employees to get a third party s assistance in resolving workplace disputes unrelated to discrimination. Similarly, EEOC reported in its 1996 study that a sizable number of complaints might not involve discrimination issues but instead reflect basic communications problems in the workplace. EEOC said that such issues may be brought into the EEO process because of a perception that there is no other forum to air general workplace concerns. 
The agency also said that there is little question that these types of issues would be especially conducive to resolution through ADR processes. EEOC will be implementing regulatory and procedural changes beginning in November 1999 to deal with some of the factors contributing to the volume of complaints flowing through the process. One change will allow agencies and administrative judges to dismiss spin-off complaints. Another change will allow agencies and administrative judges to dismiss complaints in which employees are abusing the process. The revised regulations and EEOC s policies will deal with the problem of fragmented complaints. In addition, EEOC will require agencies to make ADR processes available to complainants. <5.2. EEOC s Capacity to Process Cases> Among the factors that can affect inventory levels and case processing times is the relationship between the influx of cases and the capacity of staff to process them. Data that EEOC reports in the Federal Sector Report on EEO Complaints Processing and Appeals do not allow a precise comparison of the number of staff at agencies to caseloads at various stages of the complaint process. However, the data enable a comparison of EEOC s hearing and appeal caseloads to the number of nonsupervisory administrative judges and attorneys available to process these cases. These data show that as the overall number of hearing requests received each year increased by 111.6 percent, from 5,773 in fiscal year 1991 to 12,218 in fiscal year 1998 (see table 4, p. 8), the number of administrative judges available for hearings increased at a lower rate (41.5 percent) during this period, from 53 to 75. These data also show that as the number of appeals increased by 61 percent, from 5,266 in fiscal year 1991 to 8,480 in fiscal year 1998 (see table 4, p. 8), the number of attorneys processing appeals actually declined, from 40 in fiscal year 1991 to 39 during fiscal years 1992 to 1998. Although EEOC officials recognized the need for additional staff to process hearings and appeals, they said that requested funds for the needed positions were not appropriated. <5.2.1. New Cases Surpass Closures Despite Productivity Gains> At EEOC, the hearings and appeals inventories rose because the average caseload for each administrative judge and attorney outpaced increases in their productivity. The number of hearing requests received each year per administrative judge rose, from 109 in fiscal year 1991 to 163 by fiscal year 1998. The hearings inventory grew larger because although the average number of cases processed and closed each year per administrative judge increased, this figure was, except for fiscal year 1993, always less than the average number of requests received. In fiscal year 1991, administrative judges processed and closed 95 hearing requests, a figure that increased to 135 by fiscal year 1998. The situation for appeals was similar. The number of appeals received each year per attorney increased, from 133 in fiscal year 1991 to 217 by fiscal year 1998. The appeals inventory grew because the average number of cases processed and closed each year per attorney was, except for fiscal year 1991, always less than the average number of appeals received. In fiscal year 1991, attorneys processed and closed an average of 133 cases, a figure that increased to 192 by fiscal year 1998. To deal with the imbalance between new cases and closures, EEOC s fiscal year 1999 budget provided for an increase in its administrative judge and appeals attorney corps. 
Under the fiscal year 1999 budget, the authorized number of administrative judges increased by 19, from 75 to 94, while the authorized number of appeals attorneys increased by 14, from 39 to 53. <5.2.2. Additional Inventory Growth Expected> Even with these added resources, the hearings and appeals inventories may continue to rise unless the flow of new cases is reduced. EEOC estimates that with the full complement of administrative judges on board in fiscal year 2000, it will be able to process and close 11,280 hearing requests, or 120 cases per judge, each year. This figure is 938 cases less than the 12,218 hearing requests EEOC received in fiscal year 1998. If, for example, the number of hearing requests received in fiscal year 2000 remained at fiscal year 1998 levels, EEOC s hearings inventory would increase by 938 cases during the year, while the average time EEOC takes to process a hearing request would grow by about 30 days. Over 5 years, with no change in the number of new cases received each year or resources to process them, EEOC s hearings inventory could increase by 4,690 cases, while adding 150 days to the average processing time. Similarly, when the full complement of appeals attorneys is on board by fiscal year 2000, EEOC estimates it will be able to process and close 7,685 appeals, or 145 cases per attorney, each year. This figure, however, is 795 cases less than the 8,480 appeals filed in fiscal year 1998. If, for example, the number of appeals filed in fiscal year 2000 remained at fiscal year 1998 levels, EEOC s appeals inventory would increase by 795 cases during that year, while the average processing time would increase by about 37 days. Over 5 years, with no change in the number of new cases filed each year or resources to process them, EEOC s appeals inventory could increase by 3,975, while adding about 186 days to the average processing time. While our analysis assumed no increase in the number of new cases, EEOC s fiscal year 2000 budget request projects that incoming hearing requests and appeals would rise at an annual rate of 3 percent, and exceed the number of cases it can close. As a result, according to the agency, hearings and appeals inventories and processing times will continue to climb, further affecting the agencies inventories and case processing times. To deal with this situation, EEOC s fiscal year 2000 budget proposal requests funding for 19 additional administrative judges to process hearing requests and 13 additional attorneys to process appeals. The agency projects that with these additional resources, the hearings and appeals inventories and processing times would initially decline in fiscal year 2000, only to begin rising again in fiscal year 2004. <5.2.3. Effects of Revisions to Complaint Process Not Known> Neither our analysis nor EEOC s projections and requested funding increase take into account, however, the possible effects of changes to program regulations and procedures intended to reduce the number of cases flowing into and through the complaint process. Since EEOC s workload is dependent on the number of cases in the pipeline at agencies, it is important to understand how the program changes are likely to affect caseloads at agencies. The requirement that agencies offer ADR processes to employees, including in the counseling phase before a formal complaint is filed, should resolve some workplace disputes without a complaint being filed and resolve other disputes in the early complaint stages. 
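The five-year projections described in section 5.2.2 above follow from simple shortfall arithmetic, reconstructed below from the figures cited in this report; the step that converts each year s shortfall into added processing time by dividing by the daily closure rate is our assumption about the underlying method rather than one EEOC has stated, and small rounding differences remain (for appeals, the report cites about 37 days per year and about 186 days over 5 years).
\begin{align*}
\text{Hearings: annual shortfall} &= 12{,}218 - 11{,}280 = 938 \text{ cases} \\
\text{Five-year inventory growth} &= 5 \times 938 = 4{,}690 \text{ cases} \\
\text{Daily closure rate} &\approx 11{,}280 / 365 \approx 30.9 \text{ cases per day} \\
\text{Added processing time} &\approx 938 / 30.9 \approx 30 \text{ days per year, or about } 150 \text{ days over 5 years} \\
\text{Appeals: annual shortfall} &= 8{,}480 - 7{,}685 = 795 \text{ cases} \\
\text{Five-year inventory growth} &= 5 \times 795 = 3{,}975 \text{ cases} \\
\text{Daily closure rate} &\approx 7{,}685 / 365 \approx 21.1 \text{ cases per day} \\
\text{Added processing time} &\approx 795 / 21.1 \approx 38 \text{ days per year}
\end{align*}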
Other changes allowing dismissal of spin-off complaints and other complaints in which an employee is believed to be abusing the process should halt the processing of these cases early in the process and possibly discourage the filing of such complaints. In addition, policies to prevent agencies from fragmenting cases should also reduce the number of new complaints. However, although EEOC designed its changes to program regulations and procedures to reduce the flow of new cases, it has not estimated the likely effect of these changes on the volume of complaints. EEOC officials explained that they had been deferring developing estimates until the regulations had been approved because of how the details of the final regulations could affect caseload estimates. They also said that although one goal of the regulations is to reduce caseloads, another goal is to improve the fairness of the process. The EEOC officials said that one measure to improve fairness is to remove agencies ability to reject or modify administrative judges decisions in arriving at final decisions. The officials said that complainants could view this change as giving the administrative judges more authority, and they speculated that more complainants might seek a hearing. Estimates of the expected changes in complaint levels are important because a decrease in new complaints would affect how quickly EEOC might be able to reduce its inventories, and thus how many, if any, additional staff would be needed and for how long. EEOC s Compliance and Control Division Director said that it would be appropriate to consider the effects of these changes when the agency prepares its fiscal year 2001 budget request. Because the changes could begin affecting complaint levels in fiscal year 2000 and because any new staff, if not hired on a temporary basis, could be with EEOC a long time, estimates of likely changes in complaint levels also could be important to congressional consideration of EEOC s future budget requests. EEOC also has not completed the development of the measures and indicators that it will use in the future to gauge the actual effect of the changes. In its fiscal year 1999 annual performance plan, EEOC said that it would develop measures and indicators for assessing the effectiveness of these revisions, which, according to the agency s fiscal year 2000 Annual Performance Plan, would be implemented in fiscal year 2000. <6. Conclusions> Rising inventory levels of unresolved EEO complaints and lengthy case processing times to resolve these workplace disputes remain stubborn problems for agencies and EEOC. The struggle of nonpostal agencies was especially evident in that their inventories rose by almost 8 percent in fiscal year 1998 despite a 6 percent decline in new complaints. Similarly, despite increases in its productivity, EEOC s appeals inventory increased by almost 10 percent in fiscal year 1998, even though the number of appeals filed remained almost unchanged. At the same time, EEOC s inventory of hearing requests rose by almost 20 percent, about twice the rate of increase in new hearing requests that the agency received. How long present conditions will continue, and whether they will improve or deteriorate further, depends on the ability of agencies and EEOC to process cases currently in the complaint pipeline as well as on the volume of new complaints entering the pipeline in the future. 
Future trends, and therefore agencies and EEOC s resource needs, are likely to be affected by the revisions to the complaint process. However, EEOC has not developed estimates of the extent to which revisions to complaint process regulations and procedures may affect the flow of cases into and through the process. Among the changes, the requirement that agencies offer ADR to complainants could reduce the number of new cases filed, or resolve disputes in the early stages. In addition, other changes to be implemented dealing with fragmenting of complaints, spin-off complaints, and abuse of process could reduce the number of new complaints or short-circuit them early in the process. EEOC s request for additional funding for attorneys and judges and the implementation of changes to program regulations and procedures in November 1999 lend urgency to gaining an understanding of the likely effects of the proposed changes on the complaint process and complaint inventories. In addition, until the measures and indicators promised in EEOC s fiscal year 1999 Annual Performance Plan are developed and implemented, the actual effect of the revisions on the EEOC complaint process will be difficult to track. Estimates of the effect of the changes combined with anticipated productivity levels could be used to further estimate the resources needed to reduce EEOC s inventory of hearing requests to levels that would allow the average case to be processed within the 180-day requirement in regulations. In addition, current regulations do not prescribe a processing time standard for appeals; such a standard could be used to develop estimates of the resources needed to reduce the average appeal processing time to an acceptable level of timeliness. In the case of both hearing and appeal processing, the estimates could be useful in determining how many, if any, additional staff are needed to reduce the backlogs and whether the staff should be a permanent or temporary addition to EEOC s workforce. Given the size of the backlogs, estimates for reducing them to acceptable levels over different time frames could allow EEOC and Congress to weigh the trade-offs between additional cost and the rapidity with which the inventory of cases is resolved. Measures and indicators to assess the actual effect of changes in program regulations should be adopted before the changes are implemented to ensure that consistent data are collected from the start and to ensure that systems are in place to generate valid and reliable data. <7. Recommendations> To provide Congress with a clear picture of future caseload trends and the resources that are needed to deal with current backlogs, as well as the volume of cases expected in the future, we recommend that the EEOC Chairwoman take steps to (1) develop estimates of the effects of the forthcoming changes in program regulations and procedures on agencies and EEOC s caseloads and (2) complete development of measures and indicators to track and assess the impact of these revisions on caseload trends. We also recommend that the Chairwoman use these data to develop estimates, under various time frames, of the resources needed to reduce EEOC s average hearings processing time to meet the 180-day requirement in regulations. We further recommend that the Chairwoman establish a policy of an acceptable level of timeliness for processing appeals and develop estimates, under various time frames, of the resources needed to reduce EEOC s average appeals processing time to meet this standard. <8. 
Agency Comments and Our Evaluation> We received comments on a draft of this report from EEOC and the Postal Service. The EEOC Chairwoman said in her written comments (see app. V) that she shared our concerns that complaint inventories are too high and that federal employees wait far too long for their complaints to be processed by their agencies and EEOC. She said that analyses of the kind in our 1998 report on rising EEO complaint caseloads in the federal sector had persuaded her that bold steps were necessary to bring about improvements. She said that, in addition to the changes in regulations, EEOC is implementing a comprehensive, strategic approach to link the hearings and appeals programs with strong oversight, technical assistance, and educational initiatives. These efforts are to include on-site reviews, which EEOC believes are one of the most important vehicles with which to focus on and correct root causes of persistent problems. Also, the Chairwoman said that with additional resources, EEOC would increase its efforts on conflict prevention and early intervention, since these are the most cost-effective ways to reduce inventories. Further, the Chairwoman pointed out that EEOC, with the National Partnership for Reinventing Government (NPR), is cosponsoring the Interagency Federal EEO Task Force that will look into ways to enhance the fairness, efficiency, and effectiveness of the federal employee EEO complaint process. EEOC also responded to the first three of our four recommendations that it (1) develop estimates of the effects of changes in regulations on caseloads, (2) complete development of measures and indicators to track and assess the impact of these revisions, and (3) develop estimates of the resources needed under various time frames to reduce hearings and appeals processing times. EEOC said that right now it would be premature and highly speculative for the agency to venture guesses on what the actual experiences under the revised regulations might be. In addition, EEOC said that it was not possible to develop measures and indicators for assessing the effectiveness of the revisions to the federal sector EEO complaint process before the draft regulations were approved. However, with the publication of the final rules in the Federal Register on July 12, 1999, EEOC said that it expects to complete development of the measures and indicators by the end of fiscal year 1999. The Chairwoman added, however, that other complex issues must be resolved, including how baseline data will be collected and what data collection method will be used. Consequently, she said that the first year for which data will be collected on experiences under the revised regulations will be fiscal year 2001. She said that when these data are available at the end of calendar year 2001, it would be possible to estimate resource requirements under various time frames. The Chairwoman further said that these data would be used to prepare EEOC s fiscal year 2004 budget request, which would be submitted to the Office of Management and Budget in September 2002 and to Congress in early 2003. We continue to believe that in order for Congress to carry out its oversight and appropriation responsibilities and make informed budget decisions, it needs timely estimates from EEOC of how changes in the complaint process may affect caseloads and resource requirements. 
Further, we believe congressional decisionmaking would benefit from EEOC s best estimate of the resources needed under various time frames to reduce hearings and appeals processing times to acceptable levels. With such estimates, Congress could consider options to deal with this serious situation. We recognize that early estimates may be inexact. However, without any estimate of the effect the new regulations may have on caseloads and without information on how quickly, if at all, additional staff might be able to reduce the current case backlogs, Congress has no basis to judge whether requested resources to increase staffing are reasonable. Although initial estimates of necessity involve considerable judgment, we believe it would be better to offer estimates than to provide no perspective on the regulations anticipated effect. Estimation is an iterative process, and EEOC can improve the precision of its estimates as more and better data become available. The Chairwoman said that EEOC will explore alternative means for obtaining feedback on the kinds of changes that may flow from the revised regulations. In addition to EEOC examining its own caseloads, such alternatives, we believe, could include obtaining data during on-site visits, through the NPR/EEOC Interagency Federal EEO Task Force, or through informal surveys of agencies. As EEOC and agencies scrutinize inventories to see how the new provisions apply to existing cases, such data-gathering initiatives could yield increasingly reliable and timely information on the effects of the new provisions. In response to our fourth recommendation that an acceptable level of timeliness be established for the processing of appeals, the Chairwoman said that 180 days is an appropriate goal. She did not say how this goal might be operationalized. We believe that such a goal would carry more significance and accountability if it were articulated in writing as a policy, such as by inclusion in EEOC s annual performance plan. In oral comments on a draft of this report made on July 7, 1999, the Postal Service Manager, EEO Compliance and Appeals, concurred with our observations. He added that the Postal Service will be in compliance with the new EEOC regulation requiring that ADR be available to complainants because of its REDRESS (Resolve Employment Disputes, Reach Equitable Solutions Swiftly) program. In a separate discussion, the Postal Service s National REDRESS Program Manager said that the program, which uses outside mediators in the precomplaint stage, was fully implemented as of July 1999. She provided statistics showing that during the first 10 months of fiscal year 1999 (a period during which the program was still being rolled out), there were about 17 percent fewer formal EEO complaints, compared with the same period in fiscal year 1998 (7,050 versus 8,522). She and the EEO Compliance and Appeals Manager said this decline was in large measure due to the REDRESS program. The EEO Compliance and Appeals Manager also said that the Postal Service was expanding ADR to complaints awaiting a hearing before an EEOC administrative judge. He said that pilot programs have shown promise in reducing the inventory of complaints at this stage, with about one-third of the cases reviewed found to be candidates for settlement and another one-third found to be candidates for mediation. The remaining one-third, he said, will probably go to hearing. 
The official said that agencies have a responsibility to address these cases and can play an important role in reducing not only their own caseloads, but EEOC s as well. The implications of the Postal Service s experience with ADR, if the reported results are sustained, are significant for several reasons. First, they show that an agencywide ADR program to resolve disputes at an early stage can reduce the number of formal complaints. Second, because postal workers account for about half of the EEO complaints filed by federal employees, a substantial reduction in the number of formal complaints by postal workers could mean a reduction in the number of cases entering EEOC s hearings and appeals pipeline. Third, the Postal Service s limited experience, under its pilot programs, of applying ADR to cases awaiting a hearing shows that some portion of this inventory can be resolved without using EEOC hearing resources. Although the Postal Service has not had broad experience with applying ADR to cases awaiting a hearing, the experiences of the Merit Systems Protection Board (MSPB) may be instructive to agencies and EEOC in establishing dispute resolution strategies and allocating resources. MSPB has had a long-established policy of trying to settle cases it does not dismiss on jurisdictional or timeliness grounds. Over the past 10 years, MSPB has avoided hearings by settling about half of employee appeals of personnel actions. We are sending copies of this report to Senators Daniel K. Akaka, Thad Cochran, Joseph I. Lieberman, and Fred Thompson; and Representatives Robert E. Andrews, John A. Boehner, Dan Burton, William L. Clay, Chaka Fattah, William F. Goodling, Steny H. Hoyer, Jim Kolbe, John M. McHugh, David Obey, Harold Rogers, Joe Scarborough, Jose E. Serrano, Henry A. Waxman, and C. W. Bill Young in their capacities as Chair or Ranking Minority Members of Senate and House Committees and Subcommittees. We will also send copies to the Honorable Ida L. Castro, Chairwoman, EEOC; the Honorable William J. Henderson, Postmaster General; the Honorable Janice R. Lachance, Director, Office of Personnel Management; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will make copies of this report available to others on request. If you or your staff have any questions concerning this report, please contact me or Assistant Director Stephen Altman on (202) 512-8676. Other major contributors to this report were Anthony P. Lofaro, Gary V. Lawson, and Sharon T. Hogan. Scope and Methodology As with our previous report about complaint caseloads, we developed information on complaints falling within the jurisdiction of the Equal Employment Opportunity Commission (EEOC), and not the Merit Systems Protection Board (MSPB), because (1) the vast majority of discrimination complaints fall within EEOC's jurisdiction and (2) concerns about case inventories and processing times raised in hearings before the House Subcommittee on Civil Service focused on complaints within EEOC's jurisdiction. We updated (1) trends in the size of inventories and the age of cases in inventory at the various stages of the equal employment opportunity (EEO) complaint process and (2) trends in the number of complaints filed by federal employees and the time taken by agencies and EEOC to process them to include fiscal years 1991 through 1998. Agencies' complaint data for fiscal year 1998, which EEOC provided and which we used in our analysis, were preliminary. 
We selected 1991 as a base year because it preceded intensive government downsizing, the implementation of new laws expanding civil rights protections and remedies, and the implementation of new regulations governing the federal employee EEO complaint process. Because postal workers accounted for about half the complaints filed since fiscal year 1995, we separately analyzed data reported by the Postal Service in order to compare statistics for the postal workforce with the nonpostal workforce. To update and analyze information about (1) the trends in the size and age of complaint inventories and (2) the number of complaints filed by federal employees and the amount of time taken by federal agencies and EEOC to process them, we obtained data reported (1) to EEOC by the Postal Service and other agencies and (2) by EEOC in its annual Federal Sector Report on EEO Complaints Processing and Appeals. We did not verify the data in EEOC's reports or data provided by the Postal Service. To make observations about the implications of the trends, we drew upon our analysis of the trend data, our past work, and discussions with EEOC officials. In addition, we reviewed EEOC s budget request for fiscal year 2000 and its annual performance plans for fiscal years 1999 and 2000. We also reviewed changes to the regulations governing the federal employee complaint process (29 C.F.R. part 1614) that are to be implemented beginning in November 1999. We have previously noted limitations to the data presented in our reports because of concerns about the quality of data available for analysis. Although we have no reason to question EEOC s statistics about its own hearings and appeals activities, we had identified errors and inconsistencies in the data on agencies inventory levels and on the age of cases in inventory. Because EEOC had not verified the data it received from agencies, it is possible that other data problems may have existed. EEOC corrected the errors we identified and, in response to a recommendation we made, said that it would take action to address our concerns about data consistency, completeness, and accuracy. Before providing the fiscal year 1998 agency data to us, EEOC reviewed agencies hard-copy submissions of complaint statistics and compared these data to statistics the agencies provided in an automated format. EEOC also tested the accuracy of its computer program to aggregate the data submitted by agencies. In response to our recommendation in an earlier report, before it publishes the complaint statistics in the fiscal year 1998 Federal Sector Report on EEO Complaints Processing and Appeals, EEOC said it would visit selected agencies to assess the reliability of the reported data. On balance, total caseload data currently available, while needing further quality assurance checks, present useful information on the volume of complaints actually being processed in the federal EEO complaint system. We performed our work in Washington, D.C., from March through May 1999 in accordance with generally accepted government auditing standards. Processing Federal Employee EEO Complaints Agencies and EEOC process federal employees EEO complaints under regulations promulgated by EEOC, which also establish processing time standards. Employees unable to resolve their concerns through counseling can file a complaint with their agency, which either dismisses or accepts it (the first stage) and, if the complaint is accepted, conducts an investigation (the second stage). 
Agencies are to decide whether to accept a complaint, investigate it, and report investigation results within 180 days from the complaint s filing. After receiving the investigation results, an employee who pursues a complaint has two choices: (1) request a hearing before an EEOC administrative judge (the third stage) who issues a recommended decision, which the agency can accept, reject, or modify in making its final decision or (2) forgo a hearing and ask for a final agency decision (the fourth stage). An employee has 30 days to make this decision. When a hearing is requested, the administrative judge is to issue a recommended decision within 180 days of the request. An agency is to issue its final decision within 60 days of receiving an administrative judge s recommendation or a request for a final decision. Up to this point, EEOC standards have allowed complaint processing to take up to 270 days without a hearing, 450 days with one. An employee dissatisfied with a final agency decision or its decision to dismiss a complaint may appeal to EEOC, which is to conduct a de novo review (the fifth stage). The employee has 30 days to file an appeal, but regulations do not establish time standards for EEOC s review. The final (sixth) stage within the administrative process is that the complainant or agency may request EEOC to reconsider its decision from the appeal within 30 days of receiving the decision. However, regulations do not establish time standards for the EEOC s reconsideration. EEOC will be implementing revisions to the regulations, including changes to hearing and appeal procedures, beginning in November 1999. Under the new rules, administrative judges will continue to issue decisions on complaints referred to them for hearings. However, agencies will no longer be able to modify these decisions. Instead, as its final action (as final decisions will be called), an agency will issue a final order indicating whether or not it will fully implement the administrative judge s decision. If the agency does not fully implement the decision, it will be required to file an appeal of the decision with EEOC. Employees will retain the right to appeal an agency s final action to EEOC. In addition, the decision on an appeal from an agency s final action will be based on a de novo review, except that the review of the factual findings in a decision by an administrative judge will be based on a substantial evidence standard of review. Selected Complaint Data on U.S. Postal Service Table III.1: Total and Postal Service Inventories of Complaints, Hearing Requests, and Appeals and Postal Service as a Percentage of the Totals for Fiscal Years 1991-1998 Not available. <9. Table III.2: Average Age (Days) of Complaints in the Postal Service's Inventory Since Complaint Filed, by Stage of the Complaint Process for Fiscal Years 1991-1998> No cases reported. When the agency notified the complainant in writing of its proposed disposition of the complaint and of the right to a final decision with or without an EEOC hearing. Discontinued as a reporting category. <10. Table III.3: Percentage of Postal Workers' Complaints Pending Dismissal/Acceptance and Investigation More Than 180 Days for Fiscal Years 1991-1998> No cases reported. <11. 
Table III.5: Total and Postal Workers Complaints, Hearing Requests, and Appeals and Postal Workers as a Percentage of Total Complaints, Hearing Requests, and Appeals for Fiscal Years 1991-1998> Selected Federal Sector EEO Complaint Data for Fiscal Years 1991 to 1998 The following figures show the trends in (1) inventories of unresolved equal employment opportunity (EEO) complaints at federal agencies and the Equal Employment Opportunity Commission (EEOC); (2) the age of cases in the inventories; (3) the number of complaints, hearing requests, and appeals filed; and (4) processing times for complaints, hearings, and appeals. Figure IV.3: Average Age of the Complaint Inventory at Agencies FYs 1991 - 1998 Not reported. <12. Figure IV.6: Proportion of EEOC s Hearings Inventory Older Than 180 Days Has Risen> Separate data not reported for closures with and without hearings. Comments From the U.S. Equal Employment Opportunity Commission
Why GAO Did This Study Pursuant to a congressional request, GAO provided information on the Equal Employment Opportunity Commission's (EEOC) complaint caseload, focusing on: (1) trends in the size of inventories and the age of cases in inventory at various stages of the EEO complaint process; (2) trends in the number of complaints filed by federal employees and the time taken by agencies and EEOC to process them; and (3) implications of these trends and how future caseloads may be affected by EEOC's regulatory changes to the complaint process. What GAO Found GAO noted that: (1) inventories of unresolved federal sector discrimination cases at agencies and EEOC have continued to grow; (2) overall, from fiscal year (FY) 1991 to FY 1998, complaint inventories at federal agencies rose by about 114 percent, to 36,333; (3) at EEOC, during the same period, the hearings inventory rose by 280 percent, to 11,967, while the appeals inventory went up by 648 percent, to 10,966; (4) as inventories grew, the average age of cases in agencies' inventories (446 days) and EEOC's hearings (320 days) and appeals (293 days) inventories also reached new levels; (5) the size of the inventories and the age of cases in them continued their upward trend during FY 1998 as neither the agencies nor EEOC kept up with the influx of new cases; (6) agencies' inventories grew by 6 percent in FY 1998 despite a 2.8 percent decline in the number of new complaints; (7) the growth in EEOC's inventory of hearing requests during this period--19.5 percent--was greater than the increase in the number of new hearing requests, which rose by about 9.2 percent; (8) at the same time, EEOC's appeals inventory increased by 9.9 percent, even though the number of new appeals filed remained almost unchanged; (9) the average time to process a complaint at agencies showed a small decline in FY 1998, from 391 to 384 days, but there were sharp increases in the average time EEOC took to process hearing requests (rising from 277 to 320 days) and appeals (rising from 375 to 473 days); (10) a case travelling the entire complaint process could be expected to take 1,186 days to process, based on FY 1998 data; (11) this was 91 days longer than in FY 1997; (12) the logjams at EEOC and agencies are likely to persist, at least in the short run, as long as agencies and EEOC receive more new cases than they process and close; (13) the long-term outlook, however, is unclear; (14) substantive revisions to complaint program regulations and procedures are to be implemented beginning in November 1999; (15) these revisions are intended to reduce the volume of cases flowing through the complaint process; (16) the revisions include a requirement that agencies offer alternative dispute resolution, as well as other rules to reduce the opportunities for multiple complaints by the same complainant; and (17) however, EEOC has not yet developed estimates of how the revisions to program regulations will affect caseload trends and resource needs, nor has the agency completed development of measures and indicators to track the effects of these revisions once they are implemented.
<1. Los Alamos National Laboratory, New Mexico. In January 2001, DOE found that> the University of California had inadequate work controls at one of its laboratory facilities, resulting in eight workers being exposed to airborne plutonium and five of those workers receiving detectable intakes of plutonium. This was identified as one of the 10 worst radiological intake events in the United States in over 40 years. DOE assessed, but cannot collect, a penalty of $605,000 for these violations. <2. Argonne National Laboratory West, Idaho. In February 2001, DOE found that the> University of Chicago had violated the radiation protection and quality assurance rules, leading to worker contamination and violations of controls intended to prevent an uncontrolled nuclear reaction from occurring. DOE assessed, but cannot collect, a penalty of $110,000 for these violations. DOE has cited two other reasons for continuing the exemption, but as we indicated in our 1999 report, we did not think either reason was valid: DOE said that contract provisions are a better mechanism than civil penalties for holding nonprofit contractors accountable for safe nuclear practices. We certainly agree that contract mechanisms are an important tool for holding contractors accountable, whether they earn a profit or not. However, since 1990 we have described DOE s contracting practices as being at high risk for fraud, waste, abuse, and mismanagement. Similarly, in November 2000, the Department s Inspector General identified contract administration as one of the most significant management challenges facing the Department. We have noted that, recently, DOE has been more aggressive in reducing contractor fees for poor performance in a number of areas. However, having a separate nuclear safety enforcement program provides DOE with an additional tool to use when needed to ensure that safe nuclear practices are followed. Eliminating the exemption enjoyed by the nonprofit contractors would strengthen this tool. DOE said that its current approach of exempting nonprofit educational institutions is consistent with the Nuclear Regulatory Commission s (NRC) treatment of nonprofit organizations because DOE issues notices of violation to nonprofit contractors without collecting penalties but can apply financial incentives or disincentives through the contract. However, NRC can and does impose monetary penalties for violations of safety requirements, without regard to the profit-making status of the organization. NRC sets lower penalty amounts for nonprofit organizations than for for-profit organizations. The Secretary could do the same, but does not currently take this approach. Furthermore, both NRC and other regulatory agencies have assessed and collected penalties or additional administrative costs from some of the same organizations that DOE exempts from payment. For example, the University of California has made payments to states for violating environmental laws in California and New Mexico because of activities at Lawrence Livermore and Los Alamos National Laboratories. The enforcement program appears to be a useful and important tool for ensuring safe nuclear practices. Our 1999 review of the enforcement program found that, although it needed to be strengthened, the program complemented other contract mechanisms DOE had to help ensure safe nuclear practices. 
Advantages of the program include its relatively objective and independent review process, a follow-up mechanism to ensure that contractors take corrective action, and the practice of making information readily available to the contractor community and the public.

Modifications to H.R. 723 Could Help Clarify and Strengthen the Penalty Provisions

H.R. 723 eliminates both the exemption from paying the penalties provided by statute and the exemption allowed at the Secretary's discretion. While the bill addresses the main problems we discussed in our 1999 report, we have several observations about clarifications needed to the proposed bill. The "discretionary fee" referred to in the bill is unclear. H.R. 723, while eliminating the exemption, limits the amount of civil penalties that can be imposed on nonprofit contractors. This limit is the amount of "discretionary fees" paid to the contractor under the contract under which the violation occurs. The meaning of the term "discretionary fee" is unclear and might be interpreted to mean all or only a portion of the fee paid. In general, the total fee (that is, the amount that exceeds the contractor's reimbursable costs under DOE's management and operating contracts) consists of a base fee amount and an incentive fee amount. The base fee is set in the contract. The amount of the available incentive fee paid to the contractor is determined by the contracting officer on the basis of the contractor's performance. Since the base fee is a set amount, and the incentive fee is determined at the contracting officer's discretion, the term "discretionary fee" may be interpreted to refer only to the incentive fee and to exclude the base fee amount. However, an alternate interpretation also is possible. Certain DOE contracts contain a provision known as the Conditional Payment of Fee, Profit, Or Incentives clause. Under this contract provision, on the basis of the contractor's performance, a contractor's entire fee, including the base fee, may be reduced at the discretion of the contracting officer. Thus, in contracts that contain this clause, the term "discretionary fee" might be read to include a base fee. If the Congress intends to have the entire fee earned be subject to penalties, we suggest that the bill language be revised to replace the term "discretionary fee" with "total amount of fees." If, on the other hand, the Congress wants to limit the amount of fee that would be subject to penalties to the performance or incentive amount, and exclude the base fee amount, we suggest that the bill be revised to replace the term "discretionary fee" with "performance or incentive fee." Limiting the amount of any payment for penalties made by tax-exempt contractors to the amount of the incentive fee could have unintended effects. Several potential consequences could arise by focusing only on the contractor's incentive fee. Specifically: Contractors would be affected in an inconsistent way. Two of the nonprofit contractors (University Research Associates at the Fermi National Accelerator Laboratory and Princeton University) do not receive an incentive fee (they do receive a base fee). Therefore, depending on the interpretation of the term "discretionary fee" as discussed above, limiting payment to the amount of the incentive fee could exempt these two contractors from paying any penalty for violating nuclear safety requirements. Enforcement of nuclear safety violations would differ from enforcement of security violations. 
The National Defense Authorization Act for Fiscal Year 2000 established a system of civil monetary penalties for violations of DOE regulations regarding the safeguarding and security of restricted data. The legislation contained no exemption for nonprofit contractors but limited the amount of any payment for penalties made by certain nonprofit contractors to the total fees paid to the contractor in that fiscal year. In contrast, these same contractors could have only a portion of their fee (the "discretionary fee") at risk for violations of nuclear safety requirements. It is not clear why limitations on the enforcement of nuclear safety requirements should be different from existing limitations on the enforcement of security requirements. Disincentives could be created if the Congress decides to limit the penalty payment to the amount of the incentive fee. We are concerned that contractors might try to shift more of their fee to a base or fixed fee and away from an incentive fee, in order to minimize their exposure to any financial liability. Such an action would have the effect of undermining the purpose of the penalty and DOE's overall emphasis on performance-based contracting. In fact, recent negotiations between DOE and the University of California to extend the laboratory contracts illustrate this issue. According to the DOE contracting officer, of the total fee available to the University of California, more of the fee was shifted from incentive fee to base fee during recent negotiations because of the increased liability expected from the civil penalties associated with security violations. Even if a nonprofit contractor's entire fee were subject to the civil penalty, the Secretary has discretion that should ensure that no nonprofit contractor's assets are at risk because of having to pay the civil penalty. This is because the Secretary has considerable latitude to adjust the amount of any civil penalty to meet the circumstances of any specific situation. The Secretary can consider factors such as the contractor's ability to pay and the effect of the penalty on the contractor's ability to do business. Preferential treatment would be expanded to all tax-exempt contractors. Under the existing law, in addition to the seven contractors exempted by name in the statute, the Secretary was given the authority to exempt nonprofit educational institutions. H.R. 723 takes a somewhat different approach by exempting all tax-exempt nonprofit contractors whether or not they are educational institutions. This provision would actually reduce the liability faced by some contractors. For example, Brookhaven Science Associates, the contractor at Brookhaven National Laboratory, is currently subject to paying civil penalties for nuclear safety violations regardless of any fee paid because, although it is a nonprofit organization, it is not an educational institution. Under the provisions of H.R. 723, however, Brookhaven Science Associates would be able to limit its payments for civil penalties. This change would result in a more consistent application of civil penalties among nonprofit contractors. Some contractors might not be subject to the penalty provisions until many years in the future. As currently written, H.R. 723 would not apply to any violation occurring under a contract entered into before the date of the enactment of the act. Thus, contractors would have to enter into a new contract with DOE before this provision takes effect. For some contractors, that could be a considerable period of time. 
The University of California, for example, recently negotiated a 4-year extension of its contract with DOE. It is possible, therefore, that if H.R. 723 is enacted in 2001, the University of California might not have to pay a civil penalty for any violation of nuclear safety occurring through 2005. In contrast, when the Congress set up the civil penalties in 1988, it did not require that new contracts be entered into before contractors were subject to the penalty provisions. Instead, the penalty provisions applied to the existing contracts. In reviewing the fairness of this issue as it prepared its implementing regulations, DOE stated in the Federal Register in 1991 that a contractor's obligation to comply with nuclear safety requirements and its liability for penalties for violations of the requirements are independent of any contractual arrangements and cannot be modified or eliminated by the operation of a contract. Thus, DOE considered it appropriate to apply the penalties to the contracts existing at the time.
Why GAO Did This Study This testimony discusses GAO's views on H.R. 723, a bill that would modify the Atomic Energy Act of 1954 by changing how the Department of Energy (DOE) treats nonprofit contractors who violate DOE's nuclear safety requirements. Currently, nonprofit contractors are exempted from paying civil penalties that DOE assesses under the act. H.R. 723 would remove that exemption. GAO supports eliminating the exemption because the primary reason for instituting it no longer exists. The exemption was enacted in 1988 at the same time the civil monetary penalty was established. The purpose of the exemption was to ensure that the nonprofit contractors operating DOE laboratories, which were being reimbursed only for their costs, would not have their assets at risk for violating nuclear safety requirements. However, virtually all of DOE's nonprofit contractors have an opportunity to earn a fee in addition to payments for allowable costs. This fee could be used to pay the civil monetary penalties. What GAO Found GAO found that DOE's nuclear safety enforcement program appears to be a useful and important tool for ensuring safe nuclear practices.
<1. Background> In 2010, federal agencies reported about 3.35 billion square feet of building space to the FRPP: 79 percent of the reported building space was federally owned, 17 percent was leased, and 4 percent was otherwise managed. The data indicated that the agencies used most of the space (about 64 percent) as offices, warehouses, housing, hospitals, and laboratories. The five agencies we reviewed (GSA, DOE, Interior, VA, and USDA) reported owning or leasing more than 866 million square feet of building space, or about 25 percent of the total reported square footage for all agencies. Initially, FRPC defined 23 FRPP data elements to describe the federal government's real property inventory. By 2008, FRPC had expanded the number of data elements included in the FRPP to 25. FRPC requires agencies to update their FRPP real property data annually. Each asset included in the database is assigned a unique identification number that allows for tracking of the asset to the unique data that describe it. See appendix II for a list of the 25 FRPP data elements as defined in 2010. Four FRPP data elements serve as performance measures: utilization, condition index, annual operating costs, and mission dependency. The definitions of these four data elements in 2010 can be found in table 1. FRPC's 2010 Guidance for Real Property Inventory Reporting provides specific guidelines on how to report a building as overutilized, underutilized, utilized, or not utilized based on the building's use and the percentage of the building that is used (see table 2). FRPC has been collecting FRPP data on federal government properties since 2005. We have reported that results-oriented organizations follow a number of sound data collection practices when gathering the information necessary to achieve their goals. For example, these organizations recognize that they must balance their ideal performance measurement systems against real-world considerations, such as the cost and effort involved in gathering and analyzing data. These organizations also tie performance measures to specific goals and demonstrate the degree to which the desired results are achieved. Conversely, we have observed that organizations that seek to manage an excessive number of performance measures may risk creating a confusing excess of data that will obscure rather than clarify performance issues. Limiting the number of measures to the vital few not only keeps the focus of data collection where it belongs, it helps ensure that the costs involved in collecting and analyzing the data do not become prohibitive. Furthermore, results-oriented organizations report on the performance data they collect. Following the implementation of the executive order and nationwide data collection efforts, we have reported that agencies continue to face challenges with managing excess and underutilized properties. For example, we have previously reported that the legal requirements agencies must adhere to, such as requirements for screening and environmental cleanup as well as requirements related to historical properties, present a challenge to consolidating federal properties. In addition, before GSA can dispose of a property that an agency no longer needs, it must offer the property to other federal agencies. 
If other federal agencies do not need the property, GSA must then make the property available to state and local governments as well as certain nonprofit organizations and institutions for public benefit uses such as homeless shelters, educational facilities, or fire and police training centers. According to agency officials, as a result of this lengthy process, excess or underutilized properties may remain in an agency s possession for years. Furthermore, the costs of disposing of property can further hamper an agency s efforts to address its excess and underutilized property problems. For example, properties that contain radiological contamination must be mitigated before they can be disposed. In addition, the interests of multiple and often competing stakeholders may not align with the most efficient use of government resources and complicate real property decisions. Despite these challenges, both the previous and current administrations have implemented a number of cost savings initiatives associated with excess and underutilized property. In August 2005, the administration set a goal to reduce the size of the federal inventory by $15 billion by 2009. In June 2010, the President directed federal civilian agencies to achieve $3 billion in savings by the end of fiscal year 2012 through reducing annual operating costs, generating income through disposing of assets, using existing real property more effectively by consolidating existing space, expanding telework, and other space realignment efforts. Furthermore, on May 4, 2011, the administration proposed legislation referred to as the Civilian Property Realignment Act (CPRA) to establish a legislative framework for disposing of and consolidating real property, among other things. In September 2011, OMB projected that the proposal would save the government $4.1 billion over 10 years from sales proceeds, and that savings would also be achieved through decreased operating costs and efficiencies. However, the Congressional Budget Office (CBO) has concluded that CPRA would probably not result in a significant increase in proceeds from the sale of federal properties over the next 10 years. <2. Excess and Underutilized Property Data Are Inconsistent and Inaccurate because of Lack of Sound Data Collection Practices> FRPC has not followed sound data collection practices, and, as a result, FRPP data do not describe excess and underutilized properties consistently and accurately. Consistent with this, FRPP data did not always accurately describe the properties at the majority of sites we visited and often overstated the condition and annual operating costs, among other things. <2.1. FRPP Data Do Not Describe Excess and Underutilized Federal Real Properties Consistently and Accurately> Agency officials described ways in which key performance measures in the FRPP database are reported inconsistently or inaccurately. At 23 of the 26 sites that we visited, we found inconsistencies or inaccuracies related to the following performance measures described in the background: (1) utilization, (2) condition index, (3) annual operating costs, and (4) mission dependency. As a result of the discussions we had with agency officials about how FRPP data are reported, as well as the inconsistencies and inaccuracies described in the following sections, we question whether FRPP data provide an adequate tool for decision making or measuring performance, such as the cost savings initiatives put forth by OMB. 
We found that the agencies we reviewed do not report property utilization consistently. FRPC guidance states that for offices, hospitals, and warehouses, utilization is the ratio of occupancy to current design capacity. Although USDA requires its agencies to follow FRPC guidance, USDA stated that FRPC has not established governmentwide definitions for occupancy or current design capacity. As a result, each agency within USDA has its own internal procedures for determining a building's utilization level. Moreover, VA defines utilization differently from FRPC guidance, that is, as the ratio of ideal space to existing space, which VA stated is different from occupancy. Despite the inconsistency of this method of defining utilization with FRPC guidance, VA officials reported that OMB staff approved of their method of reporting utilization. Furthermore, OMB acknowledged that it is standard practice for agencies to measure utilization tailored to the agencies' specific needs and circumstances. Among the 26 federal sites we visited, we found utilization data inconsistencies or inaccuracies for properties at 19 of these sites. For example, at one VA site, a building we toured was reported to have a utilization of 39 percent in 2010 FRPP data and 45 percent utilization in 2011 source data, even though local officials said this building has been fully occupied since 2008. See figure 1. Another building that we toured at the same site was reported to be 0 percent utilized in 2010 FRPP data and 59 percent utilized in 2011 agency source data. However, all but one of the rooms in the building were vacant, and local officials said only 10 percent of the building was utilized. In addition, at one USDA site we visited, we found two houses that have been empty since 2009; however, they were both reported to the FRPP as utilized for 2009 and 2010. See figure 2 to view images of these two USDA buildings. We also found problems with the utilization data at properties owned by the other three agencies included in our review. As was the case with utilization, we found that agencies do not report the condition of their properties consistently. According to FRPC guidance, condition index is a general measure of the constructed asset's condition and is calculated by using the ratio of repair needs to the plant replacement value (PRV). Needed repairs are determined by the amount of repairs necessary to ensure that a constructed asset is restored to a condition substantially equivalent to the originally intended and designed capacity, efficiency, or capability. However, we found that agencies do not always follow this guidance. For example, when agencies have determined that a property is not needed and will ultimately be disposed of, they may assign no repair needs to that property even though the property may be in a state of significant disrepair. Doing so allows agencies to use their limited funds to maintain properties that they regularly use, but it can lead to condition index data that do not accurately reflect each property's condition as set forth in FRPC guidance. Figure 3 is an example of how the condition index of a building with high repair needs can significantly change depending on whether agency officials choose to follow FRPC guidance or assign zero dollars in repair needs because repairs are not planned. 
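To make the two measures discussed above concrete, the sketch below computes a utilization rate and a condition index for a hypothetical building. The utilization ratio follows the FRPC definition quoted above (occupancy over current design capacity). The condition index formula used here, (1 - repair needs / PRV) x 100, is an assumption consistent with the guidance's description of a repair-needs-to-PRV ratio reported as a percent condition on a zero-to-100 scale (see appendix II); it is not a quotation of the official calculation, and all dollar figures are hypothetical.

def utilization_rate(occupancy, design_capacity):
    """Utilization as a percentage of current design capacity (the FRPC ratio for offices and hospitals)."""
    if design_capacity <= 0:
        raise ValueError("design capacity must be positive")
    return 100.0 * occupancy / design_capacity

def condition_index(repair_needs, plant_replacement_value):
    """Assumed 0-100 'percent condition' score; higher means better condition."""
    if plant_replacement_value <= 0:
        raise ValueError("PRV must be positive")
    return max(0.0, (1.0 - repair_needs / plant_replacement_value) * 100.0)

# A hypothetical office building designed for 200 workstations with 90 occupied:
print(utilization_rate(occupancy=90, design_capacity=200))   # 45.0 percent utilized

# The same hypothetical building with a $2 million PRV and $1.5 million in actual repair needs:
print(condition_index(1_500_000, 2_000_000))                 # 25.0 -- reflects poor condition

# If the agency records zero repair needs because it never intends to repair the
# building, the same asset reports a perfect score, as the report describes:
print(condition_index(0, 2_000_000))                         # 100.0 -- appears excellent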
While it may be a good practice not to assign repair needs to dilapidated buildings that no longer support agencies in carrying out their mission, the fact that these buildings may report a perfect or near-perfect condition index provides decision makers with an inconsistent representation of the condition of buildings at a given site. We found examples at all five agencies we visited where a property in very poor condition received a higher condition index score than a property in good condition. Figure 4 demonstrates examples of this at an Interior site we visited. We found condition index reporting inconsistencies and inaccuracies at 21 of 26 sites visited. The practice of assigning no repair needs to many excess and underutilized buildings because agencies have no intention of repairing them led to severely blighted buildings receiving excellent condition scores. We visited buildings that received high condition index scores even though they are in poor condition. Some of the problems with these buildings include asbestos, mold, collapsed walls or roofs, health concerns, radioactivity, deterioration, and flooding. <3. Although Some Progress Has Been Made, Federal Property Management Is Still Challenging and Efforts Lack a National Strategy> The federal government has taken some steps to address excess and underutilized property management problems by developing the FRPP database, among other things. However, cost savings efforts associated with excess and underutilized property over the years were discontinued and recent efforts may overstate potential savings. Although the federal agencies we reviewed have taken some actions to try to address excess and underutilized properties, long-standing challenges remain. As a result, a national strategy could help the federal government prioritize future management efforts. <3.1. Limited Progress Has Been Made in Managing Excess and Underutilized Federal Property, but the Extent of Cost Savings Is Not Clear> The federal government has made some progress in managing real property since we first added this issue to our high-risk series. In a 2007 review of federal real property, we found that the administration at that time made progress toward managing federal real property and addressing some long-standing problems. The 2004 executive order established FRPC to develop property management guidance and act as a clearinghouse for property management best practices. FRPC created the FRPP database and began data collection in December 2005. As part of a 2011 update to our high-risk series, we reported that the federal government has also taken steps to improve real property management, most notably by implementing some GSA data controls and requiring agencies to develop data validation plans. Prior to designating property management as high risk, reliable tools for tracking property were generally unavailable. Consequently, we determined that the development of a database and the implementation of additional data quality controls were steps in the right direction. However, on the basis of our current work, it appears that data controls have not brought about widespread improvements in data consistency and accuracy as was anticipated. Nonetheless, we found that the FRPP can be used in a general sense to track assets. For example, during our site visits, agency officials were able to match assets with the real property unique identification numbers assigned to them in the FRPP database and were able to locate even small, remote buildings using these numbers. 
In addition to establishing FRPC, developing the FRPP, and implementing the executive order, the previous and current administrations have sought ways to generate cost savings associated with improving management of excess and underutilized properties. However, these efforts have not led to proven cost savings associated with the management of these properties. Cost savings goals set by the previous administration were discontinued. In 2007, we reported that adding real property management to the President s Management Agenda in 2004 increased its visibility as a key management challenge and focused greater attention on real property issues across the government. As part of this agenda, the previous administration set a goal of reducing the size of the federal real property inventory by 5 percent, or $15 billion, by the year 2015. OMB staff at the time reported that there was an interim goal to achieve $9 billion of the reductions by 2009. OMB staff recently told us that the current administration is no longer pursuing these goals. Furthermore, the senior real property officers of the five agencies we reviewed told us that they were never given specific disposal targets to reach as part of these prior disposal goals. Cost savings associated with improved management of excess and underutilized properties as directed in the June 2010 presidential memorandum are unclear. OMB staff also said that while the goals of the previous administration are no longer being pursued, the current administration issued a memorandum that directed civilian agencies to achieve $3 billion in savings by fiscal year 2012 through better management of excess properties, among other things. According to the administration s website, as of September 2011, approximately half of the cost savings had been achieved ($1.48 billion). Almost half of the total goal (about $1.4 billion) is targeted to the five agencies we reviewed. Officials from these agencies reported various cost savings measures such as selling real property, forgoing operations and maintenance costs from disposed properties, and reducing energy costs through sustainability efforts to achieve agency savings targets. As of the first quarter of fiscal year 2012, only two of the agencies we reviewed GSA and USDA were claiming any sales proceeds from the sale of federal real property: GSA reported $41.1 million in savings from sales proceeds and USDA reported approximately $5.6 million. Interior officials stated that individual sales with positive net proceeds are offset by those sales in which the cost of the disposal (i.e., as a result of environmental remediation and repair) is greater than any proceeds realized. Furthermore, DOE officials reported that the disposition costs of the properties they sold during the time frame of the memorandum were actually greater than the proceeds. As a result, DOE has reported a net loss of $128 million on property sales for this time period. VA also did not include asset sales as part of its savings plan. Four of the five agencies told us that they believe they will reach their savings targets by the end of fiscal year 2012; however, whether they claim to reach those goals or not, the actual and estimated savings associated with excess and underutilized property management may be overstated. Furthermore, agencies were not required to develop cost savings that reflected a reduction in agency budgets. 
We found problems with cost savings estimates related to excess and underutilized property management from all five of the federal agencies we reviewed (see table 4). OMB staff has not provided information to support projected cost savings if CPRA is enacted. In addition to the expected savings resulting from the June 2010 presidential memorandum, OMB staff reported that CPRA the legislation the administration has proposed to address real property management obstacles will result in $4.1 billion in savings within 10 years following enactment from sales proceeds as well as unspecified savings from operating costs and efficiencies. However, the CPRA projections may not reflect true cost savings. OMB staff did not provide a methodology, calculations, or any other basis for its stated projections. Furthermore, CBO concluded that CPRA would probably not result in a significant increase in proceeds from the sale of federal properties over the next 10 years. CBO noted that the Department of Defense holds about one-third of the excess properties. CPRA would have no effect on these properties, because the proposal only applies to civilian agencies. Furthermore, CBO estimated that implementing CPRA would cost $420 million over the 2012 through 2016 period to prepare properties for sale or transfer. The President s fiscal year 2013 budget requested $17 million to implement CPRA (if it is enacted) and $40 million to establish an Asset Proceeds and Space Management Fund to facilitate the disposal process intended to reimburse agencies for some necessary costs associated with disposing of property. This amount is far short of the $420 million that CBO projected would be needed to prepare properties for sale or transfer within a 4-year period. <3.2. Federal Agencies Have Made Progress but Still Face Long-standing Challenges> Despite problems with data collection and national cost savings goals, we found that agencies have taken steps to address excess and underutilized properties in their portfolios. For example, all five agencies we reviewed have taken steps to use property more efficiently, as follows: Identifying underutilized assets to meet space needs. VA officials told us that they implemented a process to identify vacant and underutilized assets that they could use to meet space needs. In addition, VA officials stated that the department is planning to reuse currently utilized assets that will be available in the future. VA officials added that they have identified 36 sites that include 208 buildings and more than 600 acres that they can use to provide more than 4,100 units of homeless and other veteran housing. Consolidating offices among and within agencies. USDA and Interior signed a memorandum of understanding in November 2006 that allows the agencies to colocate certain operations and use their buildings more efficiently. The memorandum of understanding enables the agencies to share equipment and space. In addition, USDA closed laboratories at four locations and consolidated operations with existing USDA sites. In its National Capital Region, USDA has consolidated five separate leased locations, totaling 363,482 square feet, into one location at Patriot s Plaza in Washington, D.C. USDA reported that the consolidation into Patriot s Plaza will result in annual rent savings of about $5.6 million. DOE officials also stated that the department encourages offices to consolidate operations when it is cost effective to do so. 
The department also increased the use of an office building at the Lawrence Livermore National Laboratory from 22 percent to 100 percent by changing its use from office space to a building that houses computers. Furthermore, VA consolidated its medical center campuses in Cleveland, Ohio, and engaged a number of private partners directly to reuse the unneeded sites, using its Enhanced Use Lease authority. Reducing employee work space. To use space more efficiently, Interior reduced new space utilization per employee from 200 usable square feet per person to 180 usable square feet per person. This action decreased total new space by 10 percent in all areas, including employee work space and conference space. Using operations and maintenance charges to reduce operating costs and encourage efficient use of space. DOE officials reported that several sites servicing multiple programs or performing work for others have developed a space charge system whereby a site charges tenants for the operations and maintenance of the square footage they occupy on a square foot basis. This charge defrays operations and maintenance costs associated with the site and encourages tenants to minimize their own space use. Transferring unneeded property to other entities. Interior officials have disposed of excess properties by transferring them to other organizations to use. For example, Interior officials reported that the department donated a freezer building and a laboratory building at the Woods Hole Science Center in Falmouth, Massachusetts, to the Woods Hole Oceanographic Institution. The department also transferred buildings and land at a Corbin, Virginia, site to the National Oceanic and Atmospheric Administration. Creating alternate uses for unused assets. GSA found an alternate use for 400,000 square feet of a concrete slab that remained after demolishing an excess building. When needed, GSA leases the slab to the Federal Emergency Management Agency as outdoor storage space for electric generators and other heavy equipment and as a staging area for equipment during responses to disasters (see fig. 10). Using telework and hoteling work arrangements. The agencies we reviewed require or allow employees to use alternate work arrangements such as teleworking or hoteling, when feasible, to more efficiently use space. For example, GSA instituted a pilot hoteling project at the Public Building Service headquarters in Washington, D.C., to reduce needed space. Progress notwithstanding, agencies still face many of the same long-standing challenges we have described since we first designated real property management as a high-risk area. Agency disposal costs can outweigh the financial benefits of property disposal. USDA officials reported that the costs of disposing of real property can outweigh savings that result from building demolition and that limited budgetary resources create a disincentive to property disposal. USDA determined that the total annual cost of maintaining 1,864 assets with annual operating costs less than $5,000 was $3 million. Conversely, USDA concluded that the disposal costs for these assets equal or exceed their annual operating costs of $3 million. Thus, disposal of the assets would not result in immediate cost savings, and USDA has not demolished the assets. In addition, Interior officials reported that numerous National Park Service buildings acquired during the planning for a Delaware River dam that was never built are excess, as are many cabins and houses along the Appalachian Trail. 
Because Interior is not spending any operations and maintenance funds on these assets, disposing of them would not provide savings to the department. As a result, Interior has made a business decision to only fund a small percentage of these disposals at the Delaware River dam site. Legal requirements such as those related to preserving historical properties and the environment can make the property disposal process lengthy, according to agency officials. Meeting requirements associated with historical properties can delay or prevent disposal of excess buildings. The National Historic Preservation Act, as amended, requires agencies to manage historic properties under their control and jurisdiction and to consider the effects of their actions on historic preservation. For example, VA has been unable to dispose of a 15,200-square-foot building at Menlo Park, California, that has been used as both a residence and a research building during its 83-year history. The building has been scheduled for demolition since 2001, but VA cannot demolish it because of a historical designation. In addition, in 2010, Interior canceled the disposal of a 95-square-foot stone property that we visited because it was found eligible for historic designation. The property is in poor condition and has not been used for many years, but Interior officials told us that they are now planning to stabilize and restore the structure (see fig. 11). <4. Conclusions> The federal government has made some progress in managing real property since it was first added to our high-risk series. The FRPC created the FRPP database to track federal property, and the federal agencies we reviewed have taken some actions to address excess and underutilized property. Even with long-standing efforts to improve the management of excess and underutilized properties and save costs, federal agencies continue to face many of the same challenges that we have reported for over a decade. The problems still facing the federal government in this area highlight the need for a long-term, comprehensive national strategy to bring continuity to efforts to improve how the federal government manages its excess and underutilized real property and improve accountability for these efforts. Such a strategy could lay the framework for addressing the issue of inconsistent and inaccurate data on excess and underutilized federal properties. We continue to believe that consistent and accurate data on federal real property are necessary for the federal government to effectively manage real property. While the 2004 executive order charged the Administrator of GSA, in consultation with FRPC, to develop the data reporting standards for the FRPP database, the current standards have allowed agencies to submit data that are inconsistent and therefore not useful as a measure for comparing performance inside and outside the federal government. Also, the current definitions of certain data elements could perpetuate confusion on the nature of federal government properties. For example, the FRPP data element PRV is commonly referred to as an asset's value, which can cause decision makers to make assumptions about the worth of the asset even though the PRV cannot be accurately used in this way. Moreover, many agencies do not have the resources to collect data at the asset level, and the information that is reported in order to meet requirements for asset-level data is likely conveying an inaccurate picture of excess and underutilized property. 
Furthermore, federal government agencies have vastly different uses for properties, and it may be challenging to collect certain kinds of property management data using a single database. This makes it difficult for decision makers to understand the scope of the problem and assess potential cost savings and revenue generation. Now that FRPC has had several years of experience with these data, it is in a better position to refine data collection requirements by identifying data that are suitable for comparison in a nationwide database. Following sound data collection practices could help FRPC to thoroughly evaluate and retool the FRPP so that it collects and provides data that are consistent and accurate to decision makers, even if this means collecting less data in the short term. GSA is uniquely positioned to lead this effort because of its charge to develop FRPP data reporting standards. <5. Recommendations for Executive Action> We are making two recommendations, one to the Director of OMB and one to the Administrator of GSA. We recommend that the Director of OMB require the OMB Deputy Director for Management, as chair of FRPC, in collaboration and consultation with FRPC member agencies, to develop and publish a national strategy for managing federal excess and underutilized real property that includes, but is not limited to, the following characteristics: a statement of purpose, scope, and methodology; problem definition and risk assessment; goals, subordinate objectives, activities, and performance measures, including the milestones and time frames for achieving objectives; resources, investments, and risk management; organizational roles, responsibilities, and coordination; and integration and implementation plans. We recommend that the Administrator of GSA, in collaboration and consultation with FRPC member agencies, develop and implement a plan to improve the FRPP, consistent with sound data collection practices, so that the data collected are sufficiently complete, accurate, and consistent. This plan should include, but not be limited to the following areas: ensuring that all data collection requirements are clearly defined and that data reported to the database are consistent from agency to agency; designating performance measures that are linked to clear performance goals and that are consistent with the requirements in the 2004 executive order (or seeking changes to the requirements in this order as necessary); collaborating effectively with the federal agencies that provide the data when determining data collection requirements and limiting the number of measures collected to those deemed essential, taking into account the cost and effort involved in collecting the data when determining data collection requirements; and developing reports on the data that are collected. <6. Agency Comments and our Evaluation> We provided a draft of this report to OMB, GSA, VA, USDA, DOE, and Interior for review and comment. OMB did not directly state whether it agreed or disagreed with our recommendations. OMB agreed that challenges remain in the management of the federal government's excess and underutilized properties; however, OMB raised concerns with some of the phrasing in our report and offered further context and clarification regarding the administration s overall efforts on real property reform. OMB s comments are contained in appendix III, along with our response. GSA agreed with our recommendation to improve the FRPP and described actions its officials are taking to implement it. 
GSA also partially agreed with our findings and offered some clarifications. GSA s comments are contained in appendix IV along with our response. VA generally agreed with the overall message of our report, but disagreed with how we presented certain issues. VA s comments are contained in appendix V along with our response. USDA provided clarifying comments which we incorporated, where appropriate. USDA s comments are contained in appendix VI. DOE provided technical clarifications, which we incorporated where appropriate, but did not include as an appendix. Interior did not provide comments. OMB stated that, because our conclusion regarding the accuracy of FRPP data is based on our sample of 26 site visits, further study is needed to determine whether the problems we found are systemic. However, as discussed in the report, our findings are primarily based on the issues we identified with FRPC s data collection practices, which are the basis of the entire FRPP data collection process and are thus systemic. The 26 sites that we visited complement those findings and illustrate how poor data collection practices affect data submissions; however, they are not the only basis for our findings. Furthermore, OMB stated that the administration has a strategy for improving the management of federal real property that serves as an important foundation for the national strategy we recommend in this report. While the initiatives OMB described may represent individual, positive steps, we do not believe that they fully reflect the key characteristics of a cohesive national strategy. A national strategy would improve the likelihood that current initiatives to improve real property management will be sustained across future administrations. A more detailed discussion of our views on OMB s comments can be found in appendix III. GSA stated that our report correctly identifies many of the problems that hampered effective FRPP data collection in 2011. According to GSA, it has taken specific actions to begin addressing our recommendation, including modifying FRPC guidance to the agencies to clarify report definitions and proposing reforms of the collection process to FRPC consistent with our recommendation. GSA also offered a few clarifications on our findings. GSA stated that it was unclear whether the examples of inconsistencies we discuss in our report are systemic. As noted, our findings are primarily based on the problems we found with the overall data collection process. Thus, our recommendation to GSA involves adopting sound data collection practices. In addition, GSA stated that, because FRPP data is reported annually, property utilization and condition may change from the time that information is submitted. However, we took steps, including discussing the history of each property with local property managers, to ensure that any inconsistencies we found were not due to changes between the time data was reported and the time we visited the building. These steps and a more detailed discussion of our views on GSA s comments can be found in appendix IV. VA generally agreed with our findings and provided additional information on VA s federal real property portfolio, their methods of reporting real property data, and efforts the department is taking to address its excess and underutilized properties. However, VA disagreed with some of our statements related, for example, to property utilization. A more detailed discussion of our views on VA s comments can be found in appendix V. 
In addition, USDA provided comments and clarifications which we incorporated, where appropriate. For example, USDA clarified its previous statement regarding utilization reporting to emphasize that component agencies are directed to follow FRPC guidance, but acknowledged that this guidance was inconsistent. USDA also clarified a previous statement regarding problems faced by the agency when reporting FRPP data in 2011. USDA s comments can be found in appendix VI. We are sending copies of this report to the Director of OMB; the Administrator of GSA; and the Secretaries of Energy, Interior, Veterans Affairs, and Agriculture. Additional copies will be sent to interested congressional committees. We will also make copies available to others upon request, and the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-5731 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine to what extent (1) the Federal Real Property Profile (FRPP) database consistently and accurately describes the nature, use, and extent of excess and underutilized federal real property, and (2) progress is being made toward more effectively managing excess and underutilized federal real property. We identified five civilian real property-holding agencies for our review: the General Services Administration (GSA); the Departments of Energy (DOE), the Interior (Interior), and Veterans Affairs (VA); and the U.S. Department of Agriculture (USDA). We chose GSA, DOE, Interior, and VA because these were the four largest agencies in terms of total building square footage of all civilian real property agencies that are required to submit data under the executive order. On the basis of the data available, these five agencies report approximately two-thirds of the building square footage reported by civilian agencies. We did not consider agencies in the Department of Defense because we previously reported on the department s excess facilities. We added USDA to our list of selected agencies because USDA reported more excess properties than any other civilian agency in 2009. To determine to what extent the FRPP database described the nature, use, and extent of excess and underutilized federal real property, we obtained and analyzed FRPP data submissions and other real property data from the five selected agencies; interviewed real property officers at these agencies; visited sites where the agencies had reported excess or underutilized properties; interviewed Office of Management and Budget (OMB) staff; and reviewed FRPC guidance and other documents related to the agencies real property data and the FRPP database. We obtained the agencies FRPP data submissions for fiscal years 2008 through 2010. According to our conversations with agency officials, FRPP submissions can only be changed by the agency submitting the data. As a result, we believe that the FRPP submissions obtained from the agencies match the data contained in the FRPP database and are sufficiently reliable for the purpose of evaluating the consistency and accuracy of the FRPP database. In addition, for select data elements, we obtained real property data from the source databases that each agency uses to generate its annual FRPP submissions. 
We obtained source system data to get the actual percentage of utilization of each property as of the date when these data were extracted and provided to us in September or October of 2011. For the years of our FRPP data review (fiscal years 2008 through 2010), agencies were only required to report utilization using four categories: overutilized, utilized, underutilized, or not utilized. However, the FRPP guidance stated that agencies should maintain the actual percentage of utilization in their own systems for audit purposes. We posed questions to senior real property officers at the five agencies about the collecting and reporting of real property data. To gather detailed examples of excess and underutilized properties and to learn about the processes by which data on such properties are collected and submitted to the FRPP database, we visited sites where the five agencies had reported excess or underutilized properties. We selected these sites using information from the agencies' FRPP submissions. To narrow our scope, we chose only federally owned buildings for our visits. Using the most recent FRPP submissions we had at the time (fiscal year 2010), we selected a nonprobability sample of owned buildings for each agency that were listed as excess (on the status indicator data element) or underutilized (on the utilization data element), or both. Because VA did not classify any of its owned buildings as excess, we also selected VA buildings classified as not utilized. Because this is a nonprobability sample, observations made at these site visits do not support generalizations about other properties described in the FRPP database or about the characteristics or limitations of other agencies' real property data. Rather, the observations made during the site visits provided specific, detailed examples of issues that were described in general terms by agency officials regarding the way FRPP data are collected and reported. We focused on sites clustered around four cities: Washington, D.C.; Dallas, Texas; Los Angeles, California; and Oak Ridge, Tennessee. This strategy afforded both geographic diversity and balance among our selected agencies while also accommodating time and resource constraints. In selecting sites and buildings in and around these four cities, we took into account the following factors: We prioritized sites that had multiple excess and/or underutilized properties. This allowed us to see more properties in a limited amount of time. We prioritized the selection of excess and/or underutilized properties that fell into one of the five types of real property uses required to submit utilization data in 2010 (offices, warehouses, hospitals, laboratories, and housing). However, we also selected some buildings classified as other, particularly buildings that were large or that had high reported values. We attempted to balance the numbers of excess and underutilized buildings we selected. (Some buildings were classified as both excess and underutilized since these classifications are made in different data elements in FRPP.) We attempted to visit four or five sites from each of the five different agencies. However, most GSA sites consisted of only one building, so we selected more sites for GSA. In the end, we selected four sites from each of Interior and USDA, five from each of DOE and VA, and eight from GSA. In all, we selected 26 sites. 
Whereas we selected sites based in large part on the numbers and kinds of buildings they had, the exact set of buildings we visited at each site depended on additional factors. At some sites, there were too many excess and underutilized properties to see them all. In those circumstances, we prioritized large buildings with high reported values and tried to see a number of different kinds of buildings (e.g., a mix of offices and warehouses). At several sites, local property officials identified other properties with issues related to excess and underutilized property that we toured and analyzed. Prior to each site visit, we analyzed the FRPP data submissions for fiscal years 2008 through 2010 and agencies' source system data we obtained in September or October 2011, and developed questions about the data submissions for local property managers. During our site visits, we interviewed local property managers and compared what we observed at each building with the FRPP data for that building. When not restricted by security concerns, we photographed the building. In addition to questions about individual properties, we questioned the local officials about the kind of data they collect on the properties and how they collect it. To summarize inconsistencies and inaccuracies between our observations at the properties we visited and the FRPP data for those properties, we analyzed 2008 through 2010 FRPP data for all of the properties. As part of this review, we checked the reported utilization, condition index, value, and annual operating costs for each building for all three years. Four analysts, working together, evaluated these data both for inaccuracies (cases where the data clearly misrepresented the actual utilization, condition, value, or annual operating costs of a property) and for year-to-year inconsistencies (cases where reported values showed large year-to-year changes that did not correspond to observable changes in the property and that agency officials could not explain). Each of the 26 sites was counted as having a problem on a given data element if at least one inconsistency or inaccuracy was identified for that element. The four analysts discussed each case and arrived at a consensus as to whether a problem existed in each data element for each site. To determine the progress being made toward more effective management of federal excess and underutilized real property, we asked the senior real property officers at each of our selected agencies to provide written responses to a standard list of questions. These questions addressed management issues related to excess and underutilized owned buildings, how FRPP data are reported, and progress the agency is making toward sales and utilization goals set by the OMB. We analyzed the written responses to our questions and reviewed supporting documentation provided by agency officials such as regulations, policies, and other documents. In addition to reviewing the written responses to our questions, we reviewed a number of our previous reports and pertinent reports by the Federal Real Property Council (FRPC), the Congressional Budget Office, and the Congressional Research Service. We also reviewed and analyzed federal laws relating to real property for the major real property-holding agencies. 
Because OMB chairs FRPC and has set cost savings goals related to excess and underutilized federal real properties, we analyzed documents related to these goals, including the 2004 executive order, the June 2010 presidential memorandum on Disposing of Unneeded Federal Real Estate, and legislation proposed by the administration known as the Civilian Property Realignment Act (CPRA). We also interviewed knowledgeable OMB staff about agency-specific targets related to the June 2010 presidential memorandum, the methodology used to project potential cost savings if CPRA were to be enacted, and progress toward cost savings goals set by the previous administration. We conducted this performance audit from May 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Federal Real Property Council Fiscal Year 2010 Inventory Data Elements and Descriptions

<7. Data Element Names and Definitions>

Real property type. Indicates the asset as land, building, or structure.

Real property use. Indicates the asset's predominant use as land, building, or structure.

Legal interest indicator. Used to identify a real property asset as being owned by the federal government, leased to the federal government (i.e., as lessee), or otherwise managed by the federal government. Otherwise managed properties are (1) owned by a state or foreign government that has granted rights for use to the federal government using an arrangement other than a lease, or (2) trust entities that hold titles to assets predominantly used as museums, yet may receive some federal funds to cover certain operational and maintenance costs.

Status. Reflects the predominant physical and operational status of the asset. Buildings, structures, and land assets have one of the following attributes:
Active. Currently assigned a mission by the reporting agency.
Inactive. Not currently being used but may have a future need. Includes real property in a caretaker status (closed pending disposal; for example, facilities that are pending a Base Realignment and Closure action) and closed installations with no assigned current federal mission or function.
Excess. Formally identified as having no further program use of the property by the landholding agency.
Disposed. Required for assets that have exited the federal portfolio of assets during the current reporting period.

Historical status. Each asset owned or leased by the federal government (and those otherwise managed by museum trusts) has one of the following historical status attributes: National Historic Landmark; National Register listed; National Register eligible; noncontributing element of a National Historic Landmark or National Register listed district; or evaluated, not historic.

Reporting agency. Refers to the federal government agency reporting the property to the FRPC inventory database.

Using organization. Refers to the predominant federal government agency or other nonfederal government entity occupying the property.

Size. Refers to the size of the real property asset according to appropriate units of measure.
The unit of measure used for the three real property types is as follows: For land, the unit of measure is acreage and is designated as either rural acres or urban acres. For buildings, the unit of measure is area in square feet and is designated as gross square feet. For structures, the unit of measure includes the size (or quantity) and unit of measure, and can include square yards, linear feet, miles, and the numbers of specific types of structures. <8. Data element number Data element name 9 Utilization> Data element definition Utilization is defined as the state of having been made use of, that is, the rate of utilization. The utilization rate for each of the five building predominant use categories is defined as follows: office: ratio of occupancy to current design capacity, hospital: ratio of occupancy to current design capacity, warehouse: ratio of gross square feet occupied to current design capacity, laboratory: ratio of active units to current design capacity, and housing: percent of individual units that are occupied. Value is defined as the cost of replacing the existing constructed asset at today s standards and is also known as plant replacement value (PRV) or functional replacement value. Condition index is a general measure of the constructed asset s condition at a specific point in time. The condition index is calculated as the ratio of repair needs to PRV. Repair needs are the amount necessary to ensure that a constructed asset is restored to a condition substantially equivalent to the originally intended and designed capacity, efficiency, or capability. Agencies will initially determine repair needs based on existing processes, with a future goal to further refine and standardize the definition. The condition index will be reported as a percent condition on a scale of zero to 100 percent. Mission dependency is the value an asset brings to the performance of the mission as determined by the governing agency: mission critical: without constructed asset or parcel of land, mission is compromised; mission dependent, not critical: does not fit into mission critical or not mission dependent categories; and not mission dependent: mission unaffected. Annual operating costs consist of the following: recurring maintenance and repair costs, utilities, cleaning and janitorial costs, and roads and grounds expenses Main location refers to the street or delivery address for the asset or the latitude and longitude coordinates. Real property unique identifier is a code that is unique to a real property asset that will allow for linkages to other information systems. The real property unique identifier is assigned by the reporting agency and can contain up to 24 alpha-numeric digits. The city or town associated with the reported main location in which the land, building, or structure is located. The state or District of Columbia associated with the reported main location in which the land, building, or structure is located. The country associated with the reported main location in which the land, building, or structure is located. The county associated with the reported main location in which the land, building, or structure is located. The congressional district associated with the reported main location in which the land, building, or structure is located. The ZIP code associated with the reported main location in which the land, building, or structure is located. <9. Data element number Data element name 22> Data element definition Installation identifier. 
Land, buildings or other structures, or any combination of these. Examples of installations are a hydroelectric project, office building, warehouse building, border station, base, post, camp, or an unimproved site. Subinstallation identifier. Part of an installation identified by a different geographic location code than that of the headquarters installation. An installation must be separated into subinstallations and reported separately when the installation is located in more than one state or county. However, an agency may elect to separate an installation into subinstallations even if the installation is not located in more than one state or county. Restrictions are limitations on the use of real property and include environmental restrictions (cleanup-based restrictions, etc.), natural resource restrictions (endangered species, sensitive habitats, floodplains, etc.), cultural resource restrictions (archeological, historic, Native American resources, except those excluded by Executive Order 13007, Section 304 of the National Historical Preservation Act, etc.), developmental (improvements) restrictions, reversionary clauses from deed, zoning restrictions, easements, rights of way, mineral interests, water rights, air rights, other, not applicable Agencies are required to provide all assets that have exited the federal portfolio of assets during the reporting fiscal year. This will include, but is not limited to, sales, federal transfers, public benefit conveyances, demolitions, and lease terminations. Disposition data is reported only in the year the asset has exited the federal portfolio of assets. Agencies are required to provide status, reporting agency, real property unique identifier, disposition. Agencies are also required to report disposition method (methods include public benefit conveyance, federal transfer, sale, demolition, lease termination, or other), disposition date, disposition value (the PRV for public benefit conveyances, federal transfers, demolitions, and other dispositions; the sales price for sales; and the government s cost avoidance for lease terminations), net proceeds (the proceeds received as part of assets disposed through sales and termination of leases minus the disposal costs incurred by the agency), and recipient (the name of the federal agency or nonfederal recipient that received the property through public benefit conveyance or federal transfer). <10. Data element number Data element name 25 Sustainability> Data element definition Sustainability is reported for building assets, is optional reporting for structures, and is not reported for land and reflects whether or not an asset meets the sustainability criteria set forth in Section 2 (f) (ii) of Executive Order 13423. To be considered sustainable and report yes, the asset must meet the five Guiding Principles for High Performance and Sustainable Buildings or be third-party certified as sustainable by an American National Standards Institute (ANSI)-accredited institution: Yes. Asset has been evaluated and meets guidelines set forth in Section 2 (f) (ii) of Executive Order 13423. No. Asset has been evaluated and does not meet guidelines set forth in Section 2 (f) (ii) of Executive Order 13423. Not yet evaluated. Asset has not yet been evaluated on whether or not it meets guidelines set forth in Section 2 (f) (ii) of Executive Order 13423. Not applicable. Guidelines set forth in Section 2 (f) (ii) of Executive Order 13423 do not apply to the asset. 
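As a rough illustration of how the utilization definitions above reduce to a single reported rate, the following sketch computes utilization as a percentage of current design capacity for the five predominant use categories. The function, inputs, and example figures are hypothetical and are not drawn from any agency source system.

```python
# Illustrative computation of an FRPP-style utilization rate for the five
# predominant use categories. Hypothetical function and example figures.

def utilization_rate(predominant_use, occupied, design_capacity):
    """Return utilization as a percentage of current design capacity.

    Per the definitions above, "occupied" and "design_capacity" mean:
      office, hospital : occupancy vs. current design capacity
      warehouse        : gross square feet occupied vs. current design capacity
      laboratory       : active units vs. current design capacity
      housing          : occupied individual units vs. total individual units
    """
    supported = {"office", "hospital", "warehouse", "laboratory", "housing"}
    if predominant_use not in supported:
        raise ValueError(f"no utilization definition for use category: {predominant_use}")
    if design_capacity <= 0:
        raise ValueError("current design capacity must be positive")
    return 100.0 * occupied / design_capacity

# Example: an office designed for 400 occupants with 180 assigned staff, and a
# 50,000 gross-square-foot warehouse with 48,000 square feet in use.
print(round(utilization_rate("office", 180, 400), 1))           # 45.0
print(round(utilization_rate("warehouse", 48_000, 50_000), 1))  # 96.0
```

In practice, as discussed elsewhere in this report, agencies measure occupancy and current design capacity differently, which is one reason reported utilization rates are not comparable across agencies.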
This includes assets that will be disposed of by the end of fiscal year 2015 and are no longer in use. The legal interest element includes a lease maintenance indicator and a lease authority indicator, which are not reported for owned and otherwise managed properties. This report focuses on owned properties. The status element includes an outgrant indicator identifying when the rights to the property have been conveyed or granted to another entity. For the purposes of this report, we did not evaluate or analyze information for the outgrant indicator. Appendix III: Comments from the Office of Management and Budget <11. GAO Comments> 1. OMB stated that the agency agreed with the report's general conclusion that challenges remain in the management of excess and underutilized properties, but that significant progress has been made. While we stated that limited progress has been made, our draft and final report do not describe the progress as significant. 2. OMB stated that the agency is concerned with some phrasing in the report that may lead the reader to draw unintended conclusions regarding the appropriate next steps for improving the accuracy and consistency of the FRPP. OMB stated that, based on its understanding of our report, our findings are based on the 26 site visits we conducted and further study is needed to determine whether the issues we found with the consistency and accuracy of FRPP data are systemic. OMB also asserts that, despite our use of a nonprobability sample, we make generalizations based on the sample in the report. As we discuss in the report and reiterated in discussions with OMB staff during the comment period, our findings are primarily based on the problems we found with FRPC's data collection practices, which affect the entire data collection process. Our work at the 26 sample sites complements those findings and illustrates how poor data collection practices affect data submissions, but it is not the only basis for our conclusions. Furthermore, in its comments, OMB acknowledged that it has been standard practice for each agency to measure certain data elements, such as utilization, through agency-specific means tailored to the agency's individual needs and circumstances. Therefore, contrary to OMB's assertion, it is unlikely that further study would find consistent data on properties outside of our sample when OMB has acknowledged that the reporting standards themselves are inconsistent. For these reasons, we believe our recommendation remains valid that GSA, in consultation with FRPC, should first address the problems with data collection practices, which our methodology and findings showed were in fact systemic. In response to OMB's comment, we clarified the report to emphasize the basis for our findings. 3. OMB commented that this report conflicts with previous testimonies and our 2011 update to GAO's high-risk series, which described prior improvements. This report acknowledges such prior progress but provides a more in-depth review of multiple agencies' data collection practices than prior work. Furthermore, as our report describes, changes made to the data collection requirements in December 2011 led to further concerns by agencies about data accuracy. Our report findings are also consistent with a September 2011 GAO report that discussed the Department of Defense's FRPP data. 
In that report we found that the Department of Defense s reported FRPP utilization data consisted of multiple discrepancies between the reported utilization designations and actual building utilization, along with other FRPP submission inaccuracies. Therefore, this report is consistent with our prior conclusions that some progress has been made since 2003, which we have discussed in multiple GAO reports and testimonies. However, the report on the Department of Defense s data and this report demonstrated significant problems in the data collection process. 4. We believe that our first recommendation that OMB develop a national strategy would assist in addressing the tension OMB describes between providing agencies with the flexibility to define data elements based on their agency-specific requirements and establishing governmentwide data elements that can be used to support aggregate analysis across the entire FRPP database. We will continue to engage OMB on the topic of real property management and we believe that this report outlines the next steps. As we recommend in this report, a critical step is for OMB to develop a national strategy for managing excess and underutilized properties. In the area of data collection, a national strategy could help identify management priorities for problems such as this and lay out the principles for weighing the cost of uniform data collection to the agencies with the benefit that would be obtained by aggregate analysis of uniform data. As we stated in the report, if certain data elements cannot be collected consistently, they may not be appropriate to include in a database that appears to be standard across the government. 5. We agree with OMB s statement that the method of attributing cost savings to efforts made to improve property management could be further clarified so that the public has a clear understanding of how such savings are calculated. We believe that transparency and accountability are critical in the federal government s service to the taxpayers and would support action taken by OMB to increase transparency in this regard. We did not make a specific recommendation regarding how cost savings, particularly cost savings associated with the June 2010 presidential memorandum, should be clarified. Our report assessed real property management issues related to excess and underutilized property and recommended a national strategy that could be used to guide efforts such as the June 2010 presidential memorandum. 6. OMB stated that the report s characterization of the administration s Civilian Property Realignment Act (CPRA) proposal could benefit from further clarity on savings goals and further context about our recent support for the proposal. Regarding savings goals, OMB stated that the administration s $4.1 billion estimate of the potential proceeds from the Act s implementation reflects an analysis of the potential proceeds that would result from the entire federal real property inventory, not just those currently identified as excess. We acknowledged in the report that these savings, according to OMB, would also come from reduced operating costs and efficiencies. We could not, however, analyze the basis for these savings because, as we discussed, OMB did not provide us with a methodology, calculations, or any basis for its stated projections. 
We requested this information from OMB multiple times over a period of eight months, and were only provided with a general description of the savings, similar to what OMB provided in its letter commenting on this report. Until we can evaluate the analysis OMB references, we will be unable to provide a more thorough assessment. Furthermore, our views on the effect that CPRA could have on problems we have found in federal real property management have not changed: that CPRA can be somewhat responsive to real property management challenges faced by the government. For example, CPRA proposes an independent board that would streamline the disposal process by selecting properties it considers appropriate for public benefit uses. This streamlined process could reduce disposal time and costs. 7. OMB stated that the administration has a strategy for improving the management of federal real property that serves as an important foundation for the national strategy we recommend in this report. OMB stated that several significant initiatives, including the June 2010 presidential memorandum on excess property and the recommendation for a civilian property realignment board, represent a comprehensive and carefully considered governmentwide strategy for addressing the government s long-term real property challenges. While the efforts OMB describes represent a range of individual initiatives, we continue to believe that they lack the key characteristics of a cohesive national strategy. A national strategy would improve the likelihood that current initiatives to improve real property management will be sustained across future administrations. The desirable characteristics of a national strategy that we ve identified such as a clear purpose, scope, and methodology; problem definition and risk assessment; and identified resources, investments, and risk management could serve to articulate a more sustained, long-term strategy to guide individual initiatives such as those described in OMB s comments. For example, related to resources and investments, agencies often lack funding to prepare unneeded properties for disposal or to pursue demolition. A national strategy could address this issue directly and transparently so that the true costs of real property reform are evaluated more completely by decision makers. Appendix IV: Comments from the General Services Administration <12. GAO Comments> 1. GSA stated that it is unclear whether the examples of inconsistencies described in our report are systemic throughout the FRPP, or are occurring in specific agencies reporting of the data. As we discuss in the report, our findings are primarily based on the problems we found with FRPC s data collection practices, which negatively impact the entire data collection process. The examples of inconsistencies and inaccuracies that we describe complement those findings and illustrate how poor data collection practices affect data submissions, but they are not the only basis for our conclusions. In fact, our recommendation to improve FRPP data collection involves the sound data collection practices that we believe should be put in place. GSA has agreed with this recommendation and has taken action to begin correcting the problems we identified. In response to GSA s comments, we made some clarifications to the report s discussion of the basis of our findings. 2. GSA stated that, because the FRPP is an annual report, property utilization may change from the time it is submitted in December. 
As we conducted site visits for this review, we took steps to ensure that any inconsistencies and inaccuracies we found were not due to a significant change in the building s use from the time it was reported to the time we visited. First, we discussed the history of the building s use with the local officials who manage the building to ensure that there was no recent change in the building s utilization. Second, since 2011 FRPP data had not been reported at the time we began our site visits, we obtained utilization data from the agencies source systems (which are used to produce FRPP utilization data) so that we had recent utilization data (as of the fall of 2011) before we began our site visits in December 2011. GSA also stated that the condition of the buildings may change or may not be updated annually. Related to this issue, we found that all five agencies did not always follow the guidance provided by the FRPC on how to calculate condition index. This led to severely blighted buildings receiving excellent condition scores, which could not be accounted for by reported changes in condition over a relatively short period of time. 3. GSA made a comment related to the computation formula for Condition Index. We have clarified this statement in the report. Appendix V: Comments from the Department of Veterans Affairs <13. GAO Comments> 1. VA stated that its complex model for calculating utilization is consistent with FRPC guidance because the guidance allows for flexibility on how agencies determine a key component of utilization (current design capacity) and that OMB agreed with their approach. However, rather than exercising flexibility in its use of current design capacity, VA used a different definition of utilization than the definition outlined in FRPC guidance. FRPC guidance defines utilization as the ratio of occupancy to current design capacity; however, VA defines utilization as the ratio of ideal space to existing space. While we acknowledge in our report that VA received OMB approval for reporting utilization differently, this method of reporting utilization is still inconsistent with the definition of utilization in FRPC guidance. Utilization is a performance measure and the 2004 executive order stated that performance measures shall be designed to allow comparing the agencies performance against industry and other public sector agencies. The inconsistencies we found from VA and other agencies in reporting utilization makes comparing utilization among agencies impossible. 2. VA also stated that identifying underutilizations is much better than ignoring the fact that the building may not be properly sized to deliver services to Veterans. We did not suggest that VA should ignore any aspect of its buildings that is problematic. We continue to believe that VA s method of calculating utilization has led to some buildings being continuously designated as underutilized even when local officials, who know the buildings best, have told us that the buildings have been fully occupied. 3. VA stated that the reasons for the inaccuracies that we found in utilization at two VA buildings were due to the use of these buildings as swing space, meaning that utilization changes frequently based on need for space. In its comments, VA indicated that since FRPP data are reported annually, the designation of this space at the time of reporting changed from the time that we visited the sites. 
However, as we conducted our site visits for this review, we took steps to ensure that any inconsistencies we found were not due to a significant change in a building's use between the time it was reported and the time we visited. First, we discussed the history of each building's use since 2008 with the local officials who manage the building to ensure that there was no recent change in the building's utilization. Second, since 2011 FRPP data had not been reported at the time we began our site visits, we obtained utilization data from the agencies' source systems (which are used to produce FRPP utilization data) so that we had recent utilization data (as of the fall of 2011). The data we obtained from VA were current as of October 2011, and our visit took place in December 2011. Based on VA's comments, we clarified this information in our report to show that we accounted for the time between 2010 FRPP reporting and our visit in December 2011. Based on our visits to the buildings and our discussions about the history of the buildings' use with the VA officials who manage them, we do not believe that VA's explanation accounts for the inconsistencies we found in utilization, as detailed below: Local VA officials who manage the buildings told us that the first building VA discussed in its comments is used for accounting and payroll purposes and that it was always fully occupied during the period of our review (dating back to 2008). However, the building was reported to the FRPP as underutilized during each of these years. In fact, just two months prior to our visit, VA's October 2011 source data showed a utilization of 45 percent for this building even though it was fully occupied. Local VA officials who manage the second building VA discussed in its comments told us that the building was mostly unoccupied because they had recently acquired it from the Department of Defense and that multiple improvements had to be made before it could be occupied by staff. Based on this, the local officials told us that it could not have been utilized at 59 percent in October 2011 as VA source data indicated. 4. VA made a comment related to individually metered buildings. We clarified VA's statement in the report so that it is consistent with these comments. 5. In reference to our findings on problems with cost savings associated with the June 2010 presidential memorandum, VA stated that it disagreed with findings in a previous GAO report (GAO-12-305) that we referenced. In its comments on GAO-12-305, VA officials did not concur with certain parts of the report related to decreasing energy costs and improving non-recurring maintenance contracting. However, we did not reference the previous GAO report on these matters. Rather, we referenced that report's discussion of savings associated with reducing leased space through telework. VA confirmed the problems that the previous GAO team found with the savings associated with the telework program in its comments on GAO-12-305, stating that the telework program is still in its infancy and that achieving actual real property savings requires reducing space that is currently leased. These reductions in leased space may not be fully realized in 2012. As a result, VA stated in its comments on GAO-12-305 that the telework initiative was removed from the description of savings in its fiscal year 2013 budget. This is consistent with what we describe in this report. Therefore, VA's restatement of its disagreement with findings in GAO-12-305 has no bearing on this report. 
Appendix VI: Comments from the U.S. Department of Agriculture Appendix VII: GAO Contact and Staff Acknowledgments <14. GAO Contact> <15. Staff Acknowledgments> In addition to the contact named above, David Sausville, Assistant Director; Amy Abramowitz; Russell Burnett; Kathleen Gilhooly; Raymond Griffith; Amy Higgins; Amber Keyser; Michael Mgebroff; John Mingus Jr.; Joshua Ormond; Amy Rosewarne; Minette Richardson; Sandra Sokol; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study The federal government has made some progress addressing previously identified issues with managing federal real property. This includes establishing FRPC—chaired by the Office of Management and Budget (OMB)—which created the FRPP database managed by GSA. GAO was asked to determine the extent to which (1) the FRPP database accurately describes the nature, use, and extent of excess and underutilized federal real property, and (2) progress is being made toward more effective management of these properties. GAO analyzed the data collection process and agency data, visited 26 sites containing excess and underutilized buildings from five civilian federal real property holding agencies with significant portfolios, and interviewed officials from these five agencies and OMB staff about how they collect FRPP data and manage excess and underutilized properties. What GAO Found The Federal Real Property Council (FRPC) has not followed sound data collection practices in designing and maintaining the Federal Real Property Profile (FRPP) database, raising concern that the database is not a useful tool for describing the nature, use, and extent of excess and underutilized federal real property. For example, FRPC has not ensured that key data elements—including buildings' utilization, condition, annual operating costs, mission dependency, and value—are defined and reported consistently and accurately. GAO identified inconsistencies and inaccuracies at 23 of the 26 locations visited related to these data elements (see the fig. for an example). As a result, FRPC cannot ensure that FRPP data are sufficiently reliable to support sound management and decision making about excess and underutilized property. The federal government has undertaken efforts to achieve cost savings associated with better management of excess and underutilized properties. However, some of these efforts have been discontinued and potential savings for others are unclear. For example, in response to requirements set forth in a June 2010 presidential memorandum for agencies to achieve $3 billion in savings by the end of fiscal year 2012, the General Services Administration (GSA) reported approximately $118 million in lease cost savings resulting from four new construction projects. However, GSA has yet to occupy any of these buildings and the agency’s cost savings analysis projected these savings would occur over a 30-year period—far beyond the time frame of the memorandum. The five federal agencies that GAO reviewed have taken some actions to dispose of and better manage excess and underutilized property, including using these properties to meet space needs by consolidating offices and reducing employee work space to use space more efficiently. However, they still face long-standing challenges to managing these properties, including the high cost of property disposal, legal requirements prior to disposal, stakeholder resistance, and remote property locations. A comprehensive, long-term national strategy would support better management of excess and underutilized property by, among other things, defining the scope of the problem; clearly addressing achievement goals; addressing costs, resources, and investments needed; and clearly outlining roles and coordination mechanisms across agencies. What GAO Recommends GAO recommends that, in consultation with FRPC, GSA develop a plan to improve the FRPP and that OMB develop a national strategy for managing federal excess and underutilized real property. 
GSA agreed with GAO’s recommendation and agreed with the report’s findings, in part. OMB agreed that real property challenges remain but raised concerns about how GAO characterized its findings on FRPP accuracy and other statements. GAO believes its findings are properly presented. The details of agencies’ comments and GAO’s response are addressed more fully within the report.
<1. The Year 2000 Poses a Serious Problem for Banks> The Federal Deposit Insurance Corporation is the deposit insurer of approximately 11,000 banks and savings institutions. Together, these institutions are responsible for about $6 trillion in assets and have insured deposits totaling upwards of $2.7 trillion. FDIC also has responsibility for directly supervising approximately 6,200 of these institutions (commonly referred to as state-chartered, nonmember banks), which on average have $250 million in assets. As part of its goal of maintaining the safety and soundness of these institutions, FDIC is responsible for ensuring that banks are adequately mitigating the risks associated with the century date change. To ensure consistent and uniform supervision on Year 2000 issues, FDIC and the other banking regulators coordinate their supervisory efforts through FFIEC. For example, the regulators established an FFIEC working group to develop guidance on mitigating the risks associated with using contractors that provide automated systems services and software to banks. The Year 2000 problem is rooted in the way dates are recorded and computed in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as "97" representing 1997, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, 2001 from 1901, and so on. As a result of this ambiguity, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results, or worse, not function at all. According to FDIC, virtually every insured financial institution relies on computers, either its own or those of a third-party contractor, to process and update records and to perform a variety of other functions. Because computers are essential to their survival, FDIC believes that all its institutions are vulnerable to the problems associated with the year 2000. Failure to address Year 2000 computer issues could lead, for example, to errors in calculating interest and amortization schedules. Moreover, automated teller machines may malfunction, performing erroneous transactions or refusing to process transactions. In addition, errors caused by Year 2000 miscalculations may expose institutions and data centers to financial liability and loss of customer confidence. Other supporting systems critical to the day-to-day business of banks may be affected as well. For example, telephone systems, vaults, security and alarm systems, elevators, and fax machines could malfunction. In addressing the Year 2000 problem, banks must also consider the computer systems that interface with, or connect to, their own systems. These systems may belong to payment system partners, such as wire transfer systems, automated clearinghouses, check clearing providers, credit card merchant and issuing systems, automated teller machine networks, electronic data interchange systems, and electronic benefits transfer systems. Because these systems are also vulnerable to the Year 2000 problem, they can introduce errors into bank systems. In addition to these computer system risks, banks also face business risks from the Year 2000 problem, that is, exposure arising from their corporate borrowers' inability to manage their own Year 2000 compliance efforts successfully. 
Consequently, in addition to correcting their computer systems, banks have to periodically assess the Year 2000 efforts of their large corporate customers to determine whether those efforts are sufficient to avoid significant disruptions to operations. FDIC and the other regulators established an FFIEC working group to develop guidance on assessing the risk corporate borrowers pose to banks. To address Year 2000 challenges, GAO issued its Year 2000 Assessment Guide to help federal agencies plan, manage, and evaluate their efforts. It advocates a structured approach to planning and managing an effective Year 2000 program through five phases. These phases are (1) raising awareness of the problem, (2) assessing the extent and severity of the problem and identifying and prioritizing remediation efforts, (3) renovating, or correcting, systems, (4) validating, or testing, corrections, and (5) implementing corrected systems. As part of the assessment phase, the guide stipulates that interfaces with outside organizations be identified and agreements with these organizations executed for exchanging Year 2000-related data. Contingency plans must be prepared during the assessment phase to ensure that agencies can continue to perform even if critical systems have not been corrected. Working back from January 1, 2000, GAO and OMB have established a schedule for completing each of the five phases. According to that schedule, agencies should have completed assessment phase activities last summer and should complete the renovation phase by mid- to late 1998. <2. FDIC Has Developed a Strategy and Has Initiated Action to Address the Year 2000 Problem> FDIC has taken a number of actions to raise the awareness of the Year 2000 issue among banks and to assess the Year 2000 impact on the industry. To raise awareness, FDIC formally alerted banks in July 1996 to the potential dangers of the Year 2000 problem by issuing an awareness letter to bank Chief Executive Officers. The letter, which included a statement from the interagency Federal Financial Institutions Examination Council, described the Year 2000 problem and highlighted concerns about the industry's Year 2000 readiness. It also called on banks to perform a risk assessment of how their systems are affected and develop a detailed action plan to fix them. In May 1997, FDIC issued a more detailed awareness letter that described the five-phase approach to planning and managing an effective Year 2000 program; highlighted external issues requiring management attention, such as reliance on vendors, risks posed by exchanging data with external parties, and the potential effect of Year 2000 noncompliance on corporate borrowers; discussed operational issues that should be considered in Year 2000 planning, such as whether to replace or repair systems; related its plans to facilitate Year 2000 evaluations by using uniform examination procedures; and directed banks to (1) inventory core computer functions and set priorities for Year 2000 goals by September 30, 1997, and (2) complete programming changes and have testing of mission-critical systems underway by December 31, 1998. To manage both internal and external Year 2000 efforts, FDIC established a Year 2000 oversight committee, consisting of the deputy directors of all offices and divisions, that reports to the FDIC Board of Directors. 
Among other matters, the committee is responsible for coordinating interagency working groups, contingency planning, public information campaign, institution outreach and education, and reporting on the results of bank assessments and examinations. As of December 31, 1997, FDIC had completed its initial assessment of all banks for which it has supervisory responsibility. In doing so, FDIC surveyed banks on whether (1) their systems were ready to handle Year 2000 processing, (2) they had established a structured process for correcting Year 2000 problems, (3) they prioritized systems for correction, and (4) they had determined the Year 2000 impact on other internal systems important to day-to-day operations, such as vaults, security and alarm systems, elevators, and telephones. In addition, FDIC assessed whether sufficient resources were targeted at the Year 2000 problem and if bank milestones for renovating and testing mission-critical systems were consistent with those recommended by FFIEC. According to the FDIC, this assessment identified over 200 banks that were not adequately addressing the Year 2000 problem and over 500 banks that are very reliant on third-party servicers and software providers and have not followed up with their servicers and providers to determine their Year 2000 readiness. FDIC plans to follow up on this initial assessment, which was conducted largely by telephone, with on-site visits to all banks to be completed by the end of June 1998. FDIC has also been participating with other regulators to conduct on-site Year 2000 assessments of 275 major data processing servicers and 12 major software vendors. According to FDIC, these servicers and vendors provide support and products to a majority of financial institutions. FDIC and the other regulators expect to complete their first round of servicer and vendor assessments by March 31, 1998. FDIC is providing the results of the servicer assessments to FDIC-supervised banks that use these services. Together with the results of on-site assessments conducted at banks, FDIC expects to have a better idea of where the industry stands, which banks need close attention, and thus, where to focus its supervisory efforts. <3. Concerns With FDIC s Efforts to Ensure Banks Are Year 2000 Ready> The primary challenge facing FDIC, and indeed all the banking regulators, in providing a level of assurance that the banking industry will successfully address the Year 2000 problem is time. FDIC s late start in developing an industry assessment is further compounded by two other factors: (1) its initial assessment and the follow-on assessment to be completed in June 1998 are not collecting all the data required to be definitive about the status of individual banks and (2) key guidance being developed under the auspices of FFIEC needed by banks to complete their own preparations is also late which, in turn, could potentially hamper individual banks abilities to address Year 2000 issues. <3.1. Need for Additional Data Precision> Although late in getting its initial assessments completed, FDIC has developed a perspective of where the banks it regulates stand on being ready for Year 2000. As outlined earlier, it plans on completing a more detailed assessment of banks by the end of June 1998. FDIC plans to use this information along with information obtained from the FFIEC servicer and vendor assessments to further refine its oversight activities. 
We think FDIC's strategy of using this information to target activities over the remaining 18 months is appropriate and necessary to make the best use of limited time. However, we believe that neither the initial nor the follow-on assessment work program is collecting all the data needed to determine where (i.e., in which phase) the banks are in the Year 2000 correction process. For example, neither the guidance used to conduct the initial assessment nor the guidance that is to be used to conduct follow-on assessments contains questions that ask whether specific phases have been completed. In addition, the terms used in the FFIEC guidance to describe progress are vague. For example, it notes that banks should be well into assessment by the end of the third quarter of 1997, that renovation for mission-critical systems should largely be completed, and that testing should be well underway by December 31, 1998. Without defining any of these terms, it will be very hard to deliver uniform assessments of the status of banks' Year 2000 efforts. Furthermore, the tracking questionnaire examiners are required to complete after their on-site assessments is organized on the basis of the five phases; however, it does not ask enough questions within each of the five phases to determine whether the bank has fully addressed each phase. For example, for the assessment phase, the questionnaire asks whether (1) a formal assessment has been conducted and all mission-critical application and hardware systems have been identified, (2) a budget has been established for testing and upgrading mission-critical systems, and (3) the budget is reasonable. But these questions do not specifically cover critical assessment steps recommended in our Assessment Guide, including: conducting an enterprisewide inventory of information systems; using the inventory to develop a comprehensive automated system portfolio; establishing Year 2000 project teams for business areas and major systems; developing a Year 2000 program, which includes schedules for all tasks and phases, a master conversion and replacement schedule, and a risk assessment; developing testing strategies and plans; defining requirements for testing facilities; identifying and acquiring Year 2000 tools; addressing interface and data exchange issues; and formulating contingency plans. When we discussed this concern with FDIC officials, they told us that they intended to rely on the judgment of the examination staff to place institutions in specific categories. We agree on the need to rely on examiner judgment; however, we believe that having additional information will allow the examiners to conduct a more thorough assessment and can greatly enhance their capability to make a more accurate judgment. In turn, this could improve the ability of FDIC to properly focus its resources over the remaining time available. <3.2. Contingency Planning Guidance Not Yet Available> FDIC and FFIEC have yet to complete and issue contingency planning guidance to the banks. Our Assessment Guide recommends that contingency planning begin in the assessment phase for critical systems and activities. FDIC officials told us they are working with the other regulators to establish a working group to address this issue. While this guidance is needed, it would have been more appropriate to make it available before banks began completing their assessment phase efforts. <3.3. 
Guidance Late for Bank Interaction With Vendors> Regulators have found that some financial institutions, relying on third-party data processing servicers or purchased applications software, have not taken a proactive approach in ensuring Year 2000 compliance by their vendors. In a May 1997 letter to banks, the regulators recommended that banks begin assessing their risks with respect to vendors and outlined an approach for dealing with vendors that included the need to (1) evaluate and monitor vendor plans and milestones, (2) determine whether contract terms can be revised to include Year 2000 covenants, and (3) ensure vendors have the capacity to complete the project and are willing to certify Year 2000 compliance. The regulators also agreed to provide guidance on how each of the steps should be implemented. However, the regulators do not plan to issue this guidance until the end of March 1998. While this time frame cannot be significantly shortened, the timing of this specific guidance is coming at a very late date for some banks that have not been active in working with their vendors or that may lack sufficient technical expertise to evaluate vendor preparedness. <3.4. Guidance Late on Corporate Customer Year 2000 Readiness> Banks even those who have Year 2000 compliant systems could still be at risk if they have significant business relations with corporate customers who, in turn, have not adequately considered Year 2000 issues. If these customers default or are late in repaying loans, then banks could experience financial harm. In its May 1997 letter, the regulators also recommended that banks begin developing processes to periodically assess large corporate customer Year 2000 efforts and to consider writing Year 2000 compliance into their loan documentation, and FDIC later informed its institutions that the Year 2000 risks associated with corporate customers and reliance on vendors would be included in FDIC s follow-up assessments. The regulators again agreed to provide guidance on how institutions should do this, and the criteria defining safe and sound practices. However, the guidance being developed on this issue is also not expected until the end of March 1998. These time lags in providing guidance increase the risk that banks may have initiated action that does not effectively mitigate vendor and borrower risks or that banks have taken little or no action in anticipation of pending regulator guidance. <4. Concerns With FDIC s Efforts to Correct Its Internal Systems> FDIC internal systems are critical to the day-to-day operation of the corporation. For example, they facilitate the collection of bank assessments, keep accounts and balances for failed banks, schedule examinations, and calculate FDIC employee payroll benefits. The effects of Year 2000 failure on FDIC, in its own words, could range from annoying to catastrophic. FDIC system failures could, for example, result in inaccurate or uncollected assessments, inaccurate or unpaid accounts payable, and miscalculated payroll and benefits. Accordingly, FDIC developed an internal Year 2000 project plan that followed the structured five-phased approach recommended in our Assessment Guide. To raise awareness among FDIC employees, FDIC conducted more than 40 briefings for corporate staff throughout its divisions and offices. It also has disseminated Year 2000 information through the Internet and internal newsletters. 
To assess Year 2000 impact, FDIC conducted an inventory and high-level assessment of approximately 500 internal systems that consist of about 15 million lines of program code. In addition, FDIC engaged a contractor to assist in conducting detailed system assessments, defining requirements for test environments, and renovating and testing code. We have two concerns with FDIC s effort to correct its internal systems. First, FDIC is taking much longer to assess its systems than is recommended by our guide as well as OMB and technology experts. Second, FDIC has yet to develop contingency plans to ensure continuity of core business processes, which our guide points out need to be started early in the Year 2000 effort. Currently, FDIC is still assessing which of its systems need Year 2000 corrections, and it does not expect to finish this assessment until March 1998. Specifically, FDIC has yet to fully assess its 40 mission-critical systems. Our guide recommends that agencies make these determinations by mid-1997 in order to have enough time to complete the next three stages of correction. By taking additional time to complete assessment, FDIC is leaving itself with much less time to complete renovation, testing, and implementation, and, thus, it is increasing the risk that it will not complete its Year 2000 fixes on time. Compounding this problem is the fact that FDIC has yet to develop contingency plans for its mission-critical systems and core business processes. Rather than begin developing contingency plans for critical systems and core business processes, as our Assessment Guide recommends, FDIC intends to develop plans only for those systems that experience unforeseen problems or delays in correction or replacement efforts. In addition, FDIC had not yet prepared a contingency plan to ensure continuity of its core business processes. In pursuing this approach, FDIC is failing to heed advice that it holds banks accountable to: preparing contingency plans that focus on ensuring that internal operations will be sustained. The FFIEC states that the board of directors and senior management are responsible for organizationwide contingency planning, which assesses the importance of an institution s departments, business units, and functions and determines how to restore critical areas should they be affected by disaster. In addition, preparing contingency plans on an as-needed basis is risky in several respects. First, programmers cannot always foresee system problems. Without contingency plans, FDIC will not be prepared to respond to unforeseen problems. Second, FDIC s Year 2000 strategy for many systems involves replacing systems in 1998 and 1999. In the event that replacement schedules slip, FDIC may not have enough time to renovate, test, and implement a legacy system or identify other alternatives, such as manual procedures or outsourcing. Third, even if systems are replaced on time, there is no guarantee that the new systems will operate correctly. FDIC tasked its contractor with providing guidance on preparing contingency plans for its mission-critical systems and the contractor provided draft guidelines on January 28, 1998, with the goal of making them final by the end of February 1998. In conclusion, Mr. Chairman, we believe that FDIC has a good appreciation for the Year 2000 problem and has made significant progress since last year. 
Further, we believe that FDIC's strategy of using the results of the service provider and vendor assessments in conjunction with the more complete assessments of individual banks in order to best focus resources is a reasonable approach. However, FDIC and the other regulators are facing a finite deadline that offers no flexibility. We believe that FDIC needs to take several actions to improve its ability to make informed judgments about bank status and to enhance the ability of banks to meet the century deadline with minimal problems. We, therefore, recommend that FDIC work with the other FFIEC members to expeditiously revise the Year 2000 assessment work program to include questions on each phase of the correction process, such as those outlined in our Assessment Guide, to better enable examiners to determine and report the exact status of each bank and vendor in addressing the Year 2000 problem. FDIC should also apply the revised Year 2000 assessment work program to each bank and vendor, including those where assessments are completed, to determine whether appropriate data have been obtained to make complete assessments. We also recommend that FDIC work with the other FFIEC members to complete, by the end of March 1998, their guidance to institutions on mitigating the risks associated with corporate customers and reliance on vendors. Further, FDIC should work with the other FFIEC members to quickly establish a working group to develop contingency planning guidance and set a deadline for completing this effort. Additionally, we believe that a combination of factors, including starting the bank assessment process late and issuing more specific guidance to banks at a relatively late date, has greatly compressed the time schedule available for FDIC and other members of FFIEC to develop more positive assurance that banks will be ready for the year 2000. Accordingly, we recommend that FDIC work with the other FFIEC members to develop, in an expeditious manner, more explicit instructions to banks for carrying out the latter stages of the Year 2000 process (renovation, validation, and implementation), which are the critical steps to ensuring Year 2000 compliance. Because the results of the bank assessments to be completed this June are so critical to FDIC in focusing its activities through the year 2000, we recommend that FDIC develop a tactical plan that details the results of its assessments and provides a more explicit road map of the actions it intends to take based on those results. Finally, with regard to FDIC's internal systems, we recommend that the Chairman direct the Year 2000 oversight committee to (1) ensure that adequate resources are allocated to complete the internal systems assessment by the end of March 1998 and take necessary action to ensure this effort is completed on time and (2) develop contingency plans for each of FDIC's mission-critical systems and core business processes. Mr. Chairman, that concludes my statement. We welcome any questions that you or Members of the Subcommittee may have. 
Why GAO Did This Study Pursuant to a congressional request, GAO discussed the progress being made by the Federal Deposit Insurance Corporation (FDIC) in ensuring that the thousands of banks it oversees are ready to handle the year 2000 computer conversion challenge. What GAO Found GAO noted that: (1) the year 2000 problem poses a serious dilemma for banks due to their heavy reliance on information systems; (2) it also poses a challenge for FDIC and the other bank regulators who are responsible for ensuring bank industry readiness; (3) regulators have a monumental task in making sure that financial institutions have adequate guidance in preparing for the year 2000 and in providing a level of assurance that such guidance is being followed; (4) further, regulators will likely face some tough decisions on the readiness of individual institutions as the millennium approaches; (5) GAO found that FDIC is taking the problem very seriously and is devoting considerable effort and resources to ensure that the banks it oversees mitigate year 2000 risks; (6) FDIC has been very emphatic in alerting banks to the year 2000 problem and has conducted a high-level assessment of the industry's year 2000 readiness; (7) despite aggressive efforts, FDIC still faces significant challenges in providing a high level of assurance that individual banks will be ready; (8) FDIC--as were the other regulators--was late in addressing the problem; (9) consequently, it is behind the year 2000 schedule recommended by both GAO and the Office of Management and Budget (OMB); (10) compounding this problem is that critical guidance, although under development, has not been released by the Federal Financial Institutions Examination Council (FFIEC) for banks and other financial institutions on contingency planning, assessing risks caused by corporate customers (borrowers), and assessing risks associated with third-party automated system service providers; (11) this guidance should have been provided earlier so that banks would have had more time to factor the guidance into their own assessments and plans; (12) additionally, FDIC's ability to report on individual banks' status in preparing for the year 2000 is limited by insufficient information being reported by bank examiners; (13) FDIC also needs to correct its internal systems used to support agency functions and has initiated efforts to do this; (14) FDIC is behind in assessing whether these systems are year 2000 compliant; and (15) although OMB guidance states that the assessment phase should have been completed in mid-1997, FDIC has not yet fully assessed its mission critical systems or established contingency plans in case systems repairs and replacements are not in place on time or do not work as intended.
<1. Background> The services and combatant commands both have responsibilities for ensuring servicemembers are trained to carry out their assigned missions. As a result, both the services and combatant commands have developed specific training requirements. <1.1. CENTCOM and Service Responsibilities> Combatant commanders and service secretaries both have responsibilities related to ensuring the preparedness of forces that are assigned to the combatant commands. Under Title 10 of the U.S. Code, the commander of a combatant command is directly responsible for the preparedness of the command to carry out its assigned missions. In addition, according to Title 10 of the U.S. Code, each service secretary is responsible for training that service's forces to fulfill the current and future operational requirements of the combatant commands. In addition, the Office of the Secretary of Defense has issued guidance for managing and developing training for servicemembers. Specifically, DOD issued one directive stating that the services are responsible for developing service training, doctrine, procedures, tactics, and techniques, and another requiring that training resemble the conditions of actual operations and be responsive to the needs of the combatant commanders. <1.2. Unit Commanders' Responsibilities> According to Joint Publication 1, unit commanders are responsible for the training and readiness of their units. Army and Marine Corps guidance also assigns unit commanders responsibility for certifying that their units have completed all required training and are prepared to deploy. Specifically, Army Regulation 350-1 states that unit commanders are responsible for the training proficiency of their unit and, when required, for certifying that training has been conducted to standard and within prescribed time periods. In addition, a Department of the Army Executive Order states that, for the reserve component, unit commanders, in concert with service component commands, certify completion of training, and the service component command (the Army National Guard or U.S. Army Reserve) validates units for deployment. Administrative Message 740/07 states that coordination of predeployment training is the responsibility of the unit commander and that all questions concerning the training should be vetted through the commander or his operations element. Further, unit commanders validate that their units are certified for deployment, doing so through a certification message that documents the extent to which deploying Marines have successfully completed predeployment training. Headquarters, Department of the Army Executive Order 150-08, Reserve Component Deployment Expeditionary Force Pre- and Post-Mobilization Training Strategy (March 2008). Marine Corps Order 3502.6, Marine Corps Force Generation Process (Jan. 26, 2010). <1.3. CENTCOM Training Requirements> Combatant commanders have wide-reaching authority over assigned forces. In this capacity, CENTCOM has established baseline theater entry requirements that include training tasks all individuals must complete before deploying to the CENTCOM area of operations. Specifically, these CENTCOM training requirements include minimum training tasks for both units and individuals. 
Required individual tasks include, but are not limited to, basic marksmanship and weapons qualification, high-mobility multipurpose wheeled vehicle (HMMWV) and mine resistant ambush protected (MRAP) vehicle egress assistance training, non-lethal weapons usage, first aid, counter-improvised explosive device training, and a number of briefings including rules of engagement. <1.4. Service Training Requirements> The services have established combat training requirements that their servicemembers must complete at various points throughout their careers. During initial entry training, recruits are trained on service tasks and skills, including basic military tactics, weapons training, and marksmanship. In addition, the services have annual training requirements that are focused on tasks such as crew-served weapons training, reacting to chemical and biological attacks, and offensive and defensive tactics. Prior to deploying overseas, servicemembers must also complete a set of service directed predeployment training requirements. These predeployment requirements incorporate the combatant commander s requirements for the area where the forces will be deployed. U.S. Army Forces Command and the Commandant of the Marine Corps have both issued training requirements for forces deploying to the CENTCOM area of operations or in support of operations in Iraq and Afghanistan. These documents also require that units complete a final collective event prior to deployment to demonstrate proficiency in collective tasks. <1.5. Collection and Dissemination of Lessons Learned> Lessons learned are defined as results from an evaluation or observation of an implemented corrective action that produced an improved performance or increased capability. The primary vehicle for formally collecting and disseminating lessons learned information is the after action report. Army and Marine Corps guidance require that units submit after action reports to the services respective lessons learned centers. Army Regulation 11-33 established its Army Lessons Learned Program to create an information sharing culture and a system for collecting, analyzing, disseminating, integrating, and archiving new concepts, tactics, techniques, and procedures. The regulation further assigned the Center for Army Lessons Learned (CALL) primary responsibility for the Army Lessons Learned Program. The Marine Corps established its Marine Corps Center for Lessons Learned (MCCLL) to provide a relevant, responsive source of institutional knowledge that facilitates rapid adaptation of lessons into the operating forces and supporting establishments. The Army and Marine Corps have both formal and informal approaches to collect and disseminate lessons learned information. Their formal approaches often rely on a wide network of MCCLL and CALL liaison officers at training centers and in Iraq and Afghanistan, but the centers also publish relevant information on their Web sites to make it widely available. The informal networks based on personal relationships between unit commanders, trainers, or individual soldiers and marines have also facilitated the sharing of lessons learned information. <1.6. Prior GAO Work> GAO has previously reported on combat skills training provided to nonstandard forces. In May 2008, we reported that the Air Force and Navy waived CENTCOM established training requirements without consistently coordinating with the command, so CENTCOM lacked full visibility over the extent to which all of its forces were meeting training requirements. 
We recommended that the Secretary of Defense direct the Office of the Secretary of Defense, Personnel and Readiness, in conjunction with the Chairman of the Joint Chiefs of Staff, develop and issue a policy to guide the training and use of nonstandard forces, to include training waiver responsibilities and procedures. DOD agreed with our recommendation, stating that it had work underway to ensure that the necessary guidance was in place for effective training of nonstandard forces. However, as of February 2010, it had not issued such guidance. <2. Army and Marine Corps Support Forces Receive Significant Combat Skills Training, but May Not Consistently Complete All Required Tasks> Although Army and Marine Corps support forces undergo significant training, they may not consistently or successfully complete all required training tasks prior to deploying. Both CENTCOM and the services have issued predeployment training requirements. However, some of CENTCOM s training requirements lack associated conditions and standards, and confusion exists over which forces the requirements apply to. In addition, the Army and Marine Corps have not included certain CENTCOM required tasks in their predeployment training requirements, and unit commanders can certify their units for deployment even if all the required individual and collective training tasks have not been successfully completed. <2.1. Army and Marine Corps Support Forces Receive Significant Combat Skills Training> The services provide combat skills training to their servicemembers, including support forces, at various points throughout their careers. During initial entry training, recruits are trained on service tasks and skills, including basic military tactics, weapons training, and marksmanship. In addition, servicemembers participate in annual training that is focused on tasks such as crew-served weapons training, reacting to chemical and biological attacks, and offensive and defensive tactics. Soldiers and marines also participate in combat skills training prior to deploying for any overseas operations. As a result, the predeployment combat skills training that support unit personnel receive should be viewed as a significant piece of their training to operate in an asymmetric environment, but not as their only training to operate in that environment. <2.2. Some of CENTCOM s Training Requirements Do Not Clearly Define Conditions and Standards, and Confusion Exists over to Whom the Requirements Apply> CENTCOM has issued a list of training tasks that all individuals assigned to its area of responsibility, including support unit personnel, must complete before deploying in support of ongoing operations in Iraq and Afghanistan. While the CENTCOM training requirements outline tasks that must be trained, the command does not always clearly define the conditions and standards to which all of the tasks should be trained. Task conditions identify all equipment, tools, materials, references, job aids, and supporting personnel required to perform the task, while standards indicate the basis for judging effectiveness of task performance. For some training tasks, CENTCOM includes specific guidance. For example, weapons qualification requirements include a detailed discussion of when the qualification must take place, equipment that must be worn, and range distances. For some training tasks, however, CENTCOM does not provide any conditions or standards. 
For example, as noted above, CENTCOM requires that all deploying forces complete HMMWV rollover training, but it does not specify how the training should be conducted. Consequently, service training has varied within and among the Army and Marine Corps. At one Marine Corps site, training officials explained that HMMWV rollover training could be completed in less than a half hour. On the other hand, trainers at one Army training site noted that their HMMWV rollover training consisted of a full day of training that included a classroom overview and hands-on practice in a simulator with both day and night scenarios, pyrotechnics to simulate improvised explosive devices, and the incorporation of casualty evacuation procedures. For other training tasks, the CENTCOM requirements contain only general guidance on training conditions. For example, for some tasks such as first aid and improvised explosive device training, CENTCOM requires that classroom training be followed up with practical application during field training that mimics the harsh, chaotic, and stressful conditions servicemembers encounter in the CENTCOM area of operations. However, the requirements do not identify the materials or training aids to be used in conducting the training, and they do not indicate the standard for successfully completing the training. While service officials acknowledged that, as outlined in Title 10 of the U.S. Code, it is their responsibility to train servicemembers, they stated that CENTCOM's list of minimum theater entry training tasks was unclear, which resulted in varying service interpretations of the tasks. Furthermore, CENTCOM training requirements are communicated to the services in a document that also outlines training requirements for joint sourced forces. Service officials have expressed confusion over these training requirements and the extent to which they apply to all forces, given that the tasks are listed in a document that focuses primarily on unit training requirements for joint sourced forces. Service officials also reported that changes to the requirements have added to the confusion over training requirements and priorities. While the latest set of CENTCOM requirements contained in the joint sourced forces document was issued on May 7, 2009, ground commanders have issued several requirements since then. For example, in January 2010, the Commander, U.S. Forces-Afghanistan, issued an order that contained additional training requirements for all forces deploying to Afghanistan. However, CENTCOM officials said that these Afghanistan-specific requirements had not yet been validated. When CENTCOM validates new requirements, it promulgates them in several different ways, including in updates to the training requirements contained in the joint sourced forces document, in individual requests for forces, or by CENTCOM messages. <2.3. The Services Are Providing Training on Most of CENTCOM's Required Tasks, but Have Not Included Certain Tasks> While the Army and Marine Corps have provided most of the CENTCOM-required training, in some cases they have not provided training on the specific tasks called for by CENTCOM. For example, neither service has provided MRAP vehicle rollover training to all of its support forces. MRAP vehicle rollover training has been identified as a key combat skill for deploying forces. 
MRAP vehicles have much larger profiles and weights than the vehicles they replaced in theater and, as a result, pose a greater risk of tipping or rolling over when negotiating slopes, trenches, ditches, and other obstacles. Further, rollover risks are higher in Afghanistan due to uneven terrain and sub-par road conditions. A November 2009 DOD study on MRAP vehicle rollovers noted that since 2007, 178 MRAP vehicle mishaps involved some type of rollover, resulting in a total of 215 injuries and 11 fatalities. The study recommended more practice on rollover drills, and CENTCOM has required this training for all deploying forces. According to Marine Corps officials, the Marine Corps is prioritizing MRAP vehicle rollover training, and current Marine Corps guidance requires this training only for marines expected to utilize MRAP vehicles. However, use of these vehicles in theater has been increasing, and officials at I Marine Expeditionary Force explained that they are trying to train deploying forces to meet the MRAP vehicle rollover training requirement. A rollover trainer was originally scheduled to arrive at their training area in February 2010, but the delivery has been delayed and there is currently no projected delivery date. Army officials explained that they have attempted to meet the CENTCOM requirement, but that a lack of MRAP rollover trainers at the Army's training bases in the United States has prevented them from fully training all forces on this task prior to deployment. In the meantime, some support forces are getting required training after they deploy, but Army officials were unable to confirm whether all forces were getting the required training. Moreover, neither the Army nor the Marine Corps has provided non-lethal weapons training to all deploying support forces. CENTCOM requires that all individuals deploying to its area of responsibility complete training in non-lethal weapons usage, planning, and understanding of non-lethal weapons capability sets. DOD reported in December 2009 that operational experience dictates the need for forces to be trained in non-lethal weapons and that current operations have highlighted the imperative for the discriminate use of force to minimize civilian casualties and the integral role that non-lethal weapons capabilities provide in achieving that objective. In that report, DOD noted that non-lethal weapons training has been mandated by CENTCOM for all deploying forces and that non-lethal weapons training must be further integrated into service training. Further, GAO has previously reported that DOD needed to provide clearer weapons employment guidance for non-lethal weapons and incorporate this guidance into training curricula. Due to confusion over which forces CENTCOM's joint sourced training requirements apply to, Marine Corps officials explained that they do not believe the non-lethal weapons training requirement applies to them and do not require this training. The Army requires non-lethal weapons training only for combat arms units. Army officials explained that they do not have sufficient resources to train all deploying forces, including support forces, on non-lethal weapons, but have not sought formal waivers for this task. <2.4. Unit Commanders Can Certify Units for Deployment without Successfully Completing All Tasks in Their Final Collective Training Event> According to Joint Publication 1, unit commanders are responsible to their respective Service Chiefs for the training and readiness of their units. 
Service guidance emphasizes this responsibility, assigning unit commanders responsibility for the coordination and completion of predeployment training and for validating that servicemembers are certified for deployment. Before forces deploy, Army and Marine Corps guidance requires that units complete a final collective training event. These events can vary based on unit type, assigned mission, and the theater of operations and provide an opportunity for the unit to demonstrate proficiency in collective tasks. While service guidance requires that units undergo a final collective training event, the guidance does not specifically require that units successfully complete the training before commanders can certify their units for deployment. Army and Marine Corps officials explained that if a support unit does not demonstrate combat skills proficiency during the final event, when and where remediation is to occur is left to the discretion of the individual unit commander and can be completed in theater after deploying. For example, a Marine Corps combat logistics battalion that deployed in January 2010 was assessed fully trained in its logistics mission, but not proficient in basic warrior tasks during its final collective training event at Exercise Mojave Viper. Specifically, the unit was not proficient in fifteen of sixteen warrior tasks, including reacting to ambush, escalation of force, individual continuing actions, and casualty evacuation procedures. The Marine Corps logistics training officer who conducts the final unit after action reviews for combat logistics battalions explained that poor ratings on basic warrior skills were not uncommon for support units during their final collective training event. While the unit conducted remedial training on casualty evacuation procedures prior to deployment, it did not conduct remedial training in the other areas, since the unit had only 15 days to complete both the required training it was unable to accomplish prior to Exercise Mojave Viper and any remedial training, and the unit deployed on time. Service officials explained that it is the responsibility of unit commanders to exercise judgment in assessing whether the unit has the collective skills needed to accomplish its mission. However, without visibility over the completion of remediation, Army and Marine Corps support forces may not successfully complete all CENTCOM- or service-required training tasks prior to deploying. <3. CENTCOM and the Services Lack Complete Information on Servicemembers' Completion of Required Combat Skills Training> The Army and Marine Corps take steps to document the completion of required combat skills training tasks, but face inconsistencies in the way the services track completion of training. While the Army has a service-wide system of record for tracking the completion of training requirements, the system is not being fully utilized. Furthermore, the Marine Corps lacks a service-wide system for tracking the completion of training requirements. Instead, both services rely on paper rosters and stand-alone spreadsheets and databases to track training completion. In addition, even though CENTCOM requires that all forces deploying to its area of responsibility complete a set of required training tasks, the command lacks a clearly defined process for waiving individual training requirements if they cannot be met. <3.1. 
Unit Commanders Lack Full Visibility over Completion of Required Training Tasks Due to Inconsistent Service Tracking Systems> According to Joint Publication 1, unit commanders are responsible to their respective Service Chiefs for the training and readiness of their units. Service guidance emphasizes this responsibility, assigning unit commanders responsibility for coordinating and completing predeployment training and validating that servicemembers are ready for deployment. Higher level decision-makers, including the higher headquarters elements of the units in training, are then responsible for validating the unit commanders assessments. The Army and Marine Corps take slightly different approaches to validating units for deployment, particularly as it applies to the Army s reserve component. While the Army and Marine Corps active components rely heavily on unit commanders to validate units and higher headquarter elements, such as brigade and division commanders for the Army s active component and the Marine Logistics Groups and Marine Expeditionary Forces for the Marine Corps, to validate the commander s assessment, the Army s reserve component relies heavily on a validation board that convenes at the completion of a unit s training at a mobilization training center. However, according to Army officials, in the end, the final decision is largely based on individual unit commanders assessments of the readiness of their units. While the Army issued guidance requiring tracking of training completion through a servicewide system, the system has not been fully utilized. In December 2009, the Army updated a training regulation and required that all individual and collective training tasks be documented for soldiers through the Digital Training Management System (DTMS) in order to better standardize training. Army units were required to report completion of certain requirements, such as suicide prevention classes and the Army physical fitness test tasks, in DTMS prior to the revision of this regulation. However, the revised regulation designates DTMS as the only authorized automated system for managing unit training and requires units to track each individual soldier s completion of all required training tasks, to include all predeployment individual and collective training. The regulation was effective as of January 18, 2010, and states that DTMS will be able to provide units with the ability to plan, resource, and manage unit and individual training. However, as of February 2010, the system was not fully operational, and while active component units were able to enter all of their data into DTMS, reserve component units were not yet able to do so because of a lack of interfaces among existing tracking systems and DTMS. The Army has not yet developed a detailed schedule with milestones and resource requirements for fully developing the capability for reserve component units to input data. Neither has it established milestones for active and reserve component units to enter data into the system. Furthermore, the guidance does not assign responsibility for ensuring compliance and does not make it clear whether previously completed training needs to be entered into the system or only training that is completed after the January 18, 2010, implementation date. The Army s active and reserve components have both begun using DTMS, but DTMS is not being fully or consistently used by either component. U.S. 
Army Forces Command officials reported that the capabilities of DTMS are fully operational among the active component, but that units have not consistently used the system. During our discussions with commanders from four active component battalions in February 2010, we found that the system, while operational, was not being fully utilized. We noted that the battalions used DTMS to different degrees. Specifically, two commanders said that their battalions relied on DTMS to track training schedules and some tasks, such as weapons qualification and physical fitness, but they said that their battalions did not track completion of all required tasks down to the individual soldier level. The other two battalion commanders noted that they did not use DTMS to track completion of any training tasks. Overall, none of the four battalions used DTMS the way the Army intended it to be used, but emphasized interest in incorporating the system into how they track training. First Army officials reported that DTMS is not fully operational among the reserve component. Army officials reported that not all of the individual systems the reserve component used to track completion of training were interchangeable with DTMS, and as such, the system was not fully operational. Moreover, in our discussions with unit commanders from five Army Reserve units and one National Guard unit in November 2009, we noted that the system was not being utilized. In fact, none of those commanders were familiar with DTMS despite the fact that the Army had required the entry of suicide prevention classes and the Army physical fitness test tasks into DTMS by September 2009. Instead of using DTMS, Army support units rely on tools such as paper rosters and stand-alone spreadsheets and databases to track completion of individual and unit training, and the tools used are not consistent among units and commands. For the reserve component, First Army has established an Excel spreadsheet, referred to as the Commander s Training Tool, to track completion of individual training tasks. According to officials, the tool, intended to serve as an in-lieu-of system until DTMS reached full operational capability, is used as a model for tracking systems at the individual mobilization training centers. Specifically, officials at one mobilization training center told us that they had developed an individualized tracking system based on the Commander s Training Tool, but had tailored the system to meet the needs of the individual command. Within the active component, unit commanders we spoke with noted that they also rely on tools such as paper rosters and stand-alone spreadsheets and databases to track completion of individual and unit training at the battalion level and below, providing regular status updates to the brigade and division commanders. Reliance on various inconsistent tracking mechanisms instead of the servicewide DTMS limits the visibility unit commanders have over completion of required training tasks. The Marine Corps also uses inconsistent approaches to track completion of required training and relies instead on paper rosters and stand-alone spreadsheets for tracking. Specifically, 2nd Marine Logistics Group officials said that individual units are responsible for tracking completion of individual training and that this tracking is completed through large Excel spreadsheets, but that the information is regularly reviewed by the Marine Logistics Group. 
A commander from a support unit within the 2nd Marine Logistics Group noted that training was tracked and reviewed using Excel spreadsheets. Further, the unit s operations officer noted that within the battalion, individual training is tracked at the company level, and once a week, the information is provided to the battalion operations officer, who then briefs the battalion commander on overall percentages of marines who have completed the required tasks. We also spoke with officials from the 1st Marine Logistics Group who noted that the individual units are responsible for tracking the completion of both individual and unit training requirements. While the 1st Marine Logistics Group provides units with a summary level spreadsheet to report the status of the unit training, the individual units are responsible for tracking the completion of individual training and the Marine Logistics Group does not track the completion of individual training. Officials from the 1st Marine Logistics Group noted that unit operations officers have visibility over individuals and their respective training, and this information is rolled up and provided at a high level to the Commanding Officer. A commander of a support unit we spoke with noted that his unit used the Excel spreadsheet provided by the 1st Marine Logistics Group to track completion of individual training requirements, with individual tracking being done at the company level. Further, sometimes when marines transfer among units, documentation of completed training tasks is not provided to the receiving unit. For example, a support battalion operations officer we spoke with noted that the battalion received many marines throughout the deployment process, but some marines arrived without documentation of the training they had previously completed. In the absence of a consistent approach to track completion of training tasks, the Marine Corps relies on inconsistent tracking mechanisms among individual units and commands. These inconsistent tools limit the visibility unit commanders have over completion of required training tasks, particularly when marines are transferred from one unit to another for deployment purposes. <3.2. CENTCOM Lacks a Process for Waiving Training Requirements, Limiting the Command s Visibility over Whether Forces Are Completing Required Training> While CENTCOM has issued a consolidated list of minimum theater entry requirements for all individuals deploying to its area of responsibility, it has not issued overarching waiver guidance or established a formal process for waiving each of these requirements (e.g., basic marksmanship and weapons qualification, law of land warfare, and HMMWV and MRAP vehicle egress assistance training) in circumstances where the requirements are not going to be met. However, CENTCOM officials provided an example of a case where waiver requirements for one specific task were outlined. In September 2007, the command issued a message requiring HMMWV egress assistance training for all forces deploying to its area of responsibility. This requirements message included steps the services needed to take to waive the requirement in the event that the training could not be completed by 100 percent of the deploying personnel before deployment. However, a similar waiver process is not outlined for other required CENTCOM tasks. Officials from both the Army and Marine Corps noted that there are instances where servicemembers are not completing all of the required training. 
Specifically, when we spoke to unit commanders and unit training officers, we were told that some personnel were not meeting these individual training requirements and that units were not requesting formal waivers from CENTCOM or communicating this information to CENTCOM. For example, an operations officer from a Marine Corps combat logistics battalion reported that some of the unit s deploying marines would not complete their required individual training tasks, such as the CENTCOM-required MRAP vehicle egress training. Moreover, the commander of an active component Army support battalion noted that in validating his unit for deployment, he did not focus on completion of individual tasks, instead assessing the unit s ability to complete tasks collectively. As such, the unit commander s decision was not based on whether all individuals completed all of the required individual training tasks. There is no clearly defined process for waiving these training requirements, and there is no clear or established method for the services to report to CENTCOM that some servicemembers are not completing CENTCOM s required training. As a result, CENTCOM cannot determine if additional training is required following arrival in theater. In May 2008, we reported that the Air Force and Navy implemented procedures for waiving CENTCOM-required training without fully coordinating with the CENTCOM headquarters office responsible for developing the training requirements. Specifically, we reported that Navy nonstandard forces that completed Navy combat skills training more than 90 days prior to their deployment would normally have to update their training by repeating the course, but that they could waive this requirement if they completed relevant combat skills training that significantly exceeded what they would have received in the Navy course. We further reported that the Air Force granted waivers for combat skills training on a case-by-case basis. At the time, CENTCOM officials noted that the services had not consistently coordinated these waiver policies with their command. Therefore, CENTCOM did not have full visibility over the extent to which its assigned forces had met its established training requirements. At the time, we recommended that the Office of the Secretary of Defense develop a policy to guide the training and use of nonstandard forces, and the policy include training waiver responsibilities and procedures. In February 2010, an official from the Office of the Secretary of Defense reported that they planned to issue a revised policy on non-standard forces by the end of the year, and that the revised guidance would address the issue of granting waivers. Furthermore, during our review, we learned that CENTCOM s lack of visibility applies to a larger population of forces than just the Air Force and Navy nonstandard forces, instead applying to all forces deploying to the CENTCOM area of responsibility. <4. The Army and Marine Corps Have Made Significant Changes to Combat Skills Training as a Result of Lessons Learned, but Information Concerning These Changes Is Not Being Consistently Shared> The Army and Marine Corps have made significant changes to their combat skills training for support forces as a result of lessons learned, but the services have not uniformly applied lessons learned. Both the Army and Marine Corps require the collection of lessons learned information, and each service relies on formal and informal collection methods to obtain relevant information. 
While it can take time to incorporate lessons learned into service doctrine, service training facilities are often able to utilize lessons learned to adjust their training almost immediately. However, training facilities do not consistently share information obtained as a result of lessons learned or share changes made to training as a result of lessons learned among other facilities, resulting in servicemembers being trained inconsistently. As such, support forces have been deploying for similar missions with different training. <4.1. The Army and Marine Corps Have Incorporated Changes from Lessons Learned into Training and Deployment Preparation> The Army and Marine Corps collect lessons learned information through both formal and informal processes, and they have made significant changes to their training and deployment preparations as a result of this information. Army and Marine Corps doctrine require the formal collection of lessons learned and designate after action reports as the primary vehicle for this formal collecting of lessons learned information. Trainers and units noted that they prepare after action reports at several different times including after final collective training exercises and during and after deployment. Depending on the complexity of the deficiency that is addressed in an after action report and the resources required to address the deficiency, it can sometimes take considerable time to see actions that result from formal after action reports. However, after action reports have resulted in changes to the way the services train and deploy their forces, as the following examples illustrate. In July 2009, the Marine Corps officially established and began training Female Engagement Teams, small detachments of female marines whose goal was to engage Afghan women. The concept of a Female Engagement Team was first introduced in February 2009 as part of a special operations mission in Afghanistan. An after action report emphasizing the need for forces to be organized and trained to engage Afghan women was submitted in response to an incident in May 2009, in which the enemy escaped dressed as women because male Marines were not allowed to engage Afghan women. As a result, the Marine Corps expanded the use of the Female Engagement Team concept, developing an actual program and implementing a training plan. In December 2009, U.S. Forces-Afghanistan released a memorandum that emphasized the need for increased training and use of Female Engagement Teams. Prior to that time, the use of Female Engagement Teams was primarily a Marine Corps effort. However, the memorandum stated that all services should create these teams, and since the memorandum was issued, officials noted that the Army has begun to assess how it can best meet the needs in theater for these teams with its available personnel. In November 2009, the 1st Marine Logistics Group established and conducted a new predeployment training course for support forces that focused on combat logistics patrols. The course was developed in response to at least two different units after action reports, one submitted by a unit returning from Afghanistan and another submitted by a unit undergoing final predeployment training, which highlighted the need for leaders of support units to receive additional training and experience with combat patrols. 
The redeploying unit's after action report identified shortcomings in how support units conducting convoy missions outside of forward operating bases were trained, and the after action report from the unit undergoing final training identified deficiencies in the amount of time spent on training. The new 5-day course, the Combat Logistics Patrol Leaders Course, focuses on providing support units with the skills they need to conduct combat logistics patrols, which require support forces to leave protected areas where they can become targets for the enemy, as opposed to convoy missions conducted solely inside protected forward operating bases. The services also rely on lessons collected through informal means when adjusting predeployment training. Informal collection methods include obtaining feedback from units currently deployed in Iraq and Afghanistan through informal discussions, observations made by trainers or deploying unit leaders during brief visits to theater, and informal conversations among personnel within service commands and training organizations. Army and Marine Corps officials stated that there is regular communication between personnel who are deployed in theater and the personnel who are preparing to deploy to replace them. Furthermore, they said that the deployed personnel often provide vital information regarding the current conditions in Iraq and Afghanistan, which the deploying unit commander and trainers can use to make immediate adjustments to training. Much like changes made as a result of formal lessons learned, the informal collections have also resulted in changes to the way the services train and deploy their forces, as the following examples illustrate. An Army installation established an Individual Replacement Training program to provide individual replacement soldiers with the combat skills needed to join their parent units in theater. Army officials noted that approximately 2 years ago, certain units were tasked to train these individual replacements on a 4- to 5-month rotating basis. However, the units that conducted the training were unable to keep pace with the flow of individual replacements because of their high pace of operations. Based on feedback obtained from the units and observations by unit leadership, Army civilians were assigned responsibility for the training, which resulted in the Individual Replacement Training program. As of 2009, the Individual Replacement Training program had trained approximately 3,400 soldiers, and combat skills have been trained more consistently. Since improvised explosive devices are commonly used against military forces in Iraq and Afghanistan, training regarding the defeat of these devices is a CENTCOM predeployment training requirement and was cited as a key focus at the training facilities we visited. Officials we spoke with explained that improvised explosive devices pose a serious threat to military forces because the types of devices the enemy uses constantly change. While training facilities have incorporated the most recent improvised explosive device defeat tactics into their training based on information provided by the Joint Improvised Explosive Device Defeat Organization, they also obtain and immediately incorporate the tactics provided informally by individuals in theater. <4.2. 
The Services Would Benefit from Sharing Changes Made as a Result of Lessons Learned> Trainers at the sites we visited told us that they had made adjustments to training based on both informal and formal lessons learned information that they had received. However, they also told us that they did not consistently share information about the adjustments they had made with other sites that were training forces on the same tasks, and even in cases where the information was shared, there were still some differences in the training that was being provided to deploying support forces. For example: One site significantly enhanced its HMMWV rollover training based on informal feedback. Specifically, the training was enhanced to include hands-on practice in a simulator with both day and night and land and water scenarios, as well as an emphasis on new vehicle features, such as the dual release seatbelts, when exiting the vehicle in an emergency. While trainers from this site provided information about these enhancements to some of their counterparts at other training facilities, HMMWV rollover training varies significantly from site to site. At one of the sites we visited, HMMWV rollover training consisted simply of a short demonstration. At one training site we visited, trainers were teaching Army Reserve support forces who had not been mobilized specific tactics for entering and clearing buildings, while other trainers at the same site were teaching soldiers who had been mobilized different tactics for the same task. Officials we spoke with stated that these differences in tactics are a result of a lack of sharing of information among trainers. Specifically, the First Army trainers who were training soldiers after mobilization were not consistently sharing information with U.S. Army Reserve trainers who were training soldiers prior to mobilization. Since one of the primary purposes for conducting repetitive training is to develop an intuitive response to certain circumstances, repetitive training that employs different tactics may not be as effective as repetitive training that uses consistent tactics. Although officials at the training facilities we visited note that they have made efforts to share some of the information obtained and subsequent changes made as a result of lessons learned with their counterparts at other training facilities, the sharing has been inconsistent. According to a Chairman of the Joint Chiefs of Staff Instruction, organizations participating in the joint lessons learned program are to coordinate activities and collaboratively exchange observations, findings, and recommendations to the maximum extent possible. While the services have formal and informal means to facilitate the sharing of lessons learned information, trainers at the various training sites are not consistently sharing information about the changes they have made to their training programs. As a result, servicemembers are trained inconsistently and units that are deploying for similar missions sometimes receive different types and amounts of training. <5. Conclusions> U.S. forces deployed to CENTCOM s area of responsibility, including support forces, are operating in an environment that lacks clear distinctions between the front lines and rear support areas. As a result, support units such as military police, engineers, and medical personnel may be exposed to hostile fire and other battlefield conditions. 
The Army, Marine Corps, and CENTCOM continue to emphasize the importance of training and have identified specific tasks to be accomplished as part of predeployment training that they believe will better prepare forces to operate in the current operational environment. While forces clearly undergo significant training, clarifying CENTCOM's training requirements, including more clearly defining the specific tasks to be completed by different types of forces and the conditions and standards for the content of training, would enhance the services' ability to ensure that forces are consistently trained on required tasks. Furthermore, in order to make informed decisions on deploying forces and assigning missions once deployed, the services and CENTCOM need information on the extent of training completed by forces prior to deployment. Inconsistencies in existing approaches for documenting the completion of training, and the lack of a formal process for granting waivers to training and communicating waiver decisions, hamper the ability of the services and CENTCOM to get a clear picture of which units or individuals have been fully trained for certain missions and whether any capability gaps might exist upon the forces' arrival in theater. Lastly, the services are making significant adjustments in training regimens based on lessons learned captured from actual operational experiences. However, additional efforts to share information on these adjustments among and within training facilities would provide greater assurance that the training is consistent. <6. Recommendations> To improve the consistency of training, we recommend that the Secretary of Defense direct the commander, U.S. Central Command, to (1) clarify which of the command's mandatory training requirements apply to all forces deploying to CENTCOM's area of responsibility and which requirements apply only to joint sourced forces, and clearly communicate this information to the services, and (2) clearly outline the conditions under which CENTCOM's mandatory training requirements are to be accomplished and the standards to which the tasks should be trained. We also recommend that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to include all of CENTCOM's minimum training requirements in their service training requirements. To improve commanders' visibility over the extent to which support forces are completing required combat skills training, we recommend that the Secretary of Defense direct the Secretary of the Army to fully implement the service's system of record for tracking training completion, the Digital Training Management System, by (1) developing a schedule for fully implementing the system, including the work to be performed and the resources to be used, and (2) including the actual start and completion dates of work activities performed so that the impact of deviations on future work can be proactively addressed. We further recommend that the Secretary of Defense direct the Commandant of the Marine Corps to establish and fully implement consistent approaches for documenting the completion or waiving of combat skills training requirements. We are also broadening our prior recommendation on waiver oversight and recommending that the Secretary of Defense direct the commander, U.S. Central Command, to establish a formal process for waiving training requirements for all deploying forces, not just nonstandard forces, and to communicate this process to the services. 
To maintain training consistency as training evolves in response to ongoing operations, we recommend that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to develop a method for consistently sharing information concerning changes that are made to training programs in response to formal or informal lessons learned. <7. Agency Comments and Our Evaluation> In written comments on a draft of this report, DOD concurred or partially concurred with our recommendations. Specifically, DOD concurred with our six recommendations related to the definition, completion, and waiver of training requirements, and sharing information on changes to training based on lessons learned. DOD stated that it has inserted draft language into its 2010 update to the Guidance for the Development of the Force and its draft DOD Instruction 1322.mm entitled Implementing DOD Training to address our recommendations. DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army to fully implement the Digital Training Management System (DTMS) the service s system of record for tracking training completion by (1) developing a schedule for fully implementing the system, including the work to be performed and the resources to be used, and (2) including the actual start and completion dates of work activities performed so that the impact of deviations on future work can be proactively addressed. In its comments, DOD stated that the Army s training management system of record has been directed to be implemented and that in order to fully leverage this capability, it will take time, training and resources to extend the system to the entire organization. Instead of stipulating DTMS, DOD requested that GAO address (in our recommendation) more generally the Army s training management system of record. We recognize that it will take time for the Army to fully implement the system, but also note that it has not set a specific schedule, with key elements, such as work to be performed, resources needed, and milestones for start and completion of activities, which we believe will add discipline to the process, help guide its efforts, and help the Army to plan for any schedule deviations. We recognize that the Army continues to refine DTMS and that changes could occur. However, at this point in time, Army guidance specifically characterizes DTMS as the Army s training management system of record; therefore, we do not agree that our recommendation should be adjusted. Furthermore, DOD stated that some findings in the draft report are partially accurate, but that a number of points of information and clarification related to DTMS provided by the Department of the Army do not appear in the findings. For example, DOD noted that ongoing efforts by the Army designed to improve DTMS will expand existing functionality and interfaces to enhance and broaden operational use of the application by Army units. It noted the Army has a review process that, among other things, monitors progress of DTMS implementation and allows for the establishment and approval of priorities for developing interfaces with other existing legacy systems and manual processes. In addition, DOD stated that the report cites that DTMS is not fully operational because all interfaces are not completed to the satisfaction of a subordinate organization, which, in DOD s view, does not drive the level of program functionality or define the point in time when the system is fully operational. 
DOD noted that the inclusion of updated interfaces enables data input from other sources and that the basic functionality of DTMS is in place, operational, and available for use by units across the Army. DOD also noted some Army units are still using spreadsheets and/ or legacy systems to track individual training rather than DTMS, but that this is a function of compliance, not operational capability or the availability of system interfaces. It further stated that the Army is currently working to institute methods to improve compliance as outlined in AR 350-1, the Army s regulation that guides training. We recognize that the basic functionality of DTMS exists and that the Army is continuing to take steps to implement DTMS, improve the interfaces between DTMS and legacy systems and processes, and improve overall compliance with the requirement for units to report in DTMS. However, our work suggests that it is not only a lack of compliance preventing full utilization of the system, but also a lack of awareness among all of the operational units that DTMS even exists. For example, within the reserve component, some unit commanders we interviewed were unfamiliar with DTMS or that they were required, by Army guidance, to use the system to report training completion. Further, while we recognize interfaces exist, our work shows they are not fully mature to the point where they are compatible with existing tracking systems, thereby limiting the ability of the reserve component to fully use DTMS as intended. DOD further noted that the report infers that DTMS could or should be the source for CENTCOM and the Army to certify and/ or validate unit training for deployments, but due to it not being fully utilized, the completion of combat skills training could be in question. DOD explained that DTMS is a training management system, and it is the responsibility of Commanders and Army Service Component Commands to certify and validate units. As stated in our report, we recognize that commanders and the service component commands are responsible for the certification and validation of units for deployment. However, in order to be more fully informed about the training and readiness status of units before making decisions about deployments, those making these decisions need visibility over the completion of the combatant command and service pre-deployment training requirements. Currently, DTMS does not provide unit commanders or service component commands with this type of visibility, and therefore, these individuals and commands must rely on the tracking mechanisms we outlined in this report when certifying and validating units, and these tracking mechanisms are not always complete or consistent. The full text of DOD s written comments is reprinted in appendix II. We are sending copies of this report to the Secretary of Defense. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
Appendix I: Scope and Methodology To assess the extent to which Army and Marine Corps support forces are completing required combat skills training, we reviewed combatant commander and service individual and unit predeployment training requirements, including CENTCOM's Theater Entry Requirements, the U.S. Army Forces Command's Predeployment Training Guidance for Follow-on Forces Deploying In Support of Southwest Asia, and Marine Corps Order 3502.6, Marine Corps Force Generation Process. To determine if the services were fully addressing the CENTCOM minimum requirements, we compared the CENTCOM minimum training requirements to the Army and Marine Corps minimum requirements, making linkages where possible and obtaining service explanations when linkages did not appear to exist. We also reviewed policy documents on service training, such as the services' common skills manuals and training programs of instruction. Additionally, we interviewed and analyzed information from officials responsible for developing and implementing training requirements at CENTCOM, Department of the Army Training Directorate, U.S. Army Forces Command, First Army, U.S. Army National Guard, U.S. Army Reserve Command, Marine Corps Training and Education Command, and Marine Forces Command. Lastly, we observed support force training at four of the Army's and Marine Corps' largest training facilities: Fort Dix, Camp Lejeune, Camp Pendleton, and Twentynine Palms Marine Corps Base. At the training sites, we interviewed and collected various training-related documents from Army and Marine Corps active and reserve component units participating in predeployment training, as well as training command officials, on the implementation of service training guidance. We also obtained information from Army active component support forces stationed at Fort Hood. To assess the extent to which the services and Central Command have information to validate the completion of required combat skills training, we reviewed Army and Marine Corps policies on training, including Army Regulation 350-1, which outlines requirements for servicewide tracking through the Digital Training Management System, and Marine Corps Order 3502.6, Marine Corps Force Generation Process. We also coordinated with the U.S. Army Audit Agency regarding its ongoing efforts in reviewing the Digital Training Management System. We interviewed service headquarters officials to discuss the processes the services use to track completion of training requirements. We reviewed Joint Publication 1 and other joint and service policies that document the role and responsibilities of unit commanders in tracking and reporting completion of training requirements. We interviewed Department of the Army Training Directorate, Marine Corps Training and Education Command, U.S. Army Forces Command, Marine Forces Command, First Army, and U.S. Army Reserve Command officials and reviewed documents from these commands, which are involved in the process of tracking the completion of combat skills training. Additionally, we interviewed an Army training command and the 1st, 2nd, and 4th Marine Corps Logistics Groups to discuss the processes used to track completion of training requirements at the unit level. We reviewed the means these organizations use to document the extent to which servicemembers were completing required training: paper records, automated spreadsheets, and databases. 
We further interviewed thirteen unit commanders of units preparing to deploy or returning from deployment to identify the individual processes being used to track completion of training requirements. Lastly, we interviewed and obtained information from officials representing CENTCOM, Army and Marine Corps headquarters, and the Army and Marine Corps force providers and training commands to discuss the processes the services use to waive service and combatant command training requirements. We also reviewed past related GAO reports regarding the tracking and waiving of training requirements. To assess the extent to which the Army and Marine Corps have applied lessons learned from operational experiences to adjust combat skills training for support forces, we reviewed service policies on the collection and dissemination of lessons learned, specifically Army Regulation 11-33 for the Army Lessons Learned Program and Marine Corps Order 3504.1 for the Marine Corps Lessons Learned Program and the Marine Corps Center for Lessons Learned. These policies, which establish the services' lessons learned centers, also require the collection of after action reports. Further, we reviewed joint guidance to determine whether requirements existed for the training facilities and services to collaborate and share lessons learned information. We interviewed and obtained information on the collection and implementation of lessons learned from officials representing the Center for Army Lessons Learned and the Marine Corps Center for Lessons Learned. We also interviewed lessons learned liaisons, training command officials, trainers, and officials responsible for developing unit training plans at five of the Army's and Marine Corps' largest training sites: Fort Hood, Fort Dix, Camp Lejeune, Camp Pendleton, and Twentynine Palms. Our discussions with officials from the lessons learned centers and the training facilities covered the use of various lessons learned to alter and improve predeployment training; the types of products the centers create and distribute; and the extent to which trainers shared the information among training sites. Based on these discussions with lessons learned officials, we identified and reviewed a nongeneralizable sample of the formal lessons learned reports and handbooks that applied specifically to training for support forces. We also reviewed past related GAO and DOD reports regarding lessons learned. To gain insight into support forces' perspectives on completion of combatant command and service combat skills training requirements, we conducted discussions with five Army Reserve and one Army National Guard support units (military intelligence, movement control, combat camera, medical, and human resources) located at the combined pre- and post-mobilization training center at Fort Dix, New Jersey, and with three active component Marine Corps combat logistics battalions from the two Marine Corps Divisions located in the continental United States that were preparing to deploy to either Iraq or Afghanistan, as well as with four of Fort Hood's active component Army support battalions that had recently returned from deployment. To conduct these discussion sessions, we traveled to one Army installation and three Marine Corps installations in the continental United States from August 2009 through December 2009 and conducted telephone discussions with representatives from one active duty Army installation in February 2010. 
In selecting units to speak with, we asked the service headquarters and force providers to identify all support units that would be in pre-mobilization or predeployment training during the time frame of our visit. The basic criterion used in selecting these units was that they were Army or Marine Corps support units participating in pre-mobilization or predeployment training and preparing to deploy to, or recently redeployed from, either Iraq or Afghanistan. Thus, our selection was limited because the time frame was narrow. Once units were identified, we spoke with the unit command elements and senior enlisted servicemembers from nine support units that were available at the individual sites we visited. Overall, we spoke with Army and Marine Corps support units preparing to deploy to Iraq and Afghanistan, and within these units, some servicemembers who had previously deployed to Iraq or Afghanistan. We also spoke with four available active component Army support unit representatives who had recently returned from Iraq. Topics of discussion during the sessions included development and implementation of unit training plans, verification of training completion, and equipment and manning challenges that affect training. We also administered a short questionnaire to participants in the senior enlisted discussion sessions to obtain their feedback on the combat skills training their unit received. Comments provided during the discussion groups, as well as on the questionnaire, cannot be projected across the entire military community because the participants were not selected using a generalizable probability sampling methodology. To validate information we heard in the discussion groups, we interviewed each unit's higher headquarters, where available, as well as officials from the training commands and service headquarters and force providers. Table 1 outlines all of the organizations we interviewed during the course of our review. We conducted this performance audit from August 2009 through February 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> Sharon L. Pickup, (202) 512-9619 or [email protected]. <9. Acknowledgments> In addition to the contact named above, key contributors to this report were Michael Ferren (Assistant Director), Susan Ditto, Lonnie McAllister, Terry Richardson, Michael Silver, Christopher Watson, Natasha Wilder, Erik Wilkins-McKee, and Kristy Williams.
Why GAO Did This Study In conventional warfare, support forces such as military police, engineers, and medical personnel normally operate behind the front lines of a battlefield. But in Iraq and Afghanistan--both in U.S. Central Command's (CENTCOM) area of responsibility--there is no clear distinction between front lines and rear areas, and support forces are sometimes exposed to hostile fire without help from combat arms units. The House report to the National Defense Authorization Act for fiscal year 2010 directed GAO to report on combat skills training for support forces. GAO assessed the extent to which (1) Army and Marine Corps support forces are completing required combat skills training; (2) the services and CENTCOM have information to validate completion of required training; and (3) the services have used lessons learned to adjust combat skills training for support forces. To do so, GAO analyzed current training requirements, documentation of training completion, and lessons learned guidance; observed support force training; and interviewed headquarters officials, trainers, and trainees between August 2009 and February 2010. What GAO Found Army and Marine Corps support forces undergo significant combat skills training, but additional actions could help clarify CENTCOM's training requirements, ensure the services fully incorporate those requirements into their training requirements, and improve the consistency of training that is being conducted. CENTCOM has issued a list of training tasks to be completed, in addition to the services' training requirements, before deploying to its area of operations. However, there is confusion over which forces the CENTCOM requirements apply to, the conditions under which the tasks are to be trained, and the standards for successfully completing the training. As a result, interpretations of the requirements vary and some trainees receive detailed, hands-on training for a particular task while others simply observe a demonstration of the task. In addition, while the Army and Marine Corps are training their forces on most of CENTCOM's required tasks, servicemembers are not being trained on some required tasks prior to deploying. While units collect information on the completion of training tasks, additional actions would help higher level decision-makers assess the readiness of deploying units and servicemembers. Currently, both CENTCOM and the services lack complete information on the extent to which Army and Marine Corps support forces are completing required combat skills training. The Army has recently designated the Digital Training Management System as its system of record for tracking the completion of required training, but guidance concerning system implementation is unclear and the system lacks some needed capabilities. As a result, support forces are not fully utilizing the system, and are inconsistently tracking completion of individual and unit training using paper records, stand-alone spreadsheets, and other automated systems. The Marine Corps also uses inconsistent approaches to document training completion. Furthermore, as GAO reported in May 2008, CENTCOM does not have a clearly defined waiver process to provide visibility over the extent to which personnel are deploying to its area of operations without having completed its required training tasks. As a result, CENTCOM and the services have limited visibility over the extent to which servicemembers have or have not completed all required training. 
While trainers at Army and Marine Corps training sites have applied lessons learned information and made significant changes to the combat skills training they provide support forces, the changes to training have varied across sites. Army and Marine Corps doctrine requires the collection of after action reports, the primary formal vehicle for collecting lessons learned. Lessons are also shared informally, such as through communication between deployed forces and units training to replace them. While the services have these formal and informal means to facilitate the sharing of lessons learned information, trainers at the various training sites are not consistently sharing information about the changes they have made to their training programs. As a result, servicemembers are trained inconsistently and units that are deploying for similar missions sometimes receive different types and amounts of training.
<1. Background> <1.1. DOD's Forward-Deployed Locations in Afghanistan> At any given time, the United States has military personnel serving abroad in forward-deployed locations to support U.S. strategic interests. The number of personnel and locations vary with the frequency and type of military operations and deployment demands. In general, operational control of U.S. military forces at forward-deployed locations is assigned to the nation's six geographic, unified overseas regional commands, including Central Command. Central Command's area of responsibility includes Afghanistan, where military operations have led to the creation of several hundred locations that vary in size and structure to meet mission requirements, and the military service components have been responsible for establishing and maintaining these locations. Some forward operating bases, such as Bagram Air Field, support thousands of personnel and are large consumers of energy. Forward operating bases generally support a brigade or larger population and are typically composed of temporary or semi-permanent structures that require energy for lighting, heating, and air conditioning; electrical power grids; water and sewage systems; and force protection systems. At the other end of the spectrum, small units at the company level and below have established combat outposts to enhance local operations. These outposts have a short life cycle and unique configurations. Since these forward-deployed locations can be constructed in a variety of ways, the amount of fuel they consume can vary. Figure 1 shows the forward-deployed locations we visited during the course of our review. Military deployments generally rely on petroleum-based fuels, which power communication equipment, expeditionary bases, tactical vehicles, aircraft, some naval vessels, and other platforms. According to DOD officials, more than 43 million gallons of fuel, on average, were supplied each month to support U.S. forces in Afghanistan in 2011. Equipment such as generators provides power for base support activities such as air conditioning, heating, lighting, and communications, and consumes a significant amount of fuel. In Afghanistan, the Defense Logistics Agency-Energy (DLA-Energy) delivers fuel to multiple points of delivery throughout the country via contracted trucking assets, depending on the location of the bases. DLA-Energy tracks the aggregate amount of fuel the services consume based on sales receipts, and the U.S. government pays for fuel that is delivered to each of these designated delivery points. The North Atlantic Treaty Organization delivers fuel in the southern part of Afghanistan. While the cost of fuel represents only about 2 percent of DOD's total budget, it can have a significant impact on the department's operating costs. Since the military services prepare their annual budgets based on the approved fuel price projections in the President's budget, market volatility in the year of execution can result in out-of-cycle fuel price increases that are difficult for the services to absorb. A prior DOD report estimated that for every $10 increase in the price of a barrel of oil, DOD's operating costs increase by approximately $1.3 billion. The department has received supplemental appropriations from Congress in prior years to cover budget shortages associated with rising fuel prices.
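The fuel-cost figures cited above lend themselves to a rough consistency check. The sketch below is a minimal back-of-the-envelope illustration, not a DOD model: the 42-gallons-per-barrel conversion and the assumption that the $1.3 billion sensitivity reflects a full year of DOD-wide consumption are ours, added for illustration.

```python
# Rough consistency check of the fuel figures cited in this report.
# Assumptions (ours, for illustration only): the $1.3 billion sensitivity
# reflects one year of DOD-wide consumption, and 1 barrel = 42 gallons.

GALLONS_PER_BARREL = 42

cost_increase = 1.3e9              # reported DOD-wide cost impact ($)
price_increase_per_barrel = 10.0   # reported oil price change ($/barrel)

# Implied DOD-wide consumption behind the sensitivity figure.
implied_barrels_per_year = cost_increase / price_increase_per_barrel
implied_gallons_per_year = implied_barrels_per_year * GALLONS_PER_BARREL

# Reported average monthly deliveries to U.S. forces in Afghanistan in 2011.
afghanistan_gallons_per_month = 43e6
afghanistan_gallons_per_year = afghanistan_gallons_per_month * 12

print(f"Implied DOD-wide consumption: {implied_gallons_per_year / 1e9:.2f} billion gallons/year")
print(f"Afghanistan deliveries:       {afghanistan_gallons_per_year / 1e9:.2f} billion gallons/year")
print(f"Afghanistan share of implied total: "
      f"{afghanistan_gallons_per_year / implied_gallons_per_year:.0%}")
```

On these assumptions, deliveries to Afghanistan would account for roughly a tenth of the consumption implied by the sensitivity figure, an order-of-magnitude result that is consistent with the figures reported above.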
Moreover, the total cost of delivering fuel to a consumer on the battlefield, which includes the aggregate cost of buying, moving, and protecting fuel during combat operations, can be much greater than the cost of the fuel itself. A 2008 Defense Science Board task force report noted that preliminary estimates by the OSD Program Analysis and Evaluation office and the Institute for Defense Analyses showed that the fully burdened cost of a $2.50 gallon of fuel (DLA's standard price for fuel in 2008) begins at about $15, not including force protection requirements for supply convoys. In addition, fuel delivered in-flight was estimated to cost about $42 a gallon at that time. However, the report notes that these figures were considered low when the report was published in 2008 and, according to DOD officials, in 2011 the cost of a gallon of this fuel had risen to $3.95 (DLA's standard price in 2011), making the fully burdened cost of fuel even higher than previously reported. In fiscal year 2009, Congress required the Secretary of Defense to incorporate the fully burdened cost of fuel into DOD's cost analyses, including acquisition analyses of alternatives and program design trade decisions. At the time of our review, DOD officials stated the department was in the process of analyzing the fully burdened cost of fuel and how it will be applied throughout DOD's acquisition process. <2. DOD Has Taken Steps to Establish an Approach for Fuel Demand Management, but Is Still Developing Comprehensive Guidance> DOD has taken steps since our 2009 report to establish an approach for managing overall fuel demand, but is still developing comprehensive guidance to address fuel demand management. In 2009, we reported that DOD faced difficulty in reducing its reliance on fuel at forward-deployed locations because managing fuel demand had not been a departmental priority and its fuel reduction efforts had not been well coordinated or comprehensive. As such, we recommended that DOD develop requirements for managing fuel demand at forward-deployed locations, and DOD concurred with this recommendation. Since that time, DOD has taken several steps to increase its visibility and accountability for fuel demand management, and is developing comprehensive guidance on how DOD will incorporate energy efficiency considerations into operations, planning, and training decisions for current and future military operations. <2.1. Progress in Establishing Visibility and Accountability for Overall DOD Fuel Demand Management> DOD has made progress in establishing visibility and accountability for fuel demand management since our 2009 report by making organizational changes and issuing an Operational Energy Strategy (operational energy strategy) and related Operational Energy Strategy Implementation Plan (implementation plan) to provide direction for DOD's overall fuel demand management efforts, including efforts at forward-deployed locations in Afghanistan. Specifically, in our prior report we noted that DOD's organizational framework did not provide the department with visibility or accountability over fuel demand management issues at forward-deployed locations because there was no one office or official specifically responsible for these issues. We also found that fuel demand reduction efforts were not consistently shared across DOD. Our prior work has shown that visibility and accountability for results are established by assigning roles and responsibilities, establishing goals and metrics, and monitoring performance.
Congress and DOD have taken multiple steps to address this issue. For instance, the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 established a Director of Operational Energy Plans and Programs (OEP&P) responsible for serving as the principal advisor to the Secretary of Defense for operational energy plans and programs, which includes, among other responsibilities, monitoring and reviewing all operational energy initiatives in DOD. Since its establishment, OEP&P has worked in conjunction with all of the services energy offices to provide visibility and accountability for operational energy issues, including fuel demand management issues. For example, with input from the services, OEP&P published the Operational Energy Strategy Implementation Plan in March 2012 that assigns responsibilities for key tasks and specifies milestones and reporting requirements that will provide accountability for implementing the operational energy strategy (see appendix II). Also, in March 2012, DOD established a Defense Operational Energy Board to help provide visibility and accountability over operational energy efforts that included fuel demand management. The board will be cochaired by the Assistant Secretary of Defense for Operational Energy Plans and Programs and the Joint Staff s Director of Logistics. According to OEP&P officials, the board will help review, synchronize, and support departmentwide operational energy policies, plans, and programs. In addition, the board will monitor and, where necessary, recommend revisions to DOD policies, plans, and programs needed to implement the operational energy strategy. DOD s operational energy strategy, the implementation plan, and the Defense Operational Energy Board are intended to support departmentwide operational energy efforts while also having a direct impact on DOD s efforts to manage fuel demand at forward-deployed locations in Afghanistan. Figure 2 provides a timeline of key events in OEP&P s efforts to manage operational energy issues. To further enhance DOD s operational energy efforts, the National Defense Authorization Act for Fiscal Year 2012 required the Chairman of the Joint Chiefs of Staff to designate a senior official under the jurisdiction of the Chairman to be responsible for operational energy plans and programs. In August 2011, the Chairman appointed the Joint Staff s Director of Logistics to this position with responsibility for coordinating with OEP&P and implementing initiatives pursuant to the operational energy strategy. According to Joint Staff officials, the Joint Staff is committed to addressing operational energy capability gaps and in April 2012 formed a Joint Capabilities Task Group to identify and address fuel demand management issues. The task group s mission includes: providing recommendations to better integrate operational energy into current and future materiel and nonmateriel solutions to improve operational capabilities, and supporting evaluation of the operational energy requirements process, and providing recommendations through the requisite Functional Capabilities Boards and the Joint Logistics Board to the Joint Capabilities Board or Joint Requirements Oversight Council for validation/decision. According to Joint Staff officials, the Joint Capabilities Task Group will focus on developing a framework for analysis that supports service and DOD efforts to inform leaders such as commanders in Afghanistan about operational energy vulnerabilities. 
The group will also propose guidance to support the combatant commands in assessing logistics plans and evaluating energy assumptions that will influence the execution of operational plans. OEP&P officials told us that the Joint Staff plays a key role in collaborating with OSD to create policy, develop joint doctrine, and advocate for combatant commander requirements. Joint Staff officials told us their goal is to incorporate energy efficiency guidance into existing joint publications when such documents are up for review. As part of this process, the Joint Staff's Joint Capabilities Task Group will prioritize which guidance documents will be revised first, then work toward updating other applicable guidance documents. According to DOD officials, when these guidance documents are updated, operational energy issues, including priorities for addressing fuel demand management, should be included in the services' and combatant commanders' mission planning activities. Our prior work on government performance and management also notes the importance of establishing goals and metrics to assess progress and provide accountability. DOD's operational energy strategy established three overarching operational energy goals: (1) reduce demand for energy in military operations, (2) expand and secure energy supplies, and (3) build energy security into the future force. DOD has begun to take steps to establish metrics to measure progress toward these goals. OEP&P officials told us that the Defense Operational Energy Board will develop departmental operational energy performance metrics to promote the energy efficiency of military operations by the end of fiscal year 2012. The board will also monitor and, as needed, recommend revisions to DOD policies needed to implement the operational energy strategy and monitor progress to ensure DOD is meeting its operational energy goals. OEP&P officials stated that establishing such strategies, goals, and metrics will not only provide DOD with the direction and tools needed to assess progress toward meeting fuel demand management goals at forward-deployed locations, including those in Afghanistan, but will also enhance DOD's efforts to achieve its overall fuel demand management objectives worldwide. <2.2. DOD Funding and Incentives for Fuel Demand Management at Forward-deployed Locations> Since our 2009 report, DOD has taken action to fund fuel demand management initiatives and restructure maintenance contract task orders to include energy efficiency considerations and incentives. In our 2009 report on fuel demand management, we found that DOD had not established incentives or a viable funding mechanism for fuel reduction projects at forward-deployed locations and that commanders were not encouraged to identify fuel reduction projects as a priority. Specifically, we found that much of the funding provided to support military operations in Iraq and Afghanistan was provided through supplemental funding measures, making it difficult to plan for and fund costly projects such as fuel demand management initiatives. As such, we recommended that DOD establish incentives for commanders of forward-deployed locations to promote fuel demand reduction at their locations, as well as identify a viable funding mechanism for the department and commanders of forward-deployed locations to pursue fuel reduction initiatives.
DOD partially concurred with our recommendation and said it was not convinced that financial incentives represent the best fuel reduction strategy for forward-deployed locations, but stated that it would seek to incorporate fuel reduction incentives while recognizing the primacy of mission accomplishment. Since the release of our 2009 report, with DOD's increased focus on fuel demand management at forward-deployed locations and the establishment of OEP&P and the U.S. Forces-Afghanistan Operational Energy Division, increased priority has been given to fuel demand management initiatives at forward-deployed locations in Afghanistan. For example, DOD has undertaken a widespread initiative to replace spot generation with centralized power, and U.S. Forces-Afghanistan's Operational Energy Division secured $108 million in fiscal year 2011 from the Army to invest in more efficient power production and distribution equipment for the Afghanistan area of operations. According to DOD's analysis, this investment will remove as many as 545 spot generators, saving an estimated 17.5 million gallons of fuel per year, the equivalent of removing over 7,000 fuel trucks from the roads in Afghanistan. Furthermore, the Marine Corps committed fiscal year 2011 funds to support the accelerated procurement of a suite of more efficient tactical energy systems. Also, in 2011, DOD completed the Afghanistan Micro-Grid Project, an effort at Bagram Airfield to replace less efficient generator sets with a smart, more energy-efficient power source. DOD provided over $2 million to fund this project. Furthermore, to reinforce DOD's commitment to reducing its reliance on fuel at forward-deployed locations, in September 2011 the Under Secretary of Defense for Acquisition, Technology and Logistics issued a memorandum to support reprogramming overseas contingency operations funds to expedite the deployment of more efficient generators, centralized power projects, and shelter modification kits to forward-deployed locations in Afghanistan. With the establishment of OEP&P, DOD also has increased its efforts to obtain visibility over funding for initiatives aimed at reducing fuel consumption at forward-deployed locations. For example, to help ensure the services' budgets support the implementation of DOD's operational energy strategy, OEP&P is now required by law to publish an annual operational energy budget certification report. This report certifies that the proposed services' budgets are adequate for the implementation of the operational energy aspects of their respective energy strategies. According to OEP&P's fiscal year 2012 budget certification report, the services anticipate spending approximately $4 billion on operational energy initiatives over the next 5 years. Although the operational energy initiatives identified through OEP&P's budget certification process are not specifically targeted for use at forward-deployed locations in Afghanistan, many of them have been tested and fielded there, and will be applicable to DOD's fuel demand management efforts both in Afghanistan and elsewhere. To improve the energy efficiency of DOD's operational forces, the fiscal year 2012 President's Budget also included an additional $19 million in funding for an Operational Energy Capabilities Improvement Fund. Its mission is to fund innovation to improve operational effectiveness by investing in research and development for operational energy innovation.
These funds are intended as seed money to consolidate or initiate promising operational energy programs. The initial funding for these efforts will be administered by OEP&P, but the programs will be ultimately sustained by the services. According to DOD, the initiatives funded by this program will support efforts to develop and rapidly transition energy technologies for the combat force, resulting in improved military capabilities, fewer energy-related casualties, and lower costs for the taxpayer. As part of this fund, in January 2012 DOD allotted funds to begin developing six new operational energy initiatives. Although these initiatives are not finalized and are still being developed, DOD expects these efforts to play a role in reducing fuel demand at forward- deployed locations. Initiatives such as the development of new energy- efficient containerized living units used in expeditionary bases around the world, energy-efficient heating and air conditioning systems, and newly designed shelter systems used to decrease fuel demand at forward- deployed locations are some of the products being developed under this program. In addition to the initiatives mentioned above, DOD has placed a higher priority on ensuring contractors responsible for executing operations and maintenance contracts are addressing energy efficiency issues at forward-deployed locations. For instance, the U.S. Army Materiel Command has taken steps to enforce the existing language included in Logistics Civil Augmentation Program ( LOGCAP) contracts to require more attention be given to increasing energy efficiency at forward- deployed locations. To address power generation concerns, a June 2011 LOGCAP policy letter indicates that contractors should complete assessments for the more than 4,000 generators located on over 130 bases in Afghanistan to assess power load demand and energy efficiency. The U.S. Army Materiel Command and U.S. Forces- Afghanistan also plan to include energy efficiency standards in the technical specifications for new and refurbished facilities maintained by support contractors. Further, contractors will also now be required to provide energy assessments and make recommendations for improved efficiency to supported units. According to DOD and LOGCAP officials, these and other efforts are ongoing and are expected to assist DOD in reducing its fuel consumption at forward-deployed locations. Officials also told us that by increasing efforts to reduce fuel demand, U.S. forces will both reduce operational costs associated with high fuel consumption and increase combat capability by freeing up forces used to protect fuel convoys and reduce forces exposure to hostile action. <2.3. DOD Guidance for Fuel Demand Management> DOD has issued guidance for fuel demand management and is developing comprehensive guidance for its operational, planning, and training decisions. Since our 2009 report on fuel demand management, various DOD organizations have issued guidance for fuel demand management and the department is still developing more comprehensive guidance on how to incorporate energy efficiency considerations into DOD s operational, planning, and training decisions. In our 2009 report, we found that DOD had not developed overarching fuel demand management guidance to require commanders to manage and reduce fuel consumption at forward-deployed locations. 
In addition, we found that there was little or no written guidance that addressed fuel demand management or energy efficiency for base camp construction or for other business decisions such as maintenance or procurement actions. We recommended that multiple organizations within DOD develop specific guidance on fuel demand management in their areas of responsibility. DOD has since issued overarching, theater-level, and base camp construction and development guidance, but is still developing policy and doctrine to provide guidance on how energy efficiency considerations will be included in operational decisions that affect fuel demand management at forward-deployed locations, such as those in Afghanistan. To provide overarching guidance to DOD's operational energy efforts, including reducing its reliance on fuel at forward-deployed locations, DOD published its 2011 operational energy strategy and its 2012 implementation plan. As noted above, the implementation plan provides DOD stakeholders involved in fuel demand management with a roadmap for accomplishing key tasks to reduce fuel demand. However, because OEP&P is a new organization and in the early stages of working within DOD to develop guidance and policies, DOD has yet to address how energy efficiency considerations will be incorporated into its joint doctrine, which provides the principles that guide the employment of U.S. military forces in an operational environment and is essential to organizing, training, and equipping its units. DOD's Operational Energy Strategy Implementation Plan also acknowledges the need for additional comprehensive guidance and directs the Joint Staff and military departments to report to the Defense Operational Energy Board by the fourth quarter of fiscal year 2012 on how the strategy's goals will be reflected in policy, doctrine, and professional military education. The plan further states that the scope of this task includes examining departmental directives, instructions, field manuals, doctrine, professional military education curricula, and other relevant guidance in order to include energy efficiency considerations in its operational, planning, and training decisions. Central Command has updated its guidance for construction and base camp development to place more emphasis on energy efficiency for contingency and permanent base camps that support missions in its area of responsibility. Specifically, in 2009 we noted that some of DOD's combatant commands and military services had developed construction standards for forward-deployed locations, but our analysis showed that this existing guidance was largely silent with regard to fuel demand management and energy efficiency. Pertinent Central Command guidance in 2009 included only one reference to energy efficiency, requiring that semi-permanent facilities (those with a life expectancy of more than 2 years but less than 25 years) be designed and constructed with finishes, materials, and systems selected for moderate energy efficiency. According to the guidance in effect at that time, semi-permanent construction standards were to be considered for operations expected to last more than 2 years. In 2009, we found that the temporary status of many forward-deployed locations, combined with a focus on quickly establishing the locations rather than on sustaining them, limited DOD's emphasis on constructing energy-efficient facilities. We recommended that DOD develop specific guidelines that address energy efficiency considerations in base construction.
In October 2011, Central Command revised its policy for base camp construction standards to include a greater emphasis on energy efficiency. For example, the revised policy now calls for energy conservation best practices to be incorporated into all new construction that is to be environmentally controlled. Also, in an effort to reduce fuel consumption at forward-deployed locations, the policy requires all bases receiving power generation support from contingency contracting programs, such as LOGCAP, to conduct an electrical infrastructure assessment. According to Central Command officials, conducting electrical infrastructure assessments will allow base planners and commanders to determine areas where energy efficiency shortfalls may be occurring and identify areas where energy generation and distribution adjustments should be made in order to save fuel. The policy also includes other notable provisions to promote energy efficiency, such as encouraging the insulation of temporary facilities when funds and time allow. Central Command and OEP&P officials told us that revisions to this policy encourage commanders to consider incorporating energy efficiency standards into base camp construction and development, which may not have otherwise been an area of concern. In addition, in April 2012, the Commander of Bagram Airfield, one of the major U.S. logistics bases in Afghanistan, issued additional guidance to direct the use of energy efficiency design and construction standards for all new and renovation construction projects on Bagram Airfield. For example, the guidance requires new or renovated projects to use energy-saving equipment, such as fluorescent or light-emitting diode (LED) lighting and energy-efficient motors, and to insulate windows, ceilings, walls, and roofs, among other things. According to an OEP&P official, all requests for approval to build or alter facilities must be reviewed by Bagram's Joint Facilities Utilization Board, which provides a way to enforce efficiency standards throughout this location. In 2011 and 2012, commanders in Afghanistan issued theater-level fuel demand management guidance regarding maintenance and procurement decisions for forward-deployed locations. In our 2009 report on fuel demand management, we found a lack of attention to fuel demand management in guidance, including an absence of fuel usage guidelines and metrics to evaluate progress of reduction efforts, as forward-deployed locations are maintained and sustained over time. We also found the procurement of products for forward-deployed locations presents opportunities for DOD to consider making purchases that take into account fuel demand or energy efficiencies when practical. Since that time, commanders in Afghanistan have issued general policy memoranda on repairing, maintaining, and procuring equipment to help reduce fuel consumption at forward-deployed locations. Specifically, in June and December of 2011 the Commander, U.S. Forces-Afghanistan, issued operational energy guidance in the form of policy memoranda to soldiers, sailors, airmen, Marines, and civilians of U.S. Forces-Afghanistan located at forward-deployed locations. These memoranda stated that commanders are expected to take ownership of fuel demand management issues and explore methods for reducing fuel demand at forward-deployed locations.
For example, commanders are to ensure personnel take action to repair faulty equipment, avoid using heating and air conditioning in unoccupied buildings, and work with support contractors, suppliers, and the services to improve inefficient facilities and devices such as generators and air conditioning units. In addition, commanders should push for rapid fielding of new fuel-saving methods, where appropriate, and pursue existing, proven alternative energy options that reduce the use and transport of fuel. During our visit to forward-deployed locations in Afghanistan in October 2011, however, many of the commanders and personnel we spoke with were unaware of this guidance or commented that it did not provide specific direction on how to implement needed fuel demand management actions. As such, many of the commanders with whom we spoke had not established specific guidance or protocols to address day-to-day fuel use, such as establishing a base policy on turning off lights in unoccupied buildings or immediately repairing faulty equipment. In addition, we found that some of the commanders we spoke with in Afghanistan were not using available energy-efficient equipment and/or had not fixed faulty equipment. For example, at Camp Sabalu-Harrison we observed inefficient generator configurations in which multiple generators were used to power individual tents when one generator could have provided adequate power for multiple tents (see fig. 3). At Camp Leatherneck we observed, and were told, that available tent shading used to provide cover from the sun was not being used consistently throughout the base (see fig. 4). Additionally, at Joint Combat Outpost Pul-A-Sayed, we observed an entry control checkpoint powered by a 60-kilowatt generator when, according to the commander in charge of this outpost, a smaller, more energy-efficient 5- or 10-kilowatt generator would have provided adequate power (see fig. 5). Army officials at this location told us that the previous generator used to power this entry control checkpoint had failed and had not been replaced because it was considered a lower priority. According to officials at the outpost, these types of equipment breakdowns happen frequently and, due to the lack of adequately trained personnel and other mission requirements, may take weeks to be repaired or replaced. After our visit to Afghanistan, U.S. Forces-Afghanistan developed and issued a fragmentary order to provide specific guidance on fuel demand management procedures and specific operational energy practices needed to comply with the policy memoranda. The April 2012 operational energy fragmentary order established milestone dates for accomplishing tasks for reducing fuel demand at select forward-deployed locations. According to DOD officials, this type of guidance provides U.S. Forces-Afghanistan's subordinate commands with the specific direction necessary to begin reducing fuel demand at their forward-deployed locations. The order requires commanders located at forward-deployed locations in Afghanistan to distribute the December 2011 operational energy policy memo so that personnel will be aware of fuel demand management goals and objectives for forward-deployed locations. In addition, the order requires commanders to develop, distribute, and implement policies that will complement the operational energy policy memorandum no later than 30 days after the order was published. Furthermore, the guidance requires that fuel accountability metrics be established and made available by U.S. Forces-Afghanistan's Joint Staff (J-4) by the end of May 2012.
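The entry-control-point example above illustrates why generator right-sizing matters for fuel demand. The sketch below is an illustration only: the fuel-burn rates are notional values we assume for the comparison (the report does not provide consumption rates for these generator sets), and actual savings would depend on load, duty cycle, and equipment condition.

```python
# Illustrative comparison of an oversized vs. a right-sized generator at a
# small entry control point. Burn rates are assumed for illustration; they
# are not drawn from the report or from equipment specifications.

HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

# Assumed fuel consumption (gallons per hour) at the light electrical load
# an entry control checkpoint might impose.
assumed_burn_60kw_light_load = 1.8     # large 60-kW set running far below rated load
assumed_burn_10kw_moderate_load = 0.7  # smaller 10-kW set running closer to rated load

def annual_gallons(burn_rate_gph: float) -> float:
    """Annual fuel use for a generator running continuously at the given rate."""
    return burn_rate_gph * HOURS_PER_DAY * DAYS_PER_YEAR

oversized = annual_gallons(assumed_burn_60kw_light_load)
right_sized = annual_gallons(assumed_burn_10kw_moderate_load)

print(f"60-kW set (assumed burn rate):  {oversized:,.0f} gallons/year")
print(f"10-kW set (assumed burn rate):  {right_sized:,.0f} gallons/year")
print(f"Notional annual savings from right-sizing: {oversized - right_sized:,.0f} gallons")
```

Even with these notional rates, a single right-sized generator would save thousands of gallons per year, a difference that scales quickly across the several hundred forward-deployed locations described earlier.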
Further, service officials acknowledged the need for additional training throughout the department on fuel demand management, and told us the services are developing various curricula and training programs to make sure personnel deployed to forward-deployed locations know how to operate relevant equipment and understand the importance of reducing fuel demand. For example, U.S. Marine Corps Training and Education Command has begun developing and adding operational energy courses to its expeditionary warfighting school curricula and, according to officials, has begun working with other services to further educate military personnel on the importance of energy conservation. OEP&P officials stated that DOD's focus on operational energy issues and the organizations supporting this effort are new, and they expect these efforts to have an impact on fuel demand management at forward-deployed locations as they are implemented. OEP&P officials added that they are monitoring progress and will report to the congressional defense committees on operational energy management and the implementation of the operational energy strategy, as required by the 2009 National Defense Authorization Act. <3. DOD Has Efforts Underway to Promote Fuel Efficiency, Coordination, and Collaboration but Opportunities Exist to Enhance Efforts to Identify and Track All Fuel Demand Management Initiatives> DOD has several ongoing initiatives to promote fuel efficiency at forward-deployed locations in Afghanistan and has established various methods to facilitate some coordination and collaboration among the services. However, it is still in the process of developing a systematic approach to identify and track the numerous fuel demand management initiatives that have been fielded or are in the research and development phase throughout DOD. Without a systematic approach, DOD may be limited in its ability to provide full visibility over all of its fuel demand management initiatives, achieve efficiencies, and avoid unintended duplication or overlap of activities. <3.1. Fuel Demand Management Initiatives> We found that DOD, the services, and Central Command have numerous efforts underway to develop and test various fuel demand management initiatives. The Army and the Marine Corps have each established facilities to test fuel demand management initiatives being pursued by their respective service for potential use at forward-deployed locations. For a list of fuel demand management initiatives being evaluated by the services for possible use in Afghanistan, see appendix III. The services are engaged in several fuel demand management initiatives that can be applied to forward-deployed locations in Afghanistan. In 2011, the Army's Base Camp Integration Laboratory, located at Fort Devens, Massachusetts, began assessing new systems and technology that may help increase energy efficiency and reduce fuel usage at base camp operations. The Base Camp Integration Laboratory seeks to integrate and verify new technology concepts and allows product testing before field evaluation by soldiers. According to Army officials, by conducting laboratory, systems, and interoperability testing on the items at the lab, the Army can improve survivability and sustainability and reduce the risks that may occur after new technology is deployed to the field.
Some of the specific initiatives currently being tested at the Base Camp Integration Laboratory are: Energy-efficient shelter testing to determine the energy efficiencies of various tent shelter alternatives. Soft Wall Shelter/Environmental Control Unit/Insulated Liner/Solar Shade testing to determine the effects of solar shades and insulated liners in reducing the solar load and temperature differential in soft-sided shelters. Additionally, these tests will determine if downsizing the environmental control unit can sustain interior temperatures in soft-sided shelters, thereby reducing power consumption. Force Provider Micro Grid testing to determine the efficiency and energy savings from replacing six generators with a microgrid within a 150-man base camp environment. In a separate initiative to evaluate Marine Corps-specific equipment, the Marine Corps Experimental Forward Operating Base (ExFOB) was established to provide industry with an opportunity to demonstrate its latest capabilities to enhance the Marine Corps' self-sufficiency and reduce its need for bulk fuel and water at forward-deployed locations such as those in Afghanistan. To date there have been four iterations of the ExFOB. The first was conducted at Quantico, Virginia, in March 2010 and involved the evaluation of, among other things, tent liners, light-emitting diode (LED) lights, soldier-portable solar recharging power devices, and a solar power energy collection and storage device. Those technologies were determined to have the potential to increase combat effectiveness by reducing the requirements for fuel and batteries, and were deployed to Afghanistan for further evaluation. The second was conducted at Twentynine Palms, California, in August 2010 and evaluated hybrid solar systems, direct-current-powered efficient air conditioners, and solar-powered refrigerators. As a result of this ExFOB demonstration, the Marine Corps has finalized its evaluation of four items, which are now ready for use at forward-deployed locations in Afghanistan. The third ExFOB was conducted at Twentynine Palms, California, in August 2011 and included an evaluation of the fuel efficiency of tactical vehicles. The fourth ExFOB was conducted at Camp Lejeune, North Carolina, in April and May 2012 and included an evaluation of wearable electronic power systems and lightweight, man-portable water purification systems. In addition, in fiscal year 2008 U.S. Central Command and the OSD Energy Task Force cosponsored the NetZeroPlus Joint Capabilities Technology Demonstration, an initiative used to determine fuel demand reduction solutions for forward-deployed locations. This demonstration assessed technologies for reducing fuel demand and improving infrastructure and alternative energy supply for the warfighter. According to DOD officials, this demonstration used research and development efforts from military research, development, and engineering centers; federal and private labs; and commercial and government off-the-shelf technology. DOD plans to use the combined capabilities developed from these tests to establish more energy-efficient forward operating base blueprints for use by operational commanders, theater planners, and interagency organizations. The emphasis for this initiative was on improving or replacing current facilities with more energy-efficient structures and integrating renewable energy technologies with improved energy generation solutions to power those structures.
Some of the initiatives tested as part of the technology demonstration included: air beam energy-efficient tents; power shades; solar shades; insulation liners; and flexible lighting surfaces. See appendix III for an overview of these initiatives. <3.2. Efforts Underway to Foster Coordination and Collaboration but Challenges Remain> DOD has taken some steps to foster coordination and collaboration on the department's fuel demand management initiatives, but because there are multiple organizations within DOD engaged in developing these initiatives, challenges remain. Our prior work has shown that leading practices for collaborating to meet modern national security challenges include developing and implementing overarching strategies, creating collaborative organizations, and sharing and integrating information across agencies via a comprehensive database to track initiatives. DOD has multiple organizations, including some engaged in coordination and collaboration in the area of energy efficiency, but it currently lacks a formal means of sharing and integrating information across the various offices engaged in these efforts. Numerous organizations within each of the services and DOD have a role in managing, researching, and developing energy-efficient technologies. See table 1 below for a list of the DOD organizations involved in fuel demand management efforts. While these organizations have different responsibilities and missions, they are each involved in fuel demand management efforts. Since our 2009 report on fuel demand management, DOD has taken steps to facilitate collaboration and coordination among the services' fuel demand management efforts. In that report, we found that each of the services had efforts planned or underway to reduce fuel demand at forward-deployed locations, but lacked a systematic approach to share this information among the services. In addition, officials also reported that forward-deployed locations often pursued different initiatives, and the department, other services, or other forward-deployed locations were often unaware of these different initiatives. To address these concerns, we recommended that the services assign senior energy officials to identify and promote sharing of fuel reduction best practices and solutions to identified challenges and communicate those practices and solutions to the DOD Director of Operational Energy Plans and Programs (since renamed the Assistant Secretary of Defense for Operational Energy Plans and Programs) for potential use across the department. Since 2009, DOD has taken several steps to promote and facilitate coordination and collaboration in order to improve information sharing among the various DOD organizations involved in fuel demand management efforts at forward-deployed locations such as those in Afghanistan. Some of these steps include the following activities: DOD published DOD Directive 5134.15 specifying OEP&P's responsibilities, which include: coordinating and overseeing the operational energy planning and program activities of DOD and the services related to implementation of the operational energy strategy; coordinating R&D investments related to operational energy demand and supply technologies; and monitoring and reviewing all operational energy initiatives in DOD.
DOD established organizations such as the Defense Operational Energy Board, which is cochaired by the Assistant Secretary of Defense (OEP&P) and the Joint Staff Director for Logistics and serves as a collaborative body to promote operational energy security, oversee implementation of the operational energy strategy, and measure the department's success. This board will provide a forum for DOD components to share information and provide recommendations on fuel demand management initiatives. OEP&P, in collaboration with Central Command and other DOD stakeholders, sponsored an operational energy conference in May 2011 to identify operational energy problem areas and solutions. OEP&P and the Pacific Command repeated this effort in March 2012 and held an Operational Energy Summit targeting energy efficiency applications in the Pacific. U.S. Forces-Afghanistan established an Operational Energy Division within its headquarters. The Operational Energy Division will assist commanders located in Afghanistan in improving operational capabilities by reducing the military's reliance on petroleum fuels. According to its charter, the Operational Energy Division will work with commanders to develop, coordinate, and implement materiel and nonmateriel energy solutions. Central Command established a formal coordination body for operational energy in its area of responsibility. This organization will focus on maintaining mission effectiveness while reducing energy demand, expanding and securing energy supply, and changing the culture through energy awareness. Membership and supporting agencies include a wide range of leaders throughout DOD and the service components assigned to Central Command's area of responsibility. In addition, the services continue to use several collaborative organizations that predate the establishment of OEP&P to coordinate and collaborate on their fuel demand reduction efforts, including those that are applicable to forward-deployed locations. For example: Program Manager for Mobile Electric Power. This program, established in 1967, was created to consolidate research and development efforts, establish common military operational requirements, and prevent duplication in the development of equipment such as generators that are used to supply power at forward-deployed locations. This effort has resulted in the development of a new energy-efficient family of generators called Advanced Medium Mobile Power Source (AMMPS) to be used by both Army and Marine Corps units. AMMPS includes Army and Marine Corps specifications and, according to DOD officials, is a good example of how coordination and collaboration can help DOD accomplish its goals in a more cost-effective manner while still meeting the unique needs of each service. The Joint Committee on Tactical Shelters. This committee was created in 1975 to prevent the duplication of tactical shelter research and development efforts. According to DOD, since its establishment, this committee has reduced the number of shelter types from 100 to 21, easing the logistics burden among the four services. Collaboration through this committee has allowed DOD to limit the number of shelter systems developed to decrease fuel consumption at forward-deployed locations. Other collaborative forums. The USMC-SOCOM board, the Army-Marine Corps board, and the Power Source Technical Working Group all provide a means to coordinate and communicate on initiatives such as fuel demand management efforts.
According to DOD officials, these collaborative forums take place at least twice a year and help the services discuss and share information related to issues such as fuel demand management and other programs of mutual interest. DOD has established multiple organizations and forums to facilitate coordination and collaboration, but does not have a mechanism to systematically identify and track information on the numerous fuel demand management initiatives that have been fielded or are in the research and development phase throughout DOD. For instance, in an attempt to identify a list of fuel demand management initiatives, we sent a request to OEP&P asking for a comprehensive list of initiatives that had been fielded or were expected to be fielded to forward-deployed locations in Afghanistan within the next 12 months. OEP&P officials could not provide us with a comprehensive list of initiatives at the time of our request, and told us they did not have a mechanism in place to track or catalog all ongoing fuel demand management initiatives. An OEP&P official told us that, in order to produce a comprehensive list of initiatives, the office would have to query all of the services and agencies involved to obtain this type of information. Both DOD's experience and our prior work have shown the benefits of enhanced information sharing for increasing coordination and collaboration, especially when multiple entities are involved in similar efforts (see GAO, Warfighter Support: Actions Needed to Improve Visibility and Coordination of DOD's Counter-Improvised Explosive Device Efforts, GAO-10-95 (Washington, D.C.: Oct. 29, 2009); and Defense Acquisitions: Opportunities Exist to Improve DOD's Oversight of Power Source Investments, GAO-11-113 (Washington, D.C.: Dec. 30, 2010)). That work has shown that tracking specific, detailed program information can enhance visibility and oversight efforts; provide decision makers with timely and comprehensive information needed to determine management priorities; improve program management and visibility; and help avoid investing in duplicative efforts. Moreover, OEP&P's directive outlining its roles and responsibilities states that OEP&P will recommend appropriate funding levels for operational energy programs relating to the operational energy strategy. According to an OEP&P official, the number of initiatives and organizations involved in DOD's efforts to reduce its reliance on fuel has increased, and oversight and continued efforts to coordinate and collaborate across DOD are necessary. During our visit to forward-deployed locations in Afghanistan, Army officials also reiterated that the various DOD organizations involved in developing fuel demand management solutions are frequently unaware of one another's ongoing efforts, and that establishing a mechanism to increase DOD's visibility over all ongoing fuel demand management efforts would be useful. Since OEP&P did not have a mechanism in place to catalog fuel demand management initiatives underway within DOD, we queried the services and various DOD organizations to collect data on the initiatives being pursued within DOD. Based on the information they provided, we identified over 30 initiatives being developed by the services and other DOD organizations to reduce DOD's fuel demand at forward-deployed locations. (See app. III for the list of initiatives.)
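To make concrete what a systematic tracking mechanism might capture, the sketch below outlines a minimal, hypothetical catalog record for a fuel demand management initiative. The field names, sample entry, and figures are illustrative assumptions on our part; they do not represent an existing OEP&P system, data standard, or any actual initiative.

```python
# Hypothetical sketch of a simple catalog for fuel demand management
# initiatives. Field names and the sample record are illustrative only;
# they do not reflect an actual DOD or OEP&P system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Initiative:
    name: str
    sponsor: str                  # service or DOD organization developing it
    status: str                   # e.g., "R&D", "testing", "fielded"
    locations: List[str] = field(default_factory=list)
    est_annual_fuel_savings_gal: float = 0.0
    funding_source: str = ""

catalog: List[Initiative] = [
    Initiative(
        name="Example: efficient generator replacement",
        sponsor="Army",
        status="fielded",
        locations=["Example forward operating base"],
        est_annual_fuel_savings_gal=400_000,   # placeholder figure
        funding_source="Example appropriation",
    ),
]

def fielded_initiatives(entries: List[Initiative]) -> List[Initiative]:
    """Return only the initiatives already fielded, for oversight reporting."""
    return [e for e in entries if e.status == "fielded"]

total_savings = sum(e.est_annual_fuel_savings_gal for e in catalog)
print(f"Initiatives cataloged: {len(catalog)} "
      f"({len(fielded_initiatives(catalog))} fielded); "
      f"estimated savings: {total_savings:,.0f} gal/yr")
```

A catalog along these lines, whatever its actual form, would give oversight offices a single place to see which initiatives exist, who sponsors them, and where they overlap.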
Additionally, during our visit to the U.S. Army Soldier Systems Center in Natick, Massachusetts, officials told us that although our review was limited to fuel demand management initiatives for base camps at forward-deployed locations in Afghanistan, DOD had numerous projects aimed at reducing fuel demand at forward-deployed locations around the world, but at the time of our visit no office or organization was tracking all of these initiatives. An official with the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) involved in identifying operational energy investments and initiatives confirmed that ASD(R&E) was not tracking such initiatives and relied upon the services to coordinate and manage these issues. According to DOD officials, at the time of our request OEP&P did not have a mechanism in place to systematically track initiatives because its responsibilities are to develop and influence policy and provide guidance, oversight, and coordination of DOD's operational energy efforts, and it is not involved in the services' decisions about how to equip the forces with specific energy efficiency technologies. As such, officials told us that they had not developed a systematic approach for identifying and tracking fuel demand management initiatives. Since our request, officials told us that OEP&P has started working with DOD's Office of Cost Assessment and Program Evaluation to develop an automated budget exhibit that captures detailed program and funding data on operational energy initiatives included in DOD's and the components' budgets. OEP&P is in the process of refining this exhibit to capture improvements suggested by the components. This budget exhibit, with consolidated information on operational energy initiatives funded in the fiscal year 2013 President's Budget submission, will help the office in its oversight and coordination role, but OEP&P officials acknowledge that the effort has a knowledge gap. For example, it does not include information on initiatives that are the subject of rapid fielding efforts or are locally procured. OEP&P officials stated that the Operational Energy Division in Afghanistan has started to collect information on ongoing operational energy activities in theater. However, these efforts have just begun and it is unclear to what extent they will provide a comprehensive list of all operational energy initiatives underway within DOD. As mentioned earlier, over the next 5 years the services plan to spend approximately $4 billion on operational energy initiatives, and without an established mechanism to identify and track fuel demand management initiatives, DOD may miss opportunities to improve its return on investment, reduce life-cycle costs, consolidate efforts, and increase interoperability among fuel demand management technologies. <4. DOD Has Measured the Results of Some Fuel Demand Management Initiatives, and Is Developing Baseline Data to Assess Progress Toward Achieving Operational Energy Goals> DOD has measured the results of some of the fuel demand management initiatives used in Afghanistan, but only recently has focused on collecting and assessing the data needed to develop a comprehensive baseline measure of its current fuel consumption at forward-deployed locations. Recognizing the need for information to manage fuel demand effectively, DOD has tasked the services in its March 2012 implementation plan with establishing baselines for operational energy consumption in all activities (air, sea, land) and has provided funding for this purpose.
Once collected, these baseline data will provide information across DOD's operational activities, including those conducted in Afghanistan, and help the department better understand how specific assets consume fuel in an operational environment. <4.1. Service Efforts to Measure the Results of Some Fuel Demand Management Initiatives Are in the Early Stages and Face Some Challenges> As noted above, DOD has developed fuel demand management initiatives and has begun, in some cases, to measure their results. However, the services are still in the process of collecting and analyzing comprehensive baseline data for all activities, including fuel consumption at forward-deployed locations in Afghanistan, and have encountered some implementation challenges. In 2011, DOD issued guidance that emphasizes the importance of collecting data to assess progress and program effectiveness. Both DOD's strategic management plan and its operational energy strategy highlight the importance of collecting and analyzing data for use in assessing and managing the performance of its initiatives. Specifically, DOD's strategic management plan states that one of its business goals is to increase operational energy efficiency in order to lower risks to warfighters, reduce costs, and improve energy security. To help achieve this goal, the plan calls for establishing an operational energy baseline for the department that is based on credible, verifiable fuel usage data. Furthermore, the operational energy strategy states that a greater understanding of how energy is used will allow DOD to target investments to improve energy efficiency in places such as Afghanistan. Recognizing the lack of sufficient data to manage fuel demand effectively, the Army and Marine Corps, which have the largest presence at forward-deployed locations in Afghanistan, have begun to collect fuel use and behavior data to understand how equipment is being used in combat and to inform decision making on how to best employ equipment in the future. At the time of our report, the Army had begun collecting and analyzing data on particular fuel demand management initiatives and on its current fuel consumption at forward-deployed locations in Afghanistan. However, its data collection efforts face some continuing challenges. Among its ongoing fuel demand management initiatives, the Army has collected preliminary fuel consumption data on its new Advanced Medium Mobile Power Source (AMMPS) generators (see fig. 6). According to Army officials, replacing 273 Tactical Quiet Generators in Afghanistan with AMMPS generators is estimated to save about 1,100 gallons of fuel per day. The Army has also fielded a microgrid at Bagram Airfield that replaced 13 60-kilowatt Tactical Quiet Generators (see fig. 7). The Army collected data from the microgrid to analyze its fuel consumption and identified a savings of 7,344 gallons of fuel (17 percent) over the test period. The Army's February 2012 report on the microgrid concluded that producing energy can be done more efficiently if the Army understands how the energy will be used. It stated that without these types of data, the Army is currently running generators inefficiently in the field, which places a burden on logistical operations. According to the report, by using information such as forecasted scenarios and energy demand, the department can weigh the trade-offs and implement a system with optimum efficiency. 
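To put these figures in context, the short Python sketch below works through the arithmetic implied by the numbers above: the per-generator savings implied by the AMMPS estimate and the baseline fuel use implied by the microgrid's 17 percent reduction. The derived values are illustrative back-of-the-envelope figures, not data reported by the Army.

# Back-of-the-envelope arithmetic implied by the Army figures cited above.
# The derived values are illustrative only; the Army reported the totals,
# not the per-generator or baseline numbers computed here.

generators_replaced = 273      # Tactical Quiet Generators replaced by AMMPS
daily_savings_gal = 1100       # estimated gallons of fuel saved per day

per_generator = daily_savings_gal / generators_replaced
print(f"Implied savings per AMMPS generator: {per_generator:.1f} gallons/day")   # about 4.0

microgrid_savings_gal = 7344   # fuel saved over the Bagram microgrid test period
savings_fraction = 0.17        # reported 17 percent reduction

implied_baseline = microgrid_savings_gal / savings_fraction
print(f"Implied baseline use over the test period: {implied_baseline:,.0f} gallons")   # about 43,200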
Army officials noted that the AMMPS savings estimate is based on preliminary testing of generators run under full-load conditions and assumes the generators operate 24 hours per day. <4.1.1. The Marine Corps Has Begun Efforts to Measure Fuel Consumption> As discussed above, the Marine Corps has developed operational energy initiatives, including those to decrease fuel demand, and also has begun measuring the results of some of these initiatives, primarily those that serve battalion-sized units. As noted above, the Marine Corps established the Experimental Forward Operating Base (ExFOB) in 2009 to bring stakeholders together across the service's requirements, acquisitions, and technology communities to inform requirements and rapidly evaluate new technologies for potential deployment. The four ExFOB demonstrations conducted thus far have evaluated initiatives such as renewable energy power generation, tent liners, hybrid solar systems, more efficient air conditioners, and solar-powered refrigerators. After evaluation, infantry battalions deployed to Afghanistan with selected equipment items to further assess their performance. A Marine Corps assessment found that during deployment: two platoon positions were able to run completely on renewable energy for 1 month; one patrol base was able to save 175 gallons of fuel in a 1-month period by utilizing the ExFOB initiatives; the Green Renewable Expeditionary Energy Network System (GREENS) provided full power for a platoon guard station; and Marines were able to reduce the number of batteries they had to carry by using the Solar Portable Alternatives Communications Energy System (SPACES) to recharge tactical batteries. The Army and Marine Corps face challenges in collecting information on current fuel consumption at forward-deployed locations in Afghanistan. Until recently, information related to fuel demand in theater has been available only in the form of sales receipts and fuel delivery summaries, because DOD tracks the movement and delivery of fuel only up to the point at which a forward-deployed location receives it; and, as indicated above, efforts to collect current fuel consumption data face challenges. As a result, DOD lacks comprehensive data on how much fuel specific assets such as generators and air conditioning units consume in an operational environment. The Army and Marine Corps have begun collecting information on fuel consumption at their forward-deployed locations in Afghanistan. For this effort, Army and Marine Corps officials told us that both services are using the Tactical Fuels Manager Defense system technology (see fig. 9). To date, the Tactical Fuels Manager Defense system has been deployed to 36 locations in Afghanistan. Army officials stated that the information gathered by this system can assist a base commander in making decisions regarding energy use on the base, but they indicated that this technology is not yet being used at all forward-deployed locations and cited several difficulties they face. For instance, additional funding will be required to extend the Tactical Fuels Manager Defense system to the majority of locations in Afghanistan. In addition, during our site visit to forward-deployed locations in Afghanistan, officials reported that they had experienced difficulty in connecting to the system's website, which resulted in an inability to load fuel data points, receipts, and stock levels into the system. 
In addition to these technical challenges, the program manager stated that additional training and oversight procedures were needed to ensure soldiers and Marines use this system and are held accountable for importing data. For example, the program manager told us that some bases are not entering fuel consumption data into the system and that, from September 2011 to March 2012, the amount of data captured had declined by 50 percent, making it more difficult for DOD to meet its goal of obtaining baseline fuel data. In response, the International Security Assistance Force Joint Command issued a fragmentary order in April 2012 specifically to ensure all bases follow existing accountability and reporting procedures, including using the Tactical Fuels Manager Defense system to capture fuel data. While the system is providing improved data on fuel consumption at forward-deployed locations, Army officials also recognize that continued evaluation and improvements will be needed before deciding whether this should be an Army-wide system. <4.2. DOD Has Begun Collecting Baseline Data to Assess Effectiveness of Its Fuel Demand Management Efforts> While the services have efforts underway to obtain a better understanding of how specific assets consume fuel in theater, DOD has limited ability to assess the effectiveness of its fuel demand management initiatives because it has only recently begun efforts to collect comprehensive baseline data across the services. DOD recognizes the need for baseline data on fuel consumption in an operational environment and has taken several steps to address this issue. Specifically, OEP&P's implementation plan tasks the services with establishing operational energy consumption baselines and projecting consumption for fiscal years 2012 through 2017 by the second quarter of fiscal year 2012. DOD's implementation plan states that these projections will inform required reports to Congress on current and future energy needs. In addition, the implementation plan calls for the services to report to the Defense Operational Energy Board by the third quarter of fiscal year 2012 on any actions taken or needed to improve these baselines. The plan states that this effort may not necessarily entail the real-time measurement of energy consumption by individual pieces of equipment. Instead, the military departments and defense agencies may evaluate a range of options, including new systems, improvements to current and related systems, and/or application of sampling and extrapolation to existing data, to improve the department's understanding of the location, purpose, and end use of operational energy consumption. This implementation plan is an important step toward improving the department's management of its energy consumption at forward-deployed locations such as those in Afghanistan; however, the focus on establishing a baseline of fuel consumption is relatively recent. In addition, U.S. Forces-Afghanistan is in the process of improving its visibility and accountability over fuel consumption at forward-deployed locations. To help with this task, OSD officials informed us that DLA-Energy sent an analyst to Afghanistan in March 2012 to work with the U.S. Forces-Afghanistan's Operational Energy Division to capture a better picture of fuel consumption. 
Officials stated that with improved visibility, they expect that the Operational Energy Division will be able to articulate to combatant commanders or service officials the costs associated with certain operational decisions and leverage this improved picture of fuel consumption to target areas for improvement. Further, to support fuel demand management efforts at forward-deployed locations, OEP&P provided additional funding for a demonstration effort to evaluate the operational benefits of fuel demand management. Specifically, DOD provided $1.4 million to fund the Operation Enduring Freedom Energy Initiative Proving Ground to evaluate initiatives, including heat and air conditioning units, tent liners, solar tent shades, and hybrid-solar electrical power technology; analyze the effect these initiatives have on fuel consumption; and identify opportunities to deploy them in Afghanistan to achieve the greatest impact and return on investment. The group in charge of this effort has already begun to take inventory of the power and energy used at some forward-deployed locations and to monitor areas where there are opportunities for potential energy efficiency improvements. <5. Conclusions> In its extended war in Afghanistan, DOD reports that its heavy reliance on petroleum-based fuel at forward-deployed locations continues to create risk for the warfighters, pose difficult logistical challenges for military planners, and increase the department's operating costs. With consistent and heightened visibility from Congress and OSD, DOD has made progress in its efforts to develop an approach for managing its fuel demand at forward-deployed locations since the time of our 2009 report on this issue. The creation of the Assistant Secretary of Defense for Operational Energy Plans and Programs and the services' operational energy offices, OEP&P's publication of its operational energy strategy and implementation plan, the services' strategies, and the ongoing fuel demand management initiatives the services have deployed or are developing all represent positive steps toward reducing the department's reliance on petroleum-based fuel at forward-deployed locations such as those in Afghanistan. DOD's efforts to develop specific guidance on how military forces should factor operational energy considerations into their operational, planning, and training decisions are important steps toward minimizing the key problems identified by DOD: risk to warfighters, logistics-related disruptions, and high operating costs associated with heavy reliance on petroleum-based fuel. However, without a mechanism for systematically collecting and sharing information across the services on the fuel demand management initiatives that have been fielded, or are in the research and development phase, DOD may forgo an opportunity to improve interoperability of new technologies, consolidate research and development efforts, and save costs. Lastly, DOD's recent efforts to begin collecting accurate baseline data on fuel demand at the individual asset level at forward-deployed locations should enhance its planning, programming, and operational decisions, and help measure the impact of its fuel demand management efforts as well as progress toward meeting its overall operational energy goals. 
At a time when the federal government faces increasing fiscal challenges and competition across the government for discretionary funds, these efforts by DOD could help maximize the benefits of its energy efficiency investments for forward- deployed locations and better position the department for future missions. <6. Recommendation for Executive Action> To further enhance DOD s approach for managing fuel demand, including at forward deployed locations such as those in Afghanistan, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Operational Energy Plans and Programs, in consultation with the Joint Staff, combatant commanders, and military service components, to finalize and implement a systematic approach that includes establishing a mechanism to identify and track fuel demand management initiatives that have been fielded, or are in the research and development phase to ensure information concerning these efforts is effectively shared across the services. <7. Agency Comments and Our Evaluation> We provided a draft of this report to DOD for comment. In its written comments, reproduced in appendix IV, DOD partially concurred with our recommendation to finalize and implement a systematic approach that includes establishing a mechanism to identify and track fuel demand management initiatives that have been fielded, or are in the research and development phase to ensure information concerning these efforts is effectively shared across the services. DOD also provided technical comments that were incorporated, as appropriate. DOD stated that it signed the DOD Operational Energy Strategy Implementation Plan in March 2012 and established the Defense Operational Energy Board with the purpose of providing a mechanism for reviewing, synchronizing, and supporting departmentwide operational energy policies, plans, and programs. DOD also stated that the Defense Operational Energy Board s membership ensures departmentwide coordination. Furthermore, DOD stated that the Operational Energy Implementation Plan addresses energy improvements in current operations, and the Board will oversee the tracking and sharing of information on fuel demand improvements. Lastly, DOD stated that the department conducts an annual review of the components budgets and activities to determine their adequacy for implementing the Operational Energy Strategy, and this review also encompasses fuel demand management initiatives that are being developed, fielded, or supported by the budget. As such, DOD stated that while our recommendation has merit, further action by the Secretary of Defense is unnecessary. We acknowledge the intended actions described in DOD s Operational Energy Strategy Implementation Plan, the function and scope of the Defense Operational Energy Board, and DOD s annual review process, which may eventually provide DOD with an approach and mechanism for identifying and tracking fuel demand management initiatives that have been fielded, or are in the research and development phase. However, until these initiatives are fully implemented, we are unable to assess the extent to which they will address our recommendation. During the course of our review, DOD officials explained that many of the initiatives included in its Operational Energy Strategy Implementation Plan, such as identifying investment gaps in the department s science and technology portfolio necessary to reduce fuel demand, would be completed at the end of fiscal year 2012 or beyond. 
DOD officials also told us they were in the process of finalizing the department's annual review of the components' budgets and activities to include fuel demand management initiatives that were being developed or fielded. However, at the conclusion of our review, this budget review process had not been finalized, and the department acknowledges that its annual budget review efforts do not include initiatives that are part of rapid fielding or are locally procured. We continue to believe that a comprehensive mechanism for sharing information on all initiatives underway within the department, including those that are part of rapid fielding or are locally procured, would further enhance DOD's approach for managing fuel demand at forward-deployed locations such as those in Afghanistan, and help ensure information concerning these efforts is effectively shared across the services. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at [email protected] or (202) 512-5257. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Scope and Methodology Our objectives were to assess the extent to which DOD has (1) established an approach to provide visibility and accountability for fuel demand management at forward-deployed locations, (2) initiatives underway to promote fuel efficiency across the services in Afghanistan and has facilitated coordination and collaboration among the services on the development and fielding of these initiatives, and (3) measured the results of its fuel demand management initiatives at forward-deployed locations. To gather information for these objectives, we reviewed documentation and interviewed officials from the following organizations:
Office of the Secretary of Defense
Office of the Assistant Secretary of Defense for Operational Energy Plans and Programs
Office of the Assistant Secretary of Defense for Research and Engineering
J-4 Logistics Directorate, Engineering Division
Deputy Assistant Secretary of the Army (Energy and Sustainability Office)
Army G-4
Army Corps of Engineers 249th Prime Power Battalion
Army Rapid Equipping Force
Army Petroleum Center
Program Manager Mobile Electric Power
Green Warrior Initiative; Contingency Basing & Operational Energy
Natick Soldier Research, Development, Engineering Command
Logistics Civil Augmentation Program
Deputy Assistant Secretary of the Navy, Energy Office
Navy Energy Coordination Office
Air Force Office of the Assistant Secretary, Installation, Environment
Air Mobility Command Fuel Efficiency Office
Marine Corps Expeditionary Energy Office
Marine Corps Systems Command
Marine Corps Training and Education Command
U.S. Forces Afghanistan-Operational Energy Division
New Kabul Compound
Camp Phoenix
Camp Sabalu-Harrison
Joint Combat Outpost Pul-A-Sayed
Camp Leatherneck
Patrol Base Boldak
Bagram Airfield
Defense Logistics Agency Energy
We concentrated our review on the steps the Army and Marine Corps have taken to reduce fuel demand because these two services have the responsibility for managing forward-deployed locations in Afghanistan. Our review focused on fuel demand management initiatives planned for or underway at forward-deployed locations in Afghanistan. 
For the purposes of our review, we defined fuel demand management initiatives to include nonmateriel and materiel solutions to assist DOD in reducing its reliance on fuel consumed at forward-deployed locations. We did not examine energy efficiency initiatives for naval vessels, aircraft, or combat vehicles. We asked officials to identify key initiatives planned or under way to reduce fuel demand. After consultation with U.S. Central Command and U.S. Forces Afghanistan officials, we selected and visited forward-deployed locations because they were using energy-efficient technologies that were included in our review and/or are illustrative of DOD s fuel demand management initiatives and challenges. The locations chosen are illustrative case studies in our report and information obtained from these locations is not generalizable to all forward-deployed locations. We also reviewed DOD guidance related to energy reduction for the department s permanent or U.S. facilities. To address the first objective, we identified DOD s approach for fuel demand management from our prior work examining DOD s fuel demand management efforts at forward-deployed locations. These elements include: (1) establishing visibility and accountability for achieving fuel reduction by assigning roles and responsibilities, establishing metrics, and monitoring performance; (2) issuing guidance and policies that address fuel demand at forward-deployed locations; and (3) establishing incentives and a viable funding mechanism to support the implementation of fuel demand reduction projects. We reviewed DOD and Service guidance, operational energy strategies and plans, OEP&P s budget certification report, project status reports, and briefings to identify DOD s approach for fuel demand management. We also interviewed OSD, Joint Staff, service, and U.S. Central Command officials at the headquarters and operational level to discuss DOD s fuel demand management approach, and to determine the extent to which DOD has implemented the initiatives contained in its operational energy strategy. We also met with officials responsible for administering the Logistics Civil Augmentation Program contracts to discuss how energy efficiency guidance and requirements were being incorporated into contracts to incentivize fuel demand management efforts. Furthermore, we met with OEP&P, Joint Staff, and service officials to discuss the processes and steps needed to ensure an effective approach was established to provide oversight and accountability for fuel demand management and the anticipated time frames for accomplishing fuel demand management goals. To determine the extent to which DOD has initiatives underway to manage fuel demand across the services in Afghanistan and has facilitated coordination and collaboration, we queried OEP&P, the services, and various DOD organizations involved in operational energy research and development to collect data on the initiatives to reduce fuel demand at forward-deployed locations. These initiatives included ones that had been fielded or were expected to be fielded within 12 months of our data request. Based on the information provided and the scope of our review, we identified over 30 fuel demand management initiatives already fielded or being developed by the services and other DOD organizations to reduce DOD s fuel demand at forward-deployed locations. We also reviewed data on the current status of initiatives that were identified in our 2009 report. 
In addition, we met with Army and Marine Corps officials located at the headquarters level and at forward-deployed locations to discuss the purpose and function of these initiatives, as well as any opportunities for greater coordination and collaboration. To determine the extent to which the department has efforts underway to facilitate coordination and collaboration among the services, we conducted an analysis of DOD energy strategies and plans, reviewed DOD energy conference summary reports, attended DOD energy symposia, and interviewed DOD and service officials. Additionally, we reviewed relevant DOD, Joint, and service policies and guidance, and assessed the extent to which the policies and guidance were consistent with leading practices for coordination and collaboration identified in our prior work. We also met with DOD and research and development officials to discuss the challenges, if any, that they faced in coordinating and collaborating on fuel demand management initiatives. To determine the extent to which DOD has efforts in place to accurately capture the results of its fuel demand management initiatives in forward-deployed locations, we assessed DOD and the services' strategies that detail their goals and methods for measuring the results of their fuel demand management initiatives, and determined whether these plans addressed key elements from leading practices for measuring results (e.g., goals, milestones, quantifiable metrics, and evaluation of benefits). (See also Executive Order 13514, Federal Leadership in Environmental, Energy, and Economic Performance (Oct. 5, 2009).) We also relied on documents provided to us by DOD and the services regarding the initiatives and the results from testing their performance. We reviewed select DOD studies that assessed various initiatives being used in Afghanistan with the goal of reducing fuel use at forward-deployed locations. We concluded that the studies clearly describe the methodology and assumptions behind the study results, and they do not attempt to generalize the results beyond the context of the studies. Although the results of these studies cannot be generalized to all fuel demand management initiatives, they provide examples of how DOD is assessing the results of these initiatives. We also conducted interviews with DOD and service officials to obtain information regarding DOD's progress in collecting fuel data on fuel demand management initiatives and establishing a baseline on fuel demand at forward-deployed locations in Afghanistan. We conducted this performance audit from April 2011 through June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Key Tasks and Milestones Included in DOD's Operational Energy Strategy Implementation Plan ASD(OEPP), in consultation with relevant offices within OSD, the Military Departments, Defense agencies, and the Joint Staff, will present the charter at the meeting of the Board. 
The Military Departments and Defense agencies will report to the Defense Operational Energy Board an operational energy baseline, using all available data on actual energy consumption in support of military operations in fiscal year 2011 and projected consumption in fiscal year 2012 2017. Combatant Commands will report to the Defense Operational Energy Board on how they guide their forces to improve energy performance and efficiency in operations and the effectiveness of this guidance. The Military Departments will report to the Defense Operational Energy Board progress against their own current or updated energy performance goals and metrics and demonstrate how such progress supports the Operational Energy Strategy priority to reduce the demand for fuel and increase capability in military operations. In accordance with forthcoming Joint Staff policy, the Joint Staff, U.S. Special Operations Command (USSOCOM), and the Military Departments will meet the congressional intent of an energy performance attribute in the requirements development process. Through the Joint Requirements Oversight Council, the Vice Chairman of the Joint Chiefs of Staff (VCJCS) will oversee implementation of this effort in individual programs. The Joint Staff, USSOCOM, and the Military Departments will report overall progress in implementing an energy performance attribute to the Defense Operational Energy Board. In accordance with forthcoming policy from the Under Secretary of Defense for Acquisition, Technology and Logistics (USD(AT&L)), the Military Departments will develop and apply Fully Burdened Cost of Energy (FBCE) analyses throughout the acquisition process. The Military Departments will report overall progress on implementing FBCE to the Defense Operational Energy Board. Description The Military Departments and other asset owners will brief the Defense Operational Energy Board on energy-related risks to fixed installations that directly support military operations, to include those identified through Assistant Secretary of Defense for Homeland Defense and America s Security Affairs (ASD(HD&ASA)) Defense Critical Infrastructure Program (DCIP). The Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) will identify investment gaps in the Department s science and technology portfolio necessary to reduce demand, improve system efficiency, and expand supply alternatives, as articulated in the Operational Energy Strategy. ASD(R&E) will provide the final report to the Defense Operational Energy Board and include recommendations on possible options for filling the gaps. The Joint Staff and Military Departments will report to the Defense Operational Energy Board on how policy, doctrine, and professional military education (PME) will support reduced energy demand, expanded energy supply, and future force development. As appropriate and consistent with annual classified guidance to the Combatant Commands, the Joint Staff and Combatant Commands will report to the Defense Operational Energy Board on command measures to incorporate Operational Energy Strategy goals into theater campaign plans, security cooperation initiatives, joint and combined exercises, and other activities designed to achieve theater and country objectives. Appendix III: Fuel Demand Management Initiatives for Forward-Deployed Locations Identified by DOD The list of fuel demand management initiatives included below provides an overview of the materiel initiatives identified by DOD organizations during the course of our review. 
This list does not include the nonmateriel initiatives underway such as those to change policies and procedures, or modify staffing to perform fuel demand management functions. The list also provides a status update on the initiatives discussed in our 2009 report on fuel demand management. The first nine initiatives listed below were identified in our 2009 report. An application of foam insulation on tent structures to decrease fuel demand. According to Army officials, spray foam reduces power use for heating, ventilation, and air conditioning. CURRENT STATUS: The effort to insulate tents with spray polyurethane foam has been suspended. Even though the tent insulation effort was demonstrated in-theater with successful results, the Army is no longer moving forward with a large- scale effort to install foam insulation in all tents and portable structures while it examines the environmental implications of disposal of the solidified tent foam when the life span of the tent is complete. The AMMPS, a replacement for the Tactical Quiet Generators (TQGs). It takes advantage of current technology to provide power generation capabilities that are more fuel efficient and reduce overall costs. CURRENT STATUS: The Army is currently procuring AMMPS generators and will field them throughout the service. Some Army units will take the AMMPS with them when they deploy to Afghanistan in the future. Also, Program Manager-Mobile Electric Power is fielding approximately 200 AMMPS to Afghanistan starting in 2012 to replace legacy tactical quiet generators (TQGs). Once in place, the DOD expects AMMPS can save as much as 300,000 gallons of fuel per month over the TQGs they are replacing. Improved-Environmental Control Unit (I-ECU) The I-ECU is a replacement of military standard environmental control units. It is designed for military environments, with reduced power consumption and weight, and increased reliability over current environmental control units. CURRENT STATUS: Program Manager for Mobile Electric Power (PM-MEP) begins low- rate initial production of the I-ECU in fiscal year 2012. An experimental device that converts trash (paper, plastic, cardboard, and food waste) into energy for forward-deployed locations, reducing the need for convoys to deliver fuel and haul away trash. CURRENT STATUS: TGER has been successfully tested and full system integration is underway. The Army is now targeting a field demonstration starting in mid-June for 90 days. The original destination was Bagram, but now more likely will be Camp Virginia, Kuwait. The scrap tire recycling process produces diesel, gas, carbon char, and steel byproducts that can either be used to power generators, boilers, and other items or recycled into products such as asphalt and paint. CURRENT STATUS: This effort no longer has research investment, and is not a product being further developed. Description/Status update A hybrid generator system that uses wind and solar energy to supplement diesel generators. CURRENT STATUS: Due to issues regarding usability the system was dismantled and disposed of in early fiscal year 2011. The THEPS are mobile generators with solar panels, wind turbine, diesel generator, and storage batteries. CURRENT STATUS: The Transportable Hybrid Electric Power Station was not successful but spurred the Defense Advanced Research Projects Agency to allocate $30 million to the Army to develop the Hybrid Intelligent Power (HIPower) system, a micro grid system. 
This is a concrete, dome-shaped structure that is designed to be energy efficient with energy supplied by a combination of solar panels and windmills. CURRENT STATUS: Although successful, using domes in-theater would require some changes in current operations, as domes would be considered permanent structures and thus subject to MILCON constraints. A collection of various deployable shelters powered by solar and fuel cell generators. CURRENT STATUS: There have been no Renewable Energy Tent Cities fielded in Afghanistan by the Air Force, but Air Force Central has fielded a number of sets elsewhere in-theater. The Air Force purchased and shipped a total of 920 units (flys and inserts) for the CENTCOM AOR, for Air Force training sites, and for storage at Holloman Air Force Base. Air Force Central received a total of 575 units and the majority are in use at Manas and Ali Al Salem. Units were also sent to Air Force sites for training. <8. Initiatives identified after 2009 10. Deployable Renewable Energy> This module is intended to be towed by a vehicle and is designed to be used at combat posts in forward-deployed locations to power equipment via solar, wind turbine, battery, and generator technologies. SAGE is an integrated effort to develop design specifications for base camp infrastructure that when employed will reduce the quantity of petroleum fuel required for electrical power generation for expeditionary camps by employing smart Micro-grid technologies and energy efficient modular structures. The REPPS is a lightweight, portable power system capable of recharging batteries and/or acting as a continuous power source. 13. Afghanistan Microgrid Project (AMP) The AMP initiative involves the operation of load sensing monitors and fuel consumption logs, which are being captured by the Army Materiel Systems Analysis Activity team and will be analyzed to quantify the impact of the microgrid on fuel consumption against the baseline of 13 TQGs that the system is replacing. SPACES is a lightweight man-portable lightweight device with tailorable adaptors that can energize equipment such as radios, laptop computers, and rechargeable batteries. Energy Network System (GREENS) GREENS is a man-transportable device with renewable energy collection and storage that can energize communications equipment, sensors, and radios. EARLCON is hybrid power system, using solar, traditional generators, and battery storage, with an energy management system. It is designed to improve efficiency and reduce demand for fuel. The SunDanzer direct current powered air cooler is an air conditioning system that features a variable speed compressor. This design allows for low energy consumption and the ability to connect directly to a photovoltaic array without the need for batteries. Description/Status update The Integrated Trailer-ECU-Generator II is a self-contained, highly mobile power generation and environmental control system. It is a second generation system of the current Integrated Trailer ECU Generator. Energy to the Edge focuses on meeting energy and water requirements at locations that are hard to support logistically, while simultaneously reducing dependence on ground and aerial resupply operations. This is a 30-kilowatt hybrid energy system designed to integrate renewable energy with the Army s currently fielded Tactical Quiet Generators. The TEMPER air-supported tent photovoltaic fly provides supplemental power generation without increasing the operational footprint of the base camp. 
This solar shade tent fly has integrated photovoltaic power and can reduce solar load up to 80-90 percent. The ZeroBase H-Series ReGenerator is a hybrid system that has solar generation, battery storage, and a 5-kilowatt generator. The system maximizes generator efficiency by operating the generator at peak efficiencies by capturing excess power through the battery bank. Mobile Max Pure System is a commercial, off-the-shelf system that provides over 3 kilowatts of photovoltaic power but also integrates water pumping and purification systems as options. The RENEWS system consists of wind turbines, flexible solar panels, a battery module, and output adapter plugs/connectors. The Insulating Liner is a lightweight, radiant, reflective insulating liner. It is installed behind the existing liner to enhance the radiant and insulating capability, which reduces both heating and cooling requirements / needs. The Insulating liner has zippered doors and sealable openings for ducts and electrical cords to enter the shelter. These liners fit different shelter systems and provide varying levels of insulation. SunDanzer refrigerators and freezers have exceptionally low energy consumption and require smaller, less expensive power systems and low operating expense. This technology allows refrigeration in remote locations where it was previously unavailable or prohibitively expensive. BRITES is an Air Force power system that stores energy and serves as a power management distribution system. The Alaska Small Shelter System is the official Air Force shelter system and the only shelter successfully tested to meet all the requirements, such as wind and snow load, of the U.S. Air Force s 1999 Operational Requirements Document. The Utilis Thermal Fly is an external solar shade used to reduce the severe radiant heat transfer from the outside environment to the inside of the shelter. Flexible Solar Cells Technique works by scanning a nanoscale stylus across an array of microscopic solar cells which causes them to illuminate with simulated light so that they function. These flexible solar cells are plastic-based, and work via photovoltaic properties of the plastic, which convert a portion of the light that hits the solar cells into electricity. This generator is in development and will be designed to be a portable, integrated, and ruggedized, polymer electrolyte membrane-based fuel cell, power generator, capable of operating on certain raw fuel. This generator will produce 10 kilowatts peak power output that is suitable for deployment to forward operating locations. This shelter system uses lightweight, flexible solar panels to cool a tent shelter. It is currently undergoing field testing and will be deployed initially in Kuwait in fiscal year 2013. Description/Status update These are lightweight, deployable, rigid-wall, and thermally insulated shelters that can be used as part of various fielding options. This cellular insulation project leverages NASA s multilayer film insulation concept resulting in flat panels that when mechanically deployed provide energy. Balance of Systems is designed for multiple applications, including Quadrant, Temper Fly, and PowerShade. This system consists of a charge controller, power monitor, AC inverter, and two storage batteries. The power is generated by the photovoltaics flows to the charge controller, which then uses that power to charge the batteries if they are depleted. 
This shower system is designed to improve the energy, water, and waste efficiency and reduce environmental risks of life support areas. This initiative is an extended solar-power solution to operate a wireless surveillance system for combat outpost force protection. This is an energy and power initiative that includes a generator and solar photovoltaics. 39. Hunter Defense Technologies (HDT) The HDT Heat Shield Radiant Blanket is a 114-pound liner designed to help thermally insulate a Base-X tent. The HDT liner fits inside of the tent by attaching the liner to the walls and ceiling. Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments <9. GAO Contact> <10. Staff Acknowledgments> In addition to the contact named above, Suzanne Wren (Assistant Director), Virginia Chanley, Carole Coffey, Mark Dowling, Jason Jackson, Tamiya Lunsford, Christopher Mulkins, Charles Perdue, Amie Steele, Erik Wilkins-McKee, and Delia P. Zee made major contributions to this report. Related GAO Products Defense Acquisitions: Opportunities Exist to Improve DOD s Oversight of Power Source Investments. GAO-11-113. Washington, D.C.: December 30, 2010. Defense Management: Increased Attention on Fuel Demand Management at DOD s Forward-Deployed Locations Could Reduce Operational Risks and Costs. GAO-09-388T. Washington, D.C.: March 3, 2009. Defense Management: DOD Needs to Increase Attention on Fuel Demand Management at Forward-Deployed Location. GAO-09-300. Washington, D.C.: February 20, 2009. Defense Management: Overarching Organizational Framework Needed to Guide and Oversee Energy Reduction Efforts for Military Operations. GAO-08-426. Washington, D.C: March 13, 2008.
Why GAO Did This Study According to DOD, the U.S. military’s dependence on liquid fuel in countries like Afghanistan creates an enormous logistics burden that exposes forces to enemy attack and diverts operational resources from other mission areas to support delivery of this critical resource. In 2011, DOD consumed almost 5 billion gallons of fuel in military operations worldwide, at a cost of approximately $17.3 billion. GAO was asked to (1) assess DOD’s approach for fuel demand management, including at forward-deployed locations in Afghanistan, (2) determine the extent to which DOD has initiatives to promote fuel efficiency at forward-deployed locations in Afghanistan and efforts to coordinate and collaborate on such initiatives, and (3) assess efforts to measure the results of its fuel demand management initiatives and establish a baseline measure of fuel consumption in Afghanistan. To conduct this review, GAO analyzed DOD and service guidance and strategies related to fuel demand management and fuel demand management initiatives, visited locations in Afghanistan, and met with DOD officials. What GAO Found The Department of Defense (DOD) has taken steps to establish an approach for managing DOD’s overall fuel demand, but is still developing comprehensive guidance to address fuel demand management, including at forward-deployed locations in countries such as Afghanistan. In 2009, GAO reported that DOD lacked (1) visibility and accountability for achieving fuel reduction, (2) incentives and a viable funding mechanism to invest in the implementation of fuel demand reduction projects, and (3) guidance and policies that addressed fuel demand at forward-deployed locations. In response to GAO recommendations, DOD has taken steps since 2009 to increase its visibility and accountability for fuel demand management at forward-deployed locations, including those located in Afghanistan. In addition, with an increased focus on fuel demand management, DOD has also provided funding and incentives to implement fuel demand management projects. Further, DOD has issued some guidance on fuel demand management at forward-deployed locations since 2009 and is developing more comprehensive guidance on how DOD will incorporate energy efficiency considerations into operations, planning, and training decisions for current military operations in Afghanistan and for future military operations. DOD’s 2012 Operational Energy Strategy Implementation Plan acknowledges the need for additional comprehensive guidance and directs the Joint Staff and military departments to report, by the end of fiscal year 2012, on how operational energy considerations will be reflected in policy, doctrine, and professional military education. The Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 requires DOD to report to Congress annually on its progress in implementing its operational energy strategy. DOD has yet to submit its first report. Multiple DOD organizations are developing initiatives to decrease fuel demand at forward-deployed locations, including in Afghanistan, and the department has worked to facilitate some coordination and collaboration among the services on fuel demand management efforts. However, it is still developing an approach to systematically identify and track all of the fuel demand management initiatives that have been fielded, or are in the research and development phase throughout DOD. 
GAO’s prior work found that utilizing a mechanism such as a database can help organizations enhance their visibility and oversight of DOD programs. Until DOD finalizes its approach to systematically identify and track fuel demand management initiatives, it may be limited in its ability to foster collaboration, achieve efficiencies, and avoid unintended duplication or overlap of activities. DOD has started to measure the results of some of the fuel demand management initiatives used in Afghanistan, but is still in the process of collecting and assessing comprehensive baseline data needed to measure current fuel consumption at forward-deployed locations. The Army and Marine Corps have begun collecting data on the amount of fuel consumed by their current assets in Afghanistan. Recognizing the need for additional information, DOD’s 2012 Implementation Plan has tasked the services with developing and refining their fuel consumption baselines by mid-2012 and DOD has provided funding for this purpose. Once collected, these data should enhance DOD’s planning, programming, and operational decisions and help DOD assess progress toward meeting its operational energy goals. What GAO Recommends GAO recommends that DOD finalize and implement a systematic approach that includes establishing a mechanism to identify and track fuel demand management initiatives that have been fielded, or are in the research and development phase. DOD partially concurred with GAO’s recommendation, citing ongoing efforts to identify and track initiatives. Until fully implemented, GAO is unable to assess whether these efforts fully address the recommendation
<1. Background> <1.1. Eligibility Criteria for School Participation in the Title IV Program> In order for students attending a school to receive Title IV funds, a school must be: 1. licensed or otherwise legally authorized to provide higher education in the state in which it is located, 2. accredited by an agency recognized for that purpose by the Secretary 3. deemed eligible and certified to participate in federal student aid programs by Education. Under the Higher Education Act, Education does not determine the quality of higher education institutions or their programs; rather, it relies on recognized accrediting agencies to do so. As part of its role in the administration of federal student aid programs, Education determines which institutions of higher education are eligible to participate in Title IV programs. Education is responsible for overseeing school compliance with Title IV laws and regulations and ensuring that only eligible students receive federal student aid. As part of its compliance monitoring, Education relies on department employees and independent auditors of schools to conduct program reviews and audits of schools. Institutions that participate in Title IV programs must comply with a range of requirements, including consumer disclosure requirements, which include information schools must make available to third parties, as well as reporting requirements, which include information schools must provide to Education. <1.2. Sources of Federal Requirements> Congress and the President enact the statutes that create federal programs; these statutes may also authorize or direct a federal agency to develop and issue regulations to implement them. Both the authorizing statute and the implementing regulations may contain requirements that recipients must comply with in order to receive federal funds. The statute itself may impose specific requirements; alternatively, it may set general parameters and the implementing agency may then issue regulations further clarifying the requirements. Federal agencies may evaluate and modify their regulatory requirements, but they lack the authority to modify requirements imposed by statute. In addition, when issuing rules related to programs authorized under Title IV, Education is generally required by the HEA to use negotiated rulemaking, a process that directly involves stakeholders in drafting proposed regulations. Once the department determines that a rulemaking is necessary, it publishes a notice in the Federal Register, announcing its intent to form a negotiated rulemaking committee, and holds public hearings to seek input on the issues to be negotiated. Stakeholders, who are nominated by the public and selected by Education to serve as negotiators, may include schools and their professional associations, as well as student representatives and other interested parties. A representative from Education and stakeholders work together on a committee that attempts to reach consensus, which Education defines as unanimous agreement on the entire proposed regulatory language. If consensus is reached, Education will generally publish the agreed-upon language as its proposed rule. If consensus is not reached, Education is not bound by the results of the negotiating committee when drafting the proposed rule. According to proponents, the negotiated rulemaking process increases the flow of information between the department and those who must implement requirements. 
Once a proposed rule is published, Education continues the rulemaking process by providing the public an opportunity to comment before issuing the final rule. <1.3. Information Collections and the Paperwork Reduction Act> The Paperwork Reduction Act (PRA) requires federal agencies to assess and seek public comment on certain kinds of burden, in accordance with its purpose of minimizing the paperwork burden and maximizing the utility of information collected by the federal government. Under the PRA, agencies are generally required to seek public comment and obtain Office of Management and Budget (OMB) approval before collecting information from the public, including schools. Agencies seek OMB approval by submitting information collection requests (ICRs), which include, among other things, a description of the planned collection efforts, as well as estimates of burden in terms of time, effort, or financial resources that respondents will expend to gather and submit the information. Agencies are also required to solicit public comment on proposed information collections by publishing notices in the Federal Register. If a proposed information collection is part of a proposed rulemaking, the agency may include the PRA notice for the information collection in the Notice of Proposed Rulemaking for that rule. The PRA authorizes OMB to approve information collections for up to 3 years. Agencies seeking an extension of OMB approval must re-submit an ICR using similar procedures, including soliciting public comment on the continued need for and burden imposed by the information collection. <1.4. Past and Ongoing Initiatives to Examine Schools' Federal Regulatory Burden> Over the last two decades, there have been several efforts to examine the federal regulatory burden faced by schools (see table 1). While intending to make regulations more efficient and less burdensome, several of these efforts also acknowledge that regulation provides benefits to government and the public at large. The specific results of initiatives varied, as described below. For example, Executive Order 13563, which was issued in 2011, requires agencies to, among other things, develop plans to periodically review their existing significant regulations and determine whether these regulations should be modified, streamlined, expanded, or repealed to make the agencies' regulatory programs more effective or less burdensome. Consistent with the order's emphasis on public participation in the rulemaking process, OMB guidance encourages agencies to obtain public input on their plans. <2. Experts Cited a Range of Requirements as Burdensome> Although the 18 experts we interviewed offered varied opinions on which Title IV requirements are the most burdensome, 16 said that federal requirements impose burden on postsecondary schools. While no single requirement was cited as most burdensome by a majority of experts, 11 cited various consumer disclosures schools must provide or make available to the public, students, and staff (see table 2). Among other things, these disclosure requirements include providing certain information about schools, such as student enrollment, graduation rates, and cost of attendance. The most frequently mentioned consumer disclosure requirement cited by 5 experts as burdensome was the Clery Act campus security and crime statistics disclosure requirement. 
Two experts noted the burden associated with reporting security data, some of which may overlap with federal, state, and local law enforcement agencies. Beyond consumer disclosures, 4 experts stated that schools are burdened by requirements related to the return of unearned Title IV funds to the federal government when a student receiving financial aid withdraws from school. According to 2 experts, schools find it particularly difficult both to calculate the precise amount of funds that should be returned and to determine the date on which a student withdrew. Finally, 6 experts we interviewed stated that, in their view, it is the accumulation of burden imposed by multiple requirements rather than burden derived from a single requirement that accounts for the burden felt by postsecondary schools. Three stated that requirements are incrementally added, resulting in increased burden over time. Experts also described some of the benefits associated with Title IV requirements. For example, one expert stated that requiring schools to disclose information to students to help them understand that they have a responsibility to repay their loans could be beneficial. Another expert noted that consumer disclosures allow students to identify programs relevant to their interests and that they can afford. <3. Schools Shared Similar Views on Types of Burdens and Named a Few Benefits> School officials who participated in our discussion groups told us that Title IV requirements impose burden in a number of ways, as shown in table 3. Participants in all eight groups discussed various requirements that they believe create burden for schools because they are, among other things, too costly and complicated. For example, participants in four groups said the requirement that schools receiving Title IV funds post a net price calculator on their websites an application that provides consumers with estimates of the costs of attending a school has proven costly or complicated, noting challenges such as those associated with the web application, obtaining the necessary data, or providing information that may not fit the schools circumstances. School officials from six discussion groups also noted that complying with requirements related to the Return of Title IV Funds can be costly because of the time required to calculate how much money should be returned to the federal government (see Appendix III for information on selected comments on specific federal requirements school officials described as burdensome). Participants in six of eight discussion groups said that consumer disclosures were complicated, and participants in seven groups said that Return of Title IV Funds requirements were complicated. For example, participants in one discussion group stated that consumer disclosures are complicated because reporting periods can vary for different types of information. Another explained that the complexity of consumer disclosures is a burden to staff because the information can be difficult to explain to current or prospective students. Also, participants in two groups stated that the complexity of consumer disclosures makes it difficult for schools to ensure compliance with the requirements. 
Likewise, participants noted that calculating the amount of Title IV funds that should be returned can be complicated because of the difficulty of determining the number of days a student attended class as well as the correct number of days in the payment period or period of enrollment for courses that do not span the entire period. Participants in three discussion groups found that the complexity of Return of Title IV Funds requirements made it difficult to complete returns within the required time frame. In addition, participants from four groups noted that the complexity increases the risk of audit findings, which puts pressure on staff. Discussion group participants identified other types of concerns that apply primarily to consumer disclosures. For example, participants in two groups said that it is burdensome for schools to make public some disclosures, such as graduates' job placement data, because they cannot easily be compared across schools, thereby defeating the purpose of the information. Like six of the experts we interviewed, participants in six discussion groups noted that burden results from the accumulation of many requirements rather than from a few difficult requirements. Two participants said that when new requirements are added, generally none are taken away. Similarly, two other participants commented that the amount of information schools are required to report grows over time. Another commented that it is difficult to get multiple departments within a school to coordinate in order to comply with the range of requirements to which schools are subject under Title IV. Other federal requirements, in addition to those related to Title IV, may also apply to postsecondary schools (see Appendix IV for selected examples). School officials also described some benefits of Title IV requirements. Participants in three discussion groups pointed out that some consumer information can be used to help applicants choose the right school. Other participants commented that consumer disclosures encourage transparency. For example, participants in two groups said the information schools are required to disclose regarding textbooks helps students compare prices and consider the total cost of books. Regarding Return of Title IV Funds, participants in three discussion groups stated that the process helps restore funds to the federal government that can be redirected to other students. <4. Education Seeks Feedback Mainly through Formal Channels but Schools Do Not Always Respond to These Opportunities> Education seeks feedback on burden through formal channels such as publishing notices seeking comments on its burden estimates for proposed information collections, its retrospective analysis plan, and negotiated rulemaking. As shown in table 4, the department publishes notices in the Federal Register, on its website, and through a listserv to make the public aware of opportunities to provide feedback on burden. Department officials also said they receive some feedback from school officials through informal channels such as training sessions and open forums at conferences. Although Education has published notices seeking feedback on burden, officials said the department has received few comments in response to its solicitations. For example, Education said it received no comments in response to its request for public comment on burden estimates included in its 2010 Program Integrity Notices of Proposed Rulemaking, which proposed multiple regulatory changes with increased burden estimates.
In addition, Education officials said some of the comments they receive about burden estimates are too general to make modifications in response to them. We focused on ICRs submitted by two Education offices that manage postsecondary issues: the Office of Federal Student Aid and the Office of Postsecondary Education. We selected the time period because it coincides with the 2006 launch of the OMB and General Services Administration web portal, reginfo.gov, used by agencies to electronically post comments and other documents related to information collections; includes the enactment of the Higher Education Opportunity Act in 2008, which resulted in regulatory changes; and includes ICRs recently submitted. See Appendix I for additional information on the types of ICRs included in our review. Our analysis of these ICRs shows that fewer than one-fourth (65 of 353) received public comments, of which 25 included comments that addressed burden faced by schools (see fig. 1). For example, 2 ICRs received input on the difficulties of providing data requested by the department. We identified 40 ICRs that did not receive comments on burden faced by schools; several ICRs, for example, received input on simplifying the language of student loan-related forms. Further, in a review of the 30 comments received by the department in response to its proposed retrospective analysis plan, we identified 11 comments related to higher education, of which 9 mentioned regulatory burden. For example, one commenter described difficulties that smaller schools may have meeting reporting requirements. Negotiated rulemaking presents another opportunity for schools and others to provide feedback on burden. Six experts and participants in six discussion groups thought aspects of negotiated rulemaking were beneficial overall. However, some experts and discussion group participants said certain aspects of the process may limit the impact of feedback on burden. Specifically, four experts and participants in six of our discussion groups expressed concern that when the negotiated rulemaking process does not achieve consensus, the department may draft regulations unencumbered by negotiators' input, which may have addressed burden. According to those we spoke with, consensus may not be achieved, for example, if Education includes controversial topics over which there is likely to be disagreement or declines to agree with other negotiators. Education officials responded that their goal during negotiated rulemakings is to draft the best language for the regulation. Further, department officials said that negotiators can collectively agree to make changes to the agenda, that unanimous consensus provides negotiators with an incentive to work together, and that the department cannot avoid negotiated rulemaking on controversial topics. Education officials said that when consensus is not achieved, the department rarely deviates from any language agreed upon by negotiators. <5. Concluding Observations> Notwithstanding the benefits of Title IV requirements, school officials believe that the burden created by federal requirements diverts time and resources from their primary mission of educating students. Our findings, as well as those of previous studies, indicate that the burden reported by school officials and experts stems not only from a single or a few requirements but also from the accumulation of many requirements.
While Education has solicited feedback on the burdens associated with federal requirements, our findings show that stakeholders do not always provide this feedback. As a result, stakeholders may be missing an opportunity to help reduce the burden of federal requirements on schools. <6. Agency Comments and Our Evaluation> We provided a draft of this report to Education for comment. Education's written comments are reproduced in Appendix II. Education sought a clearer distinction in the report between statutory and regulatory requirements, as well as Education's authority to address statutory requirements. We have added information accordingly. Education also recommended the report distinguish between reporting and disclosure requirements, and we have provided definitions in the background in response. Education expressed concern that the report did not sufficiently consider the benefits of federal requirements. We agree that federal requirements generally have a purpose and associated benefits, such as those associated with program oversight and consumer awareness, which we acknowledge in our report. Analyzing the costs and benefits associated with individual requirements was beyond the scope of this report, as our primary objective was to obtain stakeholder views on burdens. Education also suggested we report more on its efforts to balance burden and benefits when designing information collections. We acknowledged these efforts in our report and incorporated additional information that Education subsequently provided. Education also provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in Appendix V. Appendix I: Objectives, Scope, and Methodology To identify which, if any, federal requirements experts say create burden for postsecondary schools, we interviewed a range of experts. We chose these experts based on factors such as familiarity or experience with Title IV requirements, recognition in the professional community, relevance of their published work to our topic, and recommendations from others. We conducted interviews with representatives of nine higher education associations that represent public, private nonprofit, and private for-profit schools, including associations representing research universities, community colleges, and minority-serving institutions. We also conducted interviews with nine other postsecondary experts, including researchers and officials from individual schools with knowledge of Title IV requirements.
Because our review focused on the burden and benefits experts say requirements create, we did not evaluate consumers perspectives on information schools provide. To determine the types of burdens and benefits that schools say federal requirements create, we conducted eight discussion groups at two national conferences with a nongeneralizable sample of officials from 51 schools. Discussions were guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. To optimize time during each session, we focused part of the discussion on the perceived benefits and burdens associated with one of the two sets of requirements most often cited as burdensome during the interviews we conducted with experts: consumer disclosures and Return of Title IV Funds. Specifically, four groups primarily focused on the burdens and benefits associated with consumer disclosures and four groups focused primarily on Return of Title IV Funds. In addition, each group was provided the opportunity to discuss other requirements that officials found to be burdensome, as well as how, if at all, officials communicate feedback on burden to Education. Discussion groups are not an appropriate means to gather generalizable information about school officials awareness of feedback opportunities because participants were self-selected and may be more aware of federal requirements and feedback opportunities than others in the population. Methodologically, group discussions are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the discussion group participants attitudes on specific topics and to offer insights into their concerns about and support for an issue. In addition, the discussion groups may be limited because participants represented only those schools that had representatives at the specific conferences we attended and because participants are comprised of self-selected volunteers. To determine how Education solicits feedback from stakeholders on burden, we conducted interviews with Education officials and reviewed documentation, such as agency web pages and listserv postings used by Education to inform schools and other interested parties about negotiated rulemaking and information collections. We also solicited the views of experts during interviews, and asked school officials in discussion groups about how, if at all, they communicate feedback on burden to Education. Because participants were self-selected, they are more likely to be aware of federal requirements and feedback opportunities than the general population. We reviewed Education s ICRs related to postsecondary education submitted to OMB from August 1, 2006, to October 31, 2012, to determine how many received public comments. We also reviewed the ICRs that received comments to determine how many received comments related to burden. To do so, we used OMB s reginfo.gov website, and took steps to verify the reliability of the database. We interviewed agency officials, tested the reliability of a data field, and reviewed documentation. We found the database to be reliable for our purposes. 
In our review of ICRs, we included new information collections along with revisions, reinstatements, and extensions of existing information collections without changes. We excluded ICRs that agencies are not required to obtain public comment on, such as those seeking approval of nonsubstantive changes. We also excluded ICRs for which the associated documents did not allow us to interpret the comments. To determine how many ICRs received comments that discussed burden faced by schools, one analyst reviewed comments for each ICR and classified them as being related or not related to the burden faced by schools. Another analyst verified these categorizations and counts. We also reviewed the number and nature of comments on Education s preliminary plan for retrospective analysis by downloading comments from regulations.gov. We verified with Education the total number of comments received. To determine whether comments discussed burdens faced by schools, one analyst reviewed each comment and classified it as being related or not related to higher education regulations and whether it referenced burden faced by schools. Another analyst verified these categorizations and counts. We did not review comments submitted to Education in response to proposed rules. Education has received thousands of comments in response to proposed regulations in recent years, and the site does not contain a search feature that would have allowed us to distinguish comments regarding burden estimates from other topics. For all objectives, we reviewed relevant federal laws and regulations. We conducted this performance audit from April 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Education Appendix III: Selected Federal Requirements Described as Burdensome in Discussion Group Comments The table below lists some of the specific concerns expressed by school officials we spoke to in discussion groups in response to questions about burdensome federal requirements. GAO identified statutory or regulatory provisions that relate to the burdens described by school officials and compiled these summaries to better illustrate the underlying requirements about which we received comments. These are only examples, not a list of every requirement specifically reported to us as burdensome. The summaries provided below are not intended to be complete descriptions of each requirement, and additional statutory or regulatory provisions related to these comments may also apply. In some cases a provision may have multiple sources, such as where statutory requirements are further interpreted in a regulation or guidance document. Discussion Group Participant Concern Consumer Disclosures: This category encompasses a number of different federal requirements to collect information on various topics and make that information available to specified groups or entities. Students, prospective students, and others can use this information to be better informed. The information can help people make decisions such as whether or not to attend or seek employment at a school. 
Summary of Related Federal Provisions The statute and regulations require eligible institutions to collect certain information on campus crime statistics and security policies and prepare, publish, and distribute an annual security report to all current students and employees (and to any prospective student or employee upon request). The report must contain, among other information, statistics on certain crimes reported to campus security authorities or local police agencies. 20 U.S.C. 1092(f)(1)(F), 34 C.F.R. 668.41(e), 668.46. The regulations require that an institution make a reasonable, good faith effort to obtain the required statistics and may rely on information supplied by a local or state police agency. If the institution makes such a reasonable, good faith effort, it is not responsible for the failure of the local or State police agency to supply the required statistics. 34 C.F.R. 668.46(c)(9). Discussion Group Participant Concern Placement rates. Placement rate calculations are different for different schools or within schools and confusing to students, requiring school staff to give additional explanation to some data. Summary of Related Federal Provisions The statute requires that institutions produce and make readily available upon request through appropriate publications, mailings, and electronic media to an enrolled student and to any prospective student the placement in employment of, and types of employment obtained by, graduates of the institution s degree or certificate programs, gathered from such sources as alumni surveys, student satisfaction surveys, the National Survey of Student Engagement, the Community College Survey of Student Engagement, State data systems, or other relevant sources. 20 U.S.C. 1092(a)(1)(R). According to the regulations, information concerning the placement of, and types of employment obtained by, graduates of the institution s degree or certificate programs may be gathered from: (1) the institution s placement rate for any program, if it calculates such a rate; (2) state data systems; (3) alumni or student satisfaction surveys; or (4) other relevant sources. The institution must identify the source of the information provided, as well as any time frames and methodology associated with it. In addition, the institution must disclose any placement rates it calculates. 34 C.F.R. 668.41(d)(5). Return of Title IV Funds: In general, if a recipient of Title IV grant or loan assistance withdraws from an institution, the statute and regulations establish a procedure for calculating and returning unearned funds. Returning these funds can protect the interests of the federal government and the borrower. The statute provides that, for institutions required to take attendance, the day of withdrawal is determined by the institution from such attendance records. 20 U.S.C. 1091b(c)(1)(B). The regulations prescribe in further detail which institutions are required to take attendance and how to determine the withdrawal date: For a student who ceases attendance at an institution that is required to take attendance, including a student who does not return from an approved leave of absence, or a student who takes a leave of absence that does not meet the regulatory requirements, the student s withdrawal date is the last date of academic attendance as determined by the institution from its attendance records. 34 C.F.R. 668.22(b). 
Institutions that are required to take attendance are expected to have a procedure in place for routinely monitoring attendance records to determine in a timely manner when a student withdraws. Except in unusual instances, the date of the institution s determination that the student withdrew should be no later than 14 days (less if the school has a policy requiring determination in fewer than 14 days) after the student s last date of attendance as determined by the institution from its attendance records. Federal Student Aid Handbook, June 2012, and Education Dear Colleague Letters GEN-04-03 Revised, Nov. 2004, and DCL GEN-11-14, July 20, 2011. Summary of Related Federal Provisions An institution is required to return any unearned Title IV funds it is responsible for returning within 45 days of the date the school determined the student withdrew. 20 U.S.C. 1091b(b)(1), 34 C.F.R. 668.22(j)(1), 668.173(b). For a student who withdraws from a school that is not required to take attendance without providing notification, the school must determine the withdrawal date no later than 30 days after the end of the earlier of (1) the payment period or the period of enrollment (as applicable), (2) the academic year, or (3) the student s educational program. 34 C.F.R. 668.22(j)(2). If a student who began attendance and has not officially withdrawn fails to earn a passing grade in at least one course over an entire period, the institution must assume, for Title IV purposes, that the student has unofficially withdrawn, unless the institution can document that the student completed the period. In some cases, a school may use its policy for awarding or reporting final grades to determine whether a student who failed to earn a passing grade in any of his or her classes completed the period. For example, a school might have an official grading policy that provides instructors with the ability to differentiate between those students who complete the course but failed to achieve the course objectives and those students who did not complete the course. If so, the institution may use its academic policy for awarding final grades to determine that a student who did not receive at least one passing grade nevertheless completed the period. Another school might require instructors to report, for all students awarded a non- passing grade, the student s last day of attendance (LDA). The school may use this information to determine whether a student who received all F grades withdrew. If one instructor reports that the student attended through the end of the period, then the student is not a withdrawal. In the absence of evidence of a last day of attendance at an academically related activity, a school must consider a student who failed to earn a passing grade in all classes to be an unofficial withdrawal. Federal Student Aid Handbook, June 2012, and Education Dear Colleague Letter GEN-04-03 Revised, Nov. 2004. All references to statute or regulations are references to the Higher Education Act of 1965 (HEA), as amended, and Education s implementing regulations. All references to eligible institutions refer to eligible institutions participating in Title IV programs, as defined by the HEA, as amended. 
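The timing rules summarized above (the 14-day guideline for determining that a student at an attendance-taking school has withdrawn, the 30-day determination deadline when a student at a school not required to take attendance withdraws without notification, and the 45-day deadline for returning unearned funds) can be illustrated with a short sketch. The following Python example is only an illustration of those deadlines as described in this appendix, not an official Title IV calculator; the day counts come from the provisions above, while the function names, the handling of a shorter school policy, and the sample dates are assumptions made for the sketch.

```python
from datetime import date, timedelta

def determination_deadline_attendance_taking(last_date_of_attendance, policy_days=14):
    """Attendance-taking school: except in unusual instances, the institution should
    determine that the student withdrew no later than 14 days after the last date of
    attendance shown in its attendance records (fewer if school policy requires it)."""
    return last_date_of_attendance + timedelta(days=min(policy_days, 14))

def determination_deadline_no_attendance(payment_period_end, academic_year_end, program_end):
    """School not required to take attendance, withdrawal without notification: the
    withdrawal date must be determined no later than 30 days after the end of the
    earliest of the payment period (or period of enrollment), the academic year, or
    the student's educational program."""
    return min(payment_period_end, academic_year_end, program_end) + timedelta(days=30)

def return_deadline(date_of_determination):
    """Unearned Title IV funds the school is responsible for returning must be
    returned within 45 days of the date the school determined the student withdrew."""
    return date_of_determination + timedelta(days=45)

if __name__ == "__main__":
    last_attended = date(2013, 3, 1)  # hypothetical last date of attendance
    determined_by = determination_deadline_attendance_taking(last_attended)
    print(determined_by, return_deadline(determined_by))  # 2013-03-15 2013-04-29
```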
Appendix IV: Selected Examples of Other Federal Requirements That May Apply to Postsecondary Schools Postsecondary schools may be subject to numerous federal requirements in addition to those related to Title IV of the Higher Education Act of 1965, as amended, which may be established by various other statutes or regulations promulgated by different agencies. The specific requirements to which an individual school is subject may depend on a variety of factors, such as whether it conducts certain kinds of research or is tax-exempt (see the following examples). This is not intended to be a comprehensive list; rather, the examples were selected to represent the variety of types of requirements to which schools may be subject. <7. Examples of Requirements Related to Research Activities> Nuclear Research: Schools licensed to conduct medical research using nuclear byproduct material must follow Nuclear Regulatory Commission requirements on safety and security, or compatible requirements issued by a state that has entered into an agreement with the Nuclear Regulatory Commission. Schools that house nuclear reactors for research purposes are also subject to additional regulations, including those on emergency management. Research Misconduct: To receive federal funding under the Public Health Service Act for biomedical or behavioral research, institutions (including colleges and universities) must have written policies and procedures for addressing research misconduct and must submit an annual compliance report to the federal government. The Public Health Service has issued regulations detailing institutions' responsibilities in complying with these requirements. Research on Animals: Applicants for funding for biomedical or behavioral research under the Public Health Service Act must provide an assurance to the National Institutes of Health that the research entity complies with the Animal Welfare Act and the Public Health Service Policy on Humane Care and Use of Laboratory Animals, and that it has appointed an appropriate oversight committee (an Institutional Animal Care and Use Committee). The oversight committee must review the care and treatment of animals in all animal study areas and facilities of the research entity at least semi-annually to ensure compliance with the Policy. <8. Examples of Requirements Related to Discrimination> Employment Discrimination: Title VII of the Civil Rights Act of 1964, as amended, prohibits employment practices that discriminate based on race, color, religion, sex, and national origin. These requirements apply to schools that qualify as employers as defined by Title VII, generally including private and state or local employers that employ 15 or more employees. Disabilities: The Americans with Disabilities Act of 1990 prohibits discrimination against individuals with disabilities in several areas, including employment, state and local government activities, and public accommodations. 42 U.S.C. 12101-12213. Different agencies administer different aspects of the Americans with Disabilities Act, including the Equal Employment Opportunity Commission and the Department of Justice. In addition, section 504 of the Rehabilitation Act of 1973, as amended, prohibits discrimination on the basis of disability under any program or activity that receives federal financial assistance. Colleges, universities, other postsecondary institutions, and public institutions of higher education are subject to these requirements. Sex Discrimination: Title IX of the Education Amendments of 1972 prohibits discrimination on the basis of sex in any federally funded education program or activity.
Title IX applies, with a few specific exceptions, to all aspects of education programs or activities that receive federal financial assistance, including athletics. <9. Examples of Other Requirements> Byrd Amendment: Educational institutions that receive federal funds must hold an annual educational program on the U.S. Constitution. Internal Revenue Service Form 990: Schools that have tax-exempt status generally must annually file IRS Form 990. The form requires a range of information on the organization's exempt and other activities, finances, governance, compliance with certain federal tax requirements, and compensation paid to certain persons. Appendix V: GAO Contact and Staff Acknowledgments <10. GAO Contact> <11. Staff Acknowledgments> In addition to the contact named above, Bryon Gordon (Assistant Director), Debra Prescott (Assistant Director), Anna Bonelli, Joy Myers, and Daren Sweeney made key contributions to this report. Additionally, Deborah Bland, Kate Blumenreich, Tim Bober, Sarah Cornetto, Holly Dye, Kathleen van Gelder, and Elizabeth Wood aided in this assignment.
Why GAO Did This Study Postsecondary schools must comply with a variety of federal requirements to participate in student financial aid programs authorized under Title IV. While these requirements offer potential benefits to schools, students, and taxpayers, questions have been raised as to whether they may also distract schools from their primary mission of educating students. GAO examined (1) which requirements, if any, experts say create burden, (2) the types of burdens and benefits schools say requirements create, and (3) how Education solicits feedback from stakeholders on regulatory burden. GAO reviewed relevant federal regulatory and statutory requirements, and past and ongoing efforts examining postsecondary regulatory burden; interviewed Education officials and 18 experts, including officials from associations that represent postsecondary schools; and conducted eight discussion groups at two national conferences with a nongeneralizable sample of 51 school officials from public, nonprofit, and for-profit sectors. GAO also reviewed documentation associated with Education's requests for public comment on burden for proposed postsecondary information collections and its retrospective analysis of regulations. What GAO Found Experts GAO interviewed offered varied opinions on which student financial aid requirements under Title IV of the Higher Education Act of 1965, as amended, are the most burdensome. While no single requirement was cited as burdensome by a majority of the 18 experts, 11 cited various consumer disclosure requirements--such as those pertaining to campus safety--primarily due to the time and difficulty needed to gather the information. Beyond consumer disclosures, 4 experts cited "Return of Title IV Funds"--which requires schools to calculate and return unearned financial aid to the federal government when a recipient withdraws from school--as burdensome because schools find it difficult to calculate the precise amount of funds that should be returned. More broadly, 6 experts said that the cumulative burden of multiple requirements is a substantial challenge. Experts also noted some benefits. For example, an expert said required loan disclosures help students understand their repayment responsibilities. School officials who participated in each of the eight discussion groups GAO conducted expressed similar views about the types of burdens and benefits associated with Title IV requirements. Participants in all groups said requirements for consumer disclosures and Return of Title IV Funds are costly and complicated. Regarding consumer disclosures, participants questioned the value of disclosing data that cannot be readily compared across schools, like data on graduates' employment, which may be calculated using different methodologies. Participants in four groups found Return of Title IV Funds requirements difficult to complete within the required time frame. Participants also cited some benefits, such as how consumer disclosures can help applicants choose the right school and unearned Title IV funds can be redirected to other students. Education seeks feedback from schools on regulatory burden mainly through formal channels, such as announcements posted in the Federal Register, on its website, and on a department listserv. However, Education officials said they have received a limited number of comments about burden in response to these announcements. 
GAO reviewed Education's notices soliciting public comments on burden estimates for its postsecondary information collections--which require the public, including schools, to submit or publish specified data--and found that 65 of 353 notices (18 percent) received comments, of which 25 received comments related to burden. For example, 2 notices received input on the difficulties of providing data requested by the department. What GAO Recommends GAO makes no recommendations in this report. In its comments, Education sought clarification regarding types of federal requirements and additional information on its efforts to balance burden and benefits. We provided clarifications and additional information, as appropriate.
<1. Background> Congress established the trade advisory committee system in Section 135 of the Trade Act of 1974 as a way to institutionalize domestic input into U.S. trade negotiations from interested parties outside the federal government. This system was considered necessary because of complaints from some in the business community about their limited and ad hoc role in previous negotiations. The 1974 law created a system of committees through which such advice, along with advice from labor and consumer groups, was to be sought. The system was originally intended to provide private sector input to global trade negotiations occurring at that time (the Tokyo Round). Since then, the original legislation has been amended to expand the scope of topics on which the President is required to seek information and advice from negotiating objectives and bargaining positions before entering into a trade agreement to the operation of any trade agreement, once entered into, and on other matters regarding administration of U.S. trade policy. The legislation has also been amended to include additional interests within the advisory committee structure, such as those represented by the services sector and state and local governments. Finally, the amended legislation requires the executive branch to inform the committees of significant departures from their advice. The Trade Act of 1974 required the President to seek information and advice from the trade advisory committees for trade agreements pursued and submitted for approval under the authority granted by the Bipartisan Trade Promotion Authority Act of 2002. The Trade Act of 1974 also required the trade advisory committees to provide a report on the trade agreements pursued under the Bipartisan Trade Promotion Authority Act of 2002 to the President, Congress, and USTR. This requirement lapsed with TPA on June 30, 2007. The trade advisory committees are subject to the requirements of the Federal Advisory Committee Act (FACA), with limited exceptions pertaining to holding public meetings and public availability of documents. One of FACA s requirements is that advisory committees be fairly balanced in terms of points of view represented and the functions the committees perform. FACA covers most federal advisory committees and includes a number of administrative requirements, such as requiring rechartering of committees upon renewal of the committee. Four agencies, led by USTR, administer the three-tiered trade advisory committee system. USTR directly administers the first tier overall policy committee, the President s Advisory Committee for Trade Policy and Negotiations (ACTPN), and three of the second tier general policy committees, the Trade Advisory Committee on Africa (TACA), the Intergovernmental Policy Advisory Committee (IGPAC), and the Trade and Environment Policy Advisory Committee (TEPAC), for which the Environmental Protection Agency also plays a supporting role. The Department of Labor co-administers the second tier Labor Advisory Committee (LAC) and the Department of Agriculture co-administers the second tier Agricultural Policy Advisory Committee (APAC). The Department of Agriculture also co-administers the third tier Agricultural Technical Advisory Committees (ATACs), while the Department of Commerce co-administers the third tier Industry Trade Advisory Committees (ITACs). Ultimately, member appointments to the committees have to be cleared by both the Secretary of the managing agency and the U.S. 
Trade Representative, as they are the appointing officials. Figure 1 illustrates the committee structure. <2. Consultations with Trade Advisory Committees Have Generally Improved> Our 2002 survey of trade advisory committee members found high levels of satisfaction with many aspects of committee operations and effectiveness, yet more than a quarter of respondents indicated that the system had not realized its potential to contribute to U.S. trade policy. In particular, we received comments about the timeliness, quality, and accountability of consultations. For example, the law requires the executive branch to inform committees of significant departures from committee advice. However, many committee members reported that agency officials informed committees less than half of the time when their agencies pursued strategies that differed from committee input. As a result, we made a series of recommendations to USTR and the other agencies to improve those aspects of the consultation process. Specifically, we recommended the agencies adopt or amend guidelines and procedures to ensure that (1) advisory committee input is sought on a continual and timely basis, (2) consultations are meaningful, and (3) committee advice is considered and committees receive substantive feedback on how agencies respond to their advice. In response to those recommendations, USTR and the other agencies made a series of improvements. For example, to improve consultations between the committee and the agencies, including member input, USTR and TEPAC members established a communications taskforce in 2004. As a result of the taskforce, USTR and EPA changed the format of principals meetings to allow more discussion between the members and senior U.S. government officials, and they increased the frequency of liaison meetings. In addition, USTR instituted a monthly conference call with the chairs of all committees, and now holds periodic plenary sessions for ATAC and ITAC members. Furthermore, the agencies created a new secure Web site to allow all cleared advisors better access to important trade documents. When we interviewed private sector advisory committee chairs again in 2007, they were generally pleased with the numerous changes made to the committee system in response to our 2002 report. In particular, they found the secure Web site very useful. Reviews of the monthly chair conference call and plenary sessions were mixed, however. Chairs told us that their out-of-town members might find the plenaries a helpful way to gain an overall perspective and to hear cabinet-level speakers to whom they would not routinely have access, whereas others found them less valuable, largely due to the perceived lack of new or detailed information. The chairs also said that USTR and the relevant executive branch agencies consulted with the committees on a fairly regular basis, although overall views on the opportunity to provide meaningful input varied. For example, we heard from committee chairs who felt the administration took consultations seriously, while other chairs felt the administration told them what had already been decided upon instead of soliciting their advice. USTR officials told us that the fact that the advice of any particular advisory committee may not be reflected in a trade agreement does not mean that the advice was not carefully considered. <3. 
Changes Made to Improve Committee Logistics Have Not Been Fully Tested> In 2002, we found that slow administrative procedures disrupted committee operations, and the resources devoted to committee management were out of step with required tasks. In several instances, for example, committees ceased to meet and thus could not provide advice, in part because the agencies had not appointed members. However, the length of time required to obtain a security clearance contributed to delays in member appointment. To address these concerns, we recommended the agencies upgrade system management; and in response, they began to grant new advisors interim security clearances so that they could actively participate in the committee while the full clearance is conducted. Despite these actions, however, trade advisory committee chairs we contacted in 2007 told us certain logistics such as delays in rechartering committees and appointment of members still made it difficult for some committees to function effectively. We found several committees had not been able to meet for periods of time, either because agencies allowed their charters to lapse or had not started the process of soliciting and appointing members soon enough to ensure committees could meet once they were rechartered. The Labor Advisory Committee, for example, did not meet for over 2 years from September 2003 until November 2005 due in part to delays in the member appointment process. These types of process delays further reduced a committee s ability to give timely, official advice before the committee was terminated, and the rechartering process had to begin again. This was particularly true in the case of the Labor Advisory Committee, which, at the time of our 2007 report, still had a 2- year charter. To address these concerns, we recommended that USTR and other agencies start the rechartering and member appointment processes with sufficient time to avoid any lapse in the ability to hold committee meetings and that they notify Congress if a committee is unable to meet for more than 3 months due to an expired charter or delay in member appointments. Furthermore, we recommended that USTR work with the Department of Labor to extend the Labor Advisory Committee s charter from 2 years to 4 years, to be in alignment with the rest of the trade advisory committee system. USTR and the other agencies have taken some steps to address these recommendations. In May 2008, for example, the Labor Advisory Committee s charter was extended to 4 years. Not enough time has passed, however, to assess whether steps taken fully address the problems associated with rechartering and member appointment, since at present all committees have current charters and members appointed. Furthermore, even though committees are now chartered and populated, some of them have not met for over three years, despite ongoing negotiations of the Doha Round of the World Trade Organization (WTO), including the July 2008 ministerial meeting in Geneva. For example, although the ATAC charters were renewed in May 2007 and members appointed in January 2008, the FACA database shows that no ATAC has held a meeting since fiscal year 2006. In addition, although USTR held multiple teleconferences for all first and second tier advisors in fiscal year 2008, LAC and APAC members did not participate. It is unclear, therefore, whether the administration received official advice from all trade advisory committees for the Doha negotiations. <4. 
Representation of Key Stakeholders Remains Important for Any Review of Trade Advisory Committee System> In addition to the need to improve certain committee logistics, we also found that representation of stakeholders is a key component of the trade advisory committee system that warrants consideration in any review of the system. In particular, as the U.S. economy and trade policy have shifted, the trade advisory committee system has needed adjustments to remain in alignment with them, including both a revision of committee coverage as well as committee composition. In our 2002 report, we found that the structure and composition of the committee system had not been fully updated to reflect changes in the U.S. economy and U.S. trade policy. For example, representation of the services sector had not kept pace with its growing importance to U.S. output and trade. Certain manufacturing sectors, such as electronics, had fewer members than their sizable trade would indicate. In general, the system s committee structure was largely the same as it was in 1980, even though the focus of U.S. trade policy had shifted from border taxes (tariffs) toward other complex trade issues, such as protection of intellectual property rights and food safety requirements. As a result, the system had gaps in its coverage of industry sectors, trade issues, and stakeholders. For example, some negotiators reported that some key issues such as investment were not adequately covered. In addition, nonbusiness stakeholders such as environment and labor reported feeling marginalized because they have been selected to relatively few committees. The chemicals committee, representing what at the time was one of the leading U.S. export sectors, had been unable to meet due to litigation over whether the apparent denial of requests by environmental representatives for membership on the committee was consistent with FACA s fair balance requirements. In 2007, several committee chairs we interviewed also expressed the perception that the composition of their committees was not optimal, either favoring one type of industry or group over another or industry over nonbusiness interests. Furthermore, some members were the sole representative of a nonbusiness interest on their committee, and those we spoke with told us that although their interest was now represented, they still felt isolated within their own committee. The result was the perception that their minority perspective was not influential. At the same time, while Congress mandates that the advisory committee system is to involve representative segments of the private sector (e.g., industry, agriculture, and labor and environmental groups), adherence to these statutory requirements has been deemed non-justiciable. For example, although the Departments of Agriculture and Commerce solicit new members for their committees through Federal Register notices which stipulate members qualifications, including that they must have expertise and knowledge of trade issues relevant to the particular committees, neither the notices nor the committee charters explained how the agencies would or have determined which representatives they placed on committees. Without reporting such an explanation, it was not transparent how agencies made decisions on member selection or met statutory representation requirements. As a result, we made a series of recommendations suggesting that USTR work with the other agencies to update the system to make it more relevant to the current U.S. 
economy and trade policy needs. We also suggested that they seek to better incorporate new trade issues and interests. Furthermore, we recommended they annually report publicly on how they meet statutory representation requirements, including clarifying which interest members represent and explaining how they determined which representatives they placed on committees. In response, USTR and the other agencies more closely aligned the system s structure and composition with the current economy and increased the system s ability to meet negotiator needs more reliably. For example, the Department of Agriculture created a new ATAC for processed foods because exports of high-value products have increased. USTR and Commerce also split the service industry into several committees to better meet negotiator needs. Furthermore, USTR and the Department of Agriculture now list which interest members represent on the public FACA database, as the Department of Commerce has been doing for years. USTR s 2009 Trade Policy Agenda and 2008 Annual Report also includes descriptions of the committees and their composition. It does not, however, explain how USTR and the agencies determined that the particular membership appointed to each committee represents a fair balance of interests in terms of the points of view represented and the committee s functions. <5. Conclusion> Mr. Chairman, we appreciate the opportunity to summarize our work related to the Trade Advisory System. Based on the recommendations we have made in the areas of quality and timeliness of consultations, logistical issues, and representation of key stakeholders, we believe that USTR and other managing agencies have strengthened the Trade Advisory System. However, we support the Committee s oversight and the ongoing policy review of the system to ensure that it works smoothly and the input received from business and non-business stakeholders is sufficient, fairly considered, and representative. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study This testimony provides a summary of key findings from the comprehensive report on the trade advisory system that we provided to the Congress in 2002, as well as from our more recent report in 2007 on the Congressional and private sector consultations under Trade Promotion Authority. In particular, this testimony highlights our recommendations in three key areas--committee consultations, logistics, and overall system structure--as well as the changes that have been made by the U.S. agencies since those reports were published. What GAO Found Our 2002 survey of trade advisory committee members found high levels of satisfaction with many aspects of committee operations and effectiveness, yet more than a quarter of respondents indicated that the system had not realized its potential to contribute to U.S. trade policy. In particular, we received comments about the timeliness, quality, and accountability of consultations. For example, the law requires the executive branch to inform committees of "significant departures" from committee advice. However, many committee members reported that agency officials informed committees less than half of the time when their agencies pursued strategies that differed from committee input. In 2002, we found that slow administrative procedures disrupted committee operations, and the resources devoted to committee management were out of step with required tasks. In several instances, for example, committees ceased to meet and thus could not provide advice, in part because the agencies had not appointed members. However, the length of time required to obtain a security clearance contributed to delays in member appointment. To address these concerns, we recommended the agencies upgrade system management; and in response, they began to grant new advisors interim security clearances so that they could actively participate in the committee while the full clearance is conducted. Despite these actions, however, trade advisory committee chairs we contacted in 2007 told us certain logistics such as delays in rechartering committees and appointment of members still made it difficult for some committees to function effectively. We found several committees had not been able to meet for periods of time, either because agencies allowed their charters to lapse or had not started the process of soliciting and appointing members soon enough to ensure committees could meet once they were rechartered. The Labor Advisory Committee, for example, did not meet for over 2 years from September 2003 until November 2005 due in part to delays in the member appointment process. These types of process delays further reduced a committee's ability to give timely, official advice before the committee was terminated, and the rechartering process had to begin again. This was particularly true in the case of the Labor Advisory Committee, which, at the time of our 2007 report, still had a 2-year charter. In addition to the need to improve certain committee logistics, we also found that representation of stakeholders is a key component of the trade advisory committee system that warrants consideration in any review of the system. In particular, as the U.S. economy and trade policy have shifted, the trade advisory committee system has needed adjustments to remain in alignment with them, including both a revision of committee coverage as well as committee composition.
<1. Background> The Department of the Navy is a large and complex organization with a wide range of mission operations and supporting business functions. For example, the Navy has about 350,000 active duty officers and enlisted personnel, 130,000 ready reserve, and 175,000 civilian employees. Navy s fleet operations involve approximately 280 ships and 4,000 aircraft operating throughout the world. Further, the Navy s annual operating budget is about $120 billion and is used to fund such things as ship and aircraft operations, air depot maintenance, and Marine Corps operations. The department s primary organizational components are the Secretary of the Navy, the Chief of Naval Operations, and the Commandant of the Marine Corps. The structural relationships among these components are summarized later and in figure 1. Secretary of the Navy: Department of the Navy headquarters recruits, organizes, supplies, equips, trains, and mobilizes, naval forces. Among other things, this includes construction, outfitting, and repair of Navy and Marine Corps ships, equipment, and facilities. It also includes formulating and implementing policies and programs. Naval and Marine Corps Operating Forces: The operating forces commanders and fleet commanders have two chains of command. Administratively, they report to the Chief of Naval Operations, and are responsible for providing, training, and equipping naval forces. Operationally, they provide naval forces and report to the appropriate Unified Combatant Commanders. The operating forces include a variety of organizations with diverse missions, such as the Atlantic and Pacific Fleets, Naval Network Warfare Command, and Naval Reserve Forces. Naval shore establishment: The Navy shore establishment includes facilities and activities for repairing machinery, electronics, ships, and aircraft; providing communications capabilities; providing training; providing intelligence and meteorological support; storing repair parts, fuel, and munitions; and providing medical support. It consists of organizations such as the Naval Sea Systems Command (which includes shipyards), Naval Air Systems Command (which includes aviation depots), Space and Naval Warfare Systems Command, Navy Personnel Command, Naval Education and Training Command, and the Office of Naval Intelligence. The Navy s many and dispersed organizational components rely heavily on IT to help them perform their respective mission operations and business functions. For fiscal year 2006, the Navy s IT budget was about $5.8 billion, which included funding for the development, operation, and maintenance of Navy-owned IT systems, as well as funding for contractor-provided IT services and programs, such as NMCI. The Assistant Secretary of the Navy for Research, Development and Acquisition is responsible for Navy acquisition programs. Reporting to the Assistant Secretary are numerous entities that have authority, responsibility, and accountability for life-cycle management of acquisition programs within their cognizance. These entities include certain program managers, system command, and program executive officers. The Navy Chief Information Officer (CIO) is responsible for developing and issuing IT management policies and standards in coordination with the above Assistant Secretary, the system commands, and others. 
The Navy CIO is also responsible for ensuring that major programs comply with the Clinger-Cohen Act (1996) and for recommending to the Secretary of the Navy whether to continue, modify, or terminate IT programs, such as NMCI. <1.1. NMCI Purpose, Scope, and Status> NMCI is a major, Navy-wide IT services program. Its goals are to provide information superiority (an uninterrupted information flow and the ability to exploit or deny an adversary's ability to do the same) and to foster innovative ways of operating through interoperable and shared network services. The program is being implemented through a multiyear IT services contract that is to provide desktop, server, infrastructure, and communications-related services at Navy and Marine Corps sites located in the United States and Japan. Through this contract, the Navy is replacing independent local and wide area networks with a single network and related desktop hardware and software that are owned by the contractor. Among other things, the contractor is to provide voice, video, and data services; infrastructure improvements; and customer service. This type of contract is commonly referred to as seat management. Generally speaking, under seat management, contractor-owned desktop and other computing hardware, software, and related services are bundled and provided on the basis of a fixed price per unit (or seat). In October 2000, the Navy's goal was to have between 412,000 and 416,000 seats operational by fiscal year 2004. As of June 2006, the Navy reported that about 303,000 seats were operational at about 550 sites. According to the Navy, initial delays in meeting deployment schedules were due to underestimates of its existing inventory of legacy applications that needed to be migrated to NMCI. Subsequent delays were attributed to developing and implementing a certification and accreditation process for all applications, as well as to legislation requiring certain analyses to be completed before seat deployment could exceed specific levels. The number of seats at each site ranges from a single seat to about 10,000. These sites include small sites, such as office facilities located throughout the United States, and large sites, such as shipyards and air depots, which use unique software to assist in repair work. <1.2. NMCI Program Management Structure> Various organizations in the Navy are responsible for NMCI management and oversight (see fig. 2). The Program Executive Officer for Enterprise Information Systems (PEO-EIS) and the NMCI Program Manager are responsible for NMCI acquisition and contract management. The program is also overseen and supported by several groups. One is the Navy's Information Executive Committee, which provides guidance for, and oversight of, NMCI and other information issues. The committee is made up of CIOs from a range of Navy commands, activities, offices, and other entities within the Navy. Another is the NMCI Executive Committee, which includes representatives of the heads of a broad cross section of organizations throughout the Navy, as well as the contractor. Its mission is to help in the review, oversight, and management of the Navy's implementation of NMCI, as well as to assist in identifying and resolving process and policy impediments within the Navy that hinder an efficient and effective implementation process.
Additionally, the Network Warfare Command (NETWARCOM) and the Marine Corps Network Operations and Security Command (MCNOSC) are the two entities primarily responsible for network operations management in the Navy and Marine Corps, respectively. The Navy CIO is responsible for overall IT policy. <1.3. NMCI Contract Description> On October 6, 2000, the Navy awarded a 5-year contract for NMCI services to a single service provider, EDS, for an estimated 412,000 to 416,000 seats and a minimum value of $4.1 billion. The original contract also included a 3-year option for an additional $2.8 billion in services, bringing the potential total contract value to $6.9 billion. The department and EDS subsequently restructured the contract to be a 7-year, $6 billion contract with a 3-year option for an additional $2.8 billion beginning in fiscal year 2008. Following further contract restructuring and the Navy s decision to exercise the 3-year option, the total contract period and minimum value is now 10 years and about $9.3 billion. Figure 3 illustrates the value of the NMCI contract. The NMCI contract type is commonly referred to as seat management because pricing for the desktop services is based on a fixed price per seat. Seats include desktop computers, as well as other devices, such as cellular phones. Pricing for these seats varies depending on the services provided. For example, having classified connectivity, mission-critical service, additional user accounts, or additional software installation increases the amount paid per seat. The NMCI contract is performance-based, which means that it contains monetary incentives to provide services at specified levels of quality and timeliness. The contract includes several types of incentives, including incentives tied to SLA performance and customer satisfaction surveys. The contract currently specifies 23 SLAs divided into three tiers: 100 SLAs, 200 SLAs, and 300 SLAs. The 100 tier is referred to as base agreements, the 200 as transitional agreements, and the 300 as additional agreements. Examples of agreements for each tier are provided below.
100 tier: End user services (SLA 103)
200 tier: Web access services (SLA 206)
300 tier: Network management services (SLA 328)
SLAs are further categorized as enterprisewide, site-specific, or both. Unlike site-specific SLAs, enterprisewide SLAs are not analyzed on a site-by-site basis. See table 1 for a list of agreements organized by tier and category. Each agreement has one or more performance categories. For example, SLA 102 has 1 performance category (Network Problem Resolution), while SLA 107 has 3 performance categories (NMCI Intranet Availability, Latency/Packet Loss, and Voice and Video Quality of Service). Collectively, there are 51 performance categories. Each performance category has specific performance targets that the contractor must reach in order for the category to be met. An example of a target is providing e-mail server services to users 99.7 percent of the time that they are supposed to be available. The contract currently specifies two levels of performance to be used in determining, on a site-by-site basis, what performance-based payment incentives, if any, EDS will earn in a given quarter (3-month period). If neither of these levels of performance is met, the contractor is to be paid 85 percent of the amount allowed under the contract for each seat that has been cut over (i.e., is operational). 1. Full payment.
To achieve this level for a given seat, the contractor must meet 100 percent of the applicable SLAs for that seat, and 50 to 90 percent of the planned seats at the site must be cut over. Meeting a quarterly agreement is defined as performance at or above the applicable target(s) for either (1) 2 out of the 3 months preceding an invoice or (2) the current month of the invoice. If these conditions are met, the contractor is paid 100 percent of the amount allowed per seat. If, in subsequent months, the contractor fails to achieve 100 percent of the agreements, the amount paid is 85 percent of the amount allowed per seat. 2. Full performance. To achieve this level for a given seat, the contractor must meet 100 percent of the applicable SLAs for that seat, and over 90 percent of the planned seats at the site must be cut over. Meeting an agreement is defined as performance at or above the target(s) for either (1) 2 out of the 3 months preceding a quarterly invoice or (2) the current month of the invoice. If these conditions are met, the contractor is paid 100 percent of the amount allowed per seat. Once a site has achieved full performance, it remains eligible for full payments, regardless of changes to the numbers of seat orders. However, the contractor is required to provide financial credits to the Navy in the event that the agreements are not met at some future time. <1.3.1. Customer Satisfaction Surveys> The contract also provides for administration of three customer satisfaction surveys: End User, Echelon II/Major Command, and Network Operations Leaders. These surveys and their related financial incentives are discussed below. The contractor began conducting quarterly satisfaction surveys of Navy end users in June 2002 and Marine Corps end users in March 2005. These surveys are administered to a different mix of 25 percent of eligible users each quarter, with nearly all users being surveyed each year. Since March 2004, the survey has consisted of 14 questions: 4 relating to satisfaction with the NMCI program and 10 focusing on satisfaction with EDS. For each question, users are asked to indicate their level of dissatisfaction/satisfaction according to a 10-point scale, with 1-5 denoting levels of dissatisfaction and 6-10 denoting levels of satisfaction. The Navy considers end users to be satisfied in general, with the program, or with the contractor, if the average response across the 14, 4, or 10 questions, respectively, is 5.5 or higher. The survey instrument also includes space for additional comments and asks the end users to identify and rank reasons for dissatisfaction or suggestions for improvements. See table 2 for a list of the 14 questions. Based on the quarterly survey results, the contractor is eligible for an incentive payment of $12.50 per seat if 85 to 90 percent of the average responses are 5.5 or higher, and $25 per seat if greater than 90 percent respond in this way. No incentive is to be paid if fewer than 85 percent respond as being satisfied. (A simplified sketch of these per-seat payment and survey incentive rules appears below.)
Echelon II (Navy) and Major Command (Marine Corps) Commander Survey and Network Operations Leader Survey
In October 2004, the Navy designated two additional categories of customers (commanders and network operations leaders) and developed separate satisfaction surveys for each. In general, the commander survey focuses on whether NMCI is adequately supporting a command s mission needs and strategic goals; the network operations leader survey focuses on whether the contractor is meeting certain operational network requirements.
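To make the payment and incentive mechanics described above concrete, the following is a minimal sketch, written in Python, of how the per-seat payment fraction and the quarterly end user survey incentive could be computed. It is illustrative only: the function and variable names are ours rather than the contract s, the monthly measurement rules (meeting targets in 2 of the 3 months preceding an invoice or in the current month) are collapsed into a single quarterly check, and the thresholds and dollar amounts are those stated in this section.

def seat_payment_fraction(all_slas_met, cutover_ratio):
    """Fraction of the allowed per-seat amount earned for a quarter.

    all_slas_met  -- True if 100 percent of the applicable SLAs were met at the seat's site
    cutover_ratio -- share of the planned seats at the site that are operational (0.0 to 1.0)
    """
    if all_slas_met and cutover_ratio > 0.90:   # full performance level
        return 1.00
    if all_slas_met and cutover_ratio >= 0.50:  # full payment level
        return 1.00
    return 0.85                                 # neither performance level achieved

def quarterly_survey_incentive(average_scores):
    """Per-seat incentive implied by one quarter of end user survey results.

    average_scores -- each respondent's average answer on the 10-point scale;
                      an average of 5.5 or higher counts the respondent as satisfied.
    """
    satisfied = sum(1 for score in average_scores if score >= 5.5) / len(average_scores)
    if satisfied > 0.90:
        return 25.00
    if satisfied >= 0.85:
        return 12.50
    return 0.00

# A site with every applicable SLA met but only 60 percent of its planned seats cut over
# still earns the full per-seat amount; a 75 percent satisfaction rate earns no incentive.
print(seat_payment_fraction(True, 0.60))                 # 1.0
print(quarterly_survey_incentive([7.0, 5.5, 4.0, 6.2]))  # 0.0

In practice, the contract applies these tests site by site and invoice by invoice, as described above; the sketch is intended only to show how the stated thresholds interact.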
The surveys are administered every 6 months. The latest commander survey was distributed to the heads of 23 Navy and Marine Corps command units. The network operations leader survey was distributed to NETWARCOM and MCNOSC. Both surveys are organized by major topic and subtopic. For the commander survey, the major topics and subtopics are as follows:
Warfighter support, including classified network support, deployable support, and emergent requirement support.
Cutover services, including planning, preparation, and execution.
Technical solutions, including the new service order and delivery process, and technical performance.
Service delivery, including organizational understanding, customer service, and issue management.
For the network operations leader survey, the major topics and subtopics are as follows:
Mission support and planning, including interoperability support, continuity of operations, future readiness, and public key infrastructure.
Network management, including network status information, information assurance, urgent software patch implementation, and data management.
Service delivery, including organizational understanding, communications, issue management, and flexibility and responsiveness.
Appendix II provides a complete listing of the questions included in the commander survey and the network operations leader survey. Responses to the questions in both surveys are solicited on a scale of 0-3, with 0 being dissatisfied and 3 being extremely satisfied. To aggregate the respective surveys results, the Navy averages the responses by command units and network operations units. Based on the 6-month survey results, the contractor is eligible for an incentive payment of up to $50 per seat, with average scores of less than 0.5 receiving no incentive, 0.5 to less than 1.5 receiving 25 percent of the incentive, 1.5 to less than 2.25 receiving 50 percent of the incentive, and at least 2.25 receiving 100 percent of the incentive. <1.4. Previous GAO Work on NMCI> We have reported on a number of NMCI issues since the program s inception. For example, in March 2000, we reported that the Navy s acquisition approach and implementation plan had a number of weaknesses and thus introduced unnecessary program risk. In particular, we said that the Navy lacked a plan for addressing many program requirements and information on NMCI s potential impacts on Navy personnel. In October 2002, we reported that NMCI s transition costs for shipyards and air depots were unclear, which in turn limited the ability of such industrially funded entities to set the future rates that they would charge their customers. Accordingly, we recommended that the program, in collaboration with the Naval Sea Systems Command and the Naval Air Systems Command, systematically and expeditiously resolve implementation issues that affect the ability of shipyards and depots to plan and budget. In response to these recommendations, the Navy took a number of actions, including establishing an Executive Customer Forum to, among other things, adjudicate issues requiring collaborative decision making among Navy component CIOs, including those from the Naval Sea Systems Command and the Naval Air Systems Command, which represent Navy shipyards and air depots, respectively. In April 2003, we reported on the extent to which five DOD IT services projects, including NMCI, had followed leading commercial outsourcing practices.
For NMCI, we found that while the Navy had employed most of these practices, it did not follow the key practice related to establishing an accurate baseline of the existing IT environment, choosing instead to rely on a preexisting and dated inventory of its legacy applications. Because of this, we concluded that the Navy substantially underestimated the number of legacy applications that needed to transition to NMCI, in turn causing the program s time frame for transitioning to slip considerably. We recommended that DOD take steps to learn from such lessons, so that such mistakes are not repeated on future IT outsourcing projects. <2. Navy Has Not Met NMCI Strategic Goals and Has Not Focused on Measuring Strategic Program Outcomes> Consistent with relevant laws and guidance, the Navy defined strategic goals for its NMCI program and developed a plan for measuring and reporting on achievement of these goals. However, the Navy did not implement this plan, choosing instead to focus on defining and measuring contractually specified SLAs. According to Navy officials, implementing the goal-oriented plan was not a priority, compared with swiftly deploying NMCI seats and measuring satisfaction of contract provisions. While program officials told us that NMCI has produced considerable mission value and achieved much, they did not have performance data to demonstrate progress in relation to either the program s strategic goals or nine performance categories that its plan and related efforts defined relative to these goals. Given this, we mapped SLAs to the nine performance categories and two strategic goals, which prompted the Navy to do the same. The Navy s mapping shows that NMCI has met few of the categories performance targets, and thus has yet to meet either of the strategic goals. This means that the mission-critical information superiority and operational innovation outcomes that were used to justify investment in NMCI have yet to be attained. Without effective performance management, the Navy is increasing the risk that the program will continue to fall short of its goals and expected results. <2.1. Navy Developed a Performance Management Plan to Measure and Report NMCI Progress in Meeting Strategic Goals but Did Not Implement It> Various laws such as the Government Performance & Results Act and Clinger-Cohen Act require federal agencies to identify and report on mission and strategic goals, associated performance measures, and actual performance. Federal IT guidance also recognizes the importance of defining program goals and related measures and performance targets, as well as determining the extent to which targets, measures, and goals are being met. In initiating NMCI, the Navy established two strategic goals for the program. According to the Navy, the program s primary goal is to support information superiority, which it characterizes as providing the capability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary s ability to do the same. In this regard, NMCI was to create an integrated network in which connectivity among all parts of the shore establishment, and with all deployed forces at sea and ashore, enables all members of the network to collaborate freely, share information, and interoperate with other services and nations. 
The second goal is to foster innovation by providing an interoperable and shared services environment that supports innovative ways of integrating doctrine and tactics, training, and supporting activities into new operational capabilities and more productive ways of using resources. Related to these goals, the Navy also cited significant benefits that were to accrue from NMCI, including (1) an uninterrupted flow of information; (2) improvements to interoperability, security, information assurance, knowledge sharing, productivity, and operational performance; and (3) reduced costs. To determine its progress in meeting these program goals and producing expected benefits, the Navy included a performance measurement plan in its 2000 Report to Congress on NMCI. According to the Navy, the purpose of this 2000 performance measurement plan was to document its approach to ensuring that key NMCI outcomes (i.e., results and benefits) and measures were identified and collected. In this regard, the plan identified eight strategic performance measurement categories, and related them to the NMCI strategic program goals. Subsequently, the Navy added a ninth performance category. According to program office and the Navy CIO officials, the nine performance categories are all relevant to determining program performance and strategic goal attainment. Moreover, the plan states that these categories provide for making NMCI an integrated portion of the Navy and Marine Corps strategic vision, support the principles of using IT to support people, and focus on the mission value of technology. These nine categories, including the Navy s definition of each, are as follows: Interoperability: ability to allow Navy systems and applications to communicate and share information with, and for providing services to and accepting services from, other military services. Security and information assurance: compliance with relevant DOD, Navy, and Marine Corps information assurance policies and procedures. Workforce capabilities: ability to (1) increase people s access to information, (2) provide tools and develop people s skills for obtaining and sharing information, and (3) support a knowledge-centric and sharing culture that is built on mutual trust and respect. Process improvement: role as a strategic enabler for assessment and benchmarking of business and operational processes, and for sharing of data, information, applications, and knowledge. Operational performance: ability to support improved mission (operational and business) performance. Service efficiency: economic effectiveness (i.e., its cost versus services and benefits). Customer satisfaction: key stakeholders (e.g., end users,) degree of satisfaction. Program management: ability to (1) meet the seat implementation schedule and the NMCI budget, (2) achieve specified levels of network performance, and (3) proactively manage program risks. Network operations and maintenance: includes such things as virus detection and repair, upgradeability, scalability, maintainability, asset management, and software distribution. The performance plan also included metrics, targets, and comparative baselines that were to be used for the first annual performance report, although it noted that progress in meeting some performance targets would not be measured until after contract award and that some of the cited measures could at some point cease to provide useful information for making decisions, while others may need to be collected continuously. 
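Because the 2000 plan ties one or more measurable targets to each performance category, and relates each category to the strategic goals, progress can be rolled up mechanically once target-level results are known. The sketch below, written in Python, illustrates that kind of rollup. The category and target names are drawn from this section and from the fiscal year 2005 results discussed later; however, the rollup rules shown (a category counts as met only when every one of its targets is met, and a goal only when every category mapped to it is met) and the specific category-to-goal mapping are our simplifying assumptions, since the plan s own mapping is not reproduced in this report.

from enum import Enum

class Status(Enum):
    MET = "met"
    NOT_MET = "not met"
    UNDETERMINED = "undetermined"

# Target-level results for two of the nine categories (fiscal year 2005 values reported
# later in this section); the remaining seven categories would be recorded the same way.
CATEGORY_TARGETS = {
    "Interoperability": {
        "Information systems interoperability": Status.MET,
        "Critical joint applications interoperability": Status.NOT_MET,
        "Operational testing": Status.UNDETERMINED,
    },
    "Program management": {
        "Cost": Status.MET,
        "Schedule": Status.NOT_MET,
        "Performance": Status.NOT_MET,
        "Risk": Status.NOT_MET,
    },
}

# Placeholder mapping of categories to the two strategic goals; illustrative only.
GOAL_CATEGORIES = {
    "Information superiority": ["Interoperability"],
    "Foster innovation": ["Program management"],
}

def category_status(targets):
    """A category counts as met only when every one of its targets is met."""
    if all(status is Status.MET for status in targets.values()):
        return Status.MET
    if any(status is Status.NOT_MET for status in targets.values()):
        return Status.NOT_MET
    return Status.UNDETERMINED

def goal_status(goal):
    """A goal counts as met only when every category mapped to it is met (an assumption)."""
    statuses = [category_status(CATEGORY_TARGETS[name]) for name in GOAL_CATEGORIES[goal]]
    return Status.MET if all(s is Status.MET for s in statuses) else Status.NOT_MET

for goal in GOAL_CATEGORIES:
    print(goal, "->", goal_status(goal).value)   # both goals print "not met" for these inputs

Applied across all 9 categories and 20 targets, the same rollup reproduces the fiscal year 2005 picture described below: 3 targets met, 13 not met, 4 undetermined, and neither strategic goal attained.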
The plan also stated that the Navy would fully develop performance measures for each of the categories and that it would produce an annual report on NMCI s performance in each of the categories. However, the Navy has not implemented its 2000 performance management plan. For example, the Navy did not develop performance measures for each of the performance categories and has not reported annually on progress against performance targets, categories and goals. Instead, Navy officials told us that they focused on defining and measuring progress against contractually specified SLAs, deploying NMCI seats, and reducing the number of Navy applications that are to run on NMCI workstations. According to these officials, measuring progress against the program s strategic goals was not a priority. Because measurement of goal attainment has not been the Navy s focus to date, when we sought (from both the program office and the Navy CIO office) performance data demonstrating progress in meeting NMCI s strategic goals and performance categories, the Navy was unable to provide data in this context. Instead, these officials said that data were available relative to contract performance, to include SLA performance levels and customer satisfaction survey results. Given this, we mapped the available contract-related performance data to the nine performance categories and targets and provided our analysis to the program office and the Navy CIO office. The Navy provided additional performance data and revisions to our mappings. Our analysis of the Navy-provided mapping, including associated fiscal year 2005 data, is discussed in the next section. <2.2. NMCI Strategic Goals and Associated Performance Category Targets Have Not Been Met> The Navy has not fully met any of its performance categories associated with achieving NMCI strategic goals and realizing program benefits. For example, the performance category of Program management has four performance targets relative to cost, schedule, performance, and risk. For fiscal year 2005, the NMCI program met one of the performance targets. It did not meet the other three targets and thus did not meet this performance category. Overall, the Navy defined 20 targets for the 9 performance categories. Of these 20, the Navy met 3, did not meet 13, and was unable to determine if it met 4. The specific performance targets for each performance category are described below, along with performance in fiscal year 2005 against each target. Table 3 summarizes the number of targets met and not met for each category. Interoperability: The Navy defined information systems interoperability, critical joint applications interoperability, and operational testing targets as its measures of this category. For fiscal year 2005, it met the information systems interoperability target. However, it did not meet the critical joint applications interoperability target, and it could not determine whether it met the operational testing target because of insufficient data. Information systems interoperability: The target was to be level 2 on the DOD Levels of Information Systems Interoperability (LISI) Scale. The Navy reports that NMCI was a level 2. Critical joint applications interoperability: The target was for all critical joint applications to be interoperable with NMCI. In fiscal year 2005, the Navy did not transition all of its critical joint applications to NMCI. Moreover, of the 13 applications that were fully or partially transitioned, one was determined not to be interoperable. 
Operational testing: The target was to be Potentially Operationally Effective and Potentially Operationally Suitable. However, Navy reported that the Joint Interoperability Test Command operational testing did not produce sufficient data to determine this. Security and information assurance: The Navy identified SLAs and information assurance incentive targets as its measures of this category. For fiscal year 2005, it did not meet either target. SLAs: The target was to meet 100 percent of all security-related agreements. The Navy reported that it met this target during 4 months of the fiscal year but did not meet it for 8 months, including the last 6 months of the fiscal year. Information assurance incentives: The target was to have the contractor earn 100 percent of the incentive each year. However, the contractor did not earn 100 percent of the incentive for the last 6 months of this fiscal year. Workforce capabilities: The Navy defined the reduction of civilian IT workforce, percentage of workforce with access to NMCI, and the amount of professional certifications as its measures of this category. For fiscal year 2005, it reported that it met the reduction of civilian IT workforce target but did not meet the percent of workforce with access target and could not determine whether it met the professional certifications target. Reduction of civilian IT workforce: The target was to have a zero reduction in its civilian IT workforce. The Navy reported that it met this target. Percent of workforce with access: The target was for 100 percent of its workforce to have access. As of September 30, 2005, 82 percent of the applicable workforce had a seat. Amount of professional certifications: While Navy officials stated that the target is professional certifications, they could not provide a measurable target. Therefore, it cannot be determined whether the target was met. Process improvement: The Navy defined certain customer survey and technology refreshment targets as its measures of this category. For fiscal year 2005, the Navy did not meet the leadership survey target and could not determine whether it met the technology refreshment target. Information from customer surveys: The target was to have the contractor earn 100 percent of the Echelon II survey and the Network Operations Leaders survey incentives. However, the contractor earned 25 percent of the incentive for the Echelon II survey, and 0 percent of the incentive for the Network Operations Leaders survey in fiscal year 2005. Technology refreshment: While Navy officials stated that the target is technology refreshment, they could not provide measurable targets. Therefore, it cannot be determined whether the target was met. Operational performance: The Navy identified information from the network Operations Leaders survey as its target for measuring this category. For fiscal year 2005, it did not meet this target. Network Operations Leaders survey: The target was for the contractor to earn 100 percent of the Network Operations Leaders survey incentive. The contractor earned 0 percent of the incentive in fiscal year 2005. Service efficiency: The Navy defined SLA performance and cost/service ratio per seat targets as measures of this category. For fiscal year 2005, the Navy did not meet the SLA performance target, and it could not determine if it met the cost/service ratio per seat target. SLA performance: The target was to have 100 percent of seats at the full performance or full payment level. 
As of September 2005, the Navy reported that 82 percent of seats achieved full payment or full performance. This is down from March 2005, when the Navy reported that 96 percent of seats achieved full payment or full performance. Cost/service per seat: The target was to have the cost/service ratio per seat to not exceed what it was prior to NMCI. According to the Navy, while the per seat cost for NMCI is higher, the service level is also higher. However, the Navy did not have sufficient information to determine if the target was met. Customer satisfaction: The Navy identified information from the end user satisfaction survey as a target for measuring this category. It did not meet this target in fiscal year 2005. Customer satisfaction survey: The target was to have 85 percent of NMCI end users satisfied. However, the percentage of users reported as satisfied from December 2004 through September 2005 ranged from 75 to 80 percent. Program management: The Navy defined cost, schedule, performance, and risk-related performance targets as measures of this category. For fiscal year 2005, it reports that it met the cost target because it did not obligate more than 100 percent of available NMCI funding but did not meet the schedule, performance, and risk targets. Cost: The target was to obligate up to 100 percent of program funds on NMCI in fiscal year 2005. The Navy reports that it obligated 97 percent of these funds in this fiscal year. Program officials stated that the other 3 percent was spent on legacy IT infrastructure. Schedule: The target was to deploy all seats that were scheduled for deployment in fiscal year 2005. The Navy reports that it deployed 77 percent of these scheduled seats. Performance: The target was to have 100 percent of eligible seats at full payment or full performance. The Navy reports that, as of September 2005, 82 percent of the seats achieved full payment or full performance. Risk: The target is to be green in all risk areas. The Navy reports that it was yellow in several risk areas, such as schedule and organizational change management. Network operations and maintenance: The Navy defined SLA performance, leadership survey results, and technology refreshment targets for measuring this category. For fiscal year 2005, it did not meet the SLA performance or the leadership survey results targets. Further, it could not determine if it met the technology refreshment target. SLA performance: The target was to have 100 percent of eligible seats at either full payment or full performance. As of September 2005, the Navy reported that 82 percent of seats were achieving full payment or full performance. This is down from March 2005, when the Navy reported that 96 percent of seats achieved full payment or full performance. Leadership survey results: The target was to have the contractor earn 100 percent of both the Echelon II and Network Operations Leaders survey incentives. Through September 30, 2005, the contractor earned 25 percent of the Echelon II incentive, and 0 percent of the operator s incentive. Notwithstanding the above described performance relative to performance category targets and strategic goals, Navy CIO and program officials described the program as a major success. CIO officials, for example stated that NMCI has significantly improved the Navy s IT environment, and will increase productivity through greater knowledge sharing and improved interoperability. 
They also stated that a review and certification process for all applications deployed on the network has been implemented and thus compliance with security and interoperability requirements has been ensured. According to these officials, NMCI s value has been demonstrated repeatedly over the last few years. In this regard, they cited the following examples but did not provide verifiable data to support them. Improved security through continuous security assessments, a centralized distribution of vulnerability information, configuration control of critical servers, and an improved response to new vulnerabilities/threats. Improved continuity of operations (e.g., the Navy reports that it had no prolonged disruptions due to recent hurricanes and fires on the West Coast). Increased personnel training and certification by increasing the amount of offerings. Identified opportunities for improving efficiency through the use of performance metrics. Improved software and hardware asset management and implementation of standard and secure configurations. Provided pier-side (waterfront) connectivity and Navy-wide public key infrastructure. The Navy s mapping of fiscal year 2005 data to performance categories and targets as summarized above shows that the NMCI program has not yet met either of its strategic goals. Specifically, the information superiority and innovation goals that were used to justify the program have yet to be attained. Further, although the Navy developed a plan to measure and report on NMCI progress in meeting the strategic goals, this plan was not implemented. As a result, the development and reporting of program performance relative to strategic goals has not occurred. <3. Contractor Has Largely Met Many but Has Not Met Other SLAs> Our analysis of Navy contractor performance data since September 2004 shows that the extent to which the site-specific agreements have been met for all operational seats (regardless of site) varies widely by individual agreement, with some always being met but others having varied performance over time and by seat type. Our analysis also showed that, although the contractor has met most of the enterprisewide agreements during this time period, it has not met a few. The Navy s analysis and reporting of contractor performance relative to the SLAs, using data for the same time period, showed that the percentage of operational seats meeting the agreements averaged about 89 percent from March 2005 to September 2005, then declined to 74 percent in October 2005 and averaged about 56 percent between November 2005 and March 2006. These differences in how SLA performance can be viewed illustrate how contractor performance against the agreements can be viewed differently depending on how available data are analyzed and presented. They also illustrate the importance of having a comprehensive, transparent, and consistent approach to program performance management that considers a range of perspectives and metrics. <3.1. Contractor Satisfaction of SLAs Has Varied by Agreement and Seat Type, with Not All Agreements Being Met> For the period beginning October 2004 and ending March 2006, the contractor s performance relative to site-specific SLAs has varied, with certain agreements consistently being met regardless of seat type, other agreements being met to varying degrees over time, and still others largely not being met for certain seat types. Variability in performance has also occurred for enterprisewide agreements, although most have been met. <3.1.1. 
Significant Percentage of All Applicable Seat Types Have Met Certain Site-Specific Agreements> Between October 2004 and March 2006, the contractor has met, or usually met, the agreement for each seat type for many SLAs. For example, the contractor met SLA 324, which covers wide area network connectivity, for all seat types all of the time. Also, SLA 325, covering network communication services, and SLA 332, measuring application server connectivity, were met for all seat types over the same time period. SLA 225, which measures base area network and local area network performance, was met for essentially all seat types (see fig. 4). Similarly, SLA 328, which measures the time to implement new seats and application servers, was met for 94 percent or more of deployed seat types in January 2005 through March 2006 (see fig. 5). (See app. III for descriptions of each SLA and figures illustrating levels of performance relative to each applicable seat type.) The contractor has not consistently met certain agreements between October 2004 and March 2006. For example, satisfaction of SLA 102, which covers response time for network problem resolution, has ranged from a high of 100 percent in March 2005 and June 2005 to a low of 79 percent in February 2006. As of March 2006, this SLA was met by 97 percent of all seat types (see fig. 6). Also, satisfaction of SLA 107, which is a measure of network performance in areas of availability, latency/packet loss, and quality of service in support of videoconferencing and voice-over-IP, has varied over time. Specifically, satisfaction has ranged from a high of 99 percent in January 2006 to a low of 71 percent in January 2005. As of March 2006, this agreement was met by 90 percent of all seat types (see fig. 7). Between October 2004 and March 2006, the contractor has not met certain agreements for all seat types. For example, for SLA 101, which is a measure of the time it takes to resolve NMCI user issues, the percentage of seats meeting the agreement has widely varied. Specifically, the percentage of mission-critical seats that met the agreement has been consistently and significantly lower than was the case for the basic or high end seats. In particular, as of March 2006, SLA 101 was met for about 90 percent of basic seats, 77 percent of high end seats, and 48 percent of mission-critical seats (see fig. 8). Similarly, for SLA 103, which is a measure of performance of end user services, the percentage of basic seats that met the agreement was consistently and significantly lower than that of high end or mission-critical seats. In March 2006, SLA 103 was met for about 63 percent of basic seats, 74 percent of high end seats, and 86 percent of mission-critical seats (See fig. 9). The contractor generally met most of the SLAs that have enterprisewide applicability. In particular, of the 13 such SLA s, 8 were met each month between October 2004 and March 2006, and another was met all but 1 month during this time period. Further, a tenth SLA was met for 14 out of the 18 months during this period. However, the contractor has not consistently met 3 of the 13 enterprisewide SLAs. Specifically, SLA 103, which covers end user services, was not met 12 of the 18 months. SLA 104, which covers the help desks, was not met 11 out of the 18 months, including 8 out of the last 9 months of this period. 
SLA 106, which covers information assurance services including identifying incidents, responding to incidents, and configuration of NMCI, was not met for 11 out of 18 months, including the last 9 months of the period. (See fig. 10 for a summary of the months in which the contractor met and did not meet the enterprisewide SLAs.) <3.2. Contractor Satisfaction of SLAs Relative to Contractually Defined Performance Levels Has Varied> NMCI program officials told us that they measure the contractor s SLA- related performance in terms of the percentage of eligible seats that have met the contractual definitions of full payment and full performance. More specifically, they compare the number of seats on a site-by-site basis that have met these definitions with the number of seats that are eligible. As discussed earlier, full payment means that the contractor has met 100 percent of the applicable agreements at a given site, and 50 to 90 percent of the planned seats at that site have been cut over (i.e., are operational). Full performance means that the contractor has met 100 percent of the applicable agreements at a given site, and over 90 percent of the planned seats at that site have been cut over. In effect, this approach focuses on performance for only those seats that are at sites where at least 50 percent of the planned number of seats are actually operating. It excludes performance at sites where less than 50 percent of the ordered seats are operating. Moreover, it combines the results for all SLAs and, therefore, does not highlight differences in performance among service areas. For the period beginning in October 2004 and ending in March 2005, the contractor s performance in meeting the agreements from a contractual standpoint increased, with the percentage of operational seats that met either performance level having jumped markedly between October and December 2004 (about 5 to 65 percent), then generally increasing to a high of about 96 percent in March 2005. Since then, the percentage of seats meeting either of the two performance levels fluctuated between 82 and 94 percent through September 2005 and then decreased to 74 percent in October 2005. From November 2005 through March 2006, the percentage of seats meeting either performance level decreased to 55 percent. (See fig. 11 for the trend in the percentage of operational seats meeting either the full payment or full performance levels; see fig. 12 for the number of seats achieving either performance level versus the number eligible for doing so for the same time period.) The preceding descriptions of SLA performance illustrate that contractor performance against the agreements can be viewed differently depending on how relevant data are analyzed and presented. Further, they illustrate the importance of considering different perspectives and metrics in order to have a comprehensive, transparent, and consistent approach to program performance management. <4. NMCI Customer Groups Satisfaction Levels Vary, but Overall Customer Satisfaction Is Low> The Navy s three groups of NMCI customers end users, organizational commanders, and network operators vary in the extent to which they are satisfied with the program, but collectively these customers are generally not satisfied. With respect to end users, the Navy reports that overall satisfaction with NMCI improved between 2003 and 2005; however, reported satisfaction levels have dropped off since September 2005. 
In addition, while the Navy reports that this overall level of end user satisfaction with contractor-provided services has averaged about 76 percent since April 2004, this is below the Navy-wide target of 85 percent and includes many survey responses at the lower end of the range of scores that the Navy defines as satisfied. With respect to commanders and network operations leaders, neither is satisfied with NMCI. In addition, officials representing each of the customer groups at five shipyard or air depot installations that we visited expressed a number of NMCI concerns and areas of dissatisfaction with the program. Without satisfied customers, the Navy runs the risk that NMCI will not attain the widespread acceptance necessary to achieve strategic program goals. <4.1. End User Surveys Show Dissatisfaction with NMCI> Despite reported improvements in end user satisfaction levels since 2002, end user responses to quarterly satisfaction surveys have been consistently at the low end of the range of scores that the Navy defines as satisfied, and the percentage of end users that the Navy counts as satisfied has consistently been below the Navy s satisfaction target level. Specifically, although the percentage of satisfied users dropped from about 66 percent in June 2002 to around 54 percent for the next two quarters (September and December 2002), satisfaction reportedly rose steadily from March 2003 through September 2005, peaking at that time at about 80 percent. Since then, the percentage of end users that the Navy reports to be satisfied has declined, leveling off at around 76 percent in subsequent months. This means that even with the Navy s forgiving definition of what constitutes a satisfied end user, at least 24 percent of end users are dissatisfied with NMCI. (See fig. 13 for the trends in end user satisfaction with the program and the contractor.) Exacerbating this long-standing shortfall in meeting end user satisfaction expectations is the fact that the Navy considers a satisfied end user to include users that are at best marginally satisfied and arguably somewhat dissatisfied. That is, the Navy uses an average score of 5.5 or greater (on its 10-point satisfaction scale, where 1 is dissatisfied and 10 is satisfied) as the threshold for categorizing and counting end users as satisfied. This means that users counted as satisfied may include a large contingent that are at the low end of the satisfaction range (e.g., between 5.5 and 7). The results of the March 2006 survey show that this is indeed the case: 8 of the 14 questions received an average score below 7.0. Additional insights into the degree and nature of end user satisfaction (and dissatisfaction) are apparent when the reported percentage of satisfied users is examined from different perspectives, such as by (1) individual survey questions and (2) organizational units. For example, Navy-reported end user satisfaction survey results for the quarter ending March 31, 2006, show that while the percentage of users deemed satisfied with the program averaged about 74 percent, the percentage reported as satisfied relative to each survey question ranged from a low of 52 to a high of 87 percent. These insights into end user sources of satisfaction and dissatisfaction are summarized as follows: Variations in satisfaction levels by question.
While the percentage of end users who are categorized as satisfied with the program and the contractor do not significantly differ (74 versus 76 percent, respectively), variations do exist among the percentage satisfied with the 14 areas that the questions address. For example, far fewer (66 percent) were satisfied with the reliability of the NMCI network than were satisfied with the professionalism of EDS personnel (87 percent). (See table 4 for the percentage of users satisfied and dissatisfied according to each of the 14 survey questions.) Variations in satisfaction levels by organizational unit. The percentage of end users who were categorized as being satisfied with the NMCI program varied by organizational unit as much as 18 percentage points. For example, about 66 percent of users in the Naval Sea Systems Command were deemed satisfied with the program as compared with about 84 percent in the Commander of Navy Installations. Similarly, the percentage of end users who were categorized as satisfied with the contractor also varied by 17 percentage points, with the Naval Sea Systems Command and Naval Air Systems Command having about 69 percent of its users viewed as satisfied and the Commander of Navy Installations having about 86 percent. (See tables 5 and 6 for percentages of satisfied end users by Navy and Marine Corps, respectively, organizations as of March 31, 2006.) <4.2. Commander and Network Operator Surveys Show That Both Customer Groups Are Dissatisfied> The Navy conducted surveys of commander and network operations leader units in September 2005 and in March 2006. Overall, survey results show that neither commanders nor operators are satisfied with NMCI. <4.2.1. Commander Survey Results> The results from the two commander satisfaction surveys conducted to date show that the customers are not satisfied, with NMCI. Specifically, on a scale of 0-3 with 0 being not satisfied, and 1 being slightly satisfied with the contractor s support in meeting the mission needs and strategic goals of these organizations, the average response from all organizations was 0.65 and 0.76 in September 2005 and March 2006, respectively. The latest survey results show minor differences in the degree of dissatisfaction with the four types of contractor services addressed (cutover services, technical solutions, service delivery, and warfighter support). (See table 7 for results of the September 2005, and March 2006, commander satisfaction surveys.) The Navy-reported results of the two network operations leader satisfaction surveys conducted to date show that these customers are also not satisfied with NMCI. Specifically, on a scale of 0-3 with 0 being not satisfied and 1 being slightly satisfied with the contractor s support in meeting the mission needs and strategic goals of these two organizations, the average of the responses from NETWARCOM in September 2005 was 0.33, rising to 0.67 in March 2006. For MCNOSC, the average of the responses to both surveys was 0.00. (See table 8 for these results.) Of the three types of contractor services addressed in the survey (mission support and planning, network management, and service delivery), network management services, which includes information assurance and urgent software patching, received a score of 0 from both organizations on both surveys. <4.3. 
Shipyard and Air Depot Customers Consistently Identified a Range of Concerns and Areas of Dissatisfaction with NMCI> Consistent with the results of the Navy s customer satisfaction surveys, officials representing end users, commanders, and network operations personnel at five shipyards or air depots that we interviewed cited a number of concerns or sources of dissatisfaction with NMCI. The anecdotal information that they provided to illustrate these concerns is described in the sections that follow. <4.3.1. Continued Reliance on Legacy Systems> Shipyard and air depot officials for all five sites told us that they have continued to rely on their legacy systems rather than NMCI for various reasons. For example, officials at one air depot stated that NMCI provided less functionality than their legacy systems and thus they have continued to use these legacy systems to support mission operations. Also, officials at one shipyard told us that site personnel lack confidence in NMCI and thus they continue to use legacy systems. Officials at the other two shipyards voiced even greater concerns, with officials at one saying that only NMCI seats (i.e., workstations) are running on the NMCI intranet (their servers are still running on their legacy network), and officials at the other saying that NMCI does not support their applications and thus they primarily use it for e-mail. Similarly, officials at an air depot stated that NMCI workstations are not capable of supporting certain applications, such as high-performance modeling, and thus they operate about 233 other workstations to support their needs. <4.3.2. Loss in Workforce Productivity> According to a memo from the Commander of one shipyard to the Naval Sea Systems Command dated December 2005, NMCI software updates adversely affect the operation of network applications. Consistent with this, officials at two of the sites stated that NMCI is hurting workforce productivity, with officials at one shipyard saying that system downtime, particularly as it relates to major applications, has deteriorated and is unacceptable, and officials at another shipyard saying that NMCI response time is slow both on- and off-site. To illustrate, officials at one air depot said that personnel cannot download more than one file at a time, while officials at shipyards stated that reach back to legacy systems through NMCI is slow, sometimes taking 45 minutes to open a document. Further, officials at shipyards complained that user profiles do not follow the user from one workstation to another, causing users to recreate them, while officials at one air depot stated that NMCI does not provide them the capability to monitor employees inappropriate use of the Internet (e.g., excessive use or accessing unauthorized sites). <4.3.3. Lack of Support of Dynamic Work Environments> Both air depot and shipyard officials described their respective work environments as dynamic, meaning that they change frequently and thus require flexibility in moving and configuring workstations. Further, shipyards operate at the waterfront, which we were told is an environment that requires quick responses to changing needs. For example, ships come in, barges are created to service them, and these barges must be outfitted with computers. Decisions occur in a short amount of time regarding new barge setups and equipment movements.
According to shipyard officials, NMCI has not been able to support these barge-related requirements because it is not flexible enough to quickly react to shifting work priorities. As a result, officials with one shipyard stated that they have had to provide their own waterfront support using legacy systems. Similarly, officials with the air depots stated that the NMCI contractor has a difficult time moving seats fast enough to keep up with changing needs. Limitations in Help Desk Support Officials from each of the shipyards and air depots voiced concerns and dissatisfaction with help desk assistance. According to officials with the air depots, the quality of help desk support is inconsistent, and thus they have had to assume more of the burden in dealing with IT system problems since they transitioned to NMCI. Shipyard officials were even more critical of help desk support. According to officials at one shipyard, help desk support is not working, as it is almost impossible to get a help desk call done in 1 hour. Similarly, officials at another shipyard told us that help desk responsiveness has been poor because it takes hours, if not days, to get problems fixed. The previously cited memo from the Commander of one shipyard to the Naval Sea Systems Command cited an average time of 2.4 days to respond to customer inquiries. <4.3.4. Problems with NMCI Site Preparation and Transition> Officials from all five sites expressed concerns with the manner in which they were prepared for transitioning to NMCI. According to officials at one air depot, certain seat management requirements were overlooked, and NMCI users have struggled with understanding the contract processes that govern, for example, how to order new software and hardware, or how to relocate machines, because the contractual terms are difficult to follow, and training was not adequate. In particular, they said users do not understand with whom they should talk to address a given need, and officials with one air depot noted that NMCI has no solution for their electronic classroom needs. Officials at one shipyard attributed the lack of NMCI site preparation to insufficient planning prior to deploying NMCI and a lack of transparency in how NMCI was being managed, including how deployment issues were to be resolved. As stated by officials at another shipyard, the transition to NMCI was difficult and very disruptive to operations because they had no control over the contractor transition team. NMCI program officials told us that they are aware of the concerns and sources of dissatisfaction of shipyard and air depot customers, however, they added that many of them are either not supported by data or reflect customers lack of familiarity with the services available under the contract. In particular, they said that they have not been provided any data showing a drop in workforce productivity caused by NMCI. They also said that continued reliance on legacy systems illustrates a lack of familiarity with the contract because provisions exist for moving legacy servers onto NMCI and supporting certain applications, such as high-performance modeling. Further, they said that the contract supports monitoring Internet usage, provides waterfront support to shipyards, and provides help desk service 24 hours a day, 7 days a week. Nevertheless, they acknowledged that both a lack of customer understanding, and customer perceptions about the program are real issues affecting customer satisfaction that need to be addressed. <5. 
Customer Satisfaction Improvement Efforts Are Not Being Guided by Effective Planning> The NMCI program office reports that improving customer satisfaction is a program priority. Accordingly, it has invested and continues to invest time and resources in a variety of activities that it associated with customer satisfaction improvement, such as holding user conferences and focus groups. However, these efforts are not being guided by a documented plan that defines prioritized improvement projects and associated resource requirements, schedules, and measurable goals and outcomes. Given the importance of improved customer satisfaction to achieving NMCI program goals and benefits, it is important for the Navy to take a structured and disciplined approach to planning its improvement activities. Without it, the program office cannot adequately ensure that it is effectively investing scarce program resources. As we have previously reported, effectively managing program improvement activities requires planning and executing such activities in a structured and disciplined fashion. Among other things, this includes developing an action plan that defines improvement projects and initiatives, assigns roles and responsibilities, sets priorities, identifies resource needs, establishes time lines with milestones, and describes expected results in measurable terms. The Software Engineering Institute s IDEALSM model, for example, is one recognized approach for managing process improvement efforts. According to this model, improvement efforts should include a written plan that serves as the foundation and basis for guiding improvement activities, including obtaining management commitment to and funding for the activities, establishing a baseline of commitments and expectation against which to measure progress, prioritizing and executing activities and initiatives, determining success, and identifying and applying lessons learned. Through such a structured and disciplined approach, improvement resources can be invested in a manner that produces optimal results. Without such an approach, improvement efforts can be reduced to trial and error. The NMCI program office identified seven initiatives that are intended to increase customer satisfaction with the program. According to program officials, the initiatives are (1) holding user conferences, (2) conducting focus groups, (3) administering diagnostic surveys, (4) strengthening help desk capabilities, (5) expanding network services (e.g., adding broadband remote access), (6) assessing infrastructure performance, and (7) initiating a lean six sigma effort. Following are descriptions of each initiative: User conferences. The program office has conducted semiannual NMCI user conferences since 2000. According to program officials, these conferences provide a forum for users to directly voice to program leaders their sources of dissatisfaction with NMCI. During the conferences, users ask questions, participate in issue-focused breakout sessions, and engage in informal discussions. We attended the June 2005 user conference and observed that Navy and contractor program officials provided information, such as updates on current and planned activities and capabilities, while users had opportunities to provide comments and ask questions. According to program officials, the conferences are useful in making program officials aware of customer issues and are used to help diagnose NMCI problems. Focus groups. 
According to program officials, they conduct user focus groups to, among other things, solicit reasons for customer dissatisfaction and explore solutions and to test newly proposed end user satisfaction survey questions. The focus group sessions include invited participants and are guided by prepared scripts. The results of the sessions are summarized for purposes of identifying improvements such as revisions to user satisfaction survey questions. Diagnostic surveys. The program office performs periodic surveys to diagnose the source of user dissatisfaction with specific services, such as e- mail, printing, and technical support. According to program officials, these surveys help identify the root causes of user dissatisfaction and support analysis of areas needing improvement. However, they could not identify specific examples of where such causes have been identified and addressed and measurable improvements have resulted. Help desk improvement team. The program office established a team to identify the reasons for declining end user satisfaction survey scores relative to the technical support services provided by the help desk. According to program officials, the team traced declining satisfaction levels to such causes as help desk agents knowledge, training, and network privilege shortfalls. To address these limitations, the program office reports that it has redesigned and restructured help desk operations to organize help desk agents according to skills and experience, route calls according to the skill level needed to address the call, target needed agent training, hold daily meetings with agents to apprise them of recent issues, and monitor help desk feedback. However, program officials could not link these efforts to measurable improvements in help desk performance, and NMCI customers that we interviewed during our visits to shipyards and air depots voiced concerns with help desk support. Expanded network services. NMCI program officials stated that a key improvement initiative has been expanding the scope of network-related services that are available under the contract. In particular, they point to such new services as broadband remote access for all laptop users, antispam services for all e-mail accounts, and antispyware services for all accounts as having improved customer satisfaction. Further, they said that the planned addition of wireless broadband access will increase customer satisfaction. However, they could not provide data showing how these added services affected customer satisfaction, or how future services are expected to affect satisfaction. Infrastructure performance assessment. Working with EDS, the program office undertook an NMCI network infrastructure assessment that was intended to identify and mitigate performance issues. This assessment included establishing metrics and targets for common user functions such as opening a Web site, then determining actual network performance at the Washington Navy Yard and Marine Corps installations in Quantico, Virginia. According to program officials, assessment results included finding that network performance could be improved by balancing traffic among firewalls and upgrading wide area network circuits. As a result of this initial assessment, the program has begun adjusting network settings and upgrading hardware at additional NMCI sites. Further, program officials said they are expanding their use of network infrastructure metrics to all sites. 
However, they neither provided us with a plan for doing so, nor did they demonstrate that these efforts have affected customer satisfaction. Lean six sigma. Program officials said they are applying lean six sigma techniques to improve customer satisfaction. In particular, they have established a customer satisfaction workgroup, which is to define a process for identifying customer problems and prioritizing improvement projects. They said that, for each project, they will perform concept testing using pilot projects and focus groups. They also said that they plan to establish a steering committee that includes representatives from the Navy and the contractor. The officials told us that they have initiated seven projects using lean six sigma techniques, although they did not provide us with any information about the results of these projects or their impact on customer satisfaction. While any or all of these initiatives could result in improvements to customer satisfaction, the program office could not demonstrate that they have produced or will produce measurable improvements. Moreover, the latest customer satisfaction data provided to us show that satisfaction levels are not improving. Further, it is unclear how these various initiatives relate to one another, and various aspects of these initiatives appear redundant such as multiple teams and venues to identify root causes and propose solutions. One reason for this lack of demonstrable improvements and redundancy is the way in which the program office has pursued its improvement initiatives. In particular, they have not been pursued as an integrated portfolio of projects that were justified and prioritized on the basis of relative costs and benefits. Further, they have not been guided by a well- defined action plan that includes explicit resource, schedule, and results- oriented baselines, as well as related steps for knowing whether expected outcomes and benefits have actually accrued. Rather, program officials stated that customer satisfaction improvement activities have been pursued as resources become available and have been in reaction to immediate issues and concerns. Without a proactive, integrated, and disciplined approach to improving customer satisfaction, the Navy does not have adequate assurance that it is optimally investing its limited resources. While the lean six sigma techniques that program officials told us they are now applying to customer satisfaction improvement advocate such an approach, program officials did not provide us with documentation demonstrating that they are effectively planning and executing these projects. <6. Conclusions> IT service programs, like NMCI, are intended to deliver effective and efficient mission support and to satisfy customer needs. If they do not, or if they are not being managed in a way to know whether or not they do, then the program is at risk. Therefore, it is important for such programs to be grounded in outcome-based strategic goals that are linked to performance measures and targets, and it is important for progress against these goals, measures, and targets to be tracked and reported to agency and congressional decision makers. If such measurement does not occur, then deviations from program expectations will not become known in time for decision makers to take timely corrective action. The inevitable consequence is that program results will fall short of those that were promised and used to justify investment in the program. 
The larger the program, the more significant these deviations and their consequences can be. NMCI is an enormous IT services program and thus requires highly effective performance management practices. However, such management has not been adequate: progress against strategic program goals has not been measured, and performance against those goals and other important program aspects, such as service level agreement satisfaction examined from multiple vantage points and customer satisfaction, has not been reported to key decision makers. One reason for this is that measurement of progress against strategic program goals has not been a priority for the Navy on NMCI, giving way to the Navy's focus on deploying NMCI seats to more sites despite a long-standing pattern of low customer satisfaction with the program and known performance shortfalls with certain types of seats. Moreover, despite investing in a range of activities intended to improve customer satisfaction, the Navy has not developed plans to effectively guide these improvement efforts, including plans for measuring the success of these activities. Given that the Navy reports that it has already invested about 6 years and $3.7 billion in NMCI, the time to develop a comprehensive understanding of the program's performance to date, and its prospects for the future, is long overdue. To its credit, the Navy recognizes the importance of measuring program performance, as evidenced by its use of service level agreements, its extensive efforts to survey customers, and its various customer satisfaction improvement efforts. However, these steps need to be given the priority that they deserve and be expanded to obtain a full and accurate picture of program performance. Doing less increases the risk of inadequately informing ongoing NMCI investment management decisions that involve huge sums of money and carry important mission consequences. <7. Recommendations for Executive Action> To improve NMCI performance management and better inform investment decision making, we recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that the NMCI program adopts robust performance management practices that, at a minimum, include (1) evaluating and appropriately adjusting the original plan for measuring achievement of strategic program goals and providing for its implementation in a manner that treats such measurement as a program priority; (2) expanding its range of activities to measure and understand service level agreement performance to provide increased visibility into performance relative to each agreement; (3) sharing the NMCI performance results with DOD, the Office of Management and Budget, and congressional decision makers as part of the program's annual budget submissions; and (4) reexamining the focus, scope, and transparency of its customer satisfaction activities to ensure that areas of dissatisfaction described in this report are regularly disclosed to the aforementioned decision makers and that customer satisfaction improvement efforts are effectively planned and managed. In addition, we recommend that the Secretary of Defense direct the Secretary of the Navy, in collaboration with the various Navy entities involved in overseeing, managing, and employing NMCI, to take appropriate steps to ensure that the findings in this report and the outcomes from implementing the above recommendations are used in considering and implementing warranted changes to NMCI's scope and approach. <8.
Agency Comments and Our Evaluation> In its comments on a draft of this report, signed by the Deputy Assistant Secretary of Defense (Command, Control, Communications, Intelligence, Surveillance, Reconnaissance & Information Technology Acquisition Programs) and reproduced in appendix IV, DOD agreed with our recommendations and stated that it has implemented, is implementing, or will implement each of them. In this regard, the department stated that the report accurately highlights the need to adjust the NMCI strategic goals and associated measures, and it committed to, among other things, sharing additional NMCI performance data with decision makers as part of the annual budget process. Notwithstanding this agreement, DOD also commented that the Navy believes that our draft report contained factual errors, misinterpretations, and unsupported conclusions. We do not agree with the Navy's position. The Navy's points are summarized below along with our response. The Navy stated that our review focused on Navy shipyards and air depots to the exclusion of Marine Corps sites. We disagree. As the Objectives, Scope and Methodology section of our report points out, the scope of our review covered the entire NMCI program and extended to Navy and Marine Corps sites based on data we obtained from program officials. For example, our work on the extent to which NMCI had met its two strategic goals was programwide, and our work on SLA performance and customer satisfaction surveys included Navy and Marine Corps sites at which NMCI was operating and Navy and Marine Corps customers that responded to the program s satisfaction surveys. The Navy stated that NMCI is a strategic success, noting that the program is meeting its goals of providing information superiority (as well as information security) and fostering innovation. As part of these statements, the Navy cited such things as the number of users supported and seats deployed, the types of capabilities fielded, and contracting actions taken. In addition, the Navy stated that NMCI has thwarted intrusion attacks that have penetrated other DOD systems, and it concluded that NMCI represents a major improvement in information superiority over the Navy s legacy network environment in such areas as virus protection and firewall architecture. It also noted that more Naval commands now have access to state-of-the-art workstations and network services, which it concluded means that NMCI is fostering innovation. While we do not question these various statements about capabilities, improvements, and access, we would note they are not results-oriented, outcome-based measures of success. Moreover, we do not agree with the statements about NMCI meeting its two strategic goals and being a strategic success. As we show in our report using the Navy s own performance categories, performance targets, and actual SLA and other performance data, NMCI met only 3 of the 20 performance targets spanning nine performance categories that the Navy established for determining goal attainment. Concerning these results, the Navy stated that our report s use of SLA performance data constitutes a recommendation on our part for using such data in determining program goal attainment, which the Navy said is awkward because SLAs do not translate well into broad goals. We do not agree that our report recommends the use of any particular performance data and targets for determining program goal attainment. 
Our report uses these data and targets only because the NMCI program office provided them to us in response to our inquiry about NMCI performance relative to the nine Navy-established performance categories. We are not recommending any particular performance targets or data. Rather, we are recommending that the approach for measuring achievement of strategic goals be reevaluated and adjusted. Accordingly, we support DOD's comment that the Navy needs to adjust the original NMCI strategic goals and associated measures. The Navy stated that we misinterpreted SLA data as they relate to the contractor performance categories of full payment and full performance. We disagree. The report presents a Navy-performed analysis of SLA data relative to the full payment and full performance categories and offers no interpretation of those data. However, because the Navy's analysis of SLA data is an aggregation, we performed a different analysis to provide greater visibility into the individual SLA performance that the Navy's full payment and full performance analyses tend to hide. Our analysis also avoids the bundling and averaging concerns that the Navy raised. The Navy stated that some of our customer satisfaction conclusions were unsupported. Specifically, the Navy said that, given the way it collects end user satisfaction responses, a score of 5.5 or higher on a scale of 10 indicates a satisfied user and that such a scale is in line with industry practice. Therefore, the Navy said that user satisfaction survey responses do not break out in a way that supports our conclusion that scores of 5.5 through 7 represent marginally satisfied users. We do not agree. While we recognize that the Navy's 1-10 scale does not differentiate between degrees of satisfaction, we believe that doing so would provide insight and perspective that are lacking when a user is merely counted as satisfied or not satisfied. When we analyzed the responses to individual questions in terms of degrees of satisfaction, we found that average responses to 10 of 14 survey questions were 5.5 to 7, which is clearly close to the lower limit of the satisfaction range. Also, with regard to customer satisfaction, the Navy stated that our inclusion in the report of subjective statements from shipyard and air depot officials did not include any data to support the officials' statements and thus did not support our conclusions. We recognize that the officials' statements are subjective and anecdotal, and our report clearly identified them as such. Nevertheless, we included them in the report because they are fully consistent with the customer satisfaction survey results and thus help illustrate the nature of NMCI user concerns and the areas of dissatisfaction that the survey results show exist. The Navy stated that NMCI provides adequate reports to key decision makers. However, we disagree because the reporting that the Navy has done has yet to disclose the range of performance and customer satisfaction issues that our report contains. Our message is that fully and accurately disclosing program and contractor performance and customer satisfaction to the various entities responsible for overseeing, managing, and employing NMCI will serve to strengthen program performance and accountability. The Navy also provided various technical comments, which we have incorporated as appropriate.
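To make the scoring discussion above concrete, the following is a minimal sketch, in Python, of how a single 5.5 cutoff and a degrees-of-satisfaction breakout can paint different pictures of the same responses. The question names, average scores, and band labels are illustrative assumptions, not NMCI survey data or the precise method used for the analysis described in this report.

```python
# Illustrative only: hypothetical average scores on the 1-10 scale used in the
# NMCI end user survey (1-5 are levels of dissatisfaction, 6-10 of satisfaction).
question_averages = {
    "e-mail": 6.2,
    "printing": 5.8,
    "help desk": 5.6,
    "network speed": 6.9,
    "remote access": 7.4,
    "overall": 6.5,
}

NAVY_SATISFIED_CUTOFF = 5.5  # The Navy counts a score of 5.5 or higher as a satisfied user.

def binary_view(score):
    """Single-cutoff view: satisfied versus not satisfied."""
    return "satisfied" if score >= NAVY_SATISFIED_CUTOFF else "not satisfied"

def banded_view(score):
    """Degrees-of-satisfaction view: 5.5 through 7 treated as marginal.
    Band labels here are illustrative assumptions, not report terminology."""
    if score < NAVY_SATISFIED_CUTOFF:
        return "dissatisfied"
    if score <= 7:
        return "marginally satisfied"
    return "clearly satisfied"

for question, avg in question_averages.items():
    print(f"{question:14} avg={avg:4.1f}  binary: {binary_view(avg):13}  banded: {banded_view(avg)}")

marginal = sum(1 for s in question_averages.values() if 5.5 <= s <= 7)
print(f"{marginal} of {len(question_averages)} question averages fall in the 5.5-7 band")
```

Under the single cutoff, every hypothetical question above counts as satisfied; the banded view shows most of the averages sitting near the lower edge of the satisfaction range, which is the distinction at issue in the discussion above.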
We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this information, please contact me at (202) 512-6256 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Objectives, Scope, and Methodology Our objectives were to review (1) whether the Navy Marine Corps Intranet (NMCI) is meeting its strategic goals, (2) the extent to which the contractor is meeting its service level agreements (SLA), (3) whether customers are satisfied with the program, and (4) what is being done to improve customer satisfaction. To determine whether NMCI is meeting its strategic goals, we reviewed documents provided by Department of the Navy describing the mission need for NMCI, strategic goals, performance measures, and data gathered on actual performance, conducted interviews with officials from the offices of the Department of Defense Chief Information Officer (CIO), Department of the Navy CIO, and Assistant Secretary of the Navy for Research, Development, and Acquisition, including officials in the NMCI program office, identified the NMCI strategic goals, related performance categories, associated performance targets, and actual performance data through document reviews and interviews, developed an analysis showing NMCI s performance relative to the strategic goals, performance categories, and targets based upon available actual performance data, and shared our analysis with program officials and adjusted the analysis based on comments and additional data they provided. To determine the extent to which performance expectations defined in NMCI SLAs have been met, we conducted interviews with NMCI program office and contractor officials to gain an understanding of available SLA performance data and potential analysis methods, obtained data on actual SLA performance that are used by the Navy as the basis for making performance-based payments to the contractor and, for each SLA, these data indicated whether one or more measurement(s) were taken and if so, whether the measure was met or not, for each seat type (i.e., basic, high end, and mission-critical), at every site for each month from October 2004 through March 2006, analyzed data for site-specific SLAs by calculating the number of seats that met each agreement at each site for each month and when measurement data were available according to seat type, we calculated the number of seats that met each agreement for each seat type. Otherwise, we calculated the total number of seats that met each agreement. We counted an agreement as met at a site if all of the agreement s measured targets were met at the site for a given month. 
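The site-level counting rule just described, together with the percentage calculation laid out in the next paragraph, can be illustrated with a short sketch. The record layout, site names, seat counts, and results below are hypothetical assumptions for illustration only; they are not the NMCI data set or the exact tooling used for our analysis.

```python
from collections import namedtuple

# Hypothetical site-level measurement records. met_all_targets is True only if
# every measured target for the SLA was met at that site in that month.
Record = namedtuple("Record", "sla site month seats met_all_targets")

records = [
    Record("SLA 103", "Norfolk Naval Shipyard",       "2005-10", 4200, True),
    Record("SLA 103", "Portsmouth Naval Shipyard",    "2005-10", 1800, False),
    Record("SLA 103", "Jacksonville Naval Air Depot", "2005-10", 2600, True),
    Record("SLA 105", "Norfolk Naval Shipyard",       "2005-10", 4200, True),
]

def percent_seats_meeting(records, sla, month):
    """Seats at sites where the agreement was met, divided by seats at all measured sites."""
    met = measured = 0
    for r in records:
        if r.sla == sla and r.month == month:
            measured += r.seats
            if r.met_all_targets:
                met += r.seats
    return 100.0 * met / measured if measured else None

print(round(percent_seats_meeting(records, "SLA 103", "2005-10"), 1))  # 79.1 in this example
```

Because seat counts weight the percentage, a large site that misses an agreement pulls the figure down more than a small one does, which is consistent with the report's emphasis on examining agreement satisfaction from multiple perspectives.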
To calculate the percentage of seats for which an agreement was met, we added the total number of seats at all sites for which the agreement was met and divided that sum by the total number of seats at all sites for which measurements were made. We analyzed data for enterprisewide SLAs by determining whether an agreement was met at all Navy (excluding the Marine Corps) and all Marine Corps sites for each month, counting an agreement as met if all of the agreement's measured targets were met for a given month. We then compared our site-specific and enterprisewide SLA analyses across months to identify patterns and trends in overall SLA performance. In situations where an SLA is composed of site-specific and enterprisewide measures, we did not aggregate our site-specific and enterprisewide results; thus, an SLA could have been met at the site level but not enterprisewide, and vice versa. Finally, we described our analysis method, shared our results with program office and contractor officials, and made adjustments based on their comments. To determine whether NMCI customers are satisfied, we obtained and analyzed results of end user surveys conducted from June 2002 through March 2006 and of commanders' and network operations leaders' surveys conducted from September 2005 through March 2006; conducted interviews with NMCI program office and contractor officials to gain an understanding of how the surveys were developed and administered and of their procedures for validating and auditing reported results; analyzed data in the survey reports by comparing actual with desired results and by identifying trends in satisfaction levels over time and variation in satisfaction by question, organization, and type of service; and conducted interviews with a broad range of NMCI users at Navy sites: Portsmouth Naval Shipyard, Norfolk Naval Shipyard, Puget Sound Naval Shipyard, Jacksonville Naval Air Depot, and North Island Naval Air Depot. We selected these sites because they are among the largest, include diverse user communities, and represent different stages of program implementation. Participants in the interviews included officials from the Offices of the Commander, the CIO, and Information Technology and Communications Services; end users relying on NMCI desktop services in day-to-day operations; and the contractor. To determine what has been done to improve customer satisfaction, we interviewed program office and contractor officials to identify and develop an understanding of customer satisfaction improvement efforts. To determine the results and impact of each effort, we interviewed program officials and obtained and analyzed relevant documentation; researched best practices for the effective management of improvement activities and compared the program office's approach with the practices we identified to evaluate the overall effectiveness of the customer satisfaction improvement activities; and attended the June 2005 NMCI enterprise conference to observe the proceedings. We performed our work from April 2005 to August 2006 in accordance with generally accepted government auditing standards. Customer Satisfaction Survey Questions This appendix includes the questions used in the three customer satisfaction surveys: the End User Customer Satisfaction Survey, the Navy Echelon II and Marine Corps Major Command Commander's Incentive Survey, and the Navy and Marine Corps Network Operations Leader's Survey. <9.
End User Customer Satisfaction Survey Questions> The end user customer satisfaction survey consists of 14 questions, 10 of which are tied to incentives. Users are asked to think only of the experiences they have had with the services during the prior 3 months. If a question is not relevant to their experience, they are asked to indicate that it is not applicable. Otherwise, they are asked to score it on a 1-10 scale with 1-5 being levels of dissatisfaction, and 6-10 being levels of satisfaction. Users are also currently asked demographic information in the survey, as well as suggestions for improvement, and sources of dissatisfaction. Table 9 lists the end user customer satisfaction survey questions. <10. Navy Echelon II Commanders and Marine Corps Major Command Commander s Customer Satisfaction Incentive Survey> The commander s customer satisfaction incentive survey consists of four topics (warfighter support services, cutover services, technology solutions, and service delivery) corresponding to key mission and/or business objective-related services or capabilities. Each topic is broken down into a number of subtopics. Under each subtopic, the survey asks commanders to indicate whether they agree, disagree, or have no basis to respond to a series of statements about EDS s performance. The survey also asks commanders to rate their overall satisfaction with each topic as extremely satisfied, mostly satisfied, slightly satisfied, not satisfied, or no basis to respond. The last section of each topic contains two open-ended questions soliciting feedback on satisfaction with NMCI services. Table 10 is a condensed version of the commander s customer satisfaction survey that includes each of the subtopics, statements about EDS s performance, the overall topic satisfaction question, and the two open- ended questions. <11. Navy and Marine Corps Network Operations Leaders Customer Satisfaction Incentive Survey> The network operations leaders customer satisfaction incentive survey consists of three topics (mission support and planning, network management, and service delivery) corresponding to key mission and/or business objective-related services or capabilities. Each topic is broken down into a number of subtopics. Under each subtopic, the survey asks the leaders to indicate whether they agree, disagree, or have no basis to respond to a series of statements about EDS s performance. The survey also asks the leaders to rate their overall satisfaction with each topic as extremely satisfied, mostly satisfied, slightly satisfied, not satisfied, or have no basis to respond. The last section of each topic contains two open-ended questions soliciting feedback on satisfaction with NMCI services. Table 11 is an abbreviated version of the network operations leader s surveys that includes each of the subtopics, statements about EDS s performance, the overall topic satisfaction question, and the two open- ended questions. SLA Descriptions and Performance This appendix contains descriptions and performance trends for NMCI s service level agreements. SLAs are measured at site level, enterprisewide, or both the site and enterprisewide. Site level SLA performance is based on the percentage of operational seats that met the SLA, meaning that all performance targets for a given SLA were met for a particular month. Where applicable, the percentage of seats meeting an SLA was analyzed by seat type (i.e., basic, high end, and mission-critical). 
Enterprisewide SLA performance is based on whether the SLA was met for a given month, meaning that all performance targets for a given SLA were met for a particular month. SLA 101-End user problem resolution: This SLA measures the percentage of all resolved NMCI problems against identified performance target values. Figure 14 portrays the contractor s historical site level performance with SLA 101. SLA 102-Network problem resolution: This SLA measures the resolution of problems associated with the contractor provided network devices and connections. Figure 15 portrays the contractor s historical site level performance with SLA 102. SLA 103-End user services: This SLA measures performance with end user services, including E-mail, Web and Portal, File Share, Print, Network Logon, Access to Government Applications, and RAS services. Figure 16 portrays the contractor s historical site level performance with SLA 103. Figure 17 portrays the contractor s historical enterprisewide performance with SLA 103. SLA 104-Help desk: This SLA measures help desk services including, average speed of answer, average speed of response, call abandonment rate, and first call resolution. Figure 18 portrays the contractor s historical enterprisewide performance with SLA 104. SLA 105-Move, add, change (MAC): This SLA measures the time to complete MAC activity, from the receipt of the MAC request from an authorized government submitter to the completion of the MAC activity. MACs include activities such as moving a seat from one location to another and adding seats at a location. Figure 19 portrays the contractor s historical site level performance with SLA 105. SLA 106-Information assurance (IA) services: This SLA measures the contractor s IA services, including security event detection, security event reporting, security event response, and IA configuration management. Figure 20 portrays the contractor s historical enterprisewide performance with SLA 106. SLA 107-NMCI intranet: This SLA measures performance of the NMCI Intranet in areas of availability, latency/packet loss, and quality of service in support of videoteleconferencing and voice-over-IP. Figure 21 portrays the contractor s historical site level performance with SLA 107. SLA 203-E-mail services: This SLA measures the performance of e-mail transfers. Figure 22 portrays the contractor s historical enterprisewide performance with SLA 203. SLA 204-Directory services: This SLA measures the availability and responsiveness of directory services. Directory services include supporting the management and use of file services, security services, messaging, and directory information (e.g., e-mail addresses) for users. Figure 23 portrays the contractor s historical site level performance with SLA 204. Figure 24 portrays the contractor s enterprisewide performance with SLA 204. SLA 206-Web access services: This SLA measures the performance of user access to internal and external Web content. Figure 25 portrays the contractor s historical site level performance with SLA 206. Figure 26 portrays the contractor s historical enterprisewide performance with SLA 206. SLA 211-Unclassified but Sensitive Internet Protocol Router Network (NIPRNET) access: This SLA measures the performance of NIPRNET access, including latency and packet loss. Figure 27 portrays the contractor s historical site level performance with SLA 211. 
Figure 28 portrays the contractor's historical enterprisewide performance with SLA 211. SLA 225-Base area network/local area network (BAN/LAN) communications services: This SLA measures BAN/LAN performance, including availability and latency. Figure 29 portrays the contractor's historical site level performance with SLA 225. SLA 226-Proxy and caching service: This SLA measures the availability of the proxy and caching services. Proxy servers are located between a client and a network server and are intended to improve network performance by fulfilling small requests. Figure 30 portrays the contractor's historical enterprisewide performance with SLA 226. SLA 231-System service-domain name server: This SLA measures the availability and latency of Domain Name Server services. The Domain Name Server translates domain names to IP addresses and vice versa. Figure 31 portrays the contractor's historical site level performance with SLA 231. Figure 32 portrays the contractor's historical enterprisewide performance with SLA 231. SLA 324-Wide area network connectivity: This SLA measures the percent of bandwidth used to provide connection to external networks. Figure 33 portrays the contractor's historical site level performance with SLA 324. SLA 325-BAN/LAN communication services: This SLA measures the percent of bandwidth utilized on shared network segments. Figure 34 portrays the contractor's historical site level performance for SLA 325. SLA 328-Network management service asset management: This SLA measures the time it takes to implement new assets, such as seats and application servers. Figure 35 portrays the contractor's historical site level performance with SLA 328. SLA 329-Operational support services: This SLA measures the effectiveness of NMCI's disaster recovery plan. Figure 36 portrays the contractor's historical enterprisewide performance with SLA 329. SLA 332-Application server connectivity: This SLA measures both the time it takes for the contractor to implement the connectivity between the network backbone and an application server and the percentage of available bandwidth from an application server to the local supporting backbone. Figure 37 portrays the contractor's historical site level performance with SLA 332. SLA 333-NMCI security operational services general: This SLA measures the percentage of successful accreditations on the first attempt, based on compliance with DOD certification and accreditation policies and procedures. Figure 38 portrays the contractor's historical enterprisewide performance with SLA 333. SLA 334-Information assurance operational service PKI: This SLA measures the timeliness of revoking a PKI certificate when required, the ability of an NMCI user to obtain the DOD PKI certificate of another NMCI user, and the time it takes for user registration of DOD PKI within NMCI. Figure 39 portrays the contractor's historical enterprisewide performance with SLA 334. SLA 336-Information assurance planning services: This SLA measures the time it takes to distribute new or revised security products (hardware and software). Figure 40 portrays the contractor's historical enterprisewide performance with SLA 336. Comments from the Department of Defense GAO Contact and Staff Acknowledgments <12. GAO Contact> <13.
Staff Acknowledgments> In addition to the individual named above, Mark Bird, Assistant Director; Scott Borre; Timothy Case; Barbara Collier; Vijay D Souza; Neil Doherty; Jim Fields; Mike Gilmore; Peggy Hegg; Wilfred Holloway; George Kovachick; Frank Maguire; Charles Roney; Sidney Schwartz; Karl Seifert; Glenn Spiegel; Dr. Rona Stillman; Amos Tevelow; and Eric Winter made key contributions to this report.
Why GAO Did This Study The Navy Marine Corps Intranet (NMCI) is a 10-year, $9.3 billion information technology services program. Through a performance-based contract, the Navy is buying network (intranet), application, and other hardware and software services at a fixed price per unit (or "seat") to support about 550 sites. GAO prepared this report under the Comptroller General's authority as part of a continued effort to assist Congress and reviewed (1) whether the program is meeting its strategic goals, (2) the extent to which the contractor is meeting service level agreements, (3) whether customers are satisfied with the program, and (4) what is being done to improve customer satisfaction. To accomplish this, GAO reviewed key program and contract performance management-related plans, measures, and data and interviewed NMCI program and contractor officials, as well as NMCI customers at shipyards and air depots. What GAO Found NMCI has not met its two strategic goals--to provide information superiority and to foster innovation via interoperability and shared services. Navy developed a performance plan in 2000 to measure and report progress towards these goals, but did not implement it because the program was more focused on deploying seats and measuring contractor performance against contractually specified incentives than determining whether the strategic mission outcomes used to justify the program were met. GAO's analysis of available performance data, however, showed that the Navy had met only 3 of 20 performance targets (15 percent) associated with the program's goals and nine related performance categories. By not implementing its performance plan, the Navy has invested, and risks continuing to invest heavily, in a program that is not subject to effective performance management and has yet to produce expected results. GAO's analysis also showed that the contractor's satisfaction of NMCI service level agreements (contractually specified performance expectations) has been mixed. Since September 2004, while a significant percentage of agreements have been met for all types of seats, others have not consistently been met, and still others have generally not been met. Navy measurement of agreement satisfaction shows that performance needed to receive contractual incentive payments for the most recent 5-month period was attained for about 55 to 59 percent of all eligible seats, which represents a significant drop from the previous 9-month period. GAO's analysis and the Navy's measurement of agreement satisfaction illustrate the need for effective performance management, to include examining agreement satisfaction from multiple perspectives to target needed corrective actions and program changes. GAO analysis further showed that NMCI's three customer groups (end users, commanders, and network operators) vary in their satisfaction with the program. More specifically, end user satisfaction surveys indicated that the percent of end users that met the Navy's definition of a satisfied user has remained consistently below the target of 85 percent (latest survey results categorize 74 percent as satisfied). Given that the Navy's definition of the term "satisfied" includes many marginally satisfied and arguably somewhat dissatisfied users, this percentage represents the best case depiction of end user satisfaction. Survey responses from the other two customer groups show that both were not satisfied. GAO interviews with customers at shipyards and air depots also revealed dissatisfaction with NMCI. 
Without satisfied customers, the Navy will be challenged in meeting program goals. To improve customer satisfaction, the Navy identified various initiatives that it described as completed, under way, or planned. However, the initiatives are not being guided by a documented plan(s), thus limiting their potential effectiveness. This means that after investing about 6 years and $3.7 billion, NMCI has yet to meet expectations, and whether it will is still unclear.
<1. Introduction> Since the end of World War II, the U.S. military has maintained a presence in Japan and on Okinawa, first as an occupation force and later as an ally committed to maintaining security in the Asia-Pacific region. The security relationship between the United States and Japan is defined through bilateral agreements and is managed through a joint process. Over half of the U.S. forces in Japan are on Okinawa, a presence that has caused increasing discontent among the people of Okinawa. In September 1995, after three U.S. servicemen raped an Okinawan schoolgirl, Japan and the United States formed the Special Action Committee on Okinawa (SACO) to find ways to limit the impact of the U.S. military presence on Okinawa. The Committee developed 27 recommendations to reduce the impact of U.S. operations. <1.1. The U.S. Military Has Maintained a Presence in Japan Since World War II> Since the end of World War II, the U.S. military has based forces in Japan and Okinawa. The U.S. military occupation of Japan began after World War II and continued until 1952, but the United States administered the Ryukyu Islands, including Okinawa, until 1972. Since the end of World War II, U.S. forces have mounted major operations from Japan when needed. Among the most important of these operations was the initial defense of South Korea in the 1950-53 Korean War, when Eighth U.S. Army units left occupation duties in Japan to help defend South Korea. The United States again used its bases in Japan and on Okinawa to fight the Vietnam War. Finally, elements of the III Marine Expeditionary Force deployed from their bases on Okinawa to the Persian Gulf during Operation Desert Storm in the early 1990s. To demonstrate a commitment to peace and security in the Asia-Pacific region, the United States has about 47,000 servicemembers, about half of all U.S. forces deployed in the Pacific region, stationed in Japan. Of the 47,000 U.S. servicemembers in Japan, over half are based on Okinawa, a subtropical island about 67 miles long and from 2 to 18 miles wide, with coral reefs in many offshore locations. In fiscal year 1997, U.S. forces on Okinawa occupied 58,072 acres of the land in the Okinawa prefecture. <1.2. The U.S.-Japan Security Relationship Is Managed Through Bilateral Agreements and a Joint Process> The security relationship between the United States and Japan is defined through bilateral agreements. The Treaty of Mutual Cooperation and Security, signed in January 1960 by the United States and Japan, specifies that each country recognizes that an attack against either country in the territory of Japan is dangerous to its peace and security and declares that both countries would respond to meet the common danger under their constitutional processes. The treaty also commits the two countries to consult with each other from time to time and grants to U.S. military forces the use of facilities and areas in Japan. Lastly, the treaty specifies that a separate Status of Forces Agreement will govern the use of these facilities and areas as well as the status of U.S. forces in Japan. The Status of Forces Agreement, signed on the same day as the treaty, permits the United States to bring servicemembers and their dependents into Japan. It also contains certain stipulations regarding U.S. forces in Japan, including some exemptions from import duties for items brought into Japan for the personal use of U.S. servicemembers; the right of the U.S. 
military services to operate exchanges, social clubs, newspapers, and theaters; and legal jurisdiction over U.S. servicemembers and their dependents accused of committing a crime in Japan. The agreement also (1) requires the United States to return land to Japan when the land is no longer needed, (2) specifies that the United States will perform maintenance on bases it occupies in Japan, and (3) relieves the United States of the obligation to restore bases in Japan to the condition they were in when they became available to the United States. U.S. Forces-Japan (USFJ) has interpreted this latter provision to mean that the United States is not required to conduct environmental cleanup on bases it closes in Japan. The agreement also required the United States and Japan to establish a Joint Committee as the means for consultation in implementing the agreement. In particular, the Joint Committee is responsible for determining what facilities U.S. forces need in Japan. The U.S.-Japan security relationship is managed through a joint process that includes meetings between the U.S. Secretaries of State and Defense and Japan's Minister of Foreign Affairs and Minister of State for Defense, who make up the Security Consultative Committee. The Committee sets overall bilateral policy regarding the security relationship between the United States and Japan. Japan pays part of the cost of the U.S. forces stationed in its country with annual burden-sharing payments that totaled about $4.9 billion in fiscal year 1997. The annual payments fall into four categories. First, Japan paid about $712 million for leased land on which U.S. bases sit. Second, Japan provided about $1.7 billion in accordance with the Special Measures Agreement, under which Japan pays the costs of (1) local national labor employed by U.S. forces in Japan, (2) public utilities on U.S. bases, and (3) the transfer of U.S. forces' training from U.S. bases to other facilities in Japan when Japan requests such transfers. Third, USFJ estimated that Japan provided about $876 million in indirect costs, such as rents foregone at fair market value and tax concessions. Last, although not covered by any agreements, Japan provided about $1.7 billion from its facilities budget for facilities and new construction, which included new facilities under the Japan Facilities Improvement Program, vicinity improvements, and relocation construction and other costs. Finally, in September 1997, the United States and Japan issued new Guidelines for U.S.-Japan Defense Cooperation that replaced the existing 1978 guidelines. The new guidelines provide for more effective cooperation between U.S. forces and Japan's self-defense forces under normal circumstances, when an armed attack against Japan has occurred, and as a response to situations in areas surrounding Japan that could threaten Japan's security. <1.3. The SACO Process Is a Reaction to Discontent About the U.S. Military Presence on Okinawa> Discontent among the people of Okinawa about the impact of the U.S. presence on their land has been rising for years, particularly as the economic benefits of the U.S. presence have diminished and the people of Okinawa have become relatively more prosperous, according to the Congressional Research Service. Among the chief complaints of the Okinawan people is that their prefecture hosts over half of the U.S. force presence in Japan and that about 75 percent of the total land used by U.S. forces in Japan is on Okinawa.
Figure 1.1 shows the location and approximate size of major U.S. installations in the Okinawa prefecture. Some Okinawans feel the U.S. military presence has hampered economic development. Other Okinawans object to the noise generated by U.S. operations, especially around the Air Force s Kadena Air Base and Marine Corps Air Station (MCAS) Futenma (which are located in the middle of urban areas), and risks to civilians from serious military accidents, including crashes of aircraft. In addition, some have objected to artillery live-fire exercises conducted in the Central Training Area. When the exercises were held, firing took place over prefectural highway 104, and the highway had to be closed to civilian traffic until the exercises concluded. The Okinawa prefectural government has also objected to the destruction of vegetation on nearby mountains in the artillery range s impact area. Lastly, some perceive that crime committed by U.S. personnel and their dependents on Okinawa is a problem. The public outcry in Okinawa following the September 1995 abduction and rape of an Okinawan schoolgirl by three U.S. servicemembers brought to a head long-standing concerns among Okinawans about the impact of the U.S. presence and made it difficult for some members of the Japanese Diet to support the continued U.S. military presence in Japan. According to the Office of the Secretary of Defense, the continued ability of the United States to remain in Japan was at risk due to the outcry over the rape incident, and the United States and Japan had to do something to reduce the impact of the presence on Okinawans. To address Okinawans and Japanese legislators concerns, bilateral negotiations between the United States and Japan began, and the Security Consultative Committee established the Special Action Committee on Okinawa in November 1995. The Committee developed recommendations on ways to limit the impact of the U.S. military presence on Okinawans. On December 2, 1996, the U.S. Secretary of Defense, U.S. Ambassador to Japan, Japanese Minister of Foreign Affairs, and Minister of State and Director-General of the Defense Agency of Japan issued the Committee s final report. According to USFJ, the SACO Final Report is not a binding bilateral agreement, but it does contain a series of recommendations to which the U.S. and Japanese governments have committed themselves. Officials from USFJ and Marine Corps Bases, Japan, told us that the United States approaches the recommendations as if they were agreements by making reasonable efforts to implement the recommendations. However, they also stated that if Japan does not provide adequate replacement facilities or complete action needed to implement some recommendations, the United States will not be obligated to implement those particular recommendations. <1.4. Objectives, Scope, and Methodology> In response to Representative Duncan Hunter s concerns about the impact of implementing SACO s recommendations on U.S. force readiness, we describe (1) the benefit or necessity of retaining U.S. forces in Japan and on Okinawa and (2) SACO s report recommendations and identify the impact of implementation on U.S. operations, training, and costs. The report also identifies two environmental issues that may remain after the SACO recommendations have been implemented. To determine DOD s views on the benefit or necessity of having U.S. 
forces stationed on Okinawa, we interviewed officials and obtained relevant documents, including the Quadrennial Defense Review report, the President s National Security Strategy for a New Century, The Security Strategy for East Asia, the Commander-in-Chief of the Pacific Command s regional strategy, and other documents. Because it was outside the scope of our work, we did not evaluate any alternatives to forward deployment. However, in a June 1997 report, we concluded that DOD had not adequately considered alternatives to forward presence to accomplish its stated security objectives. To determine U.S. and Japanese obligations under the bilateral security relationship, we reviewed the Treaty of Mutual Cooperation and Security between Japan and the United States, the Status of Forces Agreement, the Special Measures Agreement, Joint Statement of the Security Consultative Committee on the review of 1978 guidelines for defense cooperation, the new 1997 Guidelines for U.S.-Japan Defense Cooperation, and other documents. To determine SACO s report recommendations, we reviewed the Final Report of the Special Action Committee on Okinawa, Joint Committee meeting minutes and related documents, briefings, the testimony of the Commander-in-Chief of the U.S. Pacific Command to the Senate Committee on Armed Services on March 18, 1997, and other documents. To determine the impact of the SACO report recommendations on readiness, training, and costs of operations of U.S. forces, we interviewed officials and reviewed memorandums, cables, reports, analyses, and other documents discussing the impact on readiness and training or providing evidence of the impact. To review the feasibility of construction and operation of a sea-based facility, we interviewed officials and reviewed relevant documents, including the Functional Analysis and Concept of Operations report prepared by DOD officials from several organizations, briefing documents, memorandums, and other documents. We also reviewed a number of scholarly papers presented at the Japanese Ministry of Transport s International Workshop on Very Large Floating Structures, held in Hayama, Japan, in November 1996. To identify the environmental issues that could remain after the SACO recommendations are implemented, we reviewed the Status of Forces Agreement and DOD environmental policy and interviewed DOD and Department of State officials. We also interviewed officials at the Office of the Secretary of Defense/International Security Affairs, the Joint Staff, headquarters of the U.S. Marine Corps, headquarters of the U.S. Air Force, Office of Naval Research, Defense Logistics Agency, Military Traffic Management Command, and Department of State in Washington, D.C., and the U.S. Special Operations Command in Tampa, Florida. We also interviewed officials from the U.S. Pacific Command; Marine Forces, Pacific; Pacific Air Forces; Naval Facilities Engineering Command; Army Corps of Engineers; Military Traffic Management Command; and East-West Center in Honolulu, Hawaii. We interviewed officials from U.S. Forces-Japan, the 5th Air Force, U.S. Naval Forces-Japan, U.S. Army-Japan, and the U.S. Embassy-Tokyo in the Tokyo, Japan, area. Lastly, we interviewed officials from Marine Corps Bases, Japan; the 1st Marine Air Wing; the Air Force s 18th Wing; the Army s 1/1 Special Forces Group (Airborne); the Army s 10th Area Support Group; the Navy s Fleet Activities, Okinawa; and the Navy s Task Force 76 on Okinawa. 
To discuss the feasibility of very large floating structures, we interviewed two ocean engineering professors at the University of Hawaii who were instrumental in organizing the 1996 conference in Japan. We also viewed the proposed site for a sea-based facility by helicopter and inspected several U.S. bases affected by the SACO process, including MCAS Futenma; Kadena Air Base; Camp Schwab; and the Northern, Central, Gimbaru, and Kin Blue Beach training areas on Okinawa. We also visited the Ie Jima parachute drop zone on Ie Jima Island. We obtained comments from the Departments of Defense and State on this report and have incorporated their comments where appropriate. We conducted our work from June 1997 to March 1998 in accordance with generally accepted government auditing standards. <2. U.S. Forces on Okinawa Support U.S. National Security Strategy> U.S. forces on Okinawa support U.S. national security and national military strategies to promote peace and maintain stability in the region. These forces can also deter aggression and can deploy throughout the region if needed. According to the Office of the Secretary of Defense, the Pacific Command, and USFJ, relocating these forces outside the region would increase political risk by appearing to decrease commitment to regional security and treaty obligations and undercut deterrence. Furthermore, relocating U.S. forces outside of Japan could adversely affect military operations by increasing transit times to areas where crises are occurring. Finally, the cost of the U.S. presence in Japan is shared by the government of Japan, which also provides bases and other infrastructure used by U.S. forces on Okinawa. <2.1. U.S. Forces on Okinawa Are Part of the Pacific Command s Regional Forward Presence> The Commander-in-Chief of the Pacific Command, who is the geographic combatant commander for the Asia-Pacific region, develops a regional strategy to support the national security strategy and the national military strategy. The Pacific Command s area of responsibility is the largest of that of the five geographic combatant commands: it covers about 105 million square miles (about 52 percent of the earth s surface) and contains 44 countries, including Japan, China, India, and North and South Korea (see fig. 2.1). Pacific Command forces provide a military presence in the Asia-Pacific region, promote international security relationships in the region, and deter aggression and prevent conflict through a crisis response capability, according to the Pacific Command. These forces include over 300,000 servicemembers, of which about 100,000 are in Alaska, Hawaii, Japan, South Korea, and certain other locations overseas. The Quadrennial Defense Review reaffirmed the need for the U.S. forward presence of about 100,000 U.S. troops in the Asia-Pacific region. About 47,000 U.S. servicemembers are stationed in Japan. Of those, about 28,000 are based on Okinawa, including about 17,000 assigned to the Marine Corps III Marine Expeditionary Force and supporting establishment. The III Marine Expeditionary Force, the primary Marine Corps component on Okinawa, consists of the (1) 3rd Marine Division, the ground combat component; (2) 1st Marine Air Wing, the air combat component; (3) 3rd Force Service Support Group, the logistics support component; and (4) command element. The Force, and other deployed U.S. forces, support the security strategy by providing the forces that could be employed if crises arise. 
The III Marine Expeditionary Force can deploy throughout the region, using sealift, airlift, and amphibious shipping, and operate without outside support for up to 60 days. <2.2. U.S. Forces in the Asia-Pacific Region Provide Political Benefits> Under the national strategy, U.S. forward deployment is necessary because it demonstrates a visible political commitment by the United States to peace and stability in the region, according to DOD. The United States has mutual defense treaties with Japan, South Korea, the Philippines, Australia, and Thailand. In addition to demonstrating commitment, the U.S. forward deployment also deters aggression, according to the Pacific Command, because a regional aggressor cannot threaten its neighbors without risking a military confrontation with U.S. forces in place on Okinawa (or elsewhere in the region). To help maintain peace and stability in the region, the Pacific Command strategy features engagement through joint, combined, and multilateral military exercises; military-to-military contacts; and security assistance, among other activities. According to the Pacific Command, the III Marine Expeditionary Force is a key force that is employed to carry out these activities. According to the Office of the Secretary of Defense, Pacific Command, and USFJ, a withdrawal of U.S. forces from the region could be interpreted by countries in the region as a weakening of the U.S. commitment to peace and stability in Asia-Pacific and could undercut the deterrent value of the forward deployment. While U.S. forces may not have to be on Okinawa specifically for the United States to demonstrate such commitments, USFJ officials told us that U.S. forces do need to be located somewhere in the Western Pacific region. <2.3. The U.S. Presence in Okinawa Provides Operational Benefits> If hostilities erupt in the Asia-Pacific region, U.S. forces need to arrive in the crisis area quickly to repel aggression and end the conflict on terms favorable to the United States. U.S. forces could be used in a conflict and could deploy from their bases on Okinawa. The forward deployment on Okinawa significantly shortens transit times, thereby promoting early arrival in potential regional trouble spots such as the Korean peninsula and the Taiwan straits, a significant benefit in the initial stages of a conflict. For example, it takes 2 hours to fly to the Korean peninsula from Okinawa, as compared with about 5 hours from Guam, 11 hours from Hawaii, and 16 hours from the continental United States. Similarly, it takes about 1 1/2 days to make the trip from Okinawa by ship to South Korea, as compared with about 5 days from Guam, 12 days from Hawaii, and 17 days from the continental United States. In addition to its strategic location, Okinawa has a well-established military infrastructure that is provided to the United States rent-free and that supports the III Marine Expeditionary Force (and other U.S. forces). Housing, training, communications, and numerous other facilities are already in place on Okinawa, including those at MCAS Futenma, a strategic airfield for the 1st Marine Air Wing, and Camp Courtney, home of the 3rd Marine Division. Marine Corps logistics operations are based at Camp Kinser, which has about a million square feet of warehouse space for Marine forces use in the Pacific. For example, warehouses hold war reserve supplies on Okinawa that would support U.S. 
operations, including 14,400 tons of ammunition, 5,000 pieces of unit and individual equipment, and 50 million gallons of fuel. Military port facilities capable of handling military sealift ships and amphibious ships are available at the Army s Naha Military Port and the Navy s White Beach. In addition to providing base infrastructure, Japan provides about $368 million per year as part of its burden-sharing to help support the III Marine Expeditionary Force deployment on Okinawa. <3. Some SACO Recommendations Carry Risk That Must Be Overcome to Maintain U.S. Operational Capability> The SACO Final Report calls for the United States to (1) return land at 11 U.S. bases on Okinawa and replace MCAS Futenma with a sea-based facility, (2) change 3 operational procedures, (3) implement 5 noise abatement procedures, and (4) implement 7 Status of Forces Agreement changes. Japan agreed to implement one Status of Force Agreement procedure change. Of all of the SACO report recommendations, replacing MCAS Futenma with a sea-based facility poses the greatest challenge. Most of the other SACO report recommendations can be implemented with few problems. <3.1. The United States Plans to Return Land Used on Okinawa> As called for in the SACO Final Report, the United States plans to return to Japan about 12,000 acres, or 21 percent of the total acreage, used by U.S. forces on 11 installations. The plan is to relocate personnel and facilities from bases to be closed to new bases or to consolidate them at the remaining bases. Table 3.1 shows the land to be returned, the planned return date, and the plan for replacing capabilities that would be lost through the land return. The most significant land deal involves the planned closure and return of MCAS Futenma. The installation is a critical component of the Marine Corps forward deployment because it is the home base of the 1st Marine Air Wing. The Wing s primary mission is to participate as the air component of the III Marine Expeditionary Force. The wing s Marine Air Group-36 provides tactical fixed and rotary wing aircraft and flies about 70 aircraft, including CH-46 and CH-53 helicopters and KC-130 aerial refueling airplanes. MCAS Futenma s primary mission is to maintain and operate facilities and provide services and materials to support Marine aircraft operations. MCAS Futenma covers 1,188 acres of land and is completely surrounded by the urbanized growth of Ginowan City, as shown in figure 3.1. Officials in the Office of the Secretary of Defense, USFJ, and Marine Corps Bases, Japan, told us that encroachment along the perimeter of MCAS Futenma is a concern. In fact, according to Marine Corps Bases, Japan, in one instance, the owner of land outside MCAS Futenma erected a building at the end of the runway that was tall enough to create a hazard to aircraft using the base. The building was removed. The land at MCAS Futenma is leased from about 2,000 private landowners by the government of Japan. About 40 percent of the base is used for runways, taxiways, and aircraft parking. The remaining portions of the base are used for air operations, personnel support facilities, housing, and administrative activities. MCAS Futenma has a runway and parallel taxiway that are 9,000 feet long as well as an aircraft washrack, maintenance facilities, vehicle maintenance facilities, fuel storage facilities, a hazardous waste storage and transfer facility, a control tower, an armory, and other facilities needed to operate a Marine Corps air station. 
If the Marine Corps presence is to be maintained with air and ground combat units and logistical support collocated on Okinawa, then MCAS Futenma or a suitable replacement is required to maintain the operational capability of the III Marine Expeditionary Force s air combat element. <3.2. MCAS Futenma Is Scheduled to Be Largely Replaced by a Sea-Based Facility> The U.S. and Japanese governments established a working group to examine three options for replacing MCAS Futenma. The options were relocation of the air station onto (1) Kadena Air Base, (2) Camp Schwab, or (3) a sea-based facility to be located in the ocean offshore from Okinawa Island. The SACO Final Report stated that the sea-based facility was judged to be the best option to enhance the safety and quality of life of the Okinawan people and maintain the operational capabilities of U.S. forces. The report also cited as a benefit that a sea-based facility could be removed when no longer needed. Acquisition of the sea-based facility would follow a process that began with the United States establishing operational and quality-of-life requirements and would conclude with Japan s selecting, financing, designing, and building the sea-based facility to meet U.S. requirements. The government of Japan has decided to locate the sea-based facility offshore from Camp Schwab. However, at the time of our review some residents living near the proposed site had opposed having the sea-based facility near their community, but U.S. officials are proceeding on the basis that the facility will be built. The Security Consultative Committee established the Futenma Implementation Group to identify a relocation site and an implementation plan for the transfer from MCAS Futenma to the sea-based facility. On the U.S. side, the Group is chaired by the Deputy Assistant Secretary of Defense for International Security Affairs and has representatives from the Joint Staff; the headquarters of the Marine Corps; the Assistant Secretary of the Navy for Installations and Environment; the Pacific Command; USFJ; the Office of Japanese Affairs, Department of State; and the Political-Military Affairs Section of the U.S. Embassy-Tokyo. The Group was established to oversee the design, construction, testing, and transfer of assets to the sea-based facility. MCAS Futenma will not be closed until the sea-based facility is operational. Only when U.S. operating and support requirements have been met will Marine Air Group-36 and its rotary wing aircraft relocate to the sea-based facility. As part of the closure and return of MCAS Futenma, 12 KC-130 aircraft are scheduled to relocate to MCAS Iwakuni, on the Japanese mainland, after Japan builds new maintenance and other facilities to support the relocation. In addition, Japan is scheduled to build other support facilities at Kadena Air Base to support aircraft maintenance and logistics operations that are to relocate there. Ground elements of the 1st Marine Air Wing not relocated to the sea-based facility would relocate to other bases on Okinawa. <3.2.1. DOD Has Established Requirements for the Sea-Based Facility> The sea-based facility is to be designed by Japan to meet U.S. operational requirements. During regular operations, about 66 helicopters and MV-22 aircraft (when fielded) would be stationed aboard the sea-based facility. The MV-22 can operate in either vertical takeoff and landing mode, like a helicopter, or short takeoff and landing mode, like an airplane. 
The sea-based facility airfield requirements are based on MV-22 operating requirements. According to a Marine Corps study, a runway length of 2,600 feet is sufficient for normal day-to-day operations, training missions, and self-deployment to Korea in the aircraft's vertical takeoff and landing mode under most conditions. The Pacific Command has established a 4,200-foot runway requirement for all MV-22 operations based on aircraft performance and meteorological data. The Marine Corps study indicates that a 4,200-foot runway is sufficient for most training and mission requirements. However, the study also stated that for missions requiring an MV-22 gross weight near the maximum of 59,305 pounds, the aircraft would have to operate in its short takeoff mode and would require a runway of 5,112 feet under certain weather conditions. The United States has established a runway length requirement of about 4,200 feet for the sea-based facility. Arresting gear would be located about 1,200 feet from either end of the runway to permit carrier aircraft to land. In addition, the runway would have 328-foot overruns at each end to provide a safety margin in case a pilot overshoots the optimal landing spot during an approach, as well as a parallel taxiway about 75 feet wide running alongside the runway. Additional aircraft facilities include a drive-through rinse facility for aircraft corrosion control, an air traffic control tower, and aircraft firefighting and rescue facilities. Up to 10,000 pounds of ordnance would be stored in a magazine collocated with an ordnance assembly area aboard the sea-based facility. Also, flight simulators and security and rescue boat operations, among other capabilities, are required aboard the sea-based facility. Aircraft maintenance would be performed aboard the sea-based facility. Marine Air Group-36 requires hangar space for five helicopter squadrons, including space for Marine Corps air logistics; corrosion control; aircraft maintenance; secure storage; administrative functions; ground support equipment; and engine test cells, among other facilities. Logistics operations requirements aboard the sea-based facility include aircraft supply and fuel/oil supply, spill response capability, and parking for up to 800 personally owned and government-owned vehicles. MCAS Futenma can store about 828,000 gallons of aircraft fuel. At the time of our review, the United States had not determined how much fuel storage capacity was needed or how fuel was to be provided to support sea-based facility operations. Food service for about 1,400 on-duty servicemembers per meal would be required on the sea-based facility to provide meals during the day and for crews working nights. The United States planned to locate the headquarters, logistics, and most operational activities aboard the sea-based facility and most quality-of-life activities, including housing, food service, and medical and dental services, ashore at Camp Schwab. U.S. officials estimated that over 2,500 servicemembers currently stationed at MCAS Futenma would transfer to the sea-based facility and Camp Schwab. To accommodate the incoming arrivals from MCAS Futenma, Marine Corps Bases, Japan, plans to relocate about 800 to 1,000 servicemembers currently housed at Camp Schwab to Camp Hansen and absorb the remainder at Camp Schwab. U.S. engineers estimated that about 1,900 people would work on the sea-based facility. Due to a lack of DOD dependent schools in the Camp Schwab area, only unmarried servicemembers will be housed at Camp Schwab. 
Servicemembers accompanied by dependents will be housed where most of them and most of the DOD schools (including the only two high schools) are now located, although not on MCAS Futenma. Marine Corps Bases, Japan, would have to either house all incoming servicemembers on or near Camp Schwab and bus their dependent children to the schools or keep servicemembers who have dependents housed in the southern part of the island and have them commute to work. Marine Corps Bases, Japan, chose the latter. <3.2.2. Contractors Have Developed Three Concepts for a Sea-Based Facility> Japan will design, build, and pay for the sea-based facility and plans to locate it offshore from Camp Schwab. The sea-based facility is to be provided rent-free to USFJ, which would then provide it to the 1st Marine Air Wing. The government of Japan, ocean engineering and other university professors, and other experts have concluded that three types of sea-based facilities are technically feasible: the pontoon type, the pile-supported type, and the semisubmersible type. <3.2.3. Pontoon-Type Sea-Based Facility> A pontoon-type sea-based facility would essentially be a large platform that would float in the water on pontoons (see fig. 3.2). The structure would be located about 3,000 feet from shore in about 100 feet of water. Part of the platform would be below the water line. To keep the sea relatively calm around the platform, a breakwater would be installed to absorb the wave action. The breakwater would be constructed in about 60 feet of water atop a coral ridge. To prevent the structure from floating away, it would be secured to a mooring system anchored to the sea floor. The pontoon-type sea-based facility envisioned would have a runway and control tower on the deck and most maintenance, storage, and personnel support activities (such as food service) below deck. According to documents that we obtained, no floating structure of the size required has ever been built. In addition, Naval Facilities Engineering Command officials told us that construction of a breakwater in about 60 feet of water would be at the edge of technical feasibility. <3.2.4. Pile-Supported Sea-Based Facility> A pile-supported sea-based facility would essentially be a large platform supported by columns, or piles, driven into the sea floor (see fig. 3.3). The structure would be located in about 16 to 82 feet of water, closer to shore than the proposed pontoon-type sea-based facility. According to Navy engineers, about 7,000 piles would be needed to support a structure of the size proposed. The pile-supported sea-based facility envisioned would have one deck. In addition to the runway and control tower, maintenance, storage, and personnel support activities would be in buildings on the deck. Structures similar to the pile-supported sea-based facility have already been built for other purposes. <3.2.5. Semisubmersible-Type Sea-Based Facility> The semisubmersible-type sea-based facility would consist of a platform above the water line supported by a series of floating underwater hulls (see fig. 3.4). The facility would have interconnected modules with a runway and control tower atop the deck and maintenance, storage, and other functions on a lower deck. The semisubmersible sea-based facility relies on technology that does not yet exist, according to documents provided by DOD. For example, documents indicate that semisubmersible sea-based facilities are limited by current technology to about 1,000 feet in length. 
<3.3. Costs, Challenges, and Complications Threaten Capability of the Sea-Based Facility> The United States and/or Japan are likely to encounter high costs, technological challenges, and operational complications in designing, constructing, and operating the sea-based facility. <3.3.1. Costs> The sea-based facility is estimated to cost Japan between $2.4 billion and $4.9 billion to design and build. Operations and support costs are expected to be much higher on the sea-based facility than at MCAS Futenma. Under the Status of Forces Agreement, the United States pays for the maintenance of bases it uses in Japan. Based on a $4-billion sea-based facility design and construction cost, U.S. engineers have initially estimated maintenance costs to be about $8 billion over the 40-year life span of the facility. Thus, annual maintenance would cost about $200 million, compared with about $2.8 million spent at MCAS Futenma. At the time of this report, the United States and Japan were discussing having Japan pay for maintenance on the sea-based facility. If Japan does not pay maintenance costs, then the U.S. cost related to the SACO recommendations could be much higher. In addition to potential increased maintenance costs, the United States may spend money to renovate facilities at MCAS Futenma previously identified by both the U.S. and Japanese governments for replacement by Japan. Because of the planned closure of MCAS Futenma, the government of Japan cancelled about $140 million worth of projects at the air station that were to be funded under Japan's Facilities Improvement Program. The United States believes these facilities are important to Futenma's operations until the sea-based facility is ready. Marine Forces, Japan, has requested $13.6 million in U.S. funds to complete some of those projects. During the 10-year sea-based facility acquisition period, some of the other projects may be needed to continue to operate MCAS Futenma. If the government of Japan does not fund these projects for MCAS Futenma, the United States will have to choose between accepting the added risk of operating from decaying facilities and paying additional renovation costs at a base scheduled for closure. <3.3.2. Technological Challenges> Technological challenges may arise because no sea-based facility of the type and scale envisioned has ever been built to serve as an air base. To address these challenges and develop sea-based facility operational and support requirements, the Naval Facilities Engineering Command convened a working group in August 1997. In its report, the group concluded that none of the technologies for the three sea-based facilities being considered has been demonstrated at the scale envisioned. The report described numerous challenges that would have to be overcome to make a sea-based facility viable. For example, the sea-based facility would have to survive natural events such as typhoons, which strike within 180 nautical miles of Okinawa Island an average of four times per year. During a typhoon, personnel would evacuate the sea-based facility, but the aircraft would remain aboard the facility in hangars to ride out the storm, according to 1st Marine Air Wing officials. U.S. engineers we spoke with indicated that a pile-supported sea-based facility's underside would have to withstand pressure caused by storm-tossed waves slamming beneath the deck, and that the pontoon- and semisubmersible-type sea-based facilities must be designed to avoid instability or sinking. Tsunamis are also a threat. 
In a tsunami, the water level near shore generally drops (sometimes substantially) and then rises to great heights, causing large, destructive waves. U.S. engineers we spoke with indicated that a floating sea-based facility's mooring system would have to permit the floating structure to drop with the water level without hitting bottom and then rise as the waves returned. Also, structural issues pose technological challenges. The sea-based facility would have to be invulnerable to sinking or capsizing and be able to resume normal operations within 24 to 48 hours after an aircraft crash, an accident involving ordnance aboard the facility, or an attack in wartime or by terrorists. An issue involving the pontoon and semisubmersible facilities is the potential for them to become unstable if an interior compartment is flooded. Thus, watertight doors and compartments (similar to those on ships) may be required. Corrosion control is a major concern because the facility would always be in salt water. Therefore, the part of the structure below the waterline would have to be built to minimize or resist corrosion for the 40-year life span of the facility, or a method of identifying and repairing corrosion (possibly underwater) without disrupting military operations would have to be devised. <3.3.3. Operational Complications> The Marine Corps may experience operational complications because the proposed length of the sea-based facility runway can compromise safety margins when an MV-22 aircraft is taking off at maximum weight under wet runway conditions. Since the MV-22 requires a 5,112-foot runway to take off at its maximum weight of 59,305 pounds and maintain maximum safety margins on a wet runway, the proposed 4,200-foot runway for the sea-based facility is too short. While the MV-22 can take off from a 4,200-foot runway at its maximum weight, the safety margin is reduced in the event of an engine failure or other emergency on a wet runway. This risks the loss of the aircraft because, on a wet runway, the stopping distance for an aborted takeoff is longer than the planned runway provides. According to the Pacific Command, conditions that require more than 4,200 feet for takeoff would not preclude effective MV-22 contingency missions. A commander would need to decide, based on the criticality of the mission, either to accept the increased risk of aircraft loss or to reduce the aircraft's load. The Pacific Command considers the risk acceptable and has accepted the reduced size of the sea-based facility. Alternatively, with a reduced load, MV-22s could take off from the sea-based facility without a full fuel load, use Kadena Air Base to finish fueling to capacity, and take off from its longer runway to continue the mission. However, this requires Kadena Air Base to absorb increased air traffic and risks later arrival in an area of operations. Ultimately, the added risk, time, and coordination are problems that would not occur at MCAS Futenma because its 9,000-foot runway is long enough for all MV-22 missions. Also, if Kadena Air Base is not available for MV-22 operations, the Marines would have no alternative U.S. military runway on Okinawa of sufficient length to support MV-22 missions at maximum weight while maintaining maximum safety margins in certain weather conditions. Moreover, the loss of MCAS Futenma's runway equates to the loss of an emergency landing strip for fixed-wing aircraft in the area. However, safety margins may not be compromised even if Kadena Air Base is shut down (for weather or other reasons), MCAS Futenma is closed, and the sea-based facility's runway as currently designed is too short for certain aircraft, because Naha International Airport would be available as an emergency landing strip for U.S. military aircraft. 
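The runway discussion above turns on a handful of figures: 2,600 feet for routine MV-22 operations, 4,200 feet for most missions (and for the proposed facility), 5,112 feet for a maximum-weight takeoff on a wet runway with full safety margins, and the 9,000-foot runway at MCAS Futenma. The short Python sketch below simply restates those figures and computes the difference between available and required length in each case; the labels and the notion of a "margin" are illustrative conveniences for this report's numbers, not an official calculation.

```python
# Illustrative only: compares runway lengths quoted in this chapter against
# the MV-22 requirements cited from the Marine Corps study. The "margin" is
# simply available length minus required length.

RUNWAYS_FT = {
    "Sea-based facility (proposed)": 4_200,
    "MCAS Futenma (existing)": 9_000,
}

MV22_REQUIRED_FT = {
    "Normal day-to-day operations (vertical takeoff and landing)": 2_600,
    "Most training and mission requirements": 4_200,
    "Maximum gross weight (59,305 lb), wet runway, full safety margins": 5_112,
}

for runway, available in RUNWAYS_FT.items():
    print(runway)
    for scenario, required in MV22_REQUIRED_FT.items():
        margin = available - required
        status = "sufficient" if margin >= 0 else "short"
        print(f"  {scenario}: needs {required:,} ft -> {status} by {abs(margin):,} ft")
```

Run as written, the sketch shows the proposed 4,200-foot runway falling 912 feet short of the maximum-weight, wet-runway case, the one case that MCAS Futenma's 9,000-foot runway covers comfortably.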
<3.3.4. U.S. Project Oversight Is Currently Limited> USFJ and Naval Facilities Engineering Command officials told us that the United States must oversee the design, engineering, and construction of the sea-based facility to ensure that it meets U.S. requirements, is operationally adequate, and is affordable to operate and maintain. However, current staff and funding resources are dedicated to managing other programs associated with the U.S. presence in Japan. Therefore, USFJ has requested establishment of a Project Management Office to oversee and coordinate SACO implementation, while the Naval Facilities Engineering Command has asked for funding for a special project office to oversee the design and construction of the sea-based facility. In addition to the high cost, technological challenges, and operational complications that stem from the planned sea-based facility and limited U.S. oversight of the project, Japan's sea-based facility acquisition strategy compounds the risk. At the time of our review, Japan did not have a risk-reduction phase planned to demonstrate that the design of the sea-based facility meets U.S. operating and affordability requirements. A risk-reduction phase could include risk assessments, life-cycle cost analyses, and design tradeoffs. DOD's policy is to include a risk-reduction phase in its acquisition of major systems. U.S. officials believe it will take up to 10 years to design, build, and relocate to the sea-based facility, as compared with the 5 to 7 years estimated in the SACO Final Report. On the other hand, these officials also believe that adding time to the project is a price worth paying to include a risk-reduction phase. Given the scope, technical challenges, and unique nature of the sea-based facility, including a risk-reduction phase would permit the U.S. and Japanese governments to establish that the proposed sea-based facility will be affordable and operationally suitable. The inclusion of a risk-reduction phase in the sea-based facility's acquisition schedule is currently being discussed between the U.S. and Japanese governments. <3.4. Problems Associated With Remaining 10 Land Return Recommendations Are Minimal, and Some Benefits Are Likely> U.S. forces on Okinawa will face minimal risks to operations from the remaining 10 land return issues. The services can maintain training opportunities and deployment plans and schedules, because land to be returned is no longer needed or will be returned only after Japan provides adequate replacement facilities on existing bases or adds land by extending other base boundaries. First, while the Northern Training Area is still used extensively for combat skills training, about 9,900 acres can be returned to Japan because that land is no longer needed by the United States. The Marine Corps will retain about 9,400 acres of the Northern Training Area and expects to be able to continue all needed training on the remaining acreage. The return of the 9,900 acres is contingent on Japan's relocating helicopter landing zones within what will remain of the Northern Training Area. 
In addition, the adjacent Aha training area can be returned without risk once Japan provides new shoreline access to the Northern Training Area to replace what would be lost by the closure and return of the Aha training area. Likewise, return of the Gimbaru training area presents low risk because the helicopter landing zone is to be relocated to the nearby Kin Blue Beach training area and the vehicle washrack and firefighting training tower will be relocated to Camp Hansen. The Yomitan auxiliary airfield can be returned because parachute drop training conducted there has already been transferred to the Ie Jima auxiliary airfield on Ie Jima Island, just off the northwest coast of Okinawa Island. Lastly, the Sobe communication station can be returned because it will be relocated to the remaining Northern Training Area, and Naha Port can be returned when it is replaced by a suitable facility elsewhere on Okinawa. While risks from the return of land (other than that related to MCAS Futenma) are minimal, the United States expects some benefits from the consolidation of housing on the remaining portion of Camp Zukeran. First, the SACO Final Report calls on Japan to build a new naval hospital on Camp Zukeran to replace the existing hospital on that part of Camp Kuwae scheduled for return. Marine Corps Bases, Japan, estimated the construction cost to be about $300 million, which Japan is scheduled to pay. In addition, Japan is to provide 2,041 new or reconstructed housing units at Camp Zukeran as part of the SACO process and another 1,473 reconstructed housing units at Kadena Air Base, which is not part of SACO's recommendations. Air Force 18th Wing civil engineering officials estimated the total housing construction cost at about $2 billion. The 18th Wing has requested establishment of a special project office to help with the design of the housing units and to ensure that the units meet U.S. health and safety code standards. The current estimated cost to the United States to implement the recommendations related to the return of land is about $193.5 million over about 10 years. This includes (1) $80 million to furnish the new hospital; (2) $71 million for the Futenma Implementation Group; (3) $8.2 million to furnish 2,041 housing units; (4) $8.1 million for USFJ to oversee and coordinate SACO implementation; (5) $8 million for the Naval Facilities Engineering Command project office to oversee the sea-based facility's engineering and construction; (6) $4.4 million for a special project office for oversight of the housing project and master plan; and (7) $13.6 million for MCAS Futenma projects that would have been paid for by Japan had it not cancelled funding for the base. DOD officials told us that the U.S. and Japanese governments were negotiating an arrangement whereby Japan might assume those portions of the $71 million in Futenma Implementation Group costs that it can pay while still complying with its domestic laws. This arrangement could reduce U.S. costs below the current estimate of $193.5 million. Also, some initial costs may be offset in later years because the 18th Wing expects maintenance costs will be lower at the new hospital and housing. However, U.S. costs could be significantly higher than the $193.5 million estimate because the United States and Japan have not agreed on which country would be responsible for the sea-based facility's maintenance. 
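The cost figures cited in this chapter lend themselves to a quick arithmetic check. The Python sketch below simply restates the report's numbers: the seven itemized land-return costs sum to roughly the $193.5 million total (the small gap is presumably rounding in the individual estimates), and the $8 billion, 40-year maintenance estimate for the sea-based facility works out to the roughly $200 million per year quoted earlier. The variable names are illustrative conveniences, not terms from the report.

```python
# Illustrative check of cost figures quoted in this chapter (all amounts are
# from the report; any small discrepancy with the stated totals reflects
# rounding of the individual estimates).

land_return_costs_millions = {
    "Furnish the new hospital": 80.0,
    "Futenma Implementation Group": 71.0,
    "Furnish 2,041 housing units": 8.2,
    "USFJ oversight and coordination of SACO implementation": 8.1,
    "Naval Facilities Engineering Command project office": 8.0,
    "Special project office for housing oversight and master plan": 4.4,
    "MCAS Futenma projects cancelled by Japan": 13.6,
}

itemized_total = sum(land_return_costs_millions.values())
print(f"Itemized land-return costs: ${itemized_total:.1f} million "
      f"(report states about $193.5 million)")

# Sea-based facility maintenance, as initially estimated by U.S. engineers.
maintenance_total_millions = 8_000   # about $8 billion over the facility's life
life_span_years = 40
annual_maintenance = maintenance_total_millions / life_span_years
futenma_annual_maintenance = 2.8     # about $2.8 million per year at MCAS Futenma
print(f"Sea-based facility maintenance: about ${annual_maintenance:.0f} million per year, "
      f"versus about ${futenma_annual_maintenance} million at MCAS Futenma")
```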
<3.5. Some Problems and Risks in Implementing One of the Three Operational Changes> The United States has already implemented all three changes in training and operational procedures called for in the SACO Final Report (see table 3.2). The 3rd Marine Division's artillery live-fire exercises have been relocated from the Central Training Area on Okinawa to the Kita-Fuji, Higashi-Fuji, Ojojihara, Yausubetsu, and Hijudai training ranges on the Japanese mainland. Prior to the SACO Final Report, the 3rd Marine Division was already conducting 60 to 80 days of artillery live-fire exercises at the two Fuji ranges. Under the SACO relocation, another 35 days of training will be split among the five ranges. Japan has agreed to pay transportation costs to the artillery ranges and wants to use Japanese commercial airliners for this purpose. The III Marine Expeditionary Force believes the training at the five ranges is comparable to that available on Okinawa and other ranges in the United States. At the time of our review, the Marine Corps had successfully completed one relocated artillery live-fire exercise each at the Kita-Fuji and Yausubetsu ranges. The relocation has had virtually no impact on deployment plans and schedules, according to III Marine Expeditionary Force officials. In addition to the artillery training relocation, the United States has transferred parachute jump training conducted by the Army's 1st Battalion, 1st Special Forces Group (Airborne), from the Yomitan auxiliary airfield (which was closed) to the auxiliary airfield on Ie Jima Island, just off the northwest coast of Okinawa. However, special forces soldiers are at increased risk of failing to maintain airborne qualifications because parachute operations training has proven more difficult to complete on Ie Jima Island. About 73 percent of the training jumps scheduled between July 1996 and September 1997 on Ie Jima Island were canceled due to adverse weather at the drop zone; adverse weather at sea, preventing required safety boats from standing by in the event a parachutist landed in the water; and equipment problems that prevented the safety boats from departing their berths. The relocation has not affected operational deployments and schedules, although training deployments have been disrupted. Lastly, the Marine Corps has already ended conditioning hikes for troops on public roads off base and transferred those hikes to roads within U.S. bases. USFJ and Marine Corps Bases, Japan, indicated that this has not cost the United States any money and has had no impact on operational capability, deployment plans and schedules, or training. As requested, we also reviewed the impact of the SACO Final Report recommendations on bomber operations in the Pacific, although bomber operations were not specifically addressed by the SACO report. According to the headquarters of the Air Force, Pacific Air Forces, and 18th Wing, the SACO Final Report recommendations will have no impact on bomber operations in the Pacific. <3.6. Risks Are Minimal From Five Noise Reduction Initiatives> The United States has implemented two noise reduction initiatives at Kadena Air Base and MCAS Futenma called for in the SACO Final Report. Three more noise reduction initiatives are to be implemented after Japan constructs new facilities. Table 3.3 shows the status of the five noise reduction initiatives and U.S. plans for maintaining training and operational capability after their implementation. 
The United States will encounter few problems from the noise abatement procedures, according to USFJ; Marine Corps Bases, Japan; and the 18th Wing. Commanders at MCAS Futenma and Kadena Air Base retain the right to order nighttime flying operations to maintain aircrew proficiency and meet all training, mission, and safety requirements. In fact, the noise abatement countermeasures have been in effect since March 1996, and commanders at both installations indicated that the procedures have not affected operational capability, deployment plans and schedules, or training. <3.7. Risks Are Minimal From Eight Status of Forces Agreement Changes> The United States has implemented seven of the eight changes to Status of Forces Agreement procedures called for in the SACO Final Report. Table 3.4 shows the new Status of Forces Agreement procedures. According to USFJ officials, with the exception of affixing number plates to official vehicles, the changes in Status of Forces Agreement procedures cost the United States nothing and had no impact on deployment plans, schedules, and training. The number plates cost about $30,000 according to USFJ officials. <3.8. Recommendations> We recommend that the Secretary of Defense decide on the means to monitor the design, engineering, and construction of the sea-based facility; work with Japan to include a risk-reduction phase in the acquisition schedule to establish that the designed sea-based facility will be affordable and operationally suitable; take steps to ensure that all U.S. concerns, especially the costs of operations and maintenance on the sea-based facility and operational concerns, have been satisfactorily addressed before Japan begins to build the sea-based facility; and request the Japanese government to allocate funds for those projects at Futenma that were cancelled by Japan due to the planned closure of Futenma and are deemed essential to continued operations of the station and the 1st Marine Air Wing until completion of the replacement facility. <3.9. Agency Comments> In written comments on a draft of this report, DOD concurred with GAO s recommendations and noted that the report effectively outlines the major operational and technical issues involved in realigning, consolidating, and reducing U.S. force presence on Okinawa, as set forth in the SACO process. DOD also noted that the role of Congress will be critical in maintaining the strategic relationship with Japan and therefore the GAO report was timely and welcome. DOD provided technical comments, which we have incorporated in our report where appropriate. The DOD response is printed in its entirety in appendix II. We also provided a copy of our draft report to the Department of State. In oral comments, the Department of State concurred with our report and offered one technical change which we incorporated into the report. <4. Two Environmental Issues Could Arise From Implementing the SACO Recommendations> It may take a decade or more to fully achieve all of the SACO s recommendations, but two environmental issues may arise and remain during and after implementation. The first concerns the potential for environmental contamination on U.S. bases scheduled for closure. The second concerns the potential adverse impact on the environment from construction and operation of the sea-based facility. <4.1. Environmental Cleanup Issues Could Affect Land Return> If environmental contamination is found on bases to be closed under the SACO process, cleanup could be expensive. 
As we noted in chapter 1, the Status of Forces Agreement does not require the United States to return bases in Japan to the condition they were in at the time they were provided to U.S. forces or to compensate Japan for not having done so. Thus, USFJ and Marine Corps Bases, Japan, officials believe that the United States is not obligated to do environmental cleanup at bases to be closed. Nevertheless, a 1995 DOD policy calls for the removal of known imminent and substantial dangers to health and safety due to environmental contamination caused by DOD operations on installations or facilities designated for return to the host nation overseas. Furthermore, if the bases are closed and the land returned to Japan and environmental contamination is subsequently found, redevelopment and reuse efforts planned for some of these facilities could be hampered. In fact, Marine Corps Bases, Japan, and other Okinawa-based U.S. forces were informed by a letter dated August 25, 1997, from the government of Japan s Naha Defense Facilities Administration Bureau that the toxic substances mercury and polychlorinated biphenyls were found on the Onna communications site. The United States had closed the base and returned the land to Japan in November 1995 (a land return unrelated to the SACO process). The letter indicated that the presence of these substances has prevented the land from being returned to its owners and thus being available for reuse. The letter concludes by requesting that the United States conduct a survey, identify any contamination that may exist, and clean up bases scheduled for closure in the future. If the United States agrees to this request, land return under the SACO process could be affected. At the time of our review, the United States had not responded to the letter. If such a survey, sometimes called an environmental baseline survey, is conducted and contamination is found, cleanup could prove expensive. For example, environmental remediation at MCAS Tustin in California is expected to cost more than $53 million when completed. If a survey is conducted and contamination is found, a decision would be needed as to whether the United States or Japan would pay the cost. <4.2. Construction and Operation of the Sea-Based Facility Could Harm the Environment> DOD s position is that the sea-based facility should be constructed and operated in a manner that preserves and protects the natural resources of Okinawa, including the ocean environment and coral reefs that partially surround the island. Further, the United States and Japan, along with a substantial number of other countries, support an international coral reef initiative aimed at conservation and management of coral reefs and related ecosystems. Coral reefs are in the area in which the sea-based facility is tentatively to be located. However, two sea-based facility options currently under consideration have the potential to harm the coral reefs. The pontoon-type facility requires the installation of a large breakwater and several mooring stations onto the seafloor. The pile-supported facility requires several thousand support pilings that would need to be driven into the coral reef or seafloor and reinforced to withstand storm conditions. Both of these options require at least one, and possibly two, causeways connecting them to shore facilities. Numerous scientific studies show that large construction projects can cause damage to coral reefs and the nearby coastal areas. 
The government of Japan is evaluating the condition of the coral reef. The environment could also be contaminated through routine operations aboard the sea-based facility. The accidental runoff of cleaning fluids used to wash aircraft or unintentional fuel system leaks could contaminate the nearby ocean environment.
Why GAO Did This Study Pursuant to a congressional request, GAO reviewed the contents of the Final Report of the Special Action Committee on Okinawa (SACO), focusing on: (1) the impact on readiness of U.S. forces based on Okinawa after implementation of the report recommendations; (2) the U.S. cost of implementing the recommendations; and (3) the benefit or necessity of having U.S. Marine Corps forces on Okinawa. What GAO Found GAO noted that: (1) the Department of Defense (DOD) believes that Marine Corps forces along with other U.S. forces on Okinawa satisfy the U.S. national security strategy by visibly demonstrating the U.S. commitment to security in the region; (2) these forces are thought to deter aggression, provide a crisis response capability should deterrence fail, and avoid the risk that U.S. allies may interpret the withdrawal of forces as a lessening of U.S. commitment to peace and stability in the region; (3) Okinawa's proximity to potential regional trouble spots promotes the early arrival of U.S. military forces due to shorter transit times and reduces potential problems that could arise due to late arrival; (4) the cost of this presence is shared by the government of Japan, which provides bases and other infrastructure on Okinawa rent-free and pays part of the annual cost of Okinawa-based Marine Corps forces; (5) the SACO Final Report calls on the United States to: (a) return land that includes one base and portions of camps, sites, and training areas on Okinawa to Japan; (b) implement changes to three operational procedures; and (c) implement changes to five noise abatement procedures; (6) the United States has established requirements that Japan must meet as it designs, builds, and pays for the sea-based facility before the Marine Corps Air Station Futenma is closed and operations are moved to the sea-based facility; (7) such a facility has never been built and operated; (8) annual operations and maintenance costs for the sea-based facility were initially estimated at $200 million; (9) the United States requested that the Japanese government pay the cost to maintain the new sea-based facility, but as of the date of this report, it had not agreed to do so; (10) excluding the cost to operate the sea-based facility, the current estimated cost to the United States to implement the SACO land return recommendations is about $193.5 million over about 10 years; (11) the United States and Japan are negotiating an arrangement under which Japan would assume some SACO-related responsibilities consistent with their domestic laws; (12) this arrangement could result in reduced U.S. costs; (13) while final implementation of the SACO recommendations is intended to reduce the burden of U.S. forces' presence in Okinawa, two environmental issues could arise; (14) the first issue concerns the potential for environmental contamination being found on military facilities returned to Japan and responsibility for cleanup of those facilities; and (15) the second issue concerns the potential adverse effects that the construction and operation of the sea-based facility could have on the environment.
<1. Background> Air cargo ranges in size from 1 pound to several tons, and in type from perishables to machinery, and can include items such as electronic equipment, automobile parts, clothing, medical supplies, fresh produce, and human remains. Cargo can be shipped in various forms, including large containers known as unit loading devices (ULD) that allow many packages to be consolidated into one container that can be loaded onto an aircraft, wooden crates, consolidated pallets, or individually wrapped/boxed pieces, known as loose or bulk cargo. Participants in the air cargo shipping process include shippers, such as individuals and manufacturers; freight forwarders; air cargo handling agents, who process and load cargo onto aircraft on behalf of air carriers; and air carriers that load and transport cargo. A shipper may take or send its packages to a freight forwarder who in turn consolidates cargo from many shippers onto a master air waybill a manifest of the consolidated shipment and delivers it to air carriers for transport. A shipper may also send freight by directly packaging and delivering it to an air carrier s ticket counter or sorting center, where the air carrier or a cargo handling agent will sort and load cargo onto the aircraft. According to TSA, the mission of its air cargo security program is to secure the air cargo transportation system while not unduly impeding the flow of commerce. TSA s responsibilities for securing air cargo include, among other things, establishing security requirements governing domestic and foreign passenger air carriers that transport cargo and domestic freight forwarders. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and freight forwarders through compliance inspections, and, in coordination with DHS s Directorate for Science and Technology (S&T Directorate), for conducting research and development of air cargo security technologies. Of the nearly $4.8 billion appropriated to TSA for aviation security in fiscal year 2009, approximately $123 million is directed for air cargo security activities. TSA was further directed to use $18 million of this amount to expand technology pilots and for auditing participants in the CCSP. Air carriers and freight forwarders are responsible for implementing TSA security requirements. To do this, they utilize TSA-approved security programs that describe the security policies, procedures, and systems they will implement and maintain to comply with TSA security requirements. These requirements include measures related to the acceptance, handling, and screening of cargo; training of employees in security and cargo screening procedures; testing for employee proficiency in cargo screening; and access to cargo areas and aircraft. Air carriers and freight forwarders must also abide by security requirements imposed by TSA through security directives and amendments to security programs. The 9/11 Commission Act defines screening for purposes of the air cargo screening mandate as a physical examination or nonintrusive methods of assessing whether cargo poses a threat to transportation security. The act specifies that screening methods include X-ray systems, explosives detection systems (EDS), explosives trace detection (ETD), explosives detection canine teams certified by TSA, physical search together with manifest verification, and any additional methods approved by the TSA Administrator. 
For example, TSA also recognizes the use of decompression chambers as an approved screening method. However, solely performing a review of information about the contents of cargo or verifying the identity of the cargo s shipper does not constitute screening for purposes of satisfying the mandate. <2. TSA Has Made Progress in Meeting the Screening Mandate as It Applies to Domestic Cargo; However, TSA Cannot Yet Verify Whether the Mandated Level Is Being Met> <2.1. TSA Has Made Progress in Meeting the 50 Percent and 100 Percent Mandated Screening Levels as They Apply to Domestic Cargo> TSA has taken several key steps to meet the 9/11 Commission Act air cargo screening mandate as it applies to domestic cargo. TSA s approach involves multiple air cargo industry stakeholders sharing screening responsibilities across the air cargo supply chain. TSA, air carriers, freight forwarders, shippers, and other entities each play an important role in the screening of cargo, although TSA has determined that the ultimate responsibility for ensuring that screening takes place at mandated levels lies with the air carriers. According to TSA officials, this decentralized approach is expected to minimize carrier delays, cargo backlogs, and potential increases in cargo transit time, which would likely result if screening were conducted primarily by air carriers at the airport. Moreover, because much cargo is currently delivered to air carriers in a consolidated form, the requirement to screen individual pieces of cargo will necessitate screening earlier in the air cargo supply chain before cargo is consolidated. The specific steps that TSA has taken to address the air cargo screening mandate are discussed below. TSA revised air carrier security programs. Effective October 1, 2008, several months prior to the first mandated deadline, TSA established a new requirement for 100 percent screening of nonexempt cargo transported on narrow-body passenger aircraft. Narrow-body flights transport about 26 percent of all cargo on domestic passenger flights. According to TSA officials, air carriers reported that they are currently meeting this requirement. Effective February 1, 2009, TSA also required air carriers to ensure the screening of 50 percent of all nonexempt air cargo transported on all passenger aircraft. Although screening may be conducted by various entities, each air carrier must ensure that the screening requirements are fulfilled. Furthermore, effective February 2009, TSA revised or eliminated most of its screening exemptions for domestic cargo. As a result, most domestic cargo is now subject to TSA screening requirements. TSA created the Certified Cargo Screening Program (CCSP). TSA also created a program, known as the CCSP, to allow screening to take place earlier in the shipping process and at various points in the air cargo supply chain. In this program, air cargo industry stakeholders such as freight forwarders and shippers voluntarily apply to become Certified Cargo Screening Facilities (CCSF). This program allows cargo to be screened before it is consolidated and transported to the airport, which helps address concerns about the time-intensive process of breaking down consolidated cargo at airports for screening purposes. TSA plans to inspect the CCSFs in order to ensure they are screening cargo as required. TSA initiated the CCSP at 18 major airports that, according to TSA officials, account for 65 percent of domestic cargo on passenger aircraft. 
TSA expects to expand the CCSP nationwide at a date yet to be determined. CCSFs in the program were required to begin screening cargo as of February 1, 2009. While participation in the CCSP is voluntary, once an entity is certified by TSA to participate it must adhere to TSA screening and security requirements and be subject to annual inspections by TSIs. To become certified and to maintain certification, TSA requires each CCSF to demonstrate compliance with increased security standards to include facility, personnel, procedural, perimeter, and information technology security. As part of the program, and using TSA-approved screening methods, freight forwarders must screen 50 percent of cargo being delivered to wide-body passenger aircraft and 100 percent of cargo being delivered to narrow-body passenger aircraft, while shippers must screen 100 percent of all cargo being delivered to any passenger aircraft. Each CCSF must deliver the screened cargo to air carriers while maintaining a secure chain of custody to prevent tampering with the cargo after it is screened. TSA conducted outreach efforts to air cargo industry stakeholders. In January 2008, TSA initiated its outreach phase of the CCSP in three cities and subsequently expanded its outreach to freight forwarders and other air cargo industry stakeholders in the 18 major airports. TSA established a team of nine TSA field staff to conduct outreach, educate potential CCSP applicants on the program requirements, and validate CCSFs. According to TSA officials, in February 2009, the agency also began using its cargo TSIs in the field to conduct outreach. In our preliminary discussions with several freight forwarders and shippers, industry stakeholders reported that TSA staff have been responsive and helpful in answering questions about the program and providing information on CCSP requirements. TSA established the Air Cargo Screening Technology Pilot and is conducting additional technology pilots. To operationally test ETD and X-ray technology among CCSFs, TSA created the Air Cargo Screening Technology Pilot in January 2008, and selected some of the largest freight forwarders to use the technologies and report on their experiences. TSA s objectives for the pilot are to determine CCSFs ability to screen high volumes of cargo, test chain of custody procedures, and measure the effectiveness of screening technology on various commodity classes. TSA will provide each CCSF participating in the pilot with up to $375,000 for purchasing technology. As of February 26, 2009, 12 freight forwarders in 48 locations are participating in the pilot. The screening they perform as part of the operational testing also counts toward meeting the air cargo screening mandate. TSA expanded its explosives detection canine program. To assist air carriers in screening consolidated pallets and unit loading devices, TSA is taking steps to expand the use of TSA-certified explosives detection canine teams. TSA has 37 canine teams dedicated to air cargo screening operating in 20 major airports and is in the process of adding 48 additional dedicated canine teams. TSA is working with the air carriers to identify their peak cargo delivery times, during which canines would be most helpful for screening. In addition, we reported in October 2005 and April 2007 that TSA, working with DHS s S&T Directorate, was developing and pilot testing a number of technologies to screen and secure air cargo with minimal effect on the flow of commerce. 
These pilot programs seek to enhance the security of cargo by improving the effectiveness of screening for explosives through increased detection rates and reduced false alarm rates. A description of several of these pilot programs and their status is included in table 1. <2.2. TSA Cannot Yet Verify that Screening Is Being Conducted Domestically at the Mandated Level, and TSA s Current Approach Could Result in Variable Percentages of Screened Cargo> TSA estimates that it achieved the mandate for screening 50 percent of domestic cargo transported on passenger aircraft by February 2009, based on feedback from air cargo industry stakeholders responsible for conducting screening. However, TSA cannot yet verify that screening is being conducted at the mandated level. The agency is working to establish a system to collect data from screening entities to verify that requisite screening levels for domestic cargo are being met. Effective February 2009, TSA adjusted air carrier reporting requirements and added CCSF reporting requirements to include monthly screening reports on the number of shipments screened at 50 and 100 percent. According to TSA officials, air carriers will provide to TSA the first set of screening data by mid-March 2009. By April 2009, TSA officials expect to have processed and analyzed available screening data, which would allow the agency to determine whether the screening mandate has been met. Thus, while TSA asserts that it has met the mandated February 2009, 50 percent screening deadline, until the agency analyzes required screening data, TSA cannot verify that the mandated screening levels are being achieved. In addition, although TSA believes its current screening approach enables it to meet the statutory screening mandate as it applies to domestic cargo, this approach could result in variable percentages of screened cargo on passenger flights. This variability is most likely for domestic air carriers that have a mixed-size fleet of aircraft because a portion of their 50 percent screening requirement may be accomplished through the more stringent screening requirements for narrow-body aircraft, thus allowing them more flexibility in the amount of cargo to screen on wide-body aircraft. According to TSA, although this variability is possible, it is not a significant concern because of the small amount of cargo transported on narrow-body flights by air carriers with mixed-size fleets. However, the approach could result in variable percentages of screened cargo on passenger flights regardless of the composition of the fleet. As explained earlier, TSA is in the process of developing a data reporting system that may help to assess whether some passenger flights are transporting variable percentages of screened cargo. This issue regarding TSA s current air cargo security approach will be further explored during our ongoing review. Lastly, TSA officials reported that cargo that has already been transported on one passenger flight may be subsequently transferred to another passenger flight without undergoing additional screening. According to TSA officials, the agency has determined that this is an approved screening method because an actual flight mimics one of TSA s approved screening methods. For example, cargo exempt from TSA screening requirements that is transported on an inbound flight can be transferred to a domestic aircraft without additional screening, because it is considered to have been screened in accordance with TSA screening requirements. 
According to TSA, this scenario occurs infrequently, but the agency has not been able to provide us with data that would allow us to assess how frequently it occurs. TSA reported that it is exploring ways to enhance the security of cargo transferred to another flight, including using canine teams to screen such cargo. This issue regarding TSA's current air cargo security approach will be further explored during our ongoing review. <3. TSA Faces Participation, Technology, Oversight, and Inbound Cargo Challenges in Meeting the Screening Mandate> <3.1. It Is Unclear Whether TSA Will Be Able to Attract the Voluntary Participants Needed to Meet the 100 Percent Screening Mandate> Although industry participation in the CCSP is vital to TSA's approach of spreading screening responsibilities across the supply chain, it is unclear whether the number and types of facilities needed to meet TSA's screening estimates will join the CCSP. Although TSA is relying on the voluntary participation of freight forwarders and shippers to meet the screening goals of the CCSP, officials did not have precise estimates of the number of participants that would be required to join the program to achieve 100 percent screening by August 2010. As of February 26, 2009, TSA had certified 172 freight forwarder CCSFs, 14 shipper CCSFs, and 17 independent cargo screening facilities (ICSF). TSA estimates that freight forwarders and shippers will complete the majority of air cargo screening by the August 2010 deadline, with shippers experiencing the largest anticipated increase when this mandate goes into effect. According to estimates reported by TSA in November 2008, as shown in figure 1, the screening conducted by freight forwarders was expected to increase from 14 percent to 25 percent of air cargo transported on passenger aircraft from February 2009 to August 2010, while the screening conducted by shippers was expected to increase from 2 percent to 35 percent. For this reason, increasing shipper participation in the CCSP is necessary to meet the 100 percent screening mandate. As highlighted in figure 1, TSA estimated that, as of February 2009, screening of cargo delivered for transport on narrow-body aircraft would account for half of the mandated 50 percent screening level and 25 percent of all cargo transported on passenger aircraft. TSA expected screening conducted on cargo delivered for transport on narrow-body passenger aircraft to remain stable at 25 percent when the mandate to screen 100 percent of cargo transported on passenger aircraft goes into effect. TSA anticipated that its own screening responsibilities would grow by the time the 100 percent mandate goes into effect. Specifically, TSA anticipated that its canine teams and transportation security officers would screen 6 percent of cargo in August 2010, up from 4 percent in February 2009. It is important to note that these estimates, which TSA officials said are subject to change, are dependent on the voluntary participation of freight forwarders, shippers, and other screening entities in the CCSP. If these entities do not volunteer to participate in the CCSP at the levels TSA anticipates, air carriers or TSA may be required to screen more cargo than was projected. 
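The figure 1 estimates quoted above can be tallied to see how much of the mandated screening the named categories were expected to cover. The Python sketch below restates those percentages; treating the categories as non-overlapping follows how figure 1 appears to present them, and the "unattributed remainder" is simply the gap to the mandated level, an inference about screening left to air carriers and other entities rather than a figure reported by TSA.

```python
# Illustrative tally of the figure 1 screening-share estimates quoted above.
# Percentages come from the report; treating the categories as non-overlapping
# is an assumption, and the unattributed remainder (the gap to the mandated
# level) is an inference, not a TSA figure.

estimates = {
    # category: (February 2009 share, August 2010 share), percent of cargo
    "Narrow-body aircraft cargo": (25, 25),
    "Freight forwarders (CCSFs)": (14, 25),
    "Shippers (CCSFs)": (2, 35),
    "TSA canine teams and officers": (4, 6),
}
mandated = {"February 2009": 50, "August 2010": 100}

for idx, period in enumerate(mandated):
    named_total = sum(shares[idx] for shares in estimates.values())
    gap = mandated[period] - named_total
    print(f"{period}: named categories cover {named_total}% of a "
          f"{mandated[period]}% mandate; unattributed remainder {gap}%")
```

The roughly 5 and 9 percentage-point remainders are consistent with the report's observation that air carriers or TSA may have to screen more cargo if voluntary participation falls short of TSA's projections.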
Participation in the CCSP may appeal to a number of freight forwarders and shippers, but industry participants we interviewed expressed concern about potential program costs. In preliminary discussions with freight forwarders, shippers, and industry associations, stakeholders told us that they would prefer to join the CCSP and screen their own cargo in order to limit the number of entities that handle and open their cargo. This is particularly true for certain types of delicate cargo, including fresh produce. Screening cargo in the CCSP also allows freight forwarders and shippers to continue to consolidate their shipments before delivering them to air carriers, which results in reduced shipping rates and less potential loss and damage. However, TSA and industry officials with whom we spoke agreed that the majority of small freight forwarders, which make up approximately 80 percent of the freight forwarder industry, would likely find the costs of joining the CCSP prohibitive, including acquiring expensive technology, hiring additional personnel, conducting additional training, and making facility improvements. TSA has not yet finalized cost estimates for industry participation in air cargo screening, but it is in the process of developing these estimates and plans to report them later this year. As of February 26, 2009, 12 freight forwarders in 48 locations have joined TSA's Air Cargo Screening Technology Pilot and are thus eligible to receive reimbursement for the technology they have purchased. However, pilot participants to date have been limited primarily to large freight forwarders. TSA indicated that it targeted high-volume facilities for the pilot in order to have the greatest effect in helping industry achieve screening requirements. In response to stakeholder concerns about potential program costs, TSA is allowing independent cargo screening facilities to join the CCSP and screen cargo on behalf of freight forwarders or shippers. However, it is unclear how many of these facilities will join. Moreover, according to industry stakeholders, this arrangement could result in freight forwarders being required to deliver loose freight to screening facilities for screening. This could reduce the benefit to freight forwarders of consolidating freight before delivering it to air carriers, a central part of the freight forwarder business model. 
However, TSA is proceeding with operational testing and evaluations to determine which of these technologies is effective at the same time that screening entities are using these technologies to meet air cargo screening requirements. For example, according to TSA, ETD technology, which most air carriers and CCSFs plan to use, has not yet begun the qualification process. However, it is currently being used to screen air cargo as part of the Air Cargo Screening Technology Pilot and by air carriers and other CCSFs. Although TSA's acquisition guidance recommends testing the operational effectiveness and suitability of technologies prior to deploying them, and TSA agrees that simultaneous testing and deployment of technology is not ideal, TSA officials reported that this was necessary to meet the screening deadlines mandated by the 9/11 Commission Act. While we recognize TSA's time constraints, the agency cannot be assured that the technologies it is currently using to screen cargo are effective in the cargo environment, because they are still being tested and evaluated. We will continue to assess TSA's technology issues as part of our ongoing review of TSA's efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. Although TSA is in the process of assessing screening technologies, according to TSA officials, there is no single technology capable of efficiently and effectively screening all types of air cargo for the full range of potential terrorist threats. Moreover, according to industry stakeholders, technology to screen cargo that has already been consolidated and loaded onto a pallet or ULD may be critical to meeting the 100 percent screening mandate. Although TSA has not approved any technologies that are capable of screening consolidated pallets or ULDs containing various commodities, according to TSA, it is currently beginning to assess such technology. TSA officials reported that they do not expect to qualify such technology prior to the August 2010 deadline. Air cargo industry stakeholders we interviewed also expressed some concerns regarding the cost of purchasing and maintaining screening equipment for CCSP participants. Cost is a particular concern for the CCSP participants that do not participate in the Air Cargo Screening Technology Pilot and will receive no funding for technology or other related costs; this includes the majority of CCSFs. Because the technology qualification process could result in modifications to TSA's approved technologies, industry stakeholders expressed concerns about purchasing technology that is not guaranteed to be acceptable for use after August 3, 2010. We will continue to assess this issue as part of our ongoing review of TSA's efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. Beyond screening technology, TSA officials noted that another area of concern in the transportation of air cargo is the chain of custody among the various entities that handle and screen cargo shipments prior to their loading onto an aircraft. Officials stated that the agency has taken steps to analyze the chain of custody under the CCSP, and has issued cargo procedures to all entities involved in the CCSP to ensure that the chain of custody of the cargo is secure. This includes guidance on when and how to secure cargo with tamper-evident technology.
TSA officials noted that they plan to test and evaluate such technology and issue recommendations to the industry, but they have not set any time frames for doing so. Until TSA completes this testing, however, the agency lacks assurance that existing tamper-evident technology is of sufficient quality to deter tampering and that the air cargo supply chain is effectively secured. We will continue to assess this issue as part of our ongoing review of TSA's efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. <3.3. Limited Staffing Resources May Hamper TSA's Ability to Effectively Oversee the Thousands of Additional Entities Involved in Meeting the Air Cargo Screening Mandate> Although the actual number of cargo TSIs increased each fiscal year from 2005 to 2009, TSA still faces challenges overseeing compliance with the CCSP due to the size of its current TSI workforce. To ensure that existing air cargo security requirements are being implemented as required, TSIs perform compliance inspections of regulated entities, such as air carriers and freight forwarders. Under the CCSP, TSIs will also perform compliance inspections of newly regulated entities, such as shippers and manufacturers, that voluntarily become CCSFs. These compliance inspections range from an annual review of the implementation of all air cargo security requirements to a more frequent review of at least one security requirement. According to TSA, the number of cargo TSIs grew from 160 in fiscal year 2005 to about 500 in fiscal year 2009. However, cargo TSI numbers remained below levels authorized by TSA in each fiscal year from 2005 through 2009, which, in part, led to the agency not meeting its cargo inspection goals in fiscal year 2007. As highlighted in our February 2009 report, TSA officials stated that the agency is still actively recruiting to fill vacant positions but could not provide documentation explaining why vacant positions remained unfilled. Additionally, TSA officials have stated that there may not be enough TSIs to conduct compliance inspections of all the potential entities under the CCSP, which TSA officials told us could number in the thousands, once the program is fully implemented by August 2010. TSA officials also indicated plans to request additional cargo TSIs in the future, although the exact number has yet to be determined. According to TSA officials, TSA does not have a human capital or other workforce plan for the TSI program, but the agency plans to conduct a staffing study in fiscal year 2009 to identify the optimal workforce size to address its current and future program needs. Until TSA completes this staffing study, it may not be able to determine whether it has the necessary staffing resources to ensure that entities involved in the CCSP are meeting TSA requirements to screen and secure air cargo. We will continue to assess this issue as part of our ongoing review of TSA's efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. <3.4. TSA Has Taken Some Steps to Meet the Screening Mandate as It Applies to Inbound Cargo but Does Not Expect to Achieve 100 Percent Screening of Inbound Cargo by the August 2010 Deadline> To meet the 9/11 Commission Act screening mandate as it applies to inbound cargo, TSA revised its requirements for foreign and U.S.-based air carrier security programs and began harmonizing security standards with other nations.
The security program revisions generally require carriers to screen 50 percent of nonexempt inbound cargo. TSA officials estimate that this requirement has been met, though the agency is not collecting screening data from air carriers to verify that the mandated screening levels are being achieved. TSA has taken several steps toward harmonization with other nations. For example, TSA is working with foreign governments to improve the level of screening of air cargo, including working bilaterally with the European Commission (EC) and Canada, and quadrilaterally with the EC, Canada, and Australia. As part of these efforts, TSA plans to recommend to the United Nations International Civil Aviation Organization (ICAO) that the next revision of Annex 17 to the Convention on International Civil Aviation (due for release in 2009) include an approach that would allow screening to take place at various points in the air cargo supply chain. TSA also plans to work with the International Air Transport Association (IATA), which is promoting a cargo screening approach to its member airlines. Finally, TSA continues to work with U.S. Customs and Border Protection (CBP) to leverage an existing CBP system to identify and target high-risk air cargo. However, TSA does not expect to achieve 100 percent screening of inbound air cargo by the August 2010 screening deadline. This is due, in part, to TSA's inbound screening exemptions and to challenges TSA faces in harmonizing its air cargo security standards with those of other nations. TSA requirements continue to allow screening exemptions for certain types of inbound air cargo transported on passenger aircraft. TSA could not provide an estimate of what percentage of inbound cargo is exempt from screening. In April 2007, we reported that TSA's screening exemptions on inbound cargo could pose a risk to the air cargo supply chain and recommended that TSA assess whether these exemptions pose an unacceptable vulnerability and, if necessary, address these vulnerabilities. TSA agreed with our recommendation, but has not yet reviewed, revised, or eliminated any screening exemptions for cargo transported on inbound passenger flights, and could not provide a time frame for doing so. Furthermore, similar to the changes to domestic cargo requirements discussed earlier, TSA's revisions to inbound requirements could result in variable percentages of screened cargo on passenger flights to the United States. We will continue to assess this issue as part of our ongoing review of TSA's efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. Achieving harmonization with foreign governments may be challenging, because these efforts are voluntary and some foreign countries do not share the United States' view regarding air cargo security threats and risks. Although TSA acknowledges it has broad authority to set standards for aviation security, including the authority to require that a given percentage of inbound cargo be screened before it departs for the United States, TSA officials caution that if TSA were to impose a strict cargo screening standard on all inbound cargo, many nations likely would be unable to meet such standards in the near term. This raises the prospect of substantially reducing the flow of cargo on passenger aircraft or possibly eliminating it altogether.
According to TSA, the effect of imposing such screening standards in the near future would be, at a minimum, increased costs for international passenger travel and for imported goods, and a possible reduction in passenger traffic and foreign imports. According to TSA officials, this could also undermine TSA's ongoing cooperative efforts to develop commensurate security systems with international partners. TSA agreed that assessing the risk associated with the inbound air cargo transportation system will facilitate its efforts to harmonize security standards with other nations. Accordingly, TSA has identified the primary threats associated with inbound air cargo, but has not yet assessed which areas of inbound air cargo are most vulnerable to attack and which inbound air cargo assets are deemed most critical to protect. Although TSA agreed with our previous recommendation to assess inbound air cargo vulnerabilities and critical assets, it has not yet established a methodology or time frame for how and when these assessments will be completed. We continue to believe that the completion of these assessments is important to the security of inbound air cargo. Finally, the amount of resources TSA devotes to inbound compliance is disproportionate to the resources it devotes to domestic compliance. In April 2007, we reported that TSA inspects air carriers at foreign airports to assess whether they are complying with air cargo security requirements, but does not inspect all air carriers transporting cargo into the United States. Furthermore, in fiscal year 2008, inbound cargo inspections were performed by a cadre of 9 international TSIs with limited resources, compared with the 475 TSIs that performed domestic cargo inspections. By mid-fiscal year 2008, international compliance inspections accounted for a small percentage of all compliance inspections performed by TSA, although inbound cargo made up more than 40 percent of all cargo on passenger aircraft in 2007. Regarding inbound cargo, we reported in May 2008 that TSA lacks an inspection plan with performance goals and measures for its international inspection efforts, and we recommended that TSA develop such a plan. TSA officials stated in February 2009 that they are in the process of completing a plan to provide guidance for inspectors conducting compliance inspections at foreign airports, and that they intend to implement the plan during fiscal year 2009. TSA officials also stated that the number of international TSIs needs to be increased. Madam Chairwoman, this concludes my statement. I look forward to answering any questions that you or other members of the subcommittee may have at this time. <4. GAO Contact and Staff Acknowledgements> For questions about this statement, please contact Stephen M. Lord at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Steve D. Morris, Assistant Director; Scott M. Behen; Glenn G. Davis; Elke Kolodinski; Stanley J. Kostyla; Thomas Lombardi; Linda S. Miller; Yanina Golburt Samuels; Daren K. Sweeney; and Rebecca Kuhlmann Taylor. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Implementing Recommendations of the 9/11 Commission Act of 2007 mandates the Department of Homeland Security (DHS) to establish a system to physically screen 50 percent of cargo transported on passenger aircraft by February 2009 and 100 percent of such cargo by August 2010. This testimony provides preliminary observations on the Transportation Security Administration's (TSA) progress in meeting the mandate to screen cargo on passenger aircraft and the challenges TSA and industry stakeholders may face in screening such cargo. GAO's testimony is based on products issued from October 2005 through August 2008, and its ongoing review of air cargo security. GAO reviewed TSA's air cargo security programs, interviewed program officials and industry representatives, and visited two large U.S. airports. What GAO Found TSA has made progress in meeting the air cargo screening mandate as it applies to domestic cargo. TSA has taken steps that will allow screening responsibilities to be shared across the air cargo supply chain--including TSA, air carriers, freight forwarders (which consolidate cargo from shippers and take it to air carriers for transport), and shippers--although air carriers have the ultimate responsibility for ensuring that they transport cargo screened at the requisite levels. TSA has taken several key steps to meet the mandate, including establishing a new requirement for 100 percent screening of cargo transported on narrow-body aircraft; revising or eliminating most screening exemptions for domestic cargo; creating the Certified Cargo Screening Program (CCSP) to allow screening to take place at various points in the air cargo supply chain; and establishing a screening technology pilot. Although TSA estimates that it achieved the mandated 50 percent screening level by February 2009 as it applies to domestic cargo, the agency cannot yet verify that the requisite levels of cargo are being screened. It is working to establish a system to do so by April 2009. Also, TSA's screening approach could result in variable percentages of screened cargo on passenger flights. TSA and industry stakeholders may face a number of challenges in meeting the screening mandate, including attracting participants to the CCSP, and technology, oversight, and inbound cargo challenges. TSA's approach relies on the voluntary participation of shippers and freight forwarders, but it is unclear whether the facilities needed to meet TSA's screening estimates will join the CCSP. In addition, TSA has taken some steps to develop and test technologies for screening air cargo, but the agency has not yet completed assessments of these technologies and cannot be assured that they are effective in the cargo environment. TSA's limited inspection resources may also hamper its ability to oversee the thousands of additional entities that it expects to participate in the CCSP. Finally, TSA does not expect to meet the mandated 100 percent screening deadline as it applies to inbound air cargo, in part due to existing inbound screening exemptions and challenges it faces in harmonizing security standards with other nations.
<1. Background> The Social Security Act of 1935 required most workers in commerce and industry, then about 60 percent of the workforce, to be covered. Amendments to the act in 1950, 1954, and 1956 allowed states, generally acting for their employees, to voluntarily elect Social Security coverage through agreements with SSA. The amendments also permitted states and localities that elected coverage to withdraw from the program after meeting certain conditions. Policymakers have addressed the issue of extending mandatory Social Security coverage to state and local government employees on several occasions. In response to financial problems the Social Security system faced in the early 1970s, for example, the 1977 Social Security amendments directed that a study be made of the desirability and feasibility of extending mandatory coverage to employees at all levels of government, including state and local governments. The Secretary of the Department of Health, Education, and Welfare (now the Departments of Health and Human Services and Education) established the Universal Social Security Coverage Study Group to develop options for mandatory coverage and analyze the fiscal effects of each option. Recognizing the diversity of state and local systems, the study group selected representative plans for analysis. Two data sources were developed and analyzed. First, the Actuarial Education and Research Fund, sponsored by six professional actuarial organizations, established a task force of plan actuaries to study 25 representative large and small noncovered retirement systems. Second, the Urban Institute, under a grant from several government agencies, used an actuarial firm to obtain data on 22 of the largest 50 noncovered employee retirement systems. The study group report, issued in 1980, provided information on the costs and benefits of various options but did not draw conclusions about their relative desirability. In 1983, the Congress removed the authority for states and localities that had voluntarily elected Social Security coverage to withdraw from the program, which effectively made coverage mandatory for many state and local employees. Additionally, in 1990, the Congress mandated coverage for state and local employees not covered by public pension plans. SSA estimates that 96 percent of the workforce, including 70 percent of the state and local government workforce, is now covered by Social Security. During 1997, Social Security had $457.7 billion in revenues and $369.1 billion in expenditures. About 89 percent of Social Security's revenues came from payroll taxes. The Social Security payroll tax is 6.2 percent of pay each for employers and employees, up to an established maximum. Maximum earnings subject to Social Security payroll taxes were $65,400 in 1997 and are $68,400 in 1998. Social Security provides retirement, disability, and survivor benefits to insured workers and their families. Insured workers are eligible for full retirement benefits at age 65 and reduced benefits at age 62. The retirement age was increased by the 1983 Social Security amendments. Beginning with those born in 1938, the age at which full benefits are payable will increase in gradual steps from age 65 to age 67. Benefit amounts are based on a worker's age and career earnings, are fully indexed for inflation, and, as shown in table 1, replace a relatively higher proportion of the final year's wages for low earners.
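To make the payroll tax mechanics described above concrete, the short sketch below applies the 1998 parameters cited in this section (6.2 percent each for employee and employer, on earnings up to $68,400) to a few hypothetical salaries. The sketch and the salaries are ours, for illustration only.

```python
# Illustrative sketch of the 1998 Social Security payroll tax described above:
# 6.2 percent each for employee and employer, applied to earnings up to $68,400.
# The example salaries are hypothetical.

TAX_RATE = 0.062          # employee share; the employer pays an equal amount
WAGE_BASE_1998 = 68_400   # maximum earnings subject to the payroll tax in 1998

def payroll_tax(annual_salary: float) -> dict:
    taxable = min(annual_salary, WAGE_BASE_1998)
    employee_share = TAX_RATE * taxable
    return {
        "taxable earnings": taxable,
        "employee share": round(employee_share, 2),
        "employer share": round(employee_share, 2),   # employer matches the employee
        "combined": round(2 * employee_share, 2),     # the combined 12.4 percent rate
    }

for salary in (30_000, 68_400, 90_000):  # hypothetical salaries
    print(salary, payroll_tax(salary))
```

Earnings above the maximum, as in the last case, incur no additional Social Security payroll tax.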
Social Security provides additional benefits for eligible family members, including spouses aged 62 or older (or younger spouses with a child meeting certain requirements in their care) and children up to age 18 (or older if they are disabled). The amount of a spouse's or child's benefit is one-half the insured worker's age-65 benefit amount. A spouse's benefit is reduced if taken earlier than age 65, unless the spouse has a child in his or her care. SSA estimates that about 5 million state and local government employees, excluding students and election workers, occupy positions not covered by Social Security. SSA also estimates that the noncovered employees have annual salaries totaling about $132.5 billion. Seven states (California, Colorado, Illinois, Louisiana, Massachusetts, Ohio, and Texas) account for over 75 percent of the noncovered payroll. Based on a 1995 survey of public pension plans, the Public Pension Coordinating Council (PPCC) estimates that police, firefighters, and teachers are more likely to occupy noncovered positions than other employees are. According to a 1994 Bureau of Labor Statistics (BLS) survey, most full-time state and local employees participate in defined benefit pension plans. Table 2 shows membership and contribution rates for nine defined benefit state and local pension plans that we studied as part of the review. For the most part, active members in the nine plans occupy positions that are not covered by Social Security. Defined benefit plans promise a specific level of benefits to their members when they retire. Minimum retirement ages and benefits vary; however, the BLS and PPCC surveys indicate that many public employees can retire with full benefits at age 55 or earlier with 30 years of service. The surveys also indicate that plan members typically have a benefit formula that calculates retirement income on the basis of specified benefit rates for each year of service and the members' average salary over a specified time period, usually the final 3 years. For example, the benefit rates for members of the Colorado Public Employees' Retirement Association are 2.5 percent of highest average salary (averaged over a 3-year period) per year for the first 20 years of service and 1.5 percent of highest average salary per year for each additional year of service. Full retirement benefits are available at any age with 35 years of service, at age 55 with 30 years of service, at age 60 with 20 years of service, or at age 65 with 5 years of service. Therefore, plan members who retire at age 55 with 30 years of service receive annual retirement income amounting to 65 percent of their highest average salary. Reduced retirement benefits are available, for example, at age 55 with 20 years of service. In addition to retirement income benefits, most public pension plans provide other benefits, such as disability or survivor benefits. For example, BLS reported that, of defined benefit plan members, 91 percent were provided with disability benefits, all have a survivor annuity option, and 62 percent receive some cost-of-living increases after retirement. Public pension plan coverage for part-time, seasonal, and temporary employees varies. In Ohio, for example, part-time and temporary state employees participate in a defined benefit plan. In California, the 16,000 part-time, seasonal, and temporary state employees have a defined contribution plan.
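As an illustration of the benefit formula arithmetic just described, the minimal sketch below (ours, for illustration only) computes the salary replacement rate under the Colorado formula; the 65 percent result for a member retiring at age 55 with 30 years of service matches the example above.

```python
# Illustrative sketch of the Colorado benefit formula described above:
# 2.5 percent of highest average salary per year for the first 20 years of service,
# plus 1.5 percent per year for each additional year of service.

def colorado_replacement_rate(years_of_service: int) -> float:
    """Percent of highest average salary replaced at full retirement."""
    first_20_years = min(years_of_service, 20) * 2.5
    additional_years = max(years_of_service - 20, 0) * 1.5
    return first_20_years + additional_years

# A member retiring at age 55 with 30 years of service:
print(colorado_replacement_rate(30))  # 20 * 2.5 + 10 * 1.5 = 65.0 percent, as stated above
```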
Under the California defined contribution plan, benefits are based on plan contributions, which consist of 7.5 percent of the employees' gross pay deducted from their pay, plus returns on plan investments. <2. Mandatory Coverage Would Benefit the Social Security Program> SSA estimates that extending mandatory Social Security coverage to all newly hired state and local employees would reduce the trust funds' 75-year actuarial deficit by about 10 percent. The surplus payroll tax revenues associated with mandatory coverage, and interest on that surplus, would extend the trust funds' solvency by about 2 years. Extending mandatory coverage to newly hired employees would also increase program participation and, in the long run, simplify program administration. <2.1. Trust Funds' Deficit Would Be Reduced> Table 3 shows SSA's analysis of the present discounted value of revenues and expenditures with and without mandatory coverage over the 75-year period beginning January 1, 1998. The analysis indicates that extending mandatory coverage to all state and local employees hired beginning January 1, 2000, would reduce the program's long-term actuarial deficit by 10 percent, from about 2.19 percent of payroll to 1.97 percent of payroll. Figure 1 shows that SSA's analysis indicates that extending mandatory coverage to new state and local employees would extend the trust funds' solvency by about 2 years, from 2032 to 2034. As with most other elements of the reform proposals put forward by the 1994-1996 Social Security Advisory Council, extending mandatory coverage to newly hired state and local employees would contribute to the resolution of, but not fully resolve, the trust funds' solvency problem. A combination of adjustments will be needed to extend the program's solvency over the entire 75-year period. SSA's analysis indicates that revenues resulting from an extension of mandatory coverage, including payroll taxes and interest on surplus revenues, would substantially exceed additional expenditures throughout the 75-year period. SSA assumes that payroll tax collections for new employees would accelerate early in the 75-year period, while benefits for those employees would not accelerate until later in the period. For example, annual revenues from payroll taxes collected from the newly covered employees and their employers are expected to exceed expenditures for benefits to those employees until 2050. In that year, however, revenues resulting from an extension of mandatory coverage, including interest on cumulative surplus revenues, are projected to exceed expenditures on those employees by over 300 percent. <2.2. Mandatory Coverage Would Have Other Beneficial Effects> While Social Security's solvency problems triggered the analysis of the effect of mandatory coverage on program revenues and expenditures, the inclusion of such coverage in a comprehensive reform package would likely be grounded in other considerations as well, such as broadening Social Security's coverage and simplifying program administration. As the Advisory Council observed, an effective Social Security program helps to reduce public costs for relief and assistance, which, in turn, means lower general taxes, and there is an element of unfairness in a situation where practically all contribute to Social Security, while a few benefit both directly and indirectly but are excused from contributing to the program.
According to SSA, one important way that noncovered employees benefit from, without contributing to, Social Security is that their parents, grandparents, or other relatives receive Social Security's retirement, disability, or survivor benefits. Social Security is designed as a national intergenerational transfer program in which the taxes of current workers fund the benefits of current beneficiaries. SSA stated that those not contributing to the program still receive the benefits of this transfer. Extending mandatory Social Security coverage to all newly hired state and local employees would also simplify program administration by eliminating, over time, the need to administer and enforce special rules for noncovered state and local employees. For example, SSA's Office of Research, Evaluation, and Statistics estimates that 95 percent of state and local employees occupying noncovered positions become entitled to Social Security as either workers or dependents. Additionally, the Office of the Chief Actuary estimates that 50 to 60 percent of state and local employees in noncovered positions will be fully insured by age 62 from other, covered employment. The Congress established the Windfall Elimination Provision (WEP) and the Government Pension Offset (GPO) to reduce the unfair advantage that workers eligible for pension benefits on the basis of noncovered employment may have when they apply for Social Security benefits. The earnings history for workers with noncovered earnings may appear to qualify them for increased Social Security benefits as low-income wage earners, or for additional benefits for a nonworking spouse, when in fact they have had substantial income from noncovered employment. With a few exceptions, WEP and GPO require SSA to use revised formulas to calculate benefits for workers with noncovered employment. In April 1998, we reported that SSA is often unable to determine whether applicants should be subject to WEP or GPO, and that this has led to overpayments. We estimated total overpayments to be between $160 million and $355 million over the period 1978 to 1995. In response, SSA plans to perform additional computer matches with the Office of Personnel Management and the Internal Revenue Service (IRS) to obtain noncovered pension data and ensure that WEP and GPO are correctly applied. Mandatory coverage would reduce the required WEP and GPO adjustments to benefits by gradually reducing the number of employees in noncovered employment. Eventually, all state and local employees, with the exception of a few categories of workers such as students and election workers, would be in covered employment, and the adjustments would be unnecessary. In 1995, SSA asked its Office of the Inspector General to review state and local government employers' compliance with Social Security coverage provisions. In December 1996, the Inspector General reported that the Social Security provisions related to coverage of state and local employees are complex and difficult to administer. The report stated that few resources were devoted to training state and local officials and ensuring that administration and enforcement roles and responsibilities are clearly defined. The report concluded that there is a significant risk of sizeable noncompliance with state and local coverage provisions. In response, SSA and IRS have initiated an effort to educate employers and ensure compliance with legal requirements for withholding Social Security payroll taxes.
Extending coverage to all newly hired state and local government employees would eventually eliminate this problem. SSA stated that the time needed to fully phase in mandatory coverage could be 20 to 30 years, if it followed estimates of the time needed to phase in Medicare coverage, which was mandated for newly hired state and local employees starting in 1986. SSA also stated that mandatory Social Security coverage for new hires could create another tier in the payroll reporting process, resulting in additional compliance issues in the near term. Additionally, payroll practitioners would need to account for Social Security-covered and noncovered government employment, along with Medicare-covered and noncovered employment, and, as a result, they would face additional reporting burdens in the near term as they extended Social Security coverage to new employees. <3. Effect of Mandatory Coverage for Employers, Employees, and Their Public Pension Plans Would Vary> If Social Security becomes mandatory, all newly hired state and local employees would be provided with the minimum income protection afforded by Social Security. Also, they and their employers would pay Social Security's combined 12.4-percent payroll tax. Each state and locality with noncovered employees would then decide how to respond to the increase in benefits and costs. Possible responses range from the government's absorbing the added costs and leaving current pension plans unchanged to entirely eliminating state and local pension plan benefits for newly hired employees. Our discussions with state and local representatives indicate, however, that noncovered employers would likely adjust their pension plans to reflect Social Security's benefits and costs. To illustrate the implications of mandatory coverage for public employers and employees, we examined three possible responses. First, states and localities could maintain similar total retirement benefits for current and newly hired employees. For example, employees who retire before age 62 would be paid supplemental retirement benefits until they become eligible for Social Security benefits. This response would likely result in an increase in total retirement costs and some additional family and other benefits for many newly hired employees. Second, states and localities could examine other pension plans that are already coordinated with Social Security and provide newly hired employees with similar benefits. For example, employees who retire before age 62 would receive, on average, a smaller initial retirement benefit than current noncovered employees. This response would also likely result in an increase in total retirement costs and some additional family and other benefits for newly hired employees. Third, states and localities could maintain level retirement costs. This response would likely require a reduction in pension benefits from the government's plans for many newly hired employees, but the new employees would also have Social Security benefits. According to pension plan representatives, the changes to current pension plans in response to mandatory coverage could result in reduced contributions to those plans, which could affect their long-term financing. <3.1. Maintaining Benefits of Noncovered Employees for Newly Hired Employees Would Likely Increase Costs> States and localities with noncovered employees could decide to provide newly hired employees with pension benefits at retirement that, when combined with Social Security benefits, approximate the pension benefits of current employees.
Studies indicate that such a decision would likely result in an increase in retirement costs. The amount of the increase would vary depending on a number of factors; however, studies indicate it could be about 7 percent of new-employee payroll. The 1980 Universal Social Security Coverage Study Group report estimated that total retirement costs, including Social Security payroll taxes and pension plan contributions, would need to increase an average of 5 to 10 percent of payroll to maintain level benefits for current and newly hired employees. However, the estimated increase included the 2.9-percent-of-payroll Medicare tax that was mandated for all new state and local employees in 1986, 6 years after the study was completed. Deducting the Medicare tax reduces the estimate of additional costs to between 2 and 7 percent of payroll. The 1980 study group assumed that most newly hired employees would have salary replacement percentages in their first year of retirement comparable to the salary replacement percentages provided to current employees. For example, employees retiring before age 62 would receive a temporary supplemental pension benefit to more closely maintain the benefits of the current plan. Because Social Security benefits are weighted in favor of families and lower-income employees and are fully indexed for inflation, while many pension plans provide limited or no cost-of-living protection, total lifetime benefits for some new employees would be greater than those provided to current employees. More recent studies by pension plan actuaries in Colorado, Illinois, and Ohio also indicate the cost increase would be in the same range. For example, a December 1997 study for a plan in Ohio indicated that providing retirement benefits for new employees that, when added to Social Security benefits, approximate retirement benefits for current employees would require an increase in contributions of 6 to 7 percent of new-employee payroll. A 1997 study for a pension plan in Illinois indicated that the increased payments necessary to maintain similar total retirement benefits for current and new employees would be about 6.5 percent of new-employee payroll. Since it would be limited to new employees, the cost increase would be phased in over several years. For example, the cost increase would be about 0.25 percent of total payroll in the first year, 2.83 percent of total payroll in 10 years, and 6.54 percent of total payroll after all current employees have been replaced. The 1980 study group report stated that the causes of the cost increase cannot be ascribed directly to specific Social Security or pension plan provisions. According to the study, however, among the most important factors contributing to the cost increase are Social Security's strengthening of cost-of-living protection, its provision of substantial additional benefits to some families, and its reduction in the pension benefit forfeitures that occur when employees move between jobs. The study stated that another contributing factor would be the need for pension plans to provide supplemental benefits to employees, especially police and firefighters, who retire before they begin receiving Social Security benefits at age 62. The study also found that the magnitude of the cost increase would depend on the pension plan's current benefits.
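To clarify the mechanics of the phase-in just described, the minimal sketch below scales the fully phased-in increase by the share of payroll earned by new hires. The 6.54 percent figure is the fully phased-in increase from the Illinois study cited above; the 30-year transition and the assumption that the new-hire share of payroll grows linearly are ours, for illustration only, which is why the sketch's intermediate values differ somewhat from the study's own year-by-year projections (for example, 2.83 percent at year 10).

```python
# Illustrative only: how a cost increase that applies solely to new-employee payroll
# phases in as new hires gradually replace current employees.
# The 6.54 percent fully phased-in figure comes from the Illinois study cited above;
# the 30-year transition and the linear growth of the new-hire payroll share are our
# simplifying assumptions, not the study's workforce projections.

FULL_INCREASE = 6.54      # percent of total payroll once all current employees are replaced
TRANSITION_YEARS = 30     # hypothetical length of the workforce transition

for year in (1, 10, 20, 30):
    new_hire_share = min(year / TRANSITION_YEARS, 1.0)  # assumed share of payroll earned by new hires
    print(f"Year {year}: increase of roughly {FULL_INCREASE * new_hire_share:.2f} percent of total payroll")
```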
As noted above, the magnitude of the increase would also depend on a plan's current benefits: cost increases would be less for plans that already provide benefits similar to those provided by Social Security, because those plans would be able to eliminate duplicate benefits. Maintaining level benefits for noncovered and newly hired employees would require states and localities, in redesigning plans for the newly hired employees, to adopt benefit formulas that explicitly integrate pension and Social Security benefits. For example, affected states and localities could adopt a benefit formula that offsets a portion of the member's pension benefit with a specified percentage of the member's Social Security benefit. This approach is more common in the private sector, where a 1995 BLS survey of large and medium establishments found that about 51 percent of full-time employees had benefits integrated with Social Security, than in the public sector, where a survey found that only about 4 percent of full-time employees had pension benefits integrated with Social Security. In the public sector, pension plans for covered employees generally recognize Social Security benefits implicitly by providing their members with lower benefit rates than are provided to noncovered employees. <3.2. Providing Benefits of Currently Covered Employees Would Likely Increase Costs> SSA estimates that about 70 percent of the state and local workforce is already covered by Social Security. The 1980 study group examined the impact on retirement costs if states and localities with noncovered employees provide newly hired employees with pension benefits that are similar to the benefits provided to employees who are already covered by Social Security. The study group concluded that implementing such formulas would increase overall retirement costs by 6 to 14 percent of payroll, or about 3 to 11 percent of payroll after deducting the Medicare tax. The study also concluded that, for most pension plans, the present value of lifetime benefits for new employees covered by Social Security would be greater than the value of benefits of current noncovered employees. As shown in table 4, our analysis of 1995 PPCC data also indicates that total retirement costs for states and localities covered by Social Security are higher than the costs for noncovered states and localities. PPCC data also indicate that many employees, especially police and firefighters, retire before age 62, when they would first be eligible for Social Security retirement benefits. The data indicate, for example, that police and firefighters in noncovered plans retired, on average, at age 54. The average retirement age of other employees in noncovered plans was 60. In covered plans, the average retirement ages for police and firefighters and for other employees were somewhat higher, at 55 and 62, respectively. Analyses indicate that, initially, the percentage of salary that is replaced by retirement income is smaller for covered employees who retire before they are eligible for Social Security benefits than for noncovered employees. Our analysis of PPCC data indicates, for example, that public pension plans replace about 65 percent of the final average salary of members who retired with 30 years of service and were not covered by Social Security. For members who retired with 30 years of service and were covered by both a pension plan and Social Security, the PPCC data indicate that pension plans replace only about 53 percent of their members' final average salary.
After Social Security benefits begin, however, covered employees generally have higher salary replacement rates. For example, the average salary replacement rates in 1994 were higher for covered state and local employees than for noncovered employees after they reach age 62, at all salary levels between $15,000 and $65,000. (See table 5.) We did not compare the expected value of total lifetime benefits for covered and noncovered employees because the amounts would vary depending on the benefits offered by each plan. The extent to which the experience of states and localities with covered employees can be generalized to those with noncovered employees is limited. According to the 1980 study group report, most public pension plans that coordinated with Social Security did so in the 1950s and 1960s, when Social Security benefits and payroll taxes were much smaller. As Social Security benefits grew, pension plan benefits remained basically unchanged. The study stated that, starting in the 1970s, however, rising pension costs caused several large state systems to consider reducing their relatively liberal pension benefits. In the 1980s, for example, California created an alternative set of reduced benefits for general employees to, among other things, reduce the state's retirement costs. Initially, general employees were permitted to choose between the higher costs and benefits of the original plan and the lower costs and benefits of the revised plan. Subsequently, however, newly hired general employees were limited to the reduced benefits. Regardless, the circumstances surrounding the experiences of states with covered employees make it difficult to predict what changes would occur from a further extension of coverage. <3.3. Level Retirement Spending Would Mean Reduced Benefits> Several employer, employee, and pension plan representatives with whom we spoke stated that the spending increases necessary to maintain level retirement income and other benefits would be difficult to achieve. State and pension plan officials noted that spending for retirement benefits must compete for funds with spending for education, law enforcement, and other areas that cannot be readily reduced. For example, Ohio officials noted that the state is having difficulty finding the additional funds for education needed to comply with court-ordered changes in school financing. A representative of local government officials in Ohio stated that payroll represents 75 to 80 percent of county budgets, and there is little chance that voters would approve the revenue increases needed to maintain level retirement benefits. He stated that the more likely options for responding to increased retirement costs were to decrease the number of employees or reduce benefits under state and local pension plans. If states and localities decide to maintain level spending for retirement, they might need to reduce pension benefits under public pension plans for many employees. For example, a June 1997 actuarial evaluation of an Ohio pension plan examined the impact on benefits of mandating Social Security coverage for all employees, assuming no increase in total retirement costs.
The study concluded that level spending could be maintained if service retirement benefits were reduced (for example, salary replacement rates for employees retiring with 30 years of service would be reduced from 60.3 percent to 44.1 percent); retiree health benefits were eliminated for both current and future employees; and the funding period of the plan's unfunded accrued liability was extended from 27 years to 40 years. The study also stated that additional benefit reductions might be needed to maintain level spending if additional investment income were not available to subsidize pension benefits for newly hired employees. <3.4. Effect on Pension Plan Finances Is Uncertain> States and localities typically use a reserve funding approach to finance their pension plans. Under this approach, employers, and frequently employees, make systematic contributions toward funding the benefits earned by active employees. These contributions, together with investment income, are intended to accumulate sufficient assets to cover promised benefits by the time employees retire. However, many public pension plans have unfunded liabilities. The nine plans that we examined, for example, have unfunded accrued liabilities ranging from less than 1 percent to over 30 percent of total liabilities. Unfunded liabilities occur for a number of reasons. For example, public plans generally use actuarial methods and assumptions to calculate required contribution rates, and unfunded liabilities can occur if a plan's actuarial assumptions do not accurately predict reality. Additionally, retroactive increases in plan benefits can create unfunded liabilities. Unlike those of private pension plans, the unfunded liabilities of public pension plans are not regulated by the federal government. States or localities determine how and when unfunded liabilities will be financed. Mandatory coverage and the resulting pension plan modifications would likely result in reduced contributions to public pension plans. This would occur because pension plan contributions are directly tied to benefit levels, and contributions would be reduced to the extent that plan benefits are reduced and replaced by Social Security benefits. The impact of reduced contributions on plan finances would depend on the actuarial method and assumptions used by each plan, the adequacy of current plan funding, and other factors. For example, some plan representatives are concerned that efforts to provide adequate retirement income benefits for newly hired employees would affect employers' willingness or ability to continue amortizing their current plans' unfunded accrued liabilities at current rates. Actuaries also believe that reducing contributions to current pension plans could adversely affect the liquidity of some plans. In 1997, for example, an Arizona state legislative committee considered closing the state's defined benefit pension plan to new members and implementing a defined contribution plan. Arizona state employees are already covered by Social Security; however, states and localities faced with mandatory coverage might consider making a similar change to their pension plans. A March 1997 analysis of the proposed change stated that as the number of employees covered by the plan decreased, the amount of contributions flowing into the plan would also decrease. At the same time, the number of members approaching retirement age was increasing, and benefit payments were expected to increase. As a result, external cash flow would become increasingly negative over time.
The analysis estimated that about 10 years after the plan was closed to new members, benefit payments would exceed contributions by over $1 billion each year. In another 10 years, the annual shortfall would increase to $2 billion. The analysis stated that the large negative external cash flow would require that greater proportions of investment income be used to meet benefit payment requirements. In turn, this would require the pension plan to hold larger proportions of plan assets in cash or lower-yielding short-term assets. Once this change in asset allocation occurred, the plan would find it increasingly difficult to achieve the investment returns assumed in current actuarial analyses, and employer costs would increase. <4. Legal and Other Considerations> Mandatory coverage presents several legal and administrative issues, and states and localities with noncovered employees would require several years to design, legislate, and implement changes to current pension plans. <4.1. Legal Issues> Although mandating Social Security coverage for state and local employees could elicit a constitutional challenge, mandatory coverage is likely to be upheld under current U.S. Supreme Court decisions. Several employer, employee, and plan representatives with whom we spoke stated that they believe mandatory Social Security coverage would be unconstitutional and should be challenged in court. However, recent Supreme Court cases have affirmed the authority of the federal government to enact taxes that affect the states and to impose federal requirements governing the states' relations with their employees. A plan representative suggested that the Supreme Court might now come to a different conclusion. He pointed out that a case upholding federal authority to apply minimum wage and overtime requirements to the states was a 5-to-4 decision and that, until then, the Supreme Court had clearly said that applying such requirements to the states was unconstitutional. States and localities also point to several recent Supreme Court decisions that they see as sympathetic to the concept of state sovereignty. However, the facts of these cases are generally distinguishable from the situation that would be presented by mandatory Social Security coverage. Unless the Supreme Court were to reverse itself, which it seldom does, mandatory Social Security coverage of state and local employees is likely to be upheld. Current decisions indicate that mandating such coverage is within the authority of the federal government. <4.2. Administrative Issues> The states would require some time to adjust to a mandatory coverage requirement. The federal government required approximately 3 years to enact legislation to implement a new federal employee pension plan after Social Security coverage was mandated for federal employees. The 1980 study group estimated that 4 years would be required for states and localities to redesign pension formulas, legislate changes, adjust budgets, and disseminate information to employers and employees. Our discussions with employer, employee, and pension plan representatives also indicate that up to 4 years would be needed to implement a mandatory coverage decision. They indicated, for example, that developing revised benefit formulas for each affected pension plan would require complex and time-consuming negotiations among state legislatures, state and local budget and personnel offices, and employee representatives.
Additionally, constitutional provisions or statutes in some states may prevent employers from reducing benefits for employees once they are hired. Those states would need to immediately enact legislation establishing a demarcation between current and future employees until decisions were made concerning benefit formulas for new employees who would be covered by Social Security. According to the National Conference of State Legislatures, the legislatures of seven states, including Texas, meet biennially. Therefore, the initial legislation could require 2 years in those states. <5. Conclusions> In deciding whether to extend mandatory Social Security coverage to state and local employees, policymakers will need to weigh numerous factors. On one hand, the Social Security program would benefit from the decision. The solvency of the trust funds would be extended by about 2 years, and the long-term actuarial deficit would be reduced by about 10 percent. Mandatory coverage would also address the fairness issue raised by the advisory council and simplify program administration. However, the implications of mandatory coverage for public employers, employees, and pension plans are mixed. To the extent that employers provide total retirement income benefits to newly hired employees that are similar to those of current employees, retirement costs would increase. While the increased retirement costs would be phased in over several years, employers and employees would also incur additional near-term costs to develop, legislate, and implement changes to current pension plans. At the same time, Social Security would provide future employees with benefits that are not available, or are available to a lesser extent, under current state and local pension plans. <6. Agency Comments> SSA stated that the report generally provides a balanced presentation of the issues to be weighed when considering mandating coverage. SSA provided additional technical comments, which we have incorporated as appropriate. SSA's comment letter is reprinted in appendix II. We are sending copies of this report to the Commissioners of the Social Security Administration and the Internal Revenue Service and to other interested parties. Copies will also be made available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix III. Scope and Methodology To examine the implications for the Social Security program of a decision to extend mandatory coverage to newly hired state and local employees, we reviewed documents provided by SSA and IRS and held discussions with their staff. We examined SSA estimates concerning the increase in taxable payroll and in Social Security revenues and expenditures attributed to extending mandatory coverage to newly hired state and local employees, and we discussed data sources with SSA officials. We did not assess the validity of SSA's assumptions. SSA's estimates used the intermediate assumptions reported by Social Security's Board of Trustees in 1998. To examine the implications of mandatory coverage for state and local government employers, employees, and their pension plans, we reviewed the 1980 study by the Universal Social Security Coverage Study Group, which was prepared for the Secretary of Health, Education, and Welfare at that time and transmitted to the Congress in March 1980.
We discussed the study results with the study's Deputy Director for Research and examined supporting documents for the study. We also held discussions with, and reviewed documentation from, state and local government employer, employee, or pension plan representatives in the seven states that account for over 75 percent of the noncovered payroll. We examined financial reports for nine state and local retirement systems: the California State Teachers' Retirement System, the Public Employees' Retirement Association of Colorado, the Teachers' Retirement System of the State of Illinois, the Louisiana State Employees' Retirement System, the Massachusetts State Retirement System, the Massachusetts Teachers' Contributory Retirement System, the State Teachers Retirement System of Ohio, the Public Employees Retirement System of Ohio, and the Teacher Retirement System of Texas. We also identified a number of states that have changed, or have considered changing, plan benefits in ways that are similar to the changes that might be made by states and localities with noncovered employees in response to mandatory Social Security coverage. We discussed the potential impact on plan finances of changing plan benefits with pension plan representatives in those states and examined study reports provided by them. For example, we contacted representatives of pension plans in Arizona, Kansas, Montana, South Dakota, Vermont, Washington, and West Virginia that have implemented or considered implementing defined contribution plans to replace some or all of the benefits provided by their defined benefit pension plans. Additionally, we reviewed survey reports addressing pension benefits, costs, investment practices, or actuarial valuation methods and assumptions prepared by BLS, PPCC, and the Society of Actuaries. We discussed the implications of mandatory coverage for public pension plans with actuaries at the Office of Personnel Management, the Pension Benefit Guaranty Corporation, and the American Academy of Actuaries, as well as with actuaries in private practice. To analyze differences between public pension costs and benefits for covered and noncovered state and local employees, we used PPCC survey data. We used the 1995 survey, which covered 1994, because the 1997 survey, which covered 1996, did not include some of the required data. Despite some limitations, the PPCC data are the best available. The data cover 310 pension systems, representing 457 plans and covering 80 percent of the 13.6 million active members in fiscal year 1994. The survey questionnaire was mailed to 800 systems, which were selected from member associations. Due to the nonrandom nature of the sample, no analysis can offer generalizations, nor can confidence intervals be calculated. Nevertheless, the survey describes the costs and benefits of a substantial majority of public pension plan members. For our analysis of PPCC data, we classified pension plans as (1) Social Security-covered if 99 percent or more of the members participated in the Social Security program or (2) Social Security-noncovered if 1 percent or less of the members participated in the program. We did not adjust cost and contribution rate data to standardize actuarial cost methods and assumptions. State and local governments may have legitimate reasons for choosing various cost methods, and we did not evaluate their choices. To identify potential legal or other problems with implementing mandatory coverage, we reviewed relevant articles and current case law.
We conducted our work between September 1997 and May 1998 in accordance with generally accepted government auditing standards. Comments From the Social Security Administration Major Contributors to This Report Francis P. Mulvey, Assistant Director, (202) 512-3592 John M. Schaefer, Evaluator-in-Charge Hans Bredfeldt, Evaluator
Why GAO Did This Study Pursuant to a congressional request, GAO examined the implications of extending mandatory social security coverage to all newly hired state and local employees, focusing on: (1) the implications of mandatory coverage for the Social Security Program and for public employers, employees, and pension plans; and (2) potential legal or administrative problems associated with implementing mandatory coverage. What GAO Found GAO noted that: (1) the Social Security Administration (SSA) estimates that extending mandatory social security coverage to all newly hired state and local government employees would reduce the program's long-term actuarial deficit by about 10 percent and would extend the trust funds' solvency by about 2 years; (2) in addition to helping to some extent resolve the solvency problem, mandatory coverage would broaden participation in an important national program and simplify program administration; (3) the impact on public employers, employees, and pension plans would depend on how state and local governments with noncovered employees responded to the additional costs and benefits associated with social security coverage; (4) social security retirement benefits are fully protected from inflation and are weighted in favor of families and low-income employees; (5) many public pension plans, on the other hand, permit employees to retire earlier and provide a higher retirement income benefit than social security; (6) those states and localities that decide to maintain benefit levels for new employees consistent with the earlier retirement age and enhanced retirement income benefit would experience increased costs; (7) however, those employees would also have the additional family and other protection provided by social security; (8) alternatively, states and localities that choose to maintain level retirement spending might need to reduce some retirement benefits for newly hired employees; (9) several employer, employee, and plan representatives stated that mandating social security coverage for all new state and local government employees would raise constitutional issues and would be challenged in court; (10) however, GAO believes that mandatory coverage is likely to be upheld under current Supreme Court decisions; (11) mandatory coverage would also present administrative issues for implementing state and local governments; and (12) up to 4 years could be required for states and localities to develop, legislate, and implement pension plans that are coordinated with social security.
<1. Background> The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), as amended, defines the federal government's role during disaster response and recovery. The Stafford Act also establishes the programs and processes through which the federal government provides disaster assistance to state, tribal, territorial, and local governments, as well as certain nonprofit organizations and individuals. According to the act, the President can declare a major disaster after a governor or chief executive of an affected tribal government finds that a disaster is of such severity and magnitude that effective response is beyond the capabilities of the state and local governments and that federal assistance is necessary. That is, when the governor of a state or the chief executive of an Indian tribal government requests a declaration for a major disaster, FEMA evaluates the request and makes a recommendation to the President, who decides whether or not to declare a major disaster and commit the federal government to provide supplemental assistance. Generally, state and local governments are responsible for the remaining share of disaster costs. <1.1. Federal Disaster Assistance to States> If the President declares a major disaster, the declaration can trigger a variety of federal assistance programs for governmental and nongovernmental entities, households, and individuals. FEMA provides disaster assistance to states, tribal governments, localities, and individuals through several programs, including the Public Assistance (PA) and the Individual Assistance (IA) programs. PA is the largest of FEMA's disaster assistance programs. It provides grants to fund debris removal and the repair, replacement, or restoration of disaster-damaged facilities. PA also funds certain types of emergency protective measures that eliminate or reduce immediate threats to lives, public health, safety, or improved property. To determine whether to recommend that a jurisdiction receive PA funding, FEMA relies on a series of factors, including the statewide per capita impact indicator. FEMA's IA program ensures that disaster survivors have timely access to a full range of programs and services to maximize their recovery, through coordination among federal, state, tribal, and local governments, nongovernmental organizations, and the private sector. Among other things, IA programs provide housing assistance, disaster unemployment assistance, crisis counseling, and legal services. Individuals and households may be eligible for financial assistance or direct services if, due to the disaster, they have been displaced from their primary residence, their primary residence has been rendered uninhabitable, or they have necessary expenses and serious needs that are unmet through other means, such as insurance. The IA program provides assistance of up to $32,900 for fiscal year 2015 to eligible individuals and households who, as a direct result of a major disaster or emergency, have uninsured or underinsured necessary expenses and serious needs that cannot be addressed by other means, such as through other assistance programs or insurance.
Specific IA programs and areas of responsibility include: the Individuals and Households Program, including Housing Assistance and Other Needs Assistance; the Disaster Unemployment Assistance Program; Disaster Legal Services; the Crisis Counseling Assistance and Training Program; the Disaster Case Management Program; Mass Care and Emergency Assistance Coordination; Voluntary Agency Coordination; and Disaster Recovery Center and Disaster Survivor Assistance Coordination. If approved for federal disaster assistance, states, tribal governments, and localities are expected to contribute toward disaster response and recovery costs. The usual cost share arrangement calls for the federal government to pay not less than 75 percent of the eligible PA costs of a disaster and for nonfederal entities (e.g., state and local governments) to pay the remaining nonfederal share of 25 percent. The federal government covers 100 percent of the Individuals and Households Program but requires states to contribute 25 percent to the Other Needs Assistance component of this program. This component covers repair or replacement costs for personal property including furniture and personal belongings, and some uninsured medical, dental, funeral, and transportation expenses as well as child care and other expenses. <1.2. State Budgets> If states are denied federal disaster assistance, they may choose to cover some of these costs. Disaster funding, like most other state expenditures, is typically part of a state s annual operating budget providing appropriations through the fiscal year. Disaster costs typically compete with other state priorities unless states establish a separately sourced disaster fund outside of the funds tied to their state s balanced budget requirements. Most states have constitutional or statutory provisions requiring that they balance their operating budgets, commonly referred to as their general fund. <2. Selected States Had Budget Mechanisms to Cover Disaster Costs for the Current Fiscal Year, but Did Not Maintain Reserves for Future Disasters> <2.1. Selected States Used a Range of Budget Mechanisms to Cover Unforeseen Disaster Costs during the Course of the Fiscal Year> All 10 states in our review used a range of mechanisms to ensure the availability of funds for unforeseen disaster costs during the fiscal year or current budget cycle. While each state had its own set of budget mechanisms, all of the selected states provided disaster funds at the start of the fiscal year and as needed during the course of the fiscal year. The types of unforeseen disaster costs states encountered depended, in large part, on the kind of disaster, but were typically related to emergency response activities. For instance, the costs of clearing debris and repairing roads along with emergency policing were typical expenses that states incurred after a major storm. Many of those expenses qualified for federal reimbursement under a presidential disaster declaration. Statewide disaster accounts. Statewide disaster accounts provided funding for disaster expenditures across state agencies or for localities. As shown in figure 2, all 10 states in our review established one or more types of statewide disaster accounts that received funds from general fund appropriations or from other revenue sources. 
All 10 states funded these statewide accounts through general fund revenues and 6 states Alaska, California, Florida, Indiana, North Dakota, and Vermont used other revenue sources in addition to general fund revenues to cover unforeseen costs that arose during the fiscal year. For example, Florida imposed an annual surcharge on homeowners residential insurance policies and on commercial and business owners property insurance policies, which the state then deposited into a trust fund to be used for emergency management purposes. In addition, one of Indiana s statewide disaster funds relied on public safety fees generated through the sale of retail fireworks, while North Dakota funded its statewide disaster account through a biennial appropriation from the revenues of the state s share of oil and gas taxes. The states in our review based initial funding levels for statewide disaster accounts on a range of considerations, such as estimates of disaster costs based on past events and emergency response costs for unforeseen disasters. Although some statewide disaster accounts allow unexpended balances to be carried over into future fiscal years, states typically budgeted these costs for a single budget cycle. For example, based on its past disaster costs, Alaska typically budgeted disaster relief funds to cover the costs of two state-declared disasters (totaling $2 million) and two federally-declared disasters (totaling $5 million to $6 million). Some states, such as North Dakota and California, may also establish funding amounts in statute. Specifically, North Dakota s Disaster Relief Fund receives an appropriation of $22 million every 2 fiscal years or each biennial budget cycle, while California s Disaster Response- Emergency Operations Account receives an annual appropriation of $1 million at the beginning of each fiscal year, consistent with the state s budget cycle. In establishing statewide disaster accounts, states typically defined the criteria under which the account funds could be used. For example, in Oklahoma, the governor is authorized to distribute funds from the state s disaster account to agencies that requested funds for emergency situations including: (1) destruction of public property; (2) operation of the National Guard; (3) matching funds for federal disaster relief programs; (4) asbestos removal from public buildings; and (5) emergency response necessary to protect the public health, safety, or welfare of livestock or wild animals. In North Dakota, the state s Disaster Relief Fund could be used to reimburse state agencies for disaster-related expenses incurred above the agencies normal operating costs. Budgets of state agencies. Nine of the 10 selected states also covered a portion of unforeseen disaster costs through the operating budgets of state agencies with missions relevant to disaster response and recovery, For example, in West Virginia, such as public safety and transportation.the state s Division of Homeland Security and Emergency Management within the Department of Military Affairs and Public Safety used its regular operating budget to cover disaster response costs. Other agencies in West Virginia, such as the state s transportation and police departments, also used funds in their operating budgets to cover major disaster costs. These agencies then submitted these costs to the emergency management office for reimbursement. As was shown in figure 2 earlier, of the 10 selected states, seven maintained contingency accounts for disasters. 
For example, Florida's Department of Environmental Protection established a disaster contingency account funded through user fees on Florida's state parks. In addition, the contingency fund for California's Department of Forestry and Fire Protection typically received an appropriation based on the average emergency cost from the previous five years. Supplemental appropriations. Eight of the 10 states in our review made use of supplemental appropriations when the funds appropriated to statewide accounts or agency budgets at the beginning of the fiscal year were insufficient. When states' general funds served as the source of supplemental appropriations, these funds were unavailable to spend on other budget areas. Statewide multipurpose reserve accounts, such as budget stabilization funds (also referred to as rainy day funds), could also be tapped in the event that funds were not available through other means. A few states expanded the conditions for which budget stabilization funds could be tapped to include similar unanticipated expenses not directly related to revenue shortfalls or budget deficits. For example, although initially intended to offset revenue shortfalls, West Virginia's budget stabilization fund was subsequently modified to allow the state legislature to make appropriations from the fund for emergency revenue needs caused by natural disasters, among other things. However, budget officials from several states in our review told us that it was uncommon to access budget stabilization funds to cover disaster expenses because their state could generally provide disaster funding from a combination of general fund revenues and spending reductions in other areas. For example, despite having expanded its acceptable uses to include natural disasters, West Virginia only accessed its budget stabilization fund once since 2005 to cover disaster-related expenses. Similarly, in Florida, the state's budget stabilization fund was last used for disaster costs during the 2004 and 2005 hurricane seasons. Funding transfers. In addition, nine states in our review had mechanisms to allow designated officials (e.g., the governor, budget director, or a special committee) to transfer funds within or between agencies or from statewide reserve accounts after the start of the fiscal year. For example, in Indiana, if funds within an agency's budget are insufficient to cover the unexpected costs of a disaster, a special finance board can authorize a transfer of funds from one agency to another. In addition, the state's budget director can transfer appropriations within an agency's accounts if needed for disaster assistance. <2.1.1. Authorities for Releasing Disaster Funds Varied Across States> The authority to release funds from disaster accounts varied by state and resided with the governor, the legislature, or special committees. As we have previously reported, a state where the legislature is in session for only part of the year might give the governor more control over the release of disaster funds. For example, in the event that the Alaska legislature is out of session, the presiding officers of the legislature can agree in writing to suspend the $1 million limit placed on the Governor's disaster spending authority. Also, if a state legislature already appropriated a portion of general fund or other revenues to a disaster account, the governor or budget director can exert greater control over access to the reserves.
For example, in California, a gubernatorial emergency declaration grants the state's Director of Finance the authority to tap into any appropriation in any department for immediate disaster response needs. <2.2. All Selected States Budgeted for Ongoing Costs Associated with Past Disasters> All states in our review budgeted for ongoing costs associated with past disasters. Typically, these ongoing costs included recovery-related activities, such as rebuilding roads, repairing bridges, and restoring public buildings and infrastructure. Costs associated with past disasters included the state's share of federal disaster assistance and disaster costs the state would cover in the absence of a federal declaration. In budgeting for the costs of past disasters, all 10 states determined their budgets based on cost estimates for the upcoming fiscal year, even though each disaster declaration could span several budget cycles. As was shown in figure 2, all selected states used a range of budget mechanisms to cover the cost of past disasters. These mechanisms were similar to those the states used to budget for unforeseen disaster costs. States used some of the mechanisms to appropriate funds at the start of the fiscal year and used other mechanisms to provide disaster funds during the course of the fiscal year. For example, in Missouri, multiple agency accounts funded the expenses from past disasters incurred by state agencies, while a separate statewide account covered the nonfederal match of disaster programs. The funding levels in states' accounts varied from year to year depending on annual estimates of expected disaster costs, primarily determined through the project worksheet process, the means by which FEMA and the state determine estimated costs. For example, Florida's emergency management agency forecasts the ongoing costs associated with past disasters for three future fiscal years and reports these cost estimates on a quarterly basis. In New York, the Governor's budget office, along with its emergency management agency, periodically estimated the amount of disaster program costs the federal government would cover in addition to costs the state would have to bear. <2.2.1. Most Selected States Had Established Cost Share Arrangements with Localities> Most states in our review had established cost share arrangements with localities and passed along a portion of the required nonfederal cost share to them. Two states, Alaska and West Virginia, covered the 25 percent cost share for federally declared disasters, while only one state, Indiana, passed the 25 percent nonfederal cost share onto its affected localities. In Vermont, municipalities that adopted higher flood hazard mitigation standards could qualify for a higher percentage of state funding for post-disaster repair projects, ranging from a minimum of 7.5 percent to a maximum of 17.5 percent. In Florida, the state typically evenly splits the nonfederal share with local governments but would cover a greater percentage of the nonfederal share for economically distressed localities. Appendix II summarizes the cost share arrangements in each state in our review. <2.3. Selected States Did Not Maintain Reserves for Future Disasters> None of the 10 states in our review maintained reserves dedicated solely for future disasters outside of the current fiscal year. As discussed earlier in this report, although funds in some states' statewide disaster accounts could be carried forward into the future, funding for these accounts was typically intended to fund a single fiscal year.
For example, unexpended balances from Indiana's State Disaster Relief Fund, which receives an annual appropriation of $500,000, could be carried forward from one year to the next. Similarly, North Dakota's Disaster Relief Fund, which receives a biennial appropriation, can carry forward unexpended fund balances into the next biennial cycle. According to a North Dakota state official, this procedure was established in statute to provide a ready source of disaster funding. Otherwise, according to this official, the state legislature would need to identify large amounts of funding from the general fund account at the start of each budget cycle. Some state officials reported that they could cover disaster costs without dedicated disaster reserves because they generally relied on the federal government to fund most of the costs associated with disaster response and recovery. During the past decade, the federal government waived or reduced state and local matching requirements during extraordinary disasters such as Hurricanes Katrina and Sandy. For Hurricane Sandy, however, 100 percent of the federal funding was only available for certain types of emergency work and for a limited period of time. As we have reported in our prior work on state emergency budgeting, natural disasters and similar emergency situations did not have a significant effect on state finances because states relied on the federal government to provide most of the funding for recovery. <2.4. Some Selected States Funded Their Own Disaster Programs> Some states in our review funded their own disaster assistance programs. Alaska's individual assistance program, for example, provides reimbursement for personal property loss and assistance with housing repairs at 50 percent of the annual approved amounts for the federal IA program. Some states also provided disaster assistance to localities on several occasions after being denied federal assistance. <3. Although Some States Increased Oversight and Availability of Funds, State Approaches to Budgeting for Disasters Remained Largely Unchanged during the 10-Year Period under Review> <3.1. Some Selected States Took Steps to Increase the Availability and Oversight of Disaster Funds, but Did Not Make Major Changes to Budgeting Approaches> Overall, states did not make major changes to their approaches to budgeting for disaster costs between fiscal years 2004 and 2013. Some states in our review did take steps to increase the availability of disaster funds, while others changed procedures related to legislative oversight. Although the national economic recession occurred during this time (officially lasting from December 2007 to June 2009) and resulted in state revenue declines of 10.3 percent, states in our review reported that they were able to ensure the availability of funding to cover the cost of disasters. Officials in Alaska and North Dakota, for example, reported that state revenues generated from oil and gas taxes buffered their states from much of the fiscal distress that other states had experienced during the 2007 to 2009 recession. Three states in our review, Alaska, Indiana, and North Dakota, changed their budgeting approaches to further ensure the availability of disaster funding prior to a disaster rather than after a disaster. While these moves did not provide funding for future disasters beyond the current fiscal year, they did improve the availability of funds for disaster response within the current fiscal year. For example, Alaska established a statewide disaster fund in the late 1960s to ensure the availability of disaster funding.
Prior to 2010, Alaska primarily funded the disaster fund through supplemental appropriations after a disaster had occurred and after the state's administration and emergency management agency had requested funding. However, according to a state official, this approach did not provide funding quickly enough for state agencies and localities to respond to a disaster. Rather, the approach involved waiting for the state legislature to appropriate funds to the state's disaster account, which could have taken weeks, particularly if the legislature was not in session. At that time, Alaska experienced multiple concurrent disasters. In addition, the nature of Alaska's climate and the remote location of many of its communities resulted in a need for the state to take swift action to respond to disasters so that residents were able to repair or rebuild their damaged homes before the onset of winter. Consequently, the state began to forward fund the disaster fund to have more money available immediately after a disaster. According to this state official, the change in approach relied on cost estimates of multiple disasters to develop an annual budget figure. In 2006, Indiana began appropriating funds to its State Disaster Relief Fund from the revenues it generated from firework sales to ensure the availability of a dedicated source of disaster funding. Although the state established the disaster relief fund in 1999, it did not appropriate funds to the account due to fiscal constraints. In 2006, the state began dedicating funds from the sale of fireworks. Then in 2007, the state established in statute that the fund would receive an annual appropriation of $500,000 from revenues generated from the firework sales. Prior to 2006, the state relied on general revenue funds to pay for disasters on an as-needed basis. North Dakota established its Disaster Relief Fund during its biennial legislative session (2009 to 2011) to ensure the availability of funding in the event of a disaster. The state appropriated money to the fund at the beginning of the state's biennial budget cycle with revenues generated from the state's tax on oil and gas production. In order to respond to disasters prior to the establishment of this fund, state agencies with emergency response missions, such as the Department of Transportation, had to request funding directly from the state legislature during the time it was in session. However, if the legislature was out of session, state agencies were required to obtain a loan from the Bank of North Dakota to cover their immediate disaster costs. Then, to repay the loan, the agencies needed to request a supplemental appropriation when the state legislature reconvened. A North Dakota state official told us that this process was inefficient, so the state legislature established the Disaster Relief Fund to provide an easier means for accessing disaster relief funds. Legislatures in three of our review states, North Dakota, Missouri, and West Virginia, took steps to increase their oversight of disaster spending. After North Dakota established a dedicated revenue source to ensure the availability of disaster funding, the state legislature took subsequent steps to increase the oversight of disaster relief funds. In particular, the legislature required state agencies to submit a request to the state's Emergency Commission in order to receive disaster funding.
Established in 2011, the Emergency Commission, comprised of the Governor, Secretary of State, and the House and Senate majority leaders, has the authority to approve the appropriation of supplemental funding when there is an imminent threat to the safety of individuals due to a natural disaster or war crisis or an imminent financial loss to the state. Prior to 2011, the state s emergency management agency had been authorized to access disaster relief funds directly without approval from the Commission. According to a state emergency management official, the legislature took this action in response to a number of instances in which federal PA funds initially awarded to the state were deobligated, leaving the state with unanticipated disaster response costs. In one instance, for example, federal PA funds were deobligated because the state did not properly document the pre-existing conditions of a parking lot damaged by the National Guard in responding to a disaster. In this particular case, the state had to appropriate funds from their disaster relief fund to cover the cost of repair, rather than rely on federal PA funding to cover these costs. To provide more oversight for disaster expenditures, the Missouri legislature changed its requirements for accessing funds from the State Emergency Management Agency (SEMA) budget. Specifically, the legislature required that the administration seek legislative approval for all supplemental appropriations to the SEMA budget. According to Missouri budget officials, SEMA used to submit a budget request that represented a rough estimate of anticipated costs for the upcoming fiscal year. If actual costs exceeded SEMA s appropriation, the administration had the authority to appropriate additional money from general revenues for specific line items on an as-needed basis without additional legislative approval. West Virginia s legislature increased oversight of disaster funding by restricting the use of funds appropriated to the Governor s Contingent Fund. In prior years, the legislature appropriated funds to the Governor s Contingent Fund as a civil contingent fund a very broad term, according to a state budget official. Over the last few years, the legislature changed the appropriations bill language to limit spending flexibility for money appropriated to the fund. For example, appropriations bill language specified that funds were being appropriated for 2012 Natural Disasters or May 2009 Flood Recovery. <4. Concluding Observations> States rely on the assurance of federal assistance when budgeting for disasters. Based on current regulations, policies, and practices, the federal government is likely to continue to provide federal funding for large-scale disasters. In light of this federal approach to funding disaster response and recovery, the states in our review designed their budgeting approaches for disasters to cover the required state match for federal disaster assistance as well as the costs they incur in the absence of a federal declaration. For unforeseen disaster costs and for ongoing costs associated with past disasters, these states relied on a number of budget mechanisms including statewide disaster accounts, state agency budgets and supplemental appropriations, to ensure the availability of funding for disasters. However, none of the states in our review maintained reserves dedicated solely for future disasters outside of the current fiscal year. 
More frequent and costly disasters could prompt reconsideration of approaches to dividing state and federal responsibilities for providing disaster assistance. Given the fiscal challenges facing all levels of government, policymakers could face increased pressure to consider whether the current state and federal approach for providing disaster assistance balances responsibilities appropriately. Absent federal policy changes, the experience of the 10 states we reviewed suggests that states will likely continue to rely on federal disaster assistance for most of the costs associated with the response to large-scale disasters. <5. Agency Comments and Our Evaluation> We provided a draft of this report to the Secretary of the Department of Homeland Security for review and comment. The Department of Homeland Security generally agreed with our findings and provided technical comments, which we incorporated as appropriate. Additionally, we provided excerpts of the draft report to budget officers and emergency management officials in the 10 states we included in this review. We incorporated their technical comments as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of the Department of Homeland Security and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact Michelle Sager at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to determine (1) the approaches selected states use to budget for and fund state-level disaster costs; and (2) how, if at all, state disaster budgeting approaches have changed over time, including the factors influencing those changes and any challenges states encountered in budgeting for state-level disaster costs. To address the objectives, we selected a nonprobability sample of 10 states from the 50 states and the District of Columbia. To select the states for our sample, we obtained data from the Federal Emergency Management Agency s (FEMA) Integrated Financial Management Information System on major disaster declarations by state during fiscal years 2004 through 2013. We focused on this time frame because it contained the most current data for major disaster declarations. We assessed the reliability of the FEMA data by discussing with another GAO team their recent access and use of the data in a prior year s report and their determination that the data provided reliable evidence to support findings, conclusions, and recommendations. We also discussed data quality control procedures with FEMA officials who were knowledgeable about the specific types of data recorded in the database. Based on how we intended to use the information, we determined that the data were sufficiently reliable for the purpose of selecting states for our study. We sorted the data obtained based on the total number of major disaster declarations approved by state. We calculated the median number of major declarations approved by FEMA and identified states directly above the median. 
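A minimal Python sketch of the selection arithmetic used in this appendix, the median screen described above and the per capita grouping described in the next few sentences, is shown below. The state records and the band cutoffs are hypothetical placeholders, not the figures used in the actual selection.

```python
# Illustrative sketch of the state-selection steps; data values are hypothetical.
import statistics

states = [
    # (state, major declarations approved FY2004-2013, 2013 population estimate)
    ("State A", 20, 700_000),
    ("State B", 35, 38_000_000),
    ("State C", 28, 19_600_000),
    ("State D", 12, 1_300_000),
]

PER_CAPITA_INDICATOR = 1.39  # FEMA statewide per capita impact indicator, in dollars

median_declarations = statistics.median(n for _, n, _ in states)
candidates = [s for s in states if s[1] > median_declarations]  # directly above the median

def per_capita_amount(population):
    """Statewide PA per capita amount: population estimate times the $1.39 indicator."""
    return population * PER_CAPITA_INDICATOR

def band(amount, low_cutoff=5_000_000, high_cutoff=25_000_000):
    """Group states into low, medium, and high bands (cutoffs are placeholders)."""
    if amount < low_cutoff:
        return "low"
    if amount < high_cutoff:
        return "medium"
    return "high"

for name, _, population in candidates:
    amount = per_capita_amount(population)
    print(f"{name}: ${amount:,.0f} ({band(amount)})")
```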
For those states, we also identified the number of major disaster declarations that had been denied by FEMA during the same time period, which ranged from zero denials to seven denials. We then calculated the statewide Public Assistance per capita amount of funding, based on FEMA's statewide per capita indicator of $1.39 and the U.S. Census Bureau's 2013 population estimate for each state. That is, we multiplied the 2013 population estimate for each state by the PA per capita indicator of $1.39. We then grouped the states according to low, medium, and high per capita threshold levels. To ensure geographic dispersion and a range of per capita amounts, we selected 10 states: four low per capita states (Alaska, North Dakota, Vermont, and West Virginia), two medium per capita states (Missouri and Oklahoma), and four high per capita states (California, Florida, Indiana, and New York) (see table 1 for additional information). The results of our study are not generalizable to state budgeting approaches for all states and the District of Columbia. We then developed and administered a semistructured interview to state budget officers and emergency management officials in the 10 selected states regarding the approaches they used to budget for and fund state-level disaster costs and how, if at all, approaches changed over time. To address the first objective, we analyzed information from the semistructured interviews about selected states' approaches to budgeting for disasters. We also obtained and analyzed state budget and other relevant documents to determine how states estimate, authorize, and appropriate state disaster funds, the extent to which states share costs with affected localities, and how cost share arrangements with affected localities are determined. To address the second objective, we analyzed information from the semistructured interviews about how states' budgeting approaches have changed during the past decade, factors influencing any changes, and any challenges states face in funding disaster assistance. We focused our questions on the period covering fiscal years 2004 through 2013. We also analyzed FEMA data regarding major state disasters to identify possible trends in the frequency, severity, type, and cost of state disaster events during the period from fiscal years 2004 through 2013. For both objectives, we analyzed relevant state statutes and regulations that govern the use of state disaster funds. In addition, we interviewed FEMA officials who participate in making recommendations to the President as to whether state requests for federal disaster funding should be approved or denied. We conducted this performance audit from April 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Summary of Selected Disaster Funds, Disaster Programs, and Cost Share Arrangements in 10 Selected States The 10 selected states in our review used a range of budget mechanisms to cover the costs of disasters. This appendix provides additional detail on the range of disaster-specific funds, disaster assistance programs, and cost share arrangements in the 10 states. Appendix III: GAO Contact and Staff Acknowledgments <6.
GAO Contact> <7. Staff Acknowledgments> In addition to the contact named above, Stanley Czerwinski, Brenda Rabinowitz (Assistant Director), Kathleen Drennan (Analyst-in-Charge), Mark Abraham, Liam O Laughlin, and Robert Yetvin made key contributions to this report. Aditi Archer, Amy Bowser, Jeffrey Fiore, Robert Gebhart, Carol Henn, Donna Miller, Susan Offutt, and Cynthia Saunders also contributed to this report.
Why GAO Did This Study In recent years, natural and human-made disasters have increased in the United States in terms of both numbers and severity. For presidentially declared disasters, the federal government generally pays 75 percent of disaster costs and states cover the rest. As a result of this trend, governments at all levels have incurred increased costs for disaster response and recovery. An understanding of the approaches states take to budget for disaster costs can help inform congressional consideration of the balance between federal and state roles in funding disaster assistance. GAO was asked to examine how states typically budget for costs associated with disasters and any changes to those budget approaches during the past decade. This report reviewed (1) the approaches selected states use to budget for and fund state-level disaster costs; and (2) how, if at all, state disaster budgeting approaches have changed over time. For this review, GAO selected 10 states based on criteria such as the number of major disaster declarations and denials for each state from fiscal years 2004 to 2013. GAO reviewed state statutes, budgets, and other documents explaining states' approaches to budgeting for disaster costs and interviewed state officials. Although GAO's findings are not generalizable, they are indicative of the variation in budget mechanisms among the states. GAO is not making recommendations. GAO received and incorporated, as appropriate, technical comments from the Department of Homeland Security and the 10 selected states. What GAO Found The 10 selected states in GAO's review—Alaska, California, Florida, Indiana, Missouri, New York, North Dakota, Oklahoma, Vermont, and West Virginia—had established budget mechanisms to ensure the availability of funding for the immediate costs of unforeseen disasters and the ongoing costs of past disasters. All 10 states provided disaster funds at the start of the fiscal year and then as needed during the course of the fiscal year. Each of the selected states had its own combination of budget mechanisms that generally fell into four categories: Statewide disaster accounts . These accounts provided the 10 states with the flexibility to fund disaster expenses across state entities or for local governments. States typically funded these accounts through general fund revenue. Six states also used other sources, such as revenues from oil and gas taxes and fees on homeowner's and commercial insurance. The amounts appropriated to these accounts at the start of the fiscal year were based on a range of considerations, such as estimates of disaster costs based on past events and emergency response costs for unforeseen disasters. State agency budgets . Nine of the 10 states also covered a portion of unforeseen disaster costs through the operating or contingency budgets of state agencies with missions relevant to disaster response and recovery. For example, West Virginia's Division of Homeland Security and Emergency Management used its operating budget to cover disaster response costs. Florida's Department of Environmental Protection had a disaster contingency account funded through user fees on state parks. Supplemental appropriations . When advance funding proved insufficient to cover disaster costs, eight of the 10 states provided supplemental funding to pay for the remaining costs. 
While reserve accounts such as rainy day funds could be used to provide this funding if general funds were unavailable, budget officials said their state rarely tapped these funds. Transfer authority . All 10 states in our review allowed designated officials (i.e., the governor, budget director, or a special committee) to transfer funds within or between agencies or from statewide reserve accounts after the start of the fiscal year. None of the 10 states in GAO's review maintained reserves dedicated solely for future disasters. Some state officials reported that they could cover disaster costs without dedicated disaster reserves because they generally relied on the federal government to fund most of the costs associated with disaster response and recovery. While some states have increased the oversight and availability of disaster funds, all 10 states' approaches to budgeting for disasters have remained largely unchanged during fiscal years 2004 through 2013. Specifically, three states—Alaska, Indiana, and North Dakota—changed their budgeting processes to ensure that funding for disasters was appropriated before rather than after a disaster occurred. In addition, legislatures in three states—Missouri, North Dakota and West Virginia—took steps to increase their oversight of disaster spending.
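As a rough illustration of the cost share arrangements described in this report, the sketch below splits an eligible Public Assistance project cost into the usual federal share of 75 percent and nonfederal share of 25 percent, and then divides the nonfederal share between the state and the affected locality. The project amount and the state-local split are placeholders; an even split mirrors the Florida practice noted earlier, while a split of zero or 100 percent corresponds to the Alaska/West Virginia and Indiana approaches, respectively.

```python
# Hypothetical PA project cost; shares follow the usual 75/25 arrangement described above.
def split_pa_costs(eligible_cost, federal_share=0.75, local_portion_of_nonfederal=0.5):
    """Return federal, state, and local dollar shares of an eligible PA project cost."""
    federal = eligible_cost * federal_share
    nonfederal = eligible_cost - federal
    local = nonfederal * local_portion_of_nonfederal
    state = nonfederal - local
    return federal, state, local

federal, state, local = split_pa_costs(1_000_000)
print(f"federal ${federal:,.0f}, state ${state:,.0f}, local ${local:,.0f}")
# A state such as Alaska or West Virginia that absorbs the full nonfederal share would
# correspond to local_portion_of_nonfederal=0.0; Indiana's pass-through to localities
# would correspond to 1.0.
```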
<1. Budget Process: Ideas for Improvement> Today there is widespread frustration with the budget process. It is attacked as confusing, time-consuming, burdensome, and repetitive. In addition, the results are often disappointing to both participants and observers. Although frustration is nearly universal, there is less agreement on what specific changes would be appropriate. This is not surprising. It is in the budget debate that the government determines in which areas it will be involved and how it will exercise that involvement. Disagreement about the best process to reach such important decisions and how to allocate precious resources is to be expected. We have made several proposals based on a good deal of GAO work on the budget, including the structure of the budget and the budget process. These proposals emphasize the need to improve the recognition of the long-term impact of today's budget decisions and advance steps to strengthen or better ensure accountability. <1.1. Focus on the Long Term> In previous reports and testimonies, we have said that the nation's economic future depends in large part upon today's budget and investment decisions. Therefore, it is important for the budget to provide a long-term framework and be grounded in a linkage of fiscal policy with the long-term economic outlook. This would require a focus both on overall fiscal policy and on the composition of federal activity. In previous reports, we have cautioned that the objective of enhancing long-term economic growth through overall fiscal policy is not well served by a budget process that focuses on the short-term implications of various spending decisions. It is important to pay attention to the long-term overall fiscal policy path, to the longer-term implications of individual programmatic decisions, and to the composition of federal spending. Although budget decisions are made for the very short term, planning for longer-range economic goals requires exploring the implications of budget decisions well into the future. By this, we do not mean that detailed budget projections could be made over a 30-year time horizon, but it is important to recognize that for some programs a long-term perspective is critical to understanding the fiscal and spending implications of a decision. The current 5-year time horizon may work well for some programs, but for retirement programs, pension guarantees, and mortgage-related commitments, for example, a longer time horizon is necessary. Although the surest way of increasing national savings and investment would be to reduce federal dissaving by eliminating the deficit, the composition of federal spending also matters. We have noted that federal spending can be divided into two broad categories based on the economic impact of that spending: consumption spending, which has a short-term economic impact, and investment spending, which is intended to have a positive effect on long-term private sector economic growth. We have argued that the allocation of federal spending between investment and consumption is important and deserves explicit consideration. However, the current budget process does not prompt the executive branch or the Congress to make explicit decisions about how much spending should be for long-term investment. The budget functions along which the resolution is structured represent one categorization by mission, but they are not subdivided into consumption and investment.
Appropriations subcommittees provide funding by department and agency in appropriations accounts that do not distinguish between investment and consumption spending. In short, the investment/consumption decision is not one of the organizing themes for the budget debate. We have suggested that an appropriate and practical approach to supplement the budget's focus on macroeconomic issues would be to incorporate an investment component within the discretionary caps set by BEA. Such an investment component would direct attention to the trade-offs between consumption and investment but within the overall fiscal discipline established by the caps. It would provide policymakers with a new tool for setting priorities between the long term and the short term. Within the declining unified budget deficit path, a target could be established for the appropriate level of investment spending to ensure that it is considered formally in the budget process. <1.2. Enforcement, Accountability, and Transparency> In addition to changes aimed at improving the focus on the long term, we have continued to emphasize the importance of enforceability, accountability, and transparency. We describe these three elements together because it is difficult to have accountability without an enforcement mechanism and without transparency to make the process understandable to those outside it. Accountability in this context has several dimensions: accountability for the full costs of commitments that are to be made and accountability for actions taken, which requires targeting enforcement to those actions. In addition, it may encompass the broader issue of taking responsibility for responding to unexpected events. Transparency is important not only because in a democracy the budget debate should be accessible to the citizenry but also because without it, there can be little ultimate accountability to the public. In this area, as in others I discuss today, there has been progress. For example, enforcement provisions in BEA have worked within their scope: the discretionary caps and controls on expanding entitlements have held. The design of the law has provided accountability for the costs of actions taken and for compliance with rules. However, accountability for the worse-than-expected deficits in the past has been diffuse. For credibility and for success, we need to consider bringing more responsibility for the results of unforeseen actions into the system. We have previously suggested that Congress might want to consider introducing a lookback into its system of budgetary controls. Under such a process, the current Congressional Budget Office (CBO) deficit projections would be compared to those projected at the time of a prior deficit reduction agreement and/or the most recent reconciliation legislation. For a difference exceeding a predetermined amount, the Congress would decide explicitly through a vote whether to accept the slippage or to act to bring the deficit path closer to the original goal by mandating actions to narrow this gap. Alternatively, the President could be required to recommend whether none, some, or all of the overage should be recouped. The Congress could be required to vote either on the President's proposal or an alternative one. Neither of these lookback processes determines an outcome; both seek to increase accountability for decisions about the path of federal spending.
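The lookback idea lends itself to a simple numeric illustration. The sketch below is an illustrative rendering of the concept, not a proposed statutory formula: it compares a current deficit projection with the projection made at the time of a prior agreement and flags the overage for a congressional vote only when it exceeds a predetermined amount. All figures are hypothetical.

```python
# Hypothetical deficit projections, in billions of dollars.
PROJECTED_AT_AGREEMENT = 180.0   # deficit path assumed in the prior reconciliation
CURRENT_CBO_PROJECTION = 215.0   # latest projection for the same fiscal year
TOLERANCE = 20.0                 # predetermined amount that triggers a lookback vote

def lookback(projected, current, tolerance):
    """Return the overage that would be put to a vote, or 0.0 if within tolerance."""
    slippage = current - projected
    if slippage <= tolerance:
        return 0.0  # within the predetermined amount: no lookback vote required
    return slippage

overage = lookback(PROJECTED_AT_AGREEMENT, CURRENT_CBO_PROJECTION, TOLERANCE)
if overage:
    print(f"Lookback triggered: Congress votes on a ${overage:.0f} billion overage "
          "(accept the slippage, or recoup none, some, or all of it).")
```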
Taken together, the changes we have suggested, which could be made within the current budget process, would move us toward increased focus on important decisions and increased accountability for those decisions. Also, as discussed below, additional financial reporting and management reforms underway hold tremendous potential for helping to improve greatly the quality of information available to further enhance budget decision-making. <2. Financial Reporting and Management: The Basis for Future Progress> The budget should be formulated using accurate and reliable financial data on actual spending and program performance. Audited financial statements and reports ought to be the source of these data. Ideally, we should expect such reports to address (1) the full costs of achieving program results, (2) the value of what the government owns and what it owes to others, (3) the government s ability to satisfy future commitments if current policies were continued, and (4) the government s ability to detect and correct problems in its financial systems and controls. Unfortunately, financial accounting information to date has not always been reliable enough to use in federal decision-making or to provide the requisite public accountability for the use of taxpayers money. Good information on the full costs of federal operations is frequently absent or extremely difficult to reconstruct and reliable information on federal assets and liabilities is all too often lacking. While GAO has been actively urging improvements in this area for over 20 years, complete, useful financial reporting is not yet in place. The good news is that tools are now being put in place that promise to get the federal government s financial house in order. First, beginning for fiscal year 1996, all major agencies, covering about 99 percent of the government s outlays, are required to prepare annually financial statements and have them audited. Second, an audited governmentwide financial statement is required to be produced every year starting with fiscal year 1997. Third, FASAB is recommending new federal accounting standards that will yield more useful and relevant financial statements and information. The basis for much of this progress is the CFO Act s requirements for annual financial statement audits. Audits for a select group of agencies under the Act s original pilot program highlighted problems of uncollected revenues and billions of dollars of unrecognized liabilities and potential losses from such programs as housing loans, veterans compensation and pension benefits, and hazardous waste cleanup. Such audits are bringing important discipline to agencies financial management and control systems. Thanks to the benefits achieved from these pilot audits, the Congress extended this requirement, in the 1994 Government Management Reform Act, to the government s 24 major departments and agencies. That act also mandated an annual consolidated set of governmentwide financial statements to be audited by GAO starting for fiscal year 1997. These statements will provide an overview of the government s overall costs of operations, a balance sheet showing the government s assets and liabilities, and information on its contribution to long-term economic growth and the potential future costs of current policies. These reports will provide policymakers and the public valuable information to assess the sustainability of federal commitments. 
The CFO Act also went beyond these auditing and reporting requirements to spell out an agenda of other long overdue reforms. It established a CFO structure in 24 major agencies and the Office of Management and Budget (OMB) to provide the necessary leadership and focus. It also set expectations for the deployment of modern systems to replace existing antiquated, often manual, processes; the development of better performance and cost measures; and the design of results-oriented reports on the government's financial condition and operating performance by integrating budget, accounting, and program information, incorporating performance measures into the reports, and developing reports more specifically tailored to the government's needs. <2.1. FASAB Efforts> The creation of FASAB was the culmination of many years of effort to achieve a cooperative working relationship between the three principal agencies responsible for overall federal financial management: OMB, Treasury, and GAO. Its establishment represents a major stride forward because financial management can only improve if these principal agencies involved in setting standards, reporting, and auditing work together. As you know, FASAB was established in October 1990 by the Secretary of the Treasury, the Director of OMB, and me to consider and recommend accounting principles for the federal government. The 9-member board is composed of representatives from the three principals, CBO, the Department of Defense, one civilian agency (presently Energy), and three representatives from the private sector, including the Chairman, former Comptroller General Elmer B. Staats. FASAB recommends accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, other users of federal financial information, and comments from the public. OMB, Treasury, and GAO then decide whether to adopt the recommended standards; if they do, the standards are published by GAO and OMB and become effective. FASAB will soon complete the federal government's first set of comprehensive accounting standards developed under this consensus approach. Key to the FASAB approach for developing these standards was extensive consultation with users of financial statements early in its deliberations to ensure that the standards will result in statements that are relevant both to the budget process and to agencies' accountability for resources. Users were interested in getting answers to questions on such topics as: Budgetary integrity: What legal authority was provided to finance government activities and was it used correctly? Operating performance: How much do programs cost and how were they financed? What was achieved? What are the government's assets and are they well managed? What are its liabilities and how will they be paid for? Stewardship: Has the government's overall financial capacity to satisfy current and future needs and costs improved or deteriorated? What are its future commitments and are they being provided for? How will the government's programs affect the future growth potential of the economy? Systems and control: Does the government have sufficient controls over its programs so that it can detect and correct problems? The FASAB principals have approved eight basic standards and statements, which I will refer to as FASAB standards in my testimony today, and approval of the final one for revenue accounting is expected this spring.
This will complete the body of basic accounting and cost accounting standards for all federal agencies to use in preparing financial reports and developing meaningful cost information. The basic standards and statements are: Objectives of Federal Financial Reporting A statement of general concepts on the objectives of financial reporting by the U.S. government providing the basic framework for the Board s work. Entity and Display A statement of general concepts on how to define federal financial reporting entities and what kinds of financial statements those entities should prepare. Managerial Cost Accounting Concepts and Standards A statement of general concepts combined with a statement of specific standards emphasizing the need to relate cost information with budget and financial information to provide better information for resource allocation and performance measurement. Accounting for Selected Assets and Liabilities A statement of specific standards for accounting for basic items such as cash, accounts receivable, and accounts payable. Accounting for Direct Loans and Loan Guarantees A statement of accounting standards responding to the Credit Reform Act of 1990. Accounting for Inventory and Related Property A statement of standards for accounting for inventories, stockpiled materials, seized and forfeited assets, foreclosed property, and goods held under price support programs. Accounting for Liabilities of the Federal Government A statement of standards for federal insurance and guarantee programs, pensions and post-retirement health care for federal workers, and other liabilities, including contingent liabilities. Accounting for Property, Plant and Equipment A statement of standards for accounting for the various types of property (including heritage assets), plant and equipment held by the government. Accounting for Revenue and Other Financing Sources A statement of standards for accounting for inflows of resources (whether earned, demanded, or donated) and other financing sources. A standard for stewardship reporting is also scheduled for completion this spring. While not part of the package of basic standards, it will help inform decisionmakers about the magnitude of federal resources and financial responsibilities and the federal stewardship role over them. The standards and new reports are being phased in over time. Some are effective now; all that have been issued will be effective for fiscal year 1998. OMB defines the form and content of agency financial statements in periodic bulletins to agency heads. The most recent guidance incorporates FASAB standards for selected assets and liabilities, credit programs, and inventory. In the fall, OMB will be issuing new guidance reflecting the rest of the FASAB standards. Since the enactment of the CFO Act, OMB s form and content guidance has stressed the use of narrative Overview sections preceding the basic financial statements as the best way for agencies to relate mission goals and program performance measures to financial resources. Each financial statement includes an Overview describing the agency, its mission, activities, accomplishments, and overall financial results and condition. It also should discuss what, if anything, needs to be done to improve either program or financial performance, including an identification of programs or activities that may need significant future funding. 
OMB also requires that agency financial statements include a balance sheet, a statement of operations, and a statement reconciling expenses reported on the statement of operations to related amounts presented in budget execution reports. Based on FASAB s standards, OMB is making efforts to design new financial reports that contain performance measures and budget data to provide a much needed, additional perspective on the government s actual performance and its long-term financial prospects. Financial reports based on FASAB s standards will provide valuable information to help sort out various kinds of long-term claims. The standards envision new reports on a broad range of liabilities and liability-like commitments and assets and asset-like spending. Liabilities, such as the federal debt, would be reported on a balance sheet, along with assets owned by federal agencies, like buildings. Certain other long-term commitments, such as social insurance, would not meet the criteria for recognition as liabilities on the balance sheet. FASAB is still considering what types of estimates would be most useful if stewardship reporting is applied to social insurance. To give a picture of the government s capacity to sustain current public services, stewardship reporting will also include 6-year projections of receipt and outlay data for all programs based on data submitted for the President s budget. Stewardship reports based on FASAB standards would also provide information on federal investments intended to have future benefits for the nation, thus providing actual data on the budget s investment component that GAO has recommended and which I discussed earlier. Stewardship reporting would cover federal investments and some performance information for programs intended to improve the nation s infrastructure, research and development, and human capital due to their potential contribution to the long-term productive capacity of the economy. These kinds of activities would not be reflected on the balance sheet because they are not assets owned by the federal government but rather programs and subsidies provided to state and local governments and the private sector for broader public purposes. Stewardship reporting recognizes that, although these investments lack the traditional attributes of assets, such programs warrant special analysis due to their potential impact on the nation s long-term future. Linking costs to the reported performance levels is the next challenge. FASAB s cost accounting standards, the first set of standards to account for costs of federal government programs, will require agencies to develop measures of the full costs of carrying out a mission or producing products or services. Thus, when implemented, decisionmakers would have information on the costs of all resources used and the cost of support services provided by others to support activities or programs and could compare these costs to various levels of program performance (a simplified illustration appears below). Perseverance will be required to sustain the current momentum in improving financial management and to successfully overcome decades of serious neglect in fundamental financial management operations and reporting methods. Implementing FASAB standards will not be easy. FASAB has allowed lead time for implementing the standards so that they can be incorporated into agencies systems. Nevertheless, even with this lead time, agencies may have difficulty in meeting the schedule. It is critical that the Congress and the executive branch work together to make implementation successful.
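To make the full cost concept concrete, the following is a minimal illustrative sketch, not an agency methodology: the program names, dollar figures, output counts, and the choice of direct costs as the basis for assigning support-service costs are all assumptions invented for this example.

```python
# Illustrative sketch of the FASAB full cost idea: a program's full cost is
# its direct costs plus an assigned share of support services provided by
# others, which can then be related to output levels. All figures and the
# allocation basis below are hypothetical.

direct_costs = {
    "Program A": 40_000_000,   # dollars charged directly to the program
    "Program B": 60_000_000,
}
support_service_costs = 25_000_000        # e.g., shared rent, IT, personnel services
outputs = {"Program A": 200_000, "Program B": 150_000}  # units of service delivered

total_direct = sum(direct_costs.values())

for program, direct in direct_costs.items():
    # Assign support costs in proportion to direct costs (one possible basis).
    assigned_support = support_service_costs * direct / total_direct
    full_cost = direct + assigned_support
    print(f"{program}: full cost ${full_cost:,.0f}, "
          f"cost per unit of output ${full_cost / outputs[program]:,.2f}")
```

Comparing unit costs of this kind with reported performance levels is the sort of analysis the managerial cost accounting standard is intended to support.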
As the federal government continues to improve its accountability and reporting of costs and performance, the more useful and reliable data need to be used to influence decisions. That brings me to the task of better integrating financial data and reports into the budget decision-making process. <3. Making Better Informed Budget Decisions Based on Improved Financial Data and Reports> The ultimate goal of more reliable and relevant financial data is to promote more informed decision-making. For this to happen, the financial data must be understood and used by program managers and budget decisionmakers. The changes underway to financial reporting have been undertaken with a goal of making financial data more accessible to these decisionmakers. The budget community s involvement in the FASAB standard-setting process has contributed to this. Still, the future challenge remains to further integrate financial reports with the budget to enhance the quality and richness of the data considered in budget deliberations. Improving the linkages between accounting and budgeting also calls for considering certain changes in budgeting such as realigned account structures and the selective use of accrual concepts. The chief benefit of improving this linkage will be the increased reliability of the data on which we base our management and budgetary decisions. The new financial reports will improve the reliability of the budget numbers undergirding decisions. Budgeting is a forward-looking enterprise, but it can clearly benefit from better information on actual expenditures and revenue collection. Under FASAB standards, numbers from the budget will be included in basic financial statements and thus will be audited for the first time. Having these numbers audited was one of the foremost desires of budget decisionmakers consulted in FASAB s user needs study and stems from their suspicion that the unaudited numbers may not always be correct. The new financial reports will also offer new perspectives and data on the full costs of program outputs and agency operations that are currently not reported in the cash-based budget. Information on full costs generated pursuant to the new FASAB standards would provide decisionmakers a more complete picture of actual past program costs and performance when they are considering the appropriate level of future funding. For example, the costs of providing Medicare are spread among at least three budget accounts. Financial reports would pull all the relevant costs together. <3.1. Realigning Account Structures> The different account structures that are used for budget and financial reporting are a continuing obstacle to using these reports together and may prevent decisionmakers from fully benefiting from the information in financial statements. Unlike financial reporting, which is striving to apply the full cost concept when reporting costs, the budget account structure is not based on a single unifying theme or concept. The current budget account structure evolved over time in response to specific needs. The budget contains over 1,300 accounts. They are not equal in size; nearly 80 percent of the government s resources are clustered in less than 5 percent of the accounts. Some accounts are organized by the type of spending (such as personnel compensation or equipment) while others are organized by programs. Accounts also vary in their coverage of cost, with some including both program and operating spending while others separate salaries and expenses from program subsidies. 
Or, a given account may include multiple programs and activities. When budget account structures are not aligned with the structures used in financial reporting, additional analyses or crosswalks would be needed so that the financial data could be considered in making budget decisions. If the Congress and the executive branch reexamine the budget account structure, the question of trying to achieve a better congruence between budget accounts and the accounting system structure, which is tied to performance results, should be considered. <3.2. The Selective Use of Accrual Concepts in the Budget> In addition to providing a new, full cost perspective for programs and activities, financial reporting has prompted improved ways of thinking about costs in the budget. For the most part, the budget uses the cash basis, which recognizes transactions when cash is paid or received. Financial reporting uses the accrual basis, which recognizes transactions when commitments are made, regardless of when the cash flows. Cash-based budgeting is generally the best measure to reflect the short-term economic impact of fiscal policy as well as the current borrowing needs of the federal government. And for many transactions, such as salaries, costs recorded on a cash basis do not differ appreciably from accrual. However, for a select number of programs, cash-based budgeting does not adequately reflect the future costs of the government s commitments or provide appropriate signals on emerging problems. For these programs, accrual-based reporting may improve budgetary decision-making. The accrual approach records the full cost to the government of a decision whether to be paid now or in the future. As a result, it prompts decisionmakers to recognize the cost consequences of commitments made today. Accrual budgeting is being done under the Credit Reform Act for credit programs such as the federal family education loan program and the rural electrification and telephone direct loan program. It may be appropriate to extend its use to other programs such as federal insurance programs an issue we are currently studying at the request of the Chairman, House Budget Committee. Our work to date has revealed shortcomings with cash-based budgeting for insurance programs, but also highlighted difficulties in estimating future costs for some of them due to the lack of adequate data or to sensitivity to the assumptions used to model future costs. The potential distortions arising from the cash-based approach must be weighed against the risks and uncertainties involved in estimating longer-term accrued costs for some programs. Our upcoming report on budgeting for insurance will address these issues. Small changes in the right direction are important, but to make the kind of difference we are all seeking will require pulling all this together for budget and oversight. <4. Putting It All Together and Making It Work> Thanks in large part to the legislative impetus of the CFO Act and GPRA, decisionmakers will ultimately have available unprecedented, reliable information on both the financial condition of programs and operations as well as the performance and costs of these activities. While these initiatives carry great potential, they require continued support by the agencies and the Congress. GPRA set forth the major steps federal agencies need to take towards a results-oriented management approach. 
They are to (1) develop a strategic plan, (2) establish performance measures focused on outcomes or results expressed in terms of the real difference federal programs make in people s lives and use them to monitor progress in meeting strategic goals, and (3) link performance information to resource requirements through annual performance plans. I have supported the intent of GPRA and believe that it offers great potential for enhancing decision-making and improving the management of federal programs. A growing number of federal agencies is beginning to see that a focus on outcomes can lead to dramatic improvements in effectiveness. However, our work also has shown that a fundamental shift in focus to include outcomes does not come quickly or easily. The early experiences of many GPRA pilots show that outcomes can be very difficult to define and measure. They also found that a focus on outcomes can require major changes in the services that agencies provide and processes they use to provide those services. Given that the changes envisioned by GPRA do not come quickly or easily, strong and sustained congressional attention to GPRA implementation is critical. Without it, congressional and executive branch decisionmakers may not obtain the information they need as they seek to create a government that is more effective, efficient, and streamlined. Authorization, appropriation, budget, and oversight committees all have key interests in ensuring that GPRA is successful because, once fully implemented, it should provide valuable data to help inform the decisions that each committee must make. OMB has attempted to prompt progress by giving special emphasis in its budget submission guidance to increasing the use of information on program performance in budget justifications. In preparation for the fiscal year 1997 budget cycle, OMB held performance reviews last May with agencies on performance measures and in September 1995 issued guidance on preparing and submitting strategic plans. Further progress in implementing GPRA will occur as performance measures become more widespread and agencies begin to use audited financial information in the budget process to validate and assess agency performance. GAO, OMB, and the CFO Council have also given thought as to how to best report data and information to decisionmakers. While there are a myriad of legislatively mandated reporting requirements under separate laws, such as GPRA, the Federal Managers Financial Integrity Act, the CFO Act, and the Prompt Pay Act, decisionmakers need a single report relating performance measures, costs, and the budget. This reporting approach is consistent with the CFO Council s proposal for an Accountability Report, which OMB is pursuing. On a pilot basis, OMB is having six agencies produce Accountability Reports providing a comprehensive picture of each agency s performance pursuant to its stated goals and objectives. The ultimate usefulness of the Accountability Report will hinge on its specific content and the reliability of information presented. We will work with OMB and agencies throughout the pilot program. We agree with the overall streamlined reporting concept and believe that, to be most useful, the Accountability Report must include an agency s financial statements and related audit reports. Accountability reports could then be used as the basis for annual oversight hearings, something I have long advocated. Such serious scrutiny of programs and activities is especially important as we seek to reduce the deficit. 
Oversight hearings based on complete sets of reports could be the basis for considering changes in federal roles and in program design as well as reviewing the adequacy of agencies accountability and performance. Finding the most effective reporting and analytical approaches will require a great deal of collaboration and communication. Appropriations, budget, and authorizing committees need to be full partners in supporting the implementation of these initiatives. The new financial reports based on FASAB s recommended standards will provide much-needed additional perspective on the long-term prospects for government programs and finances. They can be used with other kinds of actuarial and economic analyses already available in making budget decisions. <5. Conclusion> In conclusion, reforms are needed on three fronts: in the budget process, in accountability and reporting for costs and performance, and in using the improved reports to better inform policy and budget decisions. Improved financial management and reports are essential to improving the government s ability to provide accountability for public resources. Continuing fiscal pressures will place a premium on the proper stewardship of increasingly scarce public resources. Recent efforts to improve federal financial reporting will, if properly implemented, provide the tools needed to redress long-standing weaknesses. Better information on the current and future stakes involved in our decisions may help policymakers make decisions focused more on the long-term consequences. The public also stands to gain from these initiatives, both from improved accountability for public resources and more informed decisions. Mr. Chairman, this concludes my statement. I would be happy to respond to questions.
Why GAO Did This Study GAO discussed how the federal government could improve its financial management and budgets. What GAO Found GAO noted that: (1) over the last 6 years, the government has established a solid framework for improving its financial management through legislative mandates, an accounting standards advisory board, and budget process improvements; (2) the budget process should provide a long-term perspective and link fiscal policy to the long-term economic outlook; (3) the Administration and Congress need to make explicit decisions about investment and consumption spending and identify them within the budget; (4) budget enforcement, accountability, and transparency need to be enhanced, possibly through a look-back procedure and particularly in the areas of deficit and mandatory spending; (5) to enhance budget decisionmaking, agency and governmentwide financial statements audits should provide accurate and reliable financial data on actual spending and program performance; (6) the advisory board has approved eight government accounting standards addressing such areas as budget integrity, operating performance, and systems and control and will complete a stewardship standard by the spring of 1996; (7) the Office of Management and Budget is designing new financial reports to increase information on actual performance and long-term financial prospects; and (8) realigning account structures and selective use of accrual concepts in the budget would link accounting and budgeting and improve budget and management decisionmaking by disclosing the full cost of programs and operations.
<1. Background> The Emergency Relief Program, authorized by section 125 of title 23 of the U.S. Code, provides assistance to repair or reconstruct federal-aid highways and roads on federal lands that have sustained serious damage from natural disasters or catastrophic failures. Congress has provided funds for this purpose since at least 1938. Examples of natural disasters include floods, hurricanes, earthquakes, tornadoes, tsunamis, severe storms, and landslides. Catastrophic failures qualify if they result from an external cause that leads to the sudden and complete failure of a major element or segment of the highway system that has a disastrous impact on transportation. Examples of qualifying causes of catastrophic failures include acts of terrorism or incidents such as a barge striking a bridge pier causing the sudden collapse of the structure or a truck crash resulting in a fire that damages the roadway. For natural disasters or other events to be eligible for emergency relief funding, the President must declare the event to be an emergency or a major disaster under the Robert T. Stafford Disaster Relief and Emergency Assistance Act or the governor must declare an emergency with the concurrence of the Secretary of Transportation. Since 1972, Congress has authorized $100 million annually in contract authority for the Emergency Relief Program to be paid from the Highway Trust Fund. Accordingly, FHWA may obligate up to $100 million in any one fiscal year for the program. Any unobligated balance remains available until expended. Additionally, obligations to a single state resulting from a single natural disaster or a single catastrophic failure may not exceed $100 million. In some cases, Congress has enacted legislation lifting this cap for large-scale disasters. Moreover, as provided in FHWA s regulations, states are eligible for assistance under the Emergency Relief Program if the cost of the damage from a single event exceeds $700,000; this threshold applies to the combined cost of eligible damage sites in any state affected by the disaster. According to FHWA guidance, each prospective damage site must have at least $5,000 of repair costs to qualify for funding assistance, a threshold intended to distinguish unusually large expenses eligible for emergency relief funding from costs that should be covered by normal state maintenance funding. <2. Supplemental Appropriations Comprise Most Emergency Relief Funding Provided to States, and a Backlog of Funding Requests Remains> <2.1. From Fiscal Years 2007 through 2010, Congress Provided More than $2.3 Billion for Emergency Relief Events and to Address a Backlog of Unfunded Requests> From fiscal years 2007 through 2010, Congress provided more than $2.3 billion to the Emergency Relief Program, including more than $1.9 billion in three supplemental appropriations from general revenues and about $400 million in contract authority paid from the Highway Trust Fund (see fig. 3). The supplemental appropriations represented 83 percent of the program s funding over that time period. This percentage has been fairly consistent over time: 86 percent of the total Emergency Relief Program funding provided from fiscal years 1990 through 2006 came from supplemental appropriations. Two of the supplemental appropriations that Congress provided to the Emergency Relief Program since fiscal year 2007 were used to address the backlog of unfunded emergency relief requests from states. In May 2007, Congress provided $871 million to help clear a backlog of $736 million in funding requests from 46 states.
In September 2008, when the backlog list reached more than $560 million, Congress provided $850 million to address this backlog and provide additional funds for future requests. In December 2007, Congress provided $195 million for the reconstruction of the Interstate 35 West Bridge in Minnesota. FHWA has allocated all of the $2.3 billion provided to the program since fiscal year 2007, as well as an additional $100 million carried over from previously provided program funding, among 42 states and three territories. Sixty-five percent of the allocations (almost $1.6 billion) went to six states: California, Louisiana, Minnesota, North Dakota, Texas, and Washington state (see fig. 4). California received almost $538 million, the most of all states, and most of this was a result of the 2005-2006 winter storms. Washington state was allocated almost $166 million in response to 10 events ranging from a single event estimated to cost $1 million to about $58 million to respond to flooding caused by severe rains in December 2007. Of the $2.4 billion that FHWA allocated to states from fiscal years 2007 through 2010, about 59 percent ($1.4 billion) was allocated for events that occurred during those years. FHWA allocated the remaining 41 percent ($988 million) for events that occurred from fiscal years 2001 through 2006. This amount includes $195 million made available through the December 2007 supplemental appropriation. In North Dakota, a major ongoing demand on the program has been flooding at Devils Lake, a closed basin whose only natural outlet is evaporation from the lake. Starting in the early 1990s, the lake level has risen dramatically, threatening adjacent roadways. Although Emergency Relief Program regulations define a natural disaster as a sudden and unusual natural occurrence, FHWA determined that the gradual and predictable basin flooding at Devils Lake is eligible for Emergency Relief Program funding. In 2005, through SAFETEA-LU, Congress authorized up to $10 million of Emergency Relief Program funds to be expended annually, up to a total of $70 million, to address an additional problem at Devils Lake and make repairs to certain roads which were impounding water and acting as dams. In the absence of other authority, this funding must come out of the $100 million annual authorization of contract authority, effectively reducing the annual emergency relief funding available to other states. As of March 2010, the Emergency Relief Program has provided more than $256 million for projects related to Devils Lake flooding. <2.2. Emergency Relief Faces Risk from Escalating Costs of Events Occurring in Past Years> In recent years, Congress has provided significant supplemental funding to the Emergency Relief Program, but as of June 2011, a $485 million backlog of funding requests from states remained. This backlog did not include funding requests for August 2011 damages from Hurricane Irene. The backlog list provides a snapshot of states funding requests at a given time and is subject to change as states experience new eligible events. According to guidance in FHWA s Emergency Relief Manual, requested amounts are based on the states anticipated need for emergency relief for the current fiscal year and may be less than the total emergency relief needs for any specific event. The June 2011 backlog list contained almost $90 million in formal funding requests for several events that occurred between 1983 and 1993 that were previously determined to be eligible by FHWA.
Specifically, California requested almost $83 million for a single, long-term project in response to a 1983 rockslide, known as Devil s Slide, and an additional $6.5 million for four other events from fiscal years 1990 through 1993. According to FHWA, these requests are for approved emergency relief events with projects that have had delays due to environmental issues or cost overruns. Once an event has been approved for emergency relief by FHWA, current program rules do not establish a time limit in which states must submit all funding requests for repairs. Although FHWA requires states to submit a list of projects within three months of approving a state s application for emergency relief, eligibility stemming from an approved event does not lapse, and a state s list of projects may be amended at any time to add new work. Consequently, FHWA faces the risk of receiving reimbursement requests from states for projects years after an event occurs, including requests for projects that have experienced significant delays and cost increases over time, due to environmental or community concerns. The June 2011 backlog list included project funding requests for two events that occurred more than 10 years ago and which demonstrate FHWA s risk of escalating long-term costs due to older events. Devil s Slide in California. Following a lengthy environmental review, the tunnel alternative was selected in 2002 and construction of a pair of 4,200-foot-long, 30-foot-wide tunnels began in 2006, 23 years after the originating emergency relief event. Construction of the tunnel is ongoing, with a planned completion in March 2013. To date, FHWA has obligated about $555 million in emergency relief funds to the Devil s Slide tunnel project out of an estimated cost of $631 million. The $631 million total project cost estimate includes the $83 million requested on the June 2011 backlog list, which is for work completed during fiscal year 2011, as well as an additional $120 million to be requested in the future to fully reimburse Caltrans to complete the project. Alaskan Way Viaduct in Seattle, Washington. The June 2011 backlog list also contained a pending request of $40.5 million from Washington state in response to a February 2001 earthquake which damaged the Alaskan Way Viaduct, a 2-mile double-deck highway running along Seattle s waterfront. In the months after the event, FHWA approved $3.6 million for emergency relief repairs to cracks in several piers supporting a section of the viaduct, which were completed by December 2004. At the time of the earthquake, the Washington State Department of Transportation (WSDOT) had begun considering options for replacing the viaduct, which was approaching the end of its design life. After continued monitoring, WSDOT found that the viaduct had experienced accelerated deterioration as a result of the earthquake and requested $2 billion in emergency relief to replace the viaduct. Congress directed FHWA and state and local agencies to determine the specific damages caused by the earthquake and the amount eligible for emergency relief. In response, FHWA found that while the replacement of the entire viaduct was not eligible for emergency relief, the project was eligible to receive $45 million to replace the section of the viaduct damaged by the earthquake. FHWA further found that if WSDOT decided to move forward with a more comprehensive replacement project for the entire facility, the estimated amount of emergency relief eligibility could be applied to that project.
WSDOT now plans to replace the entire viaduct with a bored tunnel under downtown Seattle, with an estimated cost of almost $2 billion. According to FHWA s Washington state division office, the $40.5 million listed on the June 2011 emergency relief backlog list will be obligated toward the construction of the larger replacement project for the viaduct. The lack of a time limit for states to submit emergency relief funding requests raises the risk of states filing claims for additional funding years after an event s occurrence, particularly for projects that grow significantly in cost or scope over time. States may have good reasons for submitting funding requests years after an event, particularly for larger-scale permanent repairs that may take years to complete, but such projects can grow unpredictably. The example of the relocation of S.R.1 away from Devil s Slide and the cost and scope increases that resulted from more than two decades of delays to complete lengthy environmental reviews and address community concerns is a case in point. The absence of a time limit for states to submit funding requests hinders FHWA s ability to manage future claims to the program and creates a situation where Congress may be asked to provide additional supplemental appropriations for emergency relief years after an event occurs. Furthermore, states requesting emergency relief funds for projects many years after an event raises questions as to whether the repairs involved meet the goal of the Emergency Relief Program to restore damaged facilities to predisaster conditions. In 2007 we recommended that FHWA revise its regulations to tighten program eligibility criteria, which could include limitations on the use of emergency relief funds to fully finance projects that grew in scope and cost as a result of environmental and community concerns. In July 2011, DOT s regulatory agenda included a planned rulemaking for the Emergency Relief Program that would, among other actions, consider specific time restrictions for states when filing a claim for emergency relief eligible work. However, in October 2011, FHWA withdrew this planned item from its agenda. According to an FHWA official, the planned rulemaking was withdrawn because it was premature and because FHWA is still determining what changes, if any, are needed to address GAO s 2007 recommendations. <3. FHWA s Program Revisions Have Not Fully Addressed Prior Concerns> <3.1. FHWA Now Has Procedures to Withdraw Some Unused Emergency Relief Allocations from States, But Lacks Information to Verify Whether Additional Unused Allocations Are Still Needed> Since our 2007 report, FHWA has implemented a process to withdraw unused allocations and reallocate funding to benefit other states. FHWA undertook these actions in response to our recommendation to require division offices to annually coordinate with states to identify and withdraw unused allocations that are no longer needed so funds may be used to reduce the backlog of other program requests. Prior to fiscal year 2007, FHWA s policy was to allocate the full amount of each state s emergency relief request, based on total available program funds. Since 2007, FHWA has based its allocations on a state s estimate of anticipated emergency relief obligations for the fiscal year.
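The annual identify-and-withdraw step that the next passage describes amounts to simple arithmetic on allocation records. The sketch below is hypothetical: the state names, dollar amounts, and data layout are invented, and the actual process relies on FHWA s FMIS records and state-reported needs rather than a table like this.

```python
# Hypothetical sketch of identifying unused Emergency Relief allocations:
# the unobligated portion of a state's allocation, less the need the state
# reports for the rest of the fiscal year, is withdrawn and pooled for
# reallocation to other nationwide requests. All records are invented.

allocations = [
    # (state, allocated, obligated, state-reported remaining need)
    ("State X", 50_000_000, 35_000_000, 10_000_000),
    ("State Y", 20_000_000, 20_000_000,          0),
    ("State Z", 80_000_000, 30_000_000, 12_000_000),
]

pool_for_reallocation = 0
for state, allocated, obligated, remaining_need in allocations:
    unobligated = allocated - obligated
    withdrawn = max(unobligated - remaining_need, 0)
    pool_for_reallocation += withdrawn
    print(f"{state}: unobligated ${unobligated:,}, withdrawn ${withdrawn:,}")

print(f"Available for other nationwide requests: ${pool_for_reallocation:,}")
```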
In fiscal years 2010 and 2011, FHWA division offices coordinated with states to identify and withdraw unused allocations representing approximately $367 million in emergency relief funds from a total of 25 states and 2 territories. To withdraw unused funds from states, FHWA reviews its financial database, FMIS, to identify the amount allocated to each state that has not been obligated to specific projects. FHWA then asks each state to identify remaining fiscal year need for new obligations and the amount of any allocations that will no longer be needed. FHWA then withdraws the amount determined by the state to be no longer needed and reallocates that amount to other nationwide emergency relief needs, such as unfunded requests on the backlog list. Most of the withdrawn allocations were originally allocated to states from fiscal years 2003 to 2006, as shown in figure 5. Of the $299 million that was withdrawn for events occurring from fiscal years 2003 to 2006, about $230 million was withdrawn from Florida. FHWA reallocated $295 million of the $367 million withdrawn from states for other nationwide requests. According to FHWA, the remaining $72 million that was withdrawn but not yet reallocated will be made available to states in future allocations. As of the end of May 2011, $493 million that FHWA allocated to states in response to events occurring since 1989 remains unobligated. A significant portion of this amount likely reflects the recent allocation of $320 million in April 2011. However, at least $63 million of the unobligated balance is for older allocations, provided prior to fiscal year 2007. Specifically, New York s unobligated balance includes almost $52 million provided after the September 11, 2001, terrorist attacks for roadway repairs delayed due to ongoing building construction around the World Trade Center site. FHWA s New York division reported that these repairs are not expected to be completed until 2014. In addition, California maintained an unobligated balance of more than $11 million from the October 1989 Loma Prieta earthquake. According to FHWA California division officials, FHWA sought to withdraw some of this allocation, but Caltrans and local officials indicated that this allocation was necessary to complete environmental mitigation and bike path projects that were part of reconstruction of the collapsed Bay Bridge connecting San Francisco and Oakland in California. Although the Emergency Relief Manual states that FHWA division offices are to identify and withdraw unused program funding allocations annually, we found several instances in which division offices applied unused allocations from existing events to new events in the same state without requesting a new allocation. Specifically, our file review at the FHWA Washington state and New York state division offices identified three events from fiscal years 2009 and 2010 that the division offices approved as eligible and funded with allocations that were no longer needed from previous events. This practice, which was permitted in the 1989 version of the Emergency Relief Manual, limits FHWA s ability to track unobligated balances for specific events and determine whether those funds are no longer needed and may be withdrawn. FHWA took steps to limit divisions from using this practice by removing language permitting the practice in the 2009 Emergency Relief Manual.
According to FHWA, this change was made so that funds could be more equitably distributed across the nation to address the backlog of funding requests, rather than allowing states to hold unused funds in reserve for future events. Although FHWA removed the language permitting this practice from the manual, FHWA has not provided written guidance to its divisions to prohibit them from applying unused allocations to new events in the same state, and the practice is still being used. For example, in February 2011, FHWA s headquarters allowed the Washington state division to shift unused funds from a prior event to a new event, and in doing so, the division office did not submit a request for an allocation of funds for those new events and FHWA headquarters did not provide an allocation for those events. Consequently, FHWA headquarters did not have a record for the events, nor did it know the amount of funds made available by the division for these events. Furthermore, FHWA headquarters officials were unable to determine how prevalent this practice was across division offices. As a result, FHWA headquarters lacks information on what funding was made available and remains unobligated to states for specific events. Because Emergency Relief Program funding is not subject to the annual limits that the regular federal-aid highway program is, states have an incentive to retain as much emergency relief funding as possible by not returning unused funds. The lack of information on the amount of funds that could be made available for specific events could prevent FHWA from verifying whether allocations provided to states are still needed or may be withdrawn and used to meet current needs. <3.2. In Addition to Unused Allocations, Obligated Funds Remain Unexpended> In addition to the unused allocations, substantial amounts of obligated emergency relief funding have not been expended. About $642 million in emergency relief funding obligated for states from fiscal years 2001 through 2010 remains unexpended as of May 2011 including about $341 million in emergency relief funds obligated from fiscal years 2001 through 2006. In total for the Emergency Relief Program, 8 percent of all funding obligated from fiscal years 2001 through 2006 has yet to be expended (see table 2). Almost half of the unexpended balance from fiscal years 2001 through 2006 is for projects in response to several extraordinary events that occurred during those years, including the September 11, 2001, terrorist attacks in New York and Gulf Coast Hurricanes Katrina, Rita, and Wilma in 2005. Specifically, about $45 million of the $46 million that remains unexpended for fiscal year 2001 is for repair projects to facilities around the World Trade Center site in New York City. Of the $188 million that remains unexpended for fiscal year 2005, about $118 million is for projects in Louisiana in response to Hurricane Katrina. As of the end of May 2011, FHWA obligated about $952 million to 155 emergency relief projects in Louisiana for this event and has since made reimbursements to the state for all but 1 of these projects, providing approximately 88 percent of the amount obligated. Although substantial unexpended obligated funding remains, FHWA lacks information to determine the amount that is unneeded and could be deobligated because there is no time frame for closing out completed emergency relief projects. 
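One way to see why a close-out time frame matters is to flag obligations that remain largely unexpended long after the underlying event. The sketch below is purely illustrative: the project records, dates, and the two-year review threshold are invented, and it does not reflect how FMIS actually works, which (as discussed next) tracks financial status rather than construction status.

```python
# Hypothetical sketch: flag obligated emergency relief funds that remain
# unexpended well after the event as candidates for close-out review and
# possible deobligation. Records and the review threshold are invented.

from datetime import date

projects = [
    # (project id, event date, obligated, expended)
    ("P-001", date(2005, 8, 29), 10_000_000, 8_800_000),
    ("P-002", date(2001, 9, 11),  5_000_000,   500_000),
    ("P-003", date(2010, 6, 15),  2_000_000, 2_000_000),
]

REVIEW_AFTER_YEARS = 2
as_of = date(2011, 5, 31)

for pid, event_date, obligated, expended in projects:
    unexpended = obligated - expended
    years_elapsed = (as_of - event_date).days / 365.25
    if unexpended > 0 and years_elapsed > REVIEW_AFTER_YEARS:
        print(f"{pid}: ${unexpended:,} unexpended {years_elapsed:.1f} years "
              f"after the event; review for possible deobligation")
```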
FHWA division officials in New York and Texas reported that many emergency relief projects are administered by local public agencies, including towns and counties, and these entities are often slow to process their reimbursement requests through the state department of transportation. As such, FHWA lacks information on the status of these projects and whether projects are ongoing or have been completed. For example, in Texas, 28 of 30 projects since 2007 included in our file review were listed as active in FHWA s national database, FMIS. However, according to Texas Department of Transportation (TXDOT) officials, construction on 23 of the 28 active projects was in fact completed and waiting to be closed out. FHWA division office officials reported that FMIS is not a project management system and does not provide the actual status of the construction of projects. As such, states may have completed some emergency relief projects but not processed reimbursement requests from local public agencies or completed final project financial audits. Projects remain active in FMIS until final vouchers have been processed to reimburse states. DOT s Office of Inspector General and external independent auditors have both identified inactive or unexpended obligations as a significant concern within FHWA. Without clear time frames for states to close out completed emergency relief projects, FHWA lacks important information on the status of projects and whether unexpended project funds are no longer needed and may be deobligated to be made available for other emergency relief projects. <3.3. Prior Concerns about Project Eligibility Have Yet to Be Addressed> FHWA has yet to address our longstanding concern about, and our 2007 recommendation for addressing, the use of emergency relief funds to finance projects that have grown in scope beyond the original intent of the program, which is to restore damaged facilities to predisaster conditions. In 1996, we questioned FHWA s decision to use more than $1 billion in emergency relief funds to replace the Cypress Viaduct in Oakland, California, which collapsed as a result of the Loma Prieta earthquake in October 1989. FHWA engineers initially estimated that replacing the destroyed structure along its predisaster alignment would cost $306 million. In response to public concern, Caltrans identified several alternative alignments that it studied in a 2-year environmental review. In 1991, Caltrans and FHWA decided to replace the destroyed 1.5-mile structure, which had bisected a residential area, with a new 5-mile structure running through active rail yards. This cost estimate later increased to more than $1.1 billion at the time of our 1996 report, an increase of almost $800 million from FHWA s initial estimate of $306 million to restore the facility to its predisaster condition. As such, we questioned whether the improvements and costs resulting from the significant relocation and changes in scope should have been funded through the Emergency Relief Program rather than the regular federal-aid highway program. We recommended that FHWA modify its guidance to clearly define what costs can be funded through the Emergency Relief Program, particularly when an environmental review recommends improvements or changes to the features of a facility from its predisaster condition in a manner that adds costs and risks to the project.
In response to our recommendation, FHWA amended its guidance to more clearly indicate when limits should be placed on emergency relief funding, and when full funding is appropriate, and we closed this recommendation. In our 2007 report, we identified two more recent projects that had grown in scope and cost beyond the program s intent to restore damaged facilities to predisaster conditions. First, we noted that relocating California S.R.1 at Devil s Slide could have been addressed through the state s regular federal-aid highway program, rather than through the Emergency Relief Program. If the regular federal-aid highway program had been used, the project would not have been eligible for 100 percent federal funding, and the federal government would have saved an estimated $73 million. Second, we reported that the reconstruction of the U.S. Highway 90 Biloxi Bay Bridge in Mississippi, which was destroyed in August 2005 during Hurricane Katrina, grew in scope and cost by $64 million as a result of community concerns. Specifically, in response to a concern raised by a local shipbuilder about the proposed height of the new bridge, the Mississippi Department of Transportation expanded the scope of the bridge reconstruction to increase the bridge height to allow for future ships to pass under the bridge. The original design was to provide an 85-foot clearance at a cost of $275 million, but this scope was expanded to its current design to provide a 95-foot clearance at a cost of $339 million. FHWA has clarified its definition of an eligible damage site as we recommended in 2007, through its revisions to its Emergency Relief Manual in 2009. Specifically, FHWA s 2009 revisions clarified that grouping damages to form an eligible site based solely on a political subdivision (i.e., county or city boundaries) should not be accepted. This change addressed our concern that FHWA division offices had different interpretations of what constituted a site, such that damage sites that were treated as eligible for emergency relief in one state may have not been eligible in another state. <4. Incomplete Information in Emergency Relief Project Files in Three States Raises Concerns about FHWA s Eligibility Decisions and Program Oversight> <4.1. Documentation for Many Project Files We Reviewed Was Missing, Incomplete, or Inconsistent> In our review of 83 selected emergency relief project files in three FHWA division offices, we found that many of the project files reviewed did not contain documentation called for in the Emergency Relief Manual to support FHWA decisions that projects met program eligibility requirements. Of the 83 projects in our review (totaling about $198.5 million in federal funds), 81 projects (about $192.8 million in federal funds) had at least one instance of missing or incomplete documentation. As a result of this missing information, we were unable to determine the basis of FHWA s eligibility decisions for many of the projects in our file review. The Emergency Relief Manual directs FHWA division offices to maintain files containing information on the methods used to evaluate disasters and FHWA s assessment of damages and estimates of cost. According to the Emergency Relief Program regulations, program data should be sufficient to identify the approved disaster and permit FHWA to determine the eligibility of the proposed work. In our file review, we identified several areas of concern with FHWA s eligibility determinations based on missing, incomplete, or inconsistent documentation, as illustrated in table 3 and described below (see app. III for detailed results of our file review).
Forty-seven of 83 project files (57 percent) lacked documentation for on- site damage inspections. In particular, they did not include a detailed damage inspection report (DDIR) or the DDIR was not complete. According to the Emergency Relief Manual, on-site detailed damage inspections are conducted by the applicant or a state department of transportation representative if the applicant is a local public agency, and an FHWA representative, if available, to determine the extent of damage, scope of repair work, preliminary estimate of the repair cost, and whether a project is eligible for emergency relief funding. FHWA provides its division offices with a DDIR form that states may use to document their inspections and provide critical information necessary for determining project eligibility, such as a listing of preliminary repair cost estimates for equipment, labor, and materials for both emergency and permanent repairs. Without such information on file for some projects, we could not confirm that FHWA had that information to make emergency relief project eligibility determinations. These documents may be missing due to lack of clear requirements from FHWA. FHWA requires documented on-site damage inspections but does not have a clear requirement for how states submit the inspections to FHWA officials or for how they approve inspection reports; as a result, the three division offices we visited applied the Emergency Relief Manual guidelines differently. For example, none of the 28 project files we reviewed in Texas included a DDIR because FHWA s Texas division office relies instead on a program of projects, which is a spreadsheet of all projects requesting emergency relief funds. In response to a draft version of this report, FHWA s Office of Program Administration explained that state departments of transportation may use any format to submit the data necessary for FHWA to make an eligibility determination. FHWA s Texas division officials stated that they find the program of projects useful and believed it to be an FHWA requirement; however, we found that the Emergency Relief Manual guidance was ambiguous and did not directly state that this document can be used in place of DDIRs. One section the Emergency Relief Manual indicates that the state department of transportation is to submit the program of projects to the FHWA division office, but it also states that the program of projects should relate the damage to that described in the DDIRs. Furthermore, the manual suggests in an appendix that the program of projects is actually a package of all DDIRs resulting from the detailed damage inspections. In addition, our file review found that the project descriptions in the program of projects did not always provide the detailed information regarding damages and proposed repairs outlined in the Emergency Relief Manual and found on a DDIR. For example, for one Texas project totaling close to $1.7 million in both emergency and permanent repairs, the project description was the same for both emergency and permanent repairs and did not indicate what specific repair activities were conducted for each repair type. Differentiation between emergency and permanent repairs is important because emergency repairs are eligible for a higher federal share and do not require prior FHWA authorization. 
Without documentation showing a clear distinction between the emergency and permanent repairs information that should be identified and documented on a DDIR per program guidance we could not determine the basis for FHWA s decision that this project met the eligibility requirements for both repair types. Overall, we found the program of projects was less useful than the DDIR for evaluating the full range of information necessary to determine the basis for FHWA s eligibility determinations. We found that about half of the projects in our sample (42 of 83) did not include repair cost estimates. The Emergency Relief Manual states that at a minimum the division office s project file should contain copies of the FHWA field engineer s assessments on damage and estimates of cost. Officials in each of the FHWA division offices that we visited reported that the state s department of transportation is responsible for preparing repair cost estimates, but that FHWA area engineers also conduct some on-site inspections to verify the cost estimates provided. In total, 42 projects in our sample did not include any repair cost estimates; thus, we could not confirm that FHWA officials had this information to make eligibility determinations for those projects. For example, a portion of two projects in our sample for emergency and permanent repairs was to remove sand from drainage ditches and was initially approved by the FHWA Texas division office for reimbursement of up to $1.3 million, although the project file included no repair cost estimate for any of the work associated with the project. Additionally, no information was available in the project file to explain the FHWA Texas division office s decision to later approve a nearly 40 percent increase from $1.3 million to the final approved amount of $1.85 million. In responding to a draft of this report, DOT stated that the cost of the project increased because more sand was removed from the drainage ditches than originally estimated. However, no documentation of this change was included in FHWA s project files. FHWA officials reported that the division office in Texas reviews a sample of preliminary cost estimates based on risk, among other factors, prior to making any eligibility decisions. According to the officials, FHWA s Texas division office reviewed preliminary cost estimates of at least 10 of the 30 projects included in our file review before determining eligibility. The officials also reported that this sampling approach is consistent with FHWA s stewardship agreement with TXDOT and the fact that states have assumed oversight responsibility for design and construction of many federal-aid highway projects, including emergency relief projects. FHWA also reported that TXDOT s oversight responsibilities do not extend to determining whether particular projects are eligible for federal funds. Furthermore, the Emergency Relief Manual states that Emergency Relief Program eligibility determinations reside with FHWA, and estimated repair costs should be documented to determine eligibility. As such, the practice of reviewing a sample of preliminary cost estimates does not appear to be consistent with the requirements in the Emergency Relief Manual, and as a result, we could not determine the basis of FHWA s eligibility decisions for those project cost estimates it did not review. We found other cases in which cost increases were not documented according to the internal policies established by each of the division offices we visited. 
In New York and Texas, FHWA division officials stated they require additional documentation to justify cost increases of 25 percent or more. In Washington state, FHWA division office officials stated they require additional documentation if costs increase by 10 percent or more. Yet 14 percent of the project files we reviewed (12 of 83) showed total cost increases that exceeded the limits established by the three division offices and no additional documentation was on file to support the increases. The majority of the emergency repair project files that we reviewed did not include documentation demonstrating that emergency repairs were completed within 180 days from the event to be eligible for 100 percent federal reimbursement. Fifty-eight of the 83 projects in our review included emergency repairs approved to receive 100 percent federal funding reimbursement if repairs were completed within 180 days of the event occurrence. However, 39 of the 58 (67 percent) did not have documentation on file to show the completion date of those repairs (see table 3). In total, only 14 of 58 (24 percent) emergency repair projects provided a completion date that was within 180 days of the event s occurrence. For the majority (39 of 58) of projects, we were unable to confirm whether the emergency repairs were completed within 180 days and whether these projects were eligible to receive 100 percent federal reimbursement. Emergency repairs must be completed within 180 days from the event to be eligible for 100 percent federal funding (see 23 U.S.C. 120(e) and the FHWA regulation 23 C.F.R. 668.107(a)), yet FHWA lacks a standardized process for verifying the completion of emergency repairs within 180 days on projects for which it does not exercise full oversight. By law, states assume oversight responsibility for the design and construction of many federal-aid highway projects, including the vast majority of emergency relief projects in the three divisions we visited. As such, the states rather than FHWA were responsible for conducting final inspections of emergency relief projects. States are required to conduct a final inspection for all federal-aid highway projects under state oversight, and these inspections could be useful to determine federal share eligibility of emergency repairs if they provide project completion dates. While officials in each of the three state departments of transportation told us that they conduct final inspections of emergency repairs, we found only two final inspection reports prepared by states in FHWA s records to confirm the completion of emergency repairs within the required time frame. In addition, when we reviewed final inspection reports from one of the state departments of transportation in our review, we were frequently unable to verify completion dates. Specifically, 11 of the 12 final inspections performed by officials at the New York State Department of Transportation for projects in our review did not include project completion dates. Although the Emergency Relief Manual states that FHWA division offices reserve the right to conduct a final inspection of any emergency relief project, only the FHWA Texas division reported conducting spot inspections for a sample of emergency relief projects. In commenting on a draft of this report, DOT stated that the FHWA New York state division office uses other means to verify completion of emergency repairs within 180 days.
According to DOT, the state often submits its DDIRs to FHWA after emergency repairs are completed, which allows FHWA to verify the eligibility and completion of an emergency repair when it reviews the DDIR. DOT reported that the FHWA division office does not sign the DDIR until it confirms the work is completed, and that its signature indicates verification that the work was performed within the required time frame. However, our file review found that 14 of the 18 emergency repair projects in New York that were approved for 100 percent federal funding did not have an FHWA signature on the DDIR. In addition to a lack of documentation, we found eight instances in which permanent repair projects may have incorrectly received 100 percent federal share reimbursement. According to the Emergency Relief Manual, absent specific legislative approval, permanent repair work is not to be considered emergency repair work even if it is completed within 180 days. However, we found instances in which projects were determined to be permanent repairs based on information in the project files, but were later authorized to receive 100 percent federal share. For example, in one project in our review, FHWA's Washington state division office approved permanent repairs to a state highway for $2.6 million in estimated damages caused by a landslide. Our review of FHWA financial records for this project indicates that FHWA later authorized a federal reimbursement of $5.3 million, roughly 99 percent of the total project cost of nearly $5.4 million. FHWA Washington state division officials reported that this project was considered to be a permanent repair performed as an incidental part of emergency repair work. However, the project files did not include any emergency repair work to accompany the approved permanent repairs. According to these officials, the FHWA Washington state division interpreted the 2003 version of the Emergency Relief Manual as allowing incidental permanent work to be funded at 100 percent federal share either with or as emergency repair work. However, the manual states that during the 180-day period following the disaster, permanent repair work is reimbursed at the normal pro rata share unless performed as an incidental part of emergency repair work. As such, based on the program guidance, this project should have been reimbursed at an 86.5 percent federal share. A primary purpose of the Emergency Relief Program is to restore highway facilities to predisaster conditions, not to provide improvements or added protective features to highway facilities. However, according to FHWA regulations and the Emergency Relief Manual, such improvements may be considered eligible betterments if the state provides economic justification, such as a benefit-cost analysis that weighs the cost of the betterment against the risk of eligible recurring damage and the cost of future repair through the Emergency Relief Program. In our file review we identified two areas of concern regarding betterments, including instances of missing documentation of benefit-cost analyses: Lack of documentation of required benefit-cost analyses. Six of the 15 projects (40 percent) identified as betterments in our review did not contain the required benefit-cost analyses in their files to justify the betterment. As a result, we were unable to determine the basis on which FHWA approved these six betterments. 
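The reimbursement rules at issue here reduce to a simple rate decision: emergency repairs completed within 180 days of the event qualify for a 100 percent federal share, while permanent repairs (unless performed as an incidental part of emergency repair work) are reimbursed at the normal pro rata share, 86.5 percent in the Washington state example above. The sketch below is a minimal illustration of that decision using the dollar figures reported for the landslide project; the helper function and its defaults are our own simplification, not FHWA guidance.

```python
# Minimal illustration of the reimbursement rules discussed above; the helper
# and its defaults are a simplification for this example, not FHWA guidance.

def federal_share(repair_type, days_to_complete=None, pro_rata_share=0.865):
    """Applicable federal share as a fraction of eligible project cost."""
    if (repair_type == "emergency"
            and days_to_complete is not None
            and days_to_complete <= 180):
        return 1.0          # emergency repairs completed within 180 days
    return pro_rata_share   # permanent repairs or late emergency repairs

# Washington state landslide project: permanent repairs, total cost near $5.4 million.
total_cost = 5_400_000
share = federal_share("permanent")
print(f"Pro rata federal share: {share:.1%}")                 # 86.5%
print(f"Expected reimbursement: ${share * total_cost:,.0f}")  # about $4.67 million
# FHWA actually reimbursed about $5.3 million (~99 percent of the cost),
# roughly $0.6 million more than the normal pro rata share would provide.
```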
We also found one instance in which the benefit-cost analyses used to justify an approved betterment did not meet Emergency Relief Program requirements. Specifically, FHWA s New York division office approved a betterment of almost $1.6 million to repair and improve a damaged roadway and shoulder caused by an April 2007 storm. However, we found that the report prepared to justify the betterment did not weigh the cost of the proposed betterment against the risk of future damages and repair costs to the Emergency Relief Program, as required by program regulations. Consequently, we were unable to determine the basis on which FHWA approved the $1.6 million betterment. Lack of documentation indicating whether projects include betterments. We found that it was often difficult to determine which projects included betterments, as FHWA lacks a standard process for where and how betterments should be identified in project documentation. The Emergency Relief Manual states that betterments must receive prior FHWA approval and that further development of contemplated betterments should be accomplished with FHWA involvement, necessitating that proposed betterments are specifically identified. We found eight project files with indications that the projects may have included betterments that were not identified explicitly in project documentation or by FHWA officials. For example, following the completion of emergency repairs to remove debris and protect a bridge against erosion caused by a landslide, the FHWA Washington state division office approved an additional $3.7 million in permanent repairs in response to continued erosion and movement of the hillside. The documentation in the project file indicated that this permanent work was added to stabilize the slide area in anticipation of future flooding. According to officials from the FHWA Washington state division, this slide stabilization project was a betterment, but the project file did not contain documentation to indicate that this project was in fact a betterment. FHWA provides considerable discretion to its division offices to tailor the Emergency Relief Program within states and lacks a standard mechanism to specifically identify whether a project includes a betterment. FHWA s Office of Asset Management has developed an Economic Analysis Primer for FHWA division offices to use when evaluating benefit-cost analyses for federal-aid program projects. However, neither the Emergency Relief Manual nor the Economic Analysis Primer provide sample benefit-cost analyses or specific guidance on what information should be included in the benefit-cost analysis to demonstrate that the proposed betterment will result in a savings in future recurring repair costs under the Emergency Relief Program. Because we had found betterments without documentation of the required benefit-cost analyses on file and identified possible betterments that were not explicitly identified as such, we could not confirm that federal funds were being reimbursed in accordance with the requirements of the Emergency Relief Program. Further, absent specific guidance for identifying and approving betterments to its division offices, FHWA cannot be assured that the Emergency Relief Program is being administered consistently. <5. Conclusions> The federal government plays a critical role in providing financial assistance to states in response to natural disasters and other catastrophic events. 
Given the costs of these events and the significant fiscal challenges facing both states and the federal government, it is increasingly necessary that federal financial support be delivered in an effective, transparent, and accountable manner so that limited funds are put to their best use. FHWA s stewardship of the Emergency Relief Program could be better structured to meet that necessity. First, because some emergency relief projects can be delayed for many years due to environmental or community concerns and projects can grow significantly in scope and cost, the federal government faces the risk of incurring long-term costs for such projects. FHWA has limited tools to control its exposure to the costs of older events and ensure that as projects grow in scope and cost that they do not go beyond the original intent of the program, which is to assist states to restore damaged facilities to their predisaster conditions. Once an event has been approved for emergency relief by FHWA, the Emergency Relief Program as currently structured does not limit the time during which states may request additional funds and add projects, which increase the size of FHWA s backlog list. Because Emergency Relief Program funding is not subject to the annual limits of the regular federal-aid highway program, states have an incentive to seek as much emergency relief funding as possible. Consequently, without reasonable time limits for states to submit funding requests for such older events, FHWA s ability to anticipate and manage future costs to the Emergency Relief Program is hindered, as is Congress ability to oversee the program. Furthermore, without specific action by FHWA to address the recommendation from our 2007 report that it revise its emergency relief regulations to tighten eligibility criteria, the Emergency Relief Program will continue to face the risk of funding projects with scopes that have expanded beyond the goal of emergency relief and may be more appropriately funded through the regular federal-aid highway program. Second, while FHWA has taken some important steps in response to our 2007 report to manage program funding by withdrawing unobligated balances from states, it faces challenges in tracking allocations that have been provided to states. In particular, because FHWA division offices have allowed states to transfer unobligated allocations from an existing event to new events, and because FHWA headquarters is not tracking which divisions have done so, FHWA headquarters does not have the information needed to identify and withdraw all unneeded funds. In addition, without time frames to expedite the close-out of completed emergency relief projects, FHWA lacks useful information to help determine whether obligated but unexpended program funds are no longer needed and could be deobligated. Finally, the fact that we could not determine the basis of FHWA s eligibility decisions in three states on projects costing more than $190 million raises questions about whether emergency relief funds are being put to their intended use and whether these issues could be indicative of larger problems nationwide. While federal law allows states to assume oversight over design and construction of much of the federal-aid highway program, including many emergency relief projects, FHWA is ultimately responsible for ensuring that federal funds are efficiently and effectively managed and that projects receiving scarce emergency relief funds are in fact eligible. 
This is especially important in light of the fact that emergency relief funds have been derived principally from general revenues in recent years and that the funds that states receive are above and beyond the funding limits for their regular federal-aid highway program funds. Without clear and standardized procedures for divisions to make and document eligibility decisions including documenting damage inspections and cost estimates, verifying and documenting the completion of emergency repair projects within the required time frame, and evaluating information provided to justify proposed betterments FHWA lacks assurance that only eligible projects are approved, and that its eligibility decisions are being made and documented in a clear, consistent, and transparent manner. <6. Recommendations for Executive Action> To improve the accountability of federal funds, ensure that FHWA s eligibility decisions are applied consistently, and enhance oversight of the Emergency Relief Program, we recommend that the Secretary of Transportation direct the FHWA Administrator to take the following four actions: Establish specific time frames to limit states ability to request emergency relief funds years after an event s occurrence, so that FHWA can better manage the financial risk of reimbursing states for projects that have grown in scope and cost. Instruct FHWA division offices to no longer permit states to transfer unobligated allocations from a prior emergency relief event to a new event so that allocations that are no longer needed may be identified and withdrawn by FHWA. Establish clear time frames for states to close out completed projects in order to improve FHWA s ability to assess whether unexpended program funds are no longer needed and could be deobligated. Establish standardized procedures for FHWA division offices to follow in reviewing emergency relief documentation and making eligibility decisions. Such standardized procedures should include: clear requirements that FHWA approve and retain detailed damage inspection reports for each project and include detailed repair cost estimates; a requirement that division offices verify and document the completion of emergency repairs within 180 days of an event to ensure that only emergency work completed within that time frame receives 100 percent federal funding; and consistent standards for approving betterments, including guidance on what information the benefit-cost analyses should include to demonstrate that the proposed betterment will result in a savings to the Emergency Relief Program, and a requirement that FHWA approval of funding for betterments be clearly documented. <7. Agency Comments and Our Evaluation> We provided a draft of this report to DOT for review and comment. DOT officials provided technical comments by email which we incorporated into the report, as appropriate. In response to our finding that the Emergency Relief Program lacks a time limit for states to submit emergency relief funding requests, and our recommendation to establish specific time frames to limit states ability to request emergency relief funds years after an event s occurrence, DOT noted that the program does include general time frames for states to submit an application and have work approved. 
We incorporated this information into the final report; however, since a state's list of projects may be amended at any time to add new work, we continue to believe that FHWA's ability to anticipate and manage future costs to the Emergency Relief Program is hindered absent specific time frames to limit states' requests for additional funds years after an event's occurrence. Such time frames would provide FHWA with an important tool to better manage program costs. DOT also commented that its ability to control the costs of some of the projects cited in the report that have grown in scope and cost over the years is limited in some cases by the fact that DOT received statutory direction from Congress to fund these projects. For example, Congress directed FHWA to provide 100 percent federal funding for all emergency relief projects resulting from Hurricane Katrina in 2005. We incorporated additional information to recognize this statutory direction; however, a determination by Congress that a particular event should qualify for relief under the Emergency Relief Program, or other individual actions, does not relieve FHWA of its stewardship and oversight responsibilities. Except as Congress otherwise provides, this includes its responsibility to determine whether enhancements to projects, or betterments, are consistent with its regulations and the intent of the Emergency Relief Program to restore damaged facilities to predisaster conditions. We continue to believe that, as a steward of public funds, FHWA generally has the discretion to take reasonable steps to limit the federal government's exposure to escalating costs from projects that grow in scope over time. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology To identify Emergency Relief Program funding trends since our 2007 report, we reviewed federal statutes, including supplemental appropriations to the Emergency Relief Program made since 2007, and Federal Highway Administration (FHWA) documentation on annual funding authorizations to the program. We also reviewed FHWA data on emergency relief funds allocated to states in response to emergency relief events from fiscal years 2007 through 2010, as provided by FHWA's Office of Program Administration. We interviewed FHWA officials in the Office of Program Administration to gather specific information on how data on allocations were collected, and we also reviewed FHWA financial data on total allocations to states from FHWA's fiscal management information system (FMIS). We interviewed officials from FHWA Federal Lands Highway, FHWA's North Dakota Division Office, and the North Dakota state department of transportation concerning funding and project activities for the Devils Lake, North Dakota, emergency relief projects. 
To gather additional information on the Devil s Slide project in California, we interviewed the FHWA California division office and reviewed information on the estimated project costs. To identify key changes to the Emergency Relief Program implemented in response to concerns raised in our 2007 report, we reviewed recommendations made to FHWA in our 2007 report and FHWA Emergency Relief Program regulations and guidance, including FHWA s Emergency Relief Manual, as revised in 2009. We compared information in the current version of the Emergency Relief Manual with information in the previous version to determine which elements were revised. We interviewed FHWA officials in the Office of Program Administration to determine why specific changes were made, and we interviewed officials in three FHWA division offices to determine how program changes were implemented. To corroborate information provided by FHWA regarding its process of withdrawing unused Emergency Relief Program funds from states, we reviewed FMIS data on the emergency relief funds that were allocated among all states and territories, obligated to specific projects, and the remaining unobligated balance for all active Emergency Relief Program codes as of May 31, 2011. To determine other amounts of program funding that remained unused, we reviewed data in FMIS on the amount of emergency relief funding obligated to specific projects and expended by all states and territories for events occurring from fiscal years 2001 through 2010. We provided FHWA officials with our methodology for gathering data from FMIS to ensure that our data queries were accurate. To ensure the reliability of data collected in FMIS we interviewed FHWA officials on the procedures used by FHWA and states departments of transportation to enter and verify financial information entered into FMIS. We found these data to be sufficiently reliable for our purposes. To determine the extent to which selected emergency relief projects were awarded in compliance with program eligibility requirements, we reviewed federal statutes and regulations, and FHWA guidance on emergency relief eligibility requirements. We selected a nongeneralizable sample of state department of transportation and FHWA division offices in three states New York, Texas, and Washington state. The states selected are not representative of the conditions in all states, the state departments of transportation, or FHWA division offices, but are intended to be examples of the range of practices and projects being funded by the Emergency Relief Program across the country. These states were selected based on several criteria: 1. The overall amount of emergency relief funding allocated to a state from fiscal years 2007 through 2010, to identify those states that were allocated the most funding (at least $15 million) over that period, based on allocation data provided by FHWA headquarters. 2. Frequency of funding requests to identify those states that requested funds for three or more fiscal years from 2007 through 2010. 3. The occurrence of an eligible emergency relief event since FHWA updated its Emergency Relief Manual in November 2009. For our purposes, we used emergency relief eligible events beginning October 1, 2009, as a proxy for identifying states with emergency relief events since the November 2009 manual update. A total of 10 states met all three criteria. 
We narrowed our selection down by eliminating those states that experienced outlier events, such as North Dakota s reoccurring basin flooding at Devils Lake and the catastrophic failure of the Interstate 35 West bridge in Minnesota. We judgmentally selected New York, Texas, and Washington state to reflect a geographic dispersion of states. We reviewed a sample of emergency relief project files in the FHWA division office in each of these states to determine whether the project files included required or recommended documentation cited in federal statute, regulations, and FHWA program guidance. Such documentation included the President or state governors proclamation of a disaster, detailed damage inspection reports, cost estimates for repairs, photographs of the damage, and other information. Across the three division offices, we selected a nongeneralizable sample of 88 Emergency Relief Program files out of a total universe of 618 project files for emergency relief projects approved by FHWA in those states from fiscal years 2007 through 2010. Among the 88 projects in our review, 5 projects had been withdrawn by states as FHWA had determined them ineligible for emergency relief funds, or they were reimbursed through a third party insurance settlement, bringing the total number of projects reviewed to 83. The project files we reviewed represented approximately 67 percent of all emergency relief funds obligated to those states during that time period. Those projects were selected based on the following criteria: 1. All projects with more than $1 million in obligated federal funds between fiscal years 2007 and 2010, including a mix of active and closed projects and various event or disaster types. 2. Projects with more than $1 million in obligated federal funds for events from fiscal years 2001 through 2006 on the list of formal emergency relief funding requests as of March 7, 2011, that were either currently active or were completed more than five years after the event occurred. 3. Projects that had other characteristics that we determined to warrant further review, such as events with $0 amounts listed in FHWA s FMIS database for total cost or which had expended relatively small amounts of funding compared with the obligated amounts in FMIS. Prior to our site visits, we requested that the division offices provide all documentation they maintain for each of the projects selected in our sample. We reviewed all the documentation provided during our site visits, and requested follow-up information as necessary. In conducting our file review, a GAO analyst independently reviewed each file and completed a data collection instrument to document the eligibility documentation that was included for each file. A second reviewer independently reviewed the file to verify whether the specific information identified by the first reviewer was present in the file. The analysts met to discuss and resolve any areas of disagreement until a consensus was reached on whether the required information was included in the file. To gather additional information on the project files we reviewed and the procedures used to manage and oversee emergency relief projects, we interviewed officials in the FHWA division offices and the departments of transportation in our three selected states. We provided the results of our file review to FHWA for their comment and incorporated their responses as necessary within our analysis. 
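The sampling criteria above can be summarized as a simple filter over project records. The sketch below is a hypothetical illustration of that logic: the field names, the example records, and the 10 percent expenditure-ratio cutoff used for the third criterion are our assumptions, while the $1 million threshold and the fiscal-year ranges are the criteria stated above.

```python
# Hypothetical illustration of the project-sampling criteria described above.
# Field names, example records, and the 10 percent expenditure-ratio cutoff are
# assumptions for this sketch; the dollar and fiscal-year thresholds are the
# stated selection criteria.

def meets_sampling_criteria(p):
    # 1. More than $1 million obligated between fiscal years 2007 and 2010.
    if p["obligated_fy2007_2010"] > 1_000_000:
        return True
    # 2. More than $1 million obligated for a fiscal year 2001-2006 event that
    #    was still active or was completed more than five years after the event.
    if (2001 <= p["event_fiscal_year"] <= 2006 and p["obligated_total"] > 1_000_000
            and (p["is_active"] or p["years_to_completion"] > 5)):
        return True
    # 3. Other characteristics warranting review, such as a $0 total cost in
    #    FMIS or expenditures that are very small relative to obligations.
    if p["fmis_total_cost"] == 0 or p["expended"] < 0.1 * p["obligated_total"]:
        return True
    return False

example_projects = [
    {"obligated_fy2007_2010": 2_500_000, "event_fiscal_year": 2008, "obligated_total": 2_500_000,
     "is_active": True, "years_to_completion": 0, "fmis_total_cost": 2_500_000, "expended": 2_000_000},
    {"obligated_fy2007_2010": 0, "event_fiscal_year": 2004, "obligated_total": 1_200_000,
     "is_active": False, "years_to_completion": 7, "fmis_total_cost": 1_200_000, "expended": 1_200_000},
]
sample = [p for p in example_projects if meets_sampling_criteria(p)]
print(len(sample), "of", len(example_projects), "example projects selected")  # 2 of 2
```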
Lastly, we contacted state and local audit organizations through the National Association of State Auditors, Comptrollers, and Treasurers for the three states we reviewed, as well as North Dakota, to obtain reports or analyses that were conducted on FHWA's Emergency Relief Program. None of the states in our review had conducted substantive work on the Emergency Relief Program. We conducted this performance audit from November 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Summary of Emergency Relief Funding for Projects at Devils Lake, North Dakota Devils Lake in North Dakota lies in a large natural basin and lacks a natural outlet for rising water to flow out of the lake. Starting in the early 1990s, the lake level has risen dramatically (nearly 30 feet since 1992), which has threatened the roadways near the lake, which were built in the 1930s and 1940s when lake water levels were lower. In April 2000, FHWA issued a memorandum that authorized raising the roads at Devils Lake in response to a predicted rise in the water level of the lake that was within 3 feet of causing inundation, as forecasted by the National Weather Service or U.S. Geological Survey. This allowance to repair roadways prior to damages incurred by an event is a unique provision for the FHWA Emergency Relief Program, which otherwise funds only post-disaster repair or restoration. The basin flooding events at Devils Lake also precipitated a related problem, as some communities around the lake plugged culverts under roadways to impound rising water and protect property from flooding, which increased the roadways' risk of failure. These roads were subsequently referred to as roads-acting-as-dams, which required additional improvements to ensure their structural integrity to serve as dams. Devils Lake projects involve multiple stakeholders, depending on the location and type of roadway. FHWA's North Dakota division office is responsible for overseeing the Emergency Relief Program projects administered by the North Dakota department of transportation. FHWA's Office of Federal Lands Highway is responsible for the oversight of the Emergency Relief on Federally Owned Roads program, which covers projects on the Spirit Lake Tribe Indian Reservation. The Central Division of Federal Lands Highway leads the overall coordination among the federal, state, and local agencies. FHWA reported that the two FHWA offices are working together to address the roads-acting-as-dams projects, which affect state highways and roads on the Spirit Lake Tribe Indian Reservation. The North Dakota department of transportation and the Spirit Lake Tribe are responsible for administering the construction projects on their respective roads. To ensure the integrity of the roads at Devils Lake, Congress included funding provisions in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) to raise the roadways and make improvements to roads-acting-as-dams. 
Through SAFETEA-LU, Congress authorized up to $10 million of Emergency Relief Program funds to be expended annually, up to a total of $70 million, for work in the Devils Lake region of North Dakota to address the roads-acting-as-dams situation. These funds are known as section 1937 funds for the provision in SAFETEA-LU which authorized them. In the absence of other authority, this $10 million must come out of the $100 million annual authorization of contract authority that funds the Emergency Relief Program, effectively reducing the annual emergency relief funding available to other states to $90 million. SAFETEA-LU also included language that exempted the work in the Devils Lake area from the need for further emergency declarations to qualify for emergency relief funding. According to a June 24, 2011, FHWA policy memo, the final allocation of section 1937 funds was made on March 16, 2011, and the $70 million limit has been reached. Although rising water levels at Devils Lake are expected to continue into the future, no further federal-aid highway funds are eligible to raise roads-acting-as-dams or to construct flood control and prevention facilities to protect adjacent roads and lands. Appendix III: Results of GAO s File Review of Emergency Relief Project Documentation Available in Three FHWA Division Offices Figure 6 represents the results of our review of 88 selected project files from FHWA s division offices in New York, Texas, and Washington state. Our data collection instrument was used to collect the values for each field during our file review, and that information was summarized and analyzed by at least two GAO analysts (see app. I for a complete discussion of our file review methodology). Appendix IV: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the individual named above, other key contributors to this report were Steve Cohen, Assistant Director; Hiwotte Amare; Matt Barranca; Melinda Cordero; Lorraine Ettaro; Colin Fallon; Bert Japikse; Catherine Kim; Hannah Laufe; Kelly Liptan; Scott McNulty; Josh Ormond; and Tina Won Sherman.
Why GAO Did This Study The Federal Highway Administration (FHWA), within the U.S. Department of Transportation (DOT), administers the Emergency Relief Program to provide funds to states to repair roads damaged by natural disasters and catastrophic failures. In 2007, GAO reported that in recent years states' annual demand for emergency relief funds often exceeded the program's $100 million annual authorization from the Highway Trust Fund and required supplemental appropriations from general revenues to address a backlog of funding requests from states. GAO recommended that FHWA tighten eligibility standards and coordinate with states to withdraw unneeded emergency relief funds, among other actions. For this report, GAO reviewed (1) Emergency Relief Program funding trends since 2007, (2) key program changes made in response to GAO's 2007 report, and (3) the extent to which selected emergency relief projects were approved in compliance with program eligibility requirements. GAO reviewed projects in New York, Texas, and Washington state, states selected based on the amount and frequency of funding allocations since 2007, among other factors. What GAO Found From fiscal years 2007 through 2010, the Emergency Relief Program received about $2.3 billion, of which $1.9 billion came from three supplemental appropriations compared with about $400 million authorized from the Highway Trust Fund. FHWA allocated this funding to 42 states and 3 territories to reduce the backlog of funding requests, with $485 million in unfunded requests remaining as of June 2011. This backlog list did not include funding requests for August 2011 damages from Hurricane Irene. Because the program lacks time frames to limit states from requesting funds years after events occur, the June 2011 backlog list includes about $90 million for events that occurred prior to fiscal year 1994. Without time limits for emergency relief funding requests, FHWA's ability to anticipate and manage future program costs is hindered. In response to GAO's 2007 report, FHWA withdrew about $367 million of unobligated emergency relief funds from states and redistributed most of this funding for other emergency relief needs. However, additional funding remains unused, including (1) at least $63 million allocated to states before fiscal year 2007 that has yet to be obligated to projects and (2) $341 million obligated between fiscal years 2001 and 2006 that remains unexpended. Due to a lack of time frames for states to close-out completed projects, FHWA lacks project status information to determine whether unexpended funding is no longer needed and could be deobligated. FHWA has not addressed GAO's 2007 recommendation to revise its regulations to limit the use of emergency relief to fully fund projects that have grown in scope and cost as a result of environmental or community concerns. The Emergency Relief Program faces the continued risk of escalating costs due to projects that have grown in scope beyond the program's goal of restoring damaged facilities to predisaster conditions. GAO's review of 83 emergency relief project files in three FHWA state offices found many instances of missing or incomplete documentation--as such, GAO was unable to determine the basis by which FHWA made many eligibility determinations. 
For example, about half of the project files did not include required repair cost estimates, and 39 of 58 (67 percent) emergency repair projects approved for 100 percent federal funding did not contain documentation of completion within 180 days--a requirement for states to receive 100 percent federal funding. FHWA lacks clear requirements for how states submit and FHWA approves key project documentation, which has resulted in FHWA state offices applying eligibility guidelines differently. Establishing standardized procedures for reviewing emergency relief documentation and making eligibility decisions would provide greater assurance that projects are in fact eligible and that FHWA makes eligibility determinations consistently and transparently. What GAO Recommends GAO makes several recommendations including that FHWA establish (1) time frames to limit states' requests for emergency relief funds and to close completed projects and (2) standardized procedures for reviewing emergency relief documentation and making eligibility decisions. DOT provided technical comments on project time frames and costs which GAO incorporated as appropriate.
<1. DOE s Budget Authority for Renewable, Fossil, and Nuclear Energy R&D Has Substantially Declined in Real Terms Since 1978> DOE s budget authority for renewable, fossil, and nuclear energy R&D dropped by 92 percent from $6 billion in fiscal year 1978 to $505 million in fiscal year 1998 (in inflation-adjusted terms) before bouncing back to $1.4 billion in fiscal year 2008. As shown in figure 2, R&D budget authority in renewable, fossil, and nuclear energy peaked in the late 1970s and fell sharply in the 1980s. Since fiscal year 1998, R&D budget authority for renewable and nuclear energy R&D have grown, while fossil energy R&D funding has fluctuated in response to coal program initiatives. Nuclear energy R&D, which received no funding in fiscal year 1998, experienced the largest increase, rising to $438 million in fiscal year 2008. During this period, budget authority for renewable energy increased by 89 percent and fossil energy increased by 116 percent. A comparison of DOE s fiscal year 2009 budget request with the fiscal year 2008 appropriation shows that renewable energy R&D would decline slightly, while fossil energy R&D and nuclear energy R&D would increase by 34 percent and 44 percent, respectively (see app. I). As shown in figure 3, budget authority for the Office of Science increased by 16 percent from $3.4 billion in fiscal year 2000 to $4 billion in fiscal year 2008. The budget request for the Office of Science for fiscal year 2009 is $4.7 billion, a 19-percent increase over the fiscal year 2008 appropriation. Because the Office of Science funds basic research in materials sciences, for example, many of its R&D programs may have useful applications for energy R&D. In fiscal year 2009, the Office of Science has requested $69.1 million for research related to the solar energy R&D program, $42.9 million related to biomass R&D, and $60.4 million for the Hydrogen Fuel Initiative. The Office of Science also funds fundamental research in such areas as high energy physics, nuclear physics, and fusion energy. <2. DOE Faces Key Challenges in Developing Advanced Energy Technologies for Deployment> There are key technical, cost, and environmental challenges in developing advanced renewable, fossil, and nuclear energy technologies to address future energy challenges. <2.1. DOE s R&D Challenges for Advanced Renewable Energy Technologies> DOE s recent R&D focus in renewable energy has been in (1) biomass- derived ethanol, (2) hydrogen-powered fuel cells, (3) wind technologies, and (4) solar technologies. The primary focus of ethanol and hydrogen R&D is to displace oil in the transportation sector. The primary focus of wind and solar technologies is to generate electricity. DOE also conducts R&D on geothermal and hydropower to generate electricity, but they have reflected a small proportion of the R&D budget in prior years and are not discussed here. Biomass-derived ethanol. DOE s short-term R&D goal is to help meet the administration s 20 in 10 goal of substituting 20 percent of gasoline consumption in 10 years with alternative fuels, primarily biomass-derived ethanol. DOE s longer-term R&D goal is to develop new technologies to allow the ethanol industry to expand enough to displace 30 percent of gasoline requirements about 60 billion gallons by 2030. In 2007, industry produced over 7 billion gallons of ethanol, displacing about 3 percent of the nation s oil consumption. 
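For a rough sense of the scale of the 2030 goal just described, the sketch below compares the 2007 production level with the roughly 60 billion gallons DOE associates with displacing 30 percent of gasoline requirements. The implied growth rate is our own back-of-the-envelope arithmetic, not a DOE projection, and the two displacement percentages cited above are measured against different bases (total oil consumption in 2007 versus gasoline requirements in 2030), so only production volumes are compared directly.

```python
# Back-of-the-envelope arithmetic using the figures cited above; the implied
# growth rate is our calculation, not a DOE projection.

production_2007 = 7e9   # gallons of ethanol produced in 2007
goal_2030 = 60e9        # gallons associated with displacing ~30 percent of gasoline by 2030

scale_up = goal_2030 / production_2007
implied_annual_growth = scale_up ** (1 / (2030 - 2007)) - 1

print(f"Required scale-up in production: about {scale_up:.1f}x")            # ~8.6x
print(f"Implied average annual growth: about {implied_annual_growth:.1%}")  # ~9.8%
```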
Ethanol, however, faces high production and infrastructure costs, creating challenges in competing with gasoline nationally. Ethanol refiners in the United States rely mostly on corn as a feedstock, the use of which has contributed to price increases for some food products, and ethanol s corrosive properties create challenges in developing an infrastructure for delivering and dispensing it. DOE s R&D focuses on (1) developing a more sustainable and competitive feedstock than corn, primarily by exploring technologies to use cellulosic biomass from, for example, agricultural residues or fast-growing grasses and trees; (2) reducing the cost of producing cellulosic ethanol to $1.33 per gallon by 2012 and $1.20 per gallon by 2017; (3) converting biomass to biofuels through both biochemical and thermochemical processes to help the industry expand; (4) contributing to a strategy to develop a national biofuels infrastructure, including demonstration projects for integrated biorefineries to develop multiple biomass-related products; and (5) promoting market-oriented activities to accelerate the deployment of biomass technologies. Although DOE has made progress in reducing ethanol production costs, cellulosic ethanol in 2007 based on current corn prices still cost about 50 percent more to produce than corn ethanol. Hydrogen-powered fuel cells. The long-term R&D goal of DOE s Hydrogen Fuel Initiative is to provide hydrogen fuel cell technologies to industry by 2015 to enable industry to commercialize them by 2020. To be commercialized, hydrogen fuel cell technologies must be competitive with gasoline vehicles in terms of price, convenience, safety, and durability. Hydrogen is the preferred fuel for vehicle fuel cells because of the ease with which it can be converted to electricity and its ability to combine with oxygen to emit only water and heat as byproducts. Let me clarify, however, that hydrogen is not an energy source, but, like electricity, is an energy carrier. Furthermore, because hydrogen is lighter than air, it does not exist on earth and must be extracted from common compounds. Producing hydrogen through the extraction process requires energy from renewable, fossil, or nuclear sources, adding to the challenge of developing hydrogen technologies. Our January 2008 report concluded that DOE has made important progress in developing hydrogen fuel cells, but the program has set very ambitious targets and some of the most difficult technical challenges those that require significant scientific advances lie ahead. Specifically, R&D for vehicles includes reducing the cost of commercial-scale manufacturing of fuel cells by nearly fourfold, storing enough hydrogen on board a fuel-cell vehicle to enable a 300-mile driving range, and increasing the durability of fuel cells by more than threefold to match the 150,000 mile life-span of gasoline vehicles. DOE also conducts R&D on stationary and portable fuel cells which could be used, for example, to replace batteries on fork lifts and diesel generators used for back-up power. We recommended that DOE update its overarching R&D plan to reflect the technologies it reasonably expects to provide to industry by 2015 to accurately reflect progress made by the Hydrogen Fuel Initiative, the challenges it faces, and its anticipated R&D funding needs. 
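The manufacturing-cost and durability targets just cited are multiplicative, so they imply rough baselines even though they are not stated directly above. The short sketch below derives those implied figures; they are our arithmetic under the stated factors, not DOE data.

```python
# Implied baselines derived from the multiplicative targets cited above;
# these are our arithmetic, not figures reported by DOE.

target_lifespan_miles = 150_000   # durability target: match gasoline vehicle life-span
durability_factor = 3             # durability must rise by "more than threefold"
cost_factor = 4                   # manufacturing cost must fall by "nearly fourfold"

implied_current_durability = target_lifespan_miles / durability_factor
print(f"Implied current durability: under {implied_current_durability:,.0f} miles")
print(f"Implied cost target: roughly 1/{cost_factor} of current manufacturing cost")
```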
I would also note that developing the supporting infrastructure to deploy the technologies nationally will likely take decades, tens of billions of dollars in investments, and continued R&D well beyond the 2015 target date. DOE s fiscal year 2009 budget request would reduce funding for the Hydrogen Fuel Initiative by 17 percent from $283.5 million in fiscal year 2008 to $236 million in fiscal year 2009. The budget also proposes to increase the proportion of longer-term R&D by increasing the funding for basic research. Although the Hydrogen Program Manager told us that funding is sufficient to meet target dates for critical technologies, other target dates for supporting technologies such as hydrogen production from renewable sources would be pushed back. Wind technologies. DOE is assessing its long-term vision of generating 20 percent of the nation s electricity using wind energy by 2030. Its current R&D efforts, however, are focused on more immediate expansion of the wind industry, particularly on utility-scale wind turbines. More specifically, DOE has focused its R&D efforts on improving the cost, performance, and reliability of large scale, land-based wind turbines, including both high- and low-wind technologies; developing small and mid- size turbines for distributed energy applications, such as for residential or remote agricultural uses; and gathering information on more efficient uses of the electricity grid and on barriers to deploying wind technology and providing that information to key national, state, and local decision- makers to assist with market expansion of wind technologies. For example, one of DOE s targets is to increase the number of distributed wind turbines deployed in the United States from 2,400 in 2007 to 12,000 in 2015. Although wind energy has grown in recent years, from about 1,800 megawatts in 1996 to over 16,800 megawatts in 2007, the wind industry still faces investors concerns about high up-front capital costs, including connecting the wind farms to the power transmission grid. Solar technologies. DOE s R&D goal is for solar power to be unsubsidized and cost competitive with conventional technologies by 2015 by, for example, developing new thin-film photovoltaic technologies using less expensive semiconductor material than crystalline-silicon to reduce the manufacturing cost of solar cells. Specifically, DOE is working to reduce the costs of photovoltaic systems from about 18-23 cents per kilowatt hour in 2005 to about 5-10 cents per kilowatt hour in 2015. DOE is also conducting R&D to reduce the cost and improve the reliability of concentrating solar power technologies, which use various mirror configurations to convert the sun s energy to heat to generate electricity. In addition, DOE has expanded R&D to address low-cost thermal storage to allow solar thermal systems to be more valuable to utility grid power markets. Along these lines, both the photovoltaic and concentrated solar power activities have ramped up efforts in the areas of grid integration and reliability to facilitate the transition to larger scale, centralized solar electric power plants. Investors concerns about high up-front capital costs are among the most significant challenges in deploying photovoltaic or concentrating solar energy technologies. This requires both technologies to have lower costs for installation and operations and maintenance, better efficiency of converting solar power to electricity, and longer-term (20 to 30 years) durability. <2.2. 
DOE s R&D Challenges for Advanced Fossil Energy Technologies> Since fiscal year 2006, DOE has proposed eliminating its R&D in oil and natural gas and, in January 2008, announced a restructuring of its coal R&D program. Increased oil production. Since fiscal year 2006, DOE has proposed to terminate its oil R&D. In November 2007, we reported that DOE has focused its R&D on increasing domestic production primarily by improving exploration technologies, extending the life of current oil reservoirs, developing drilling technology to tap into deep oil deposits, and addressing environmental protection. DOE officials stated that if the oil R&D program continues, it would focus on such areas as enhanced oil recovery technologies and expanding production from independent producers. Independent producers account for about 68 percent of domestic oil production. Natural gas technologies. Since fiscal year 2006, DOE has proposed to terminate its natural gas R&D. Our November 2007 report noted that DOE s R&D focuses on improving exploration technologies, reducing the environmental impact of natural gas operations, developing drilling technology to tap into deep gas reservoirs, and developing the technology for tapping into natural gas in naturally occurring methane hydrate found in permafrost regions on land and beneath the ocean floor. Clean coal technologies. DOE s R&D goal is to reduce harmful power plant emissions to near-zero levels by 2020. For new power plant applications, DOE is developing and demonstrating advanced integrated gasification combined cycle (IGCC) technologies. In 2003, DOE announced plans to construct a near-zero emissions commercial scale R&D facility called FutureGen with an alliance of coal mining and coal-based electric generating companies. DOE had originally pledged about three-quarters of the estimated $1 billion cost of the FutureGen project (in constant fiscal year 2004 dollars). With escalation costs and rising price of materials and labor, the estimated project costs rose to nearly $1.8 billion. As a result, DOE announced in January 2008 that it is restructuring FutureGen to focus on multiple, competitively selected projects that demonstrate carbon capture and sequestration at commercially viable power plant project sites. The impact of DOE s restructuring on FutureGen at this time is not known, but an industry official from the FutureGen Alliance noted that the project cannot go forward without federal government assistance. Separate from the FutureGen project, DOE also conducts R&D on near- zero emission power plants including carbon capture and sequestration through its fuels and power systems programs and its Clean Coal Power Initiative. <2.3. DOE s R&D Challenges for Advanced Nuclear Energy Technologies> DOE has focused nuclear energy R&D in the following three areas: The Nuclear Power 2010 program focuses on reducing regulatory and technical barriers to deploying advanced Generation III nuclear power reactors, which are designed to be more efficient than currently operating reactors. Because over the past 30 years, no electric power company had applied to the Nuclear Regulatory Commission for a license to construct a new nuclear reactor, Nuclear Power 2010 shares the costs with industry of preparing early site permits and or construction and operating license applications for submission to the Nuclear Regulatory Commission. 
Nuclear Power 2010 also regulates the risk insurance authorized by the Energy Policy Act of 2005 that protects industry from certain regulatory delays during licensing and construction. The Global Nuclear Energy Partnership program an extension of the Advanced Fuel Cycle Initiative develops proliferation-resistant nuclear fuel cycles that maximizes energy output and minimizes waste. Specifically, the program is designed to reduce the threat of global nuclear proliferation by developing advanced technologies for reprocessing spent nuclear fuel in the 2030 time frame. One of the critical elements of this effort is to develop a sodium-cooled fast reactor designed to burn a wide variety of nuclear fuels to reduce the total amount, temperature, and radiotoxicity of the spent fuel that might otherwise have to be stored for thousands of years in a repository. Beginning in fiscal year 2008, the Generation IV Program is focusing solely on the Next Generation Nuclear Plant (NGNP), designed as a versatile, efficient, high-temperature reactor capable of generating electricity and producing hydrogen. DOE collaborates with 12 other international partners on R&D related to fuels, materials, and design methodologies as part of the Generation IV International Forum. <3. Concluding Observations> In the current wake of higher energy costs and the growing recognition that fossil energy consumption is contributing to global climate change, the nation is once again assessing how best to stimulate the deployment of advanced energy technologies. While still considerably below its peak in the late 1970s, DOE s budget authority for renewable, fossil, and nuclear energy R&D has rebounded to $1.4 billion during the past 10 years after hitting a low point in fiscal year 1998. However, despite DOE s energy R&D funding of $57.5 billion over the last 30 years, the nation s energy portfolio remains heavily reliant on fossil fuels. Many technical, cost and environmental challenges must be overcome in developing and demonstrating advanced technologies before they can be deployed in the U.S. market. Our December 2006 report suggested that the Congress consider further stimulating the development and deployment of a diversified energy portfolio by focusing R&D funding on advanced energy technologies. However, because it is unlikely that DOE s energy R&D funding alone will be sufficient to significantly diversify the nation s energy portfolio, coordinating energy R&D with other federal programs, policies, incentives, standards, and mandates that can impact the nation s energy portfolio will be important for targeting any desired goals to change the nation s energy portfolio. In addition, state and local governments and other nations, along with a worldwide private sector, will play a role in developing and deploying advanced energy technologies both here and throughout the global energy market. A key factor to any sustainable deployment of advanced energy technologies will be to make them cost competitive, while addressing technical and environmental challenges, so that the market can support a more diversified portfolio. Otherwise, without sustained higher energy prices for our current portfolio, or concerted, high-profile federal government leadership, U.S. consumers are unlikely to change their energy-use patterns, and the U.S. energy portfolio will not significantly change. 
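As a quick consolidation of the funding figures cited in this statement, the sketch below checks the inflation-adjusted decline between fiscal years 1978 and 1998 and computes the average annual level implied by the reported $57.5 billion, 30-year total. The 30-year average is our arithmetic; the total is cited as reported, without a stated inflation-adjustment basis.

```python
# Quick arithmetic on the funding figures cited in this statement. The decline
# uses the inflation-adjusted figures as reported; the 30-year average is our
# calculation from the reported $57.5 billion total.

peak_fy1978 = 6.0e9      # inflation-adjusted budget authority, fiscal year 1978
low_fy1998 = 505e6       # inflation-adjusted budget authority, fiscal year 1998
total_30_years = 57.5e9  # reported renewable, fossil, and nuclear energy R&D total

decline = (peak_fy1978 - low_fy1998) / peak_fy1978
average_per_year = total_30_years / 30

print(f"Decline, FY1978 to FY1998: about {decline:.0%}")                                    # ~92%
print(f"Average annual funding over 30 years: about ${average_per_year / 1e9:.1f} billion")  # ~$1.9 billion
```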
Appendix I: Comparison of DOE's Fiscal Year 2008 Appropriations with Its Fiscal Year 2009 Budget Request [Table: DOE's fiscal year 2008 appropriations and fiscal year 2009 budget request, by program.] Excludes budget authority for Vehicle Technologies, which includes the FreedomCAR and Fuel Partnership and the 21st Century Truck Partnership. The Vehicle Technologies R&D program focuses on improving the energy efficiency of vehicles by developing lightweight materials, advanced batteries, power electronics, and electric motors for hybrid and plug-in hybrid vehicles, and advanced combustion engines and fuels. The Hydrogen Fuel Initiative is funded separately through DOE's Offices of Energy Efficiency and Renewable Energy, Fossil Energy, Nuclear Energy, and Science and the Department of Transportation. In addition to Hydrogen Technology R&D, Energy Efficiency and Renewable Energy also funds Fuel Cell Technology R&D, which historically has been an energy efficiency program. The fiscal year 2008 appropriation for Fuel Cell Technology R&D is $116.6 million, and DOE's request for fiscal year 2009 is $79.3 million. The Hydrogen Fuel Initiative received a total of $283.5 million in budget authority in fiscal year 2008; the administration is requesting $236 million for the initiative in fiscal year 2009. During fiscal year 2008, Energy Efficiency and Renewable Energy transferred some of the Hydrogen Fuel Initiative activities to its Vehicle Technologies R&D program. Excludes the Mixed Oxide Fuel Fabrication Facility, which received $278.8 million in fiscal year 2008; DOE is requesting $487 million for fiscal year 2009. During fiscal year 2008, R&D on the sodium-cooled fast reactor was transferred from the Generation IV program to the Advanced Fuel Cycle Initiative/Global Nuclear Energy Partnership program. <4. Contacts and Acknowledgments> For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Richard Cheston, Robert Sanchez, Kerry Lipsitz, MaryLynn Sergent, and Anne Stevens made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study For decades, the nation has benefited from relatively inexpensive energy, in the process growing heavily reliant on conventional fossil fuels--oil, natural gas, and coal. However, in the current wake of higher energy costs and environmental concerns about fossil fuel emissions, renewed attention is turning to the development of advanced energy technologies as alternatives. In the United States, the Department of Energy (DOE) has long conducted research, development, and demonstration (R&D) on advanced renewable, fossil, and nuclear energy technologies. DOE's Office of Science has also funded basic energy-related research. This testimony addresses (1) funding trends for DOE's renewable, fossil, and nuclear energy R&D programs and its Office of Science and (2) key challenges in developing and deploying advanced energy technologies. It is based on GAO's December 2006 report entitled Department of Energy: Key Challenges Remain for Developing and Deploying Advanced Energy Technologies to Meet Future Needs (GAO-07-106). In doing that work, GAO reviewed DOE's R&D budget data and strategic plans and obtained the views of experts in DOE, industry, and academia, as well as state and foreign government officials. What GAO Found Between fiscal years 1978 and 1998, DOE's budget authority for renewable, fossil, and nuclear energy R&D fell 92 percent when adjusted for inflation (from its $6 billion peak in fiscal year 1978 to $505 million in fiscal year 1998). It has since rebounded to $1.4 billion in fiscal year 2008. Energy R&D funding in the late 1970s was robust in response to the 1973 energy crisis caused by constricted oil supplies. However, R&D funding plunged in the 1980s as oil prices returned to their historic levels. DOE's fiscal year 2009 budget, as compared with 2008, requests slightly less budget authority for renewable energy R&D, while seeking increases of 34 percent for fossil energy R&D and 44 percent for nuclear energy R&D. In addition, DOE is requesting $4.7 billion for basic research under its Office of Science. The development and deployment of advanced energy technologies present key technical, cost, and environmental challenges. DOE's energy R&D program has focused on reducing high up-front capital costs; improving the operating efficiency of advanced energy technologies to enable them to better compete with conventional energy technologies; and reducing emissions of carbon dioxide, a greenhouse gas linked to global warming, and pollutants that adversely affect public health and the environment. However, while DOE has spent $57.5 billion over the past 30 years for R&D on these technologies, the nation's energy portfolio has not dramatically changed--fossil energy today provides 85 percent of the nation's energy compared to 93 percent in 1973. Because DOE's energy R&D funding alone will not be sufficient to deploy advanced energy technologies, coordinating energy R&D with other federal energy-related programs and policies will be important. In addition, other governments and the private sector will play a key role in developing and deploying advanced energy technologies that can change the nation's energy portfolio.
<1. Introduction> The federal and Commonwealth governments have had a long-term interest in policies to stimulate economic growth in Puerto Rico. Historically, the centerpiece of these policies has been the combination of the possessions tax credit in the U.S. Internal Revenue Code (IRC) and extensive tax incentives in the Puerto Rican tax code for U.S. and foreign businesses. In the early 1990s Congress became dissatisfied with the effectiveness of the credit and introduced restrictions to better target employment-generating activities. Then in 1996 Congress repealed the credit but allowed existing possessions corporations to earn either the possessions credit or a replacement credit during a 10-year phaseout period ending in 2006. Various proposals have been placed before Congress for some form of replacement assistance to the Puerto Rican economy. Congress could better assess the merits of the various proposals if it had more complete information relating to the recent performance of the Puerto Rican economy, the current treatment that Commonwealth residents receive under both federal tax policies and federal social programs, and information relating to the burden of taxes that residents of Puerto Rico pay, relative to those paid by residents of the states and the other U.S. insular areas. To provide a basis for future decisions regarding legislation on Puerto Rican economic issues, this report explains how the U.S. federal tax treatment of individuals and businesses in Puerto Rico and of the insular government differs relative to the treatment of governments, businesses, and individuals in the states and the other U.S. insular areas; compares trends in Puerto Rico s principal economic indicators since the early 1980s with similar indicators at the national level for the United States and provides what is known about capital flows between Puerto Rico and the United States and between Puerto Rico and foreign countries; reports on changes in the activities and tax status of the corporations that have claimed the possessions tax credit since 1993; provides information on the distribution of private-sector economic activity in Puerto Rico by type of business entity; describes the total amount of tax paid by individuals and businesses in the states and the U.S. insular areas and shows percentage breakdowns by type of tax; and describes how the principal U.S. federal social programs apply to Puerto Rican residents, relative to residents of the states and the other U.S. insular areas. <1.1. Background> Puerto Rico is one of the two nonstate Commonwealths associated with the United States. The other is the Commonwealth of the Northern Mariana Islands (CNMI). The United States also has three major territories under the jurisdiction of the U.S. Department of Interior. The major territories are Guam, the U.S. Virgin Islands, and American Samoa. The three major territories plus the two nonstate Commonwealths are referred to in this report as the insular areas. These areas are often grouped together in this manner for the purpose of federal legislation. For this reason, and when necessary for the purpose of comparison to Puerto Rico, this report provides a limited discussion on the other insular areas. With the exception of American Samoa, those born in the insular areas are U.S. citizens; however, insular area residents are not afforded all of the rights of citizens residing in the states. More than four million U.S. citizens and nationals live in the insular areas. 
These areas vary in terms of how they came under the sovereignty of the United States and also in terms of their demographics, such as median age and education levels. Each of the insular areas has its own government and maintains a unique diplomatic relationship with the United States. General federal administrative responsibility for all insular areas but Puerto Rico is vested in the Department of the Interior. All departments, agencies, and officials of the executive branch treat Puerto Rico administratively as if it were a state; any matters concerning the fundamentals of the U.S.-Puerto Rican relationship are referred to the Office of the President. Residents of all the insular areas enjoy many of the rights enjoyed by U.S. citizens in the states. But some rights that, under the Constitution, are reserved for citizens residing in the states have not been extended to residents of the insular areas. For example, residents of the insular areas cannot vote in national elections, nor do their representatives have full voting rights in Congress. Residents of all of the insular areas receive federally funded aid for a variety of social programs. Although residents of an insular area do not pay federal income taxes on income earned in that insular area, federal tax policy does play an important role in the economies of the insular areas. Historically, the federal government has used tax policy as a tool to encourage investment and increase employment in the insular areas. <1.1.1. Puerto Rico's Relationship with the United States> Puerto Rico's Constitution of 1952 defines Puerto Rico as a self-governing Commonwealth of the United States. Although fiscally autonomous, Puerto Rico is similar to the states in many aspects. For example, matters of currency, interstate commerce, and defense are all within the jurisdiction of the U.S. federal government. Puerto Rican residents are required to pay local income taxes on income earned from Puerto Rican sources, but not federal income taxes. Puerto Rican residents, however, do contribute to the U.S. national Medicare and Social Security systems. Generally, federal labor, safety, and minimum wage laws and standards also apply in Puerto Rico to the same extent they apply in the states. The federal government plays a pervasive role in Puerto Rico that stems not only from the applicability of the United States Constitution, laws, and regulations, but also from the transfer to the island of more than $13 billion in federal funds every year to fund social programs to aid Puerto Rican residents, including earned benefits such as Social Security and unemployment benefits. Chapters 2 and 7 of this report discuss in detail how the U.S. federal tax code applies to residents of Puerto Rico and how the principal U.S. federal social programs are applied in Puerto Rico, respectively. <1.1.2. Characteristics of Puerto Rico> Puerto Rico occupies a central position in the West Indies. It comprises six main islands with a land area of 3,421 square miles and a population of almost four million people. Puerto Rico is thought to have one of the most dynamic economies in the Caribbean region, an economy in which manufacturing, driven by the pharmaceutical industry, has surpassed agriculture as the primary sector in terms of domestic income. Over 40 percent of Puerto Rico's domestic income since the mid-1980s has been derived from manufacturing.
Pharmaceuticals accounted for almost 40 percent of total value added in manufacturing in 1987; that share rose to over 70 percent by 2002. Table 2 describes some of the demographic characteristics of Puerto Rico and compares them to national averages in 2000. Income that U.S. corporations earn in the U.S. possessions has long been subject to special tax provisions. The Tax Reform Act of 1976 modified the form of the preferential tax treatment by establishing the possessions tax credit under Section 936 of the Internal Revenue Code. The stated purpose of this tax credit was to assist the U.S. possessions in obtaining employment-producing investments by U.S. corporations. Prior to 1994, the possessions tax credit was equal to the full amount of the U.S. income tax liability on income from a possession. The credit effectively exempted two kinds of income from U.S. taxation: (1) income from the active conduct of a trade or business in a possession, or from the sale or exchange of substantially all of the assets used by the corporation in the active conduct of such trade or business, and (2) certain income earned from financial investments in U.S. possessions or certain foreign countries, generally referred to as qualified possession source investment income (QPSII). In order for the income from an investment to qualify as QPSII, the funds for the investment must have been generated from an active business in a possession, and they must be reinvested in the same possession. Dividends repatriated from a U.S. subsidiary to a mainland parent have qualified for a dividend-received deduction since 1976, thus allowing tax-free repatriation of possession income. The possessions tax credit was criticized on the grounds that the associated revenue cost was high compared to the employment it generated, that a large share of the benefits of the credit were not reaped by Puerto Rican residents, and that it distorted debate over Puerto Rico's political status. The Omnibus Budget Reconciliation Act of 1993 placed caps on the amounts of possessions credits that corporations could earn for tax years beginning in 1994 or later. The Small Business Job Protection Act of 1996 repealed the possessions tax credit for taxable years beginning after 1995. However, the act provided transition rules under which a corporation that was an existing credit claimant was eligible to claim credits with respect to possessions business income for a period lasting through taxable years beginning before 2006. Additional background on Section 936 of the U.S. Tax Code and the possessions credit is provided in chapters 2 and 4. <1.1.3. Additional Studies Relating to the Economy of Puerto Rico> Several of our previous studies, as well as work done by the Internal Revenue Service (IRS) and the U.S. Census Bureau (Census), address aspects of the Puerto Rican economy discussed in this report, including the business activity of possessions corporations and employment, payroll, value added, and capital expenditures by economic sector. Our previous work also addresses broader trends in the Puerto Rican economy, as does work underway by the Brookings Institution. A related study is also expected shortly by the Joint Committee on Taxation. Its work will evaluate legislative options concerning Puerto Rico. Table 3 highlights the scope of several recent reports on Puerto Rico, as well as the two studies that are in progress. <1.2. Scope and Methodology> The Chairman and Ranking Minority Member of the U.S.
Senate Committee on Finance asked us to study fiscal relations between the federal government and Puerto Rico and trends in the Commonwealth s economy with a particular focus on the activities of possessions corporations operating there. <1.2.1. Federal Tax Treatment> To determine the U.S. federal tax treatment of individuals and businesses in Puerto Rico, relative to the states and the other insular areas, we examined the IRC, Department of the Treasury regulations, relevant Treasury rulings and notices, and legislation. <1.2.2. Economic Indicators and Capital Flows> To compare trends in principal economic indicators for the United States and Puerto Rico, we obtained data from both U.S. and Puerto Rican sources. The trends we present are commonly used measures of overall economic activity and important components of economic activity, such as saving, investment, labor force participation, and unemployment. We reported on many of these indicators in our previous report on economic trends in Puerto Rico. The data shown are largely drawn from the National Income and Product Account series produced annually by economic statistics agencies in the United States and Puerto Rico. Most of the data we used for the U.S. economic series are produced by the Bureau of Economic Analysis and the Bureau of Labor Statistics and are publicly available from the Internet. When we compared U.S. data to Puerto Rican data that are based on the Puerto Rican July 1 June 30 fiscal year, we computed annual U.S. figures using monthly or quarterly data to match the Puerto Rican fiscal year. Most of the annual data we used for Puerto Rican economic trends are produced by the Planning Board of Puerto Rico and are also publicly available. In some instances, the methodologies used by the Planning Board to produce certain data series are outdated relative to the methodologies now used by the United States. For example, the methodology used in calculating certain price indices in Puerto Rico is outdated and the methods used to obtain unemployment data have been somewhat less rigorous than in the United States. In these cases, we reviewed literature concerning the limitations of various series and interviewed Puerto Rican officials about the methods they use to collect and develop their data. These limitations are noted in the report. Wherever possible, we used alternative assumptions and data sources to determine if any conclusions drawn from the data are sensitive to the particular data series used. For example, we applied both U.S. and Puerto Rican price indices to Puerto Rican gross domestic product (GDP) data to see if applying different measures of price changes would lead to different conclusions about whether the Puerto Rican economy has been growing faster or slower than the U.S. economy. Puerto Rico s Planning Board has recently contracted with several consultants for a review of their entire set of methodologies for preparing the Commonwealth s income and product accounts, including the deflators. The Board has also been negotiating a memorandum of agreement with the U.S. Bureau of Economic Analysis for the latter to provide advice on this effort. For some indicators of interest, annual data are not available for Puerto Rico. In some of these cases, we used decennial census data. The decennial census covers both the United States and Puerto Rico and produces comparable statistics on educational attainment and poverty levels. 
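Two of the adjustments described above, regrouping monthly or quarterly U.S. data to the Puerto Rican July 1 through June 30 fiscal year and applying alternative price deflators to the same nominal series, can be illustrated with a short sketch. The series, deflator values, and years below are hypothetical placeholders, not the actual Bureau of Economic Analysis or Planning Board data.

```python
import pandas as pd

# Hypothetical quarterly U.S. series on calendar quarters, regrouped to the
# Puerto Rican fiscal year (July 1 to June 30) before comparison. In practice,
# incomplete fiscal years at either end would be dropped.
us_quarterly = pd.Series(
    [100.0, 101.0, 102.0, 103.0, 104.0, 105.0, 106.0, 107.0],
    index=pd.period_range("2003Q1", periods=8, freq="Q"),
)

def pr_fiscal_year(quarter: pd.Period) -> int:
    # Calendar Q3 and Q4 of year Y and Q1 and Q2 of year Y+1 form fiscal year Y+1.
    return quarter.year + 1 if quarter.quarter >= 3 else quarter.year

us_by_pr_fiscal_year = us_quarterly.groupby(us_quarterly.index.map(pr_fiscal_year)).mean()

# Apply two alternative deflators to the same nominal series and compare the
# implied average annual real growth rates.
years = [1980, 2005]
nominal_gnp_per_capita = pd.Series([3_500.0, 14_000.0], index=years)
deflators = {
    "Puerto Rican deflator": pd.Series([0.30, 1.00], index=years),
    "U.S. deflator": pd.Series([0.40, 1.00], index=years),
}

for name, deflator in deflators.items():
    real = nominal_gnp_per_capita / deflator
    span_in_years = years[-1] - years[0]
    growth = (real.iloc[-1] / real.iloc[0]) ** (1 / span_in_years) - 1
    print(f"{name}: {growth:.1%} average annual real growth")
```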
We also used data from the Economic Census of Puerto Rico and the Economic Census of the United States, also produced by Census. These data included detailed information on employment, investment, and value added broken down by sector of the economy. These data, produced by Census every 5th year, are of particular relevance to the possible effects of phaseout of the possessions tax credit. To provide information on what is known regarding the flow of capital into and out of Puerto Rico, we interviewed Puerto Rican government officials and private sector experts to help us to ascertain what data were available. We determined that the available data would not allow us to present a comprehensive picture of the trends in capital flows. The most significant gap in that picture is data relating to direct investment by corporations incorporated outside of Puerto Rico, which is financed from within their own affiliated groups, rather than through financial institutions. We can, however, report on changes over the years between 1995 and 2004 in the amount of funds that nonresidents hold in the Puerto Rican banking system and the amount of funds that the banking system invests within and outside of the Commonwealth. In order to identify where the assets held in the Puerto Rican banking system are invested and where the owners of the banks liabilities reside, we analyzed institution-specific data that the Office of the Commissioner of Financial Institutions (OCFI) collects for oversight purposes. Banks and certain other financial institutions in Puerto Rico are required to report detailed information regarding their assets, liabilities, and capital to the OCFI through a computerized CALL report data system. Appendix I describes our analysis of the financial data. We also used data provided by Puerto Rico s Government Development Bank to show trends in Puerto Rican government borrowing in the U.S. and local capital markets. The consensus of the government and private sector financial experts whom we interviewed was that all Puerto Rican government bonds that qualify for tax exemption under Section 103 of the IRC, such as bonds that are issued for the purpose of capital improvement projects, are sold in the U.S. market. All other Puerto Rican government bonds that are taxable in the United States but tax exempt in Puerto Rico are sold in the local market. The Government Development Bank was able to provide us with a complete and detailed accounting of each of their debt issues and to identify which ones did or did not qualify for the U.S. tax exemption. <1.2.3. Changes in Possessions Corporation Activity> In order to examine changes in the activities of possessions corporations operating in Puerto Rico since the early 1990s, we constructed several databases from an assortment of tax return data we obtained from IRS and Puerto Rico s Department of Treasury. Our principal source of data was IRS s Statistics of Income unit (SOI), which compiles comprehensive data on possessions corporations every other year. We obtained the complete set of these biennial databases from 1993 through 2003 and used information from SOI to identify those possessions corporations that operated in Puerto Rico. For the first stage of our analysis, we linked the biennial records for each individual corporation by its employer identification number (EIN) so that we could identify any data gaps for specific corporations in particular years and so we could complete a second, more complicated data analysis (described below). 
We filled in missing data for individual corporations to the extent possible from other IRS files and through imputations based on surrounding-year data. The extent of the imputations were minimal relative to the population totals we report. We used the final database on 656 possessions corporations that operated in at least 1 year between 1993 and 2003 to report on changes over time in the aggregate income, tax credit, and total assets of this population of corporations and to show how these particular variables were distributed across different industries. We also used data from the past four Economic Censuses of Puerto Rico (1987, 1992, 1997, and 2002) compiled by Census to show how the importance of possessions corporations in Puerto Rico s manufacturing sector has changed over time. For the second stage of our analysis, we focused on a subpopulation of the largest groups of affiliated possessions corporations operating in Puerto Rico. For each of these groups we compiled data on other affiliated corporations (i.e., those sharing the same ultimate parent corporations) that also operated in Puerto Rico, but were not possessions corporations. The objective of this analysis was to assess the extent to which the large corporate groups that accounted for most of the activity of possessions corporations remained active in Puerto Rico, even as the operations of their possessions corporations were being phased out. We started by identifying the 77 largest groups of possessions corporations in terms of the amount of credit they earned, their total income, and their total assets. These large groups gave us a subpopulation that accounted for over 90 percent of the tax credit and income earned and over 90 percent of the assets owned by possessions corporations between 1993 and 2003, and at the same time reduced the number of corporations we had to work with from 656 to 172. This reduction in the number of corporations we had to work with was important because data limitations caused some of the steps in our database development to be very labor intensive. We used two key data sources to identify and obtain data for the members of the large groups that operated in Puerto Rico but which were not possessions corporations. The first source was the database in which IRS maintained the records of all forms 5471 that had been filed between 1996 and 2002. (The owners of controlled foreign corporations must file a separate form 5471 every year for each CFC that they own.) The second source was a database that the Puerto Rican Department of Treasury (with the assistance of the Government Development Bank) had recently transcribed from all Puerto Rican tax returns for tax years 1998 through 2001 filed by all corporations or partnerships that received tax incentives from the Government of Puerto Rico. Officials from the Department of Treasury and from the Puerto Rico Industrial Development Company (PRIDCO) told us that almost all U.S.- or foreign-owned manufacturing corporations operating in Puerto Rico receive tax incentives, as do corporations in designated service industries that export products or services from Puerto Rico. A total of 1,758 different taxpayers appeared in the database for at least 1 of the tax years. We used a series of both automated and manual search and matching approaches to link the CFCs and other types of companies from these two databases to our 77 large corporate groups. 
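The record linkage and gap-filling steps described above can be sketched in simplified form. The example below builds one row per corporation and biennial year, keyed by EIN, and interpolates isolated interior gaps from the surrounding years; the EINs, years, and amounts are hypothetical placeholders rather than actual Statistics of Income records.

```python
import pandas as pd

# Hypothetical biennial records keyed by employer identification number (EIN).
records = pd.DataFrame(
    {
        "ein": ["12-001", "12-001", "12-001", "12-002", "12-002"],
        "tax_year": [1993, 1995, 1999, 1995, 1999],
        "credit_earned": [4.0, 5.5, 7.0, 1.0, None],   # millions of dollars
        "total_assets": [40.0, 55.0, 70.0, 12.0, 16.0],
    }
)

# One row per EIN and biennial year makes data gaps for specific corporations explicit.
full_index = pd.MultiIndex.from_product(
    [records["ein"].unique(), range(1993, 2004, 2)], names=["ein", "tax_year"]
)
panel = records.set_index(["ein", "tax_year"]).reindex(full_index).sort_index()

# Impute isolated interior gaps from the surrounding biennial observations
# (linear interpolation within each EIN); leading and trailing gaps stay missing.
imputed = panel.groupby(level="ein").transform(
    lambda column: column.interpolate(limit_area="inside")
)
print(imputed)
```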
We also used information from both databases to determine which of the CFCs had operations in Puerto Rico and, in the case of CFCs with operations in multiple countries, to make a range of estimates for the amount of income they earned in Puerto Rico. The data on income, assets, taxes paid, and place of incorporation that we extracted from the two databases for these linked corporations allow us to provide a more complete picture of the trends in activities of the corporate groups that have taken advantage of the possessions tax credit over the years. Through interviews with officials from the agencies providing the data and our own computer checks for internal consistency in the data, we determined that the quality of the data was sufficient for the purposes of our report when viewed with the cautions we raise at various points in the text. One problem that afflicted all of the databases to some degree was missing values arising from the fact that IRS and the Puerto Rican Department of Treasury could not always obtain every tax return that should have been in their databases in a particular year and the fact that taxpayers did not always accurately fill in every line of the return that they should have. Our access to multiple databases that overlapped to some extent enabled us to address this problem by filling in gaps with data from an alternative file, making reasonable imputations, or at a minimum assessing whether missing values would have made a significant difference to our results. <1.2.4. Distribution of Business Activity> In order to show how economic activity in Puerto Rico is distributed across different forms of businesses, we negotiated a special arrangement with IRS and Census that enabled us to disaggregate the data from Census s recently completed 2002 Economic Census of Puerto Rico by categories of business entities that are more specifically relevant to tax policymakers than the categories Census uses for its own publications. The 2002 Economic Census collected data on employment, payroll, and other economic measures from all nonfarm, private sector employers in Puerto Rico, making it a comprehensive enumeration of Puerto Rican businesses. We used taxpayer data from IRS and Puerto Rico to determine, in as many cases as possible, the type of federal or Puerto Rican income tax return each of these employers filed and, in the case of corporations, where they were incorporated. We then used this information to place each employer into a business entity group, such as possessions corporation, CFC incorporated in Puerto Rico, CFC incorporated elsewhere, sole proprietor, and so forth. Census then provided us with tabulations of their data for each of these groups, disaggregated by industry to the extent that their disclosure rules would permit. We developed a coding system and a data- exchange procedure that enabled us to link tax and Census data for specific employers in such a way that Census did not have to view restricted IRS data and we did not have to view confidential Census data for specific survey respondents. (See app. III for details.) The data that we used to determine the tax filing status and place of incorporation for the employers in the Census database came from the IRS and Puerto Rico databases described above, plus a couple of additional sources. Another important new source of data was IRS s National Accounts Profile (NAP) database, which contains selected information for all individuals and businesses that have an EIN. 
Each employer in Puerto Rico has a federal EIN because it must collect Federal Insurance Contributions Act (FICA) taxes on behalf of its employees. Consequently, we were able to access NAP data for a very high percentage of the employers included in the Census. For those employers we were able to determine what, if any, federal income tax form they were required to file, whether they were included in their parent corporation s consolidated return, and whether or not IRS had identified them as being sole proprietors. The other data sources that we used for this particular analysis included sets of income tax returns for some of the businesses operating in Puerto Rico that IRS had provided to Census, and a list of CFCs operating in Puerto Rico that PRIDCO had compiled. None of the non-Census data sources that we used was comprehensive and some of the sources more closely met our needs than others. Appendix III describes how we used these data to place each employer into a business entity group. For those cases where we could not reliably place an employer into a group based on tax data or data from PRIDCO we asked Census to place them into certain groups based on their survey responses. <1.2.5. Fiscal Comparison> To compare the overall tax burden borne by individuals and businesses in Puerto Rico with the burden borne by individuals and businesses in the states and in the other insular areas, we obtained and analyzed detailed data on state and local government revenues from the U.S. Census of Governments, data on Commonwealth government revenue from the Puerto Rican Department of Treasury, data on municipal tax revenue in Puerto Rico from Oficina del Comisionado de Asuntos Municipales, Centro de Estadisticas Municipales, and revenue data for the other insular areas reported in their 2002 Single Audit reports. We also obtained data on federal taxes collected in Puerto Rico and the states from IRS s 2002 Data book. (No such data were available for the insular areas.) We compared taxes paid on a per capita basis and as a percent of personal income. We make our comparison for year 2002 because that is the year of the most recent Census of Governments. We also compared federal expenditures for the states, Puerto Rico, and the insular areas using data we obtained from the Consolidated Federal Funds Report for Fiscal Year 2002 and the Federal Aid to States for Fiscal Year 2002. In addition, we report specifically on transfers of excise tax and customs duty revenues that the federal government makes to Puerto Rico using data obtained from U.S. Customs and the Alcohol and Tobacco Tax and Trade Bureau. To assess the reliability of the data, for the Census and Puerto Rican Treasury data we interviewed knowledgeable officials and reviewed supporting documentation to understand the internal procedures in place to ensure data quality. For the insular areas we compared data reported in the Single Audit reports to other published data. We determined that the data we obtained from the Puerto Rican Department of Treasury is consistent with what was reported in the Commonwealth s Comprehensive Annual Financial Report. Although we found the data reliable for the purpose of our engagement, we note certain limitations in the data. In particular, all the state and local data compiled by Census are as-reported by cognizant government officials responsible for financial matters in each of the political entities and may not have been subjected to any internal or external accuracy checks. 
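As a simple illustration of the two burden measures used in that comparison, taxes per capita and taxes as a percent of personal income, the sketch below computes both for two hypothetical jurisdictions; none of the amounts are figures from the report.

```python
# jurisdiction: (total taxes collected, population, personal income); placeholder values
jurisdictions = {
    "Jurisdiction A": (10_000_000_000, 3_900_000, 45_000_000_000),
    "Jurisdiction B": (2_500_000_000_000, 290_000_000, 9_000_000_000_000),
}

for name, (taxes, population, personal_income) in jurisdictions.items():
    taxes_per_capita = taxes / population
    taxes_share_of_income = taxes / personal_income * 100
    print(f"{name}: ${taxes_per_capita:,.0f} per capita; "
          f"{taxes_share_of_income:.1f} percent of personal income")
```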
Checks performed by Census on its data are for completeness and consistency with internal and external sources. The independent auditor's statement in the Single Audit reports for the insular areas indicated that the auditors generally could not verify the accuracy of reported information. In addition, federal, state, and insular area fiscal years differ, so the data do not cover exactly the same period of time. <1.2.6. Federal Social Programs> Interviews with federal agencies and prior GAO work provided the basis for our description of the application of the principal U.S. federal social programs to Puerto Rico residents, relative to the states, and the other insular areas. To select the social programs included in this report we consulted with GAO experts in the areas of health care policy; education, workforce, and income security policy; and financial markets and community investment policy. With the help of these experts, we arrived at a list of the principal federal social programs, which we then pared down, based on program availability in Puerto Rico and expenditure level in Puerto Rico. We relied on prior GAO work and interviews with federal agency officials to determine how each program is applied in Puerto Rico, relative to the other areas. We used program-level data, supplied by federal agencies, to report program expenditures for fiscal year 2002. We selected fiscal year 2002 because in chapter 6 of this report, we provide a more complete analysis of the revenue and expenditures of Puerto Rico, the states, and the other insular areas using the year of the most recent Census of Governments, 2002. Our methodologies for each objective were discussed with experts, including those from the Office of the Comptroller General of Puerto Rico, Puerto Rico's Government Development Bank, Puerto Rico's Planning Board, Puerto Rico's Office of the Commissioner of Insurance Institutions, and Puerto Rico's Office of the Commissioner of Financial Institutions. Federal-level experts included those from Census and IRS. Our work was performed from February 2004 to April 2006 in accordance with generally accepted government auditing standards. <2. U.S. Federal Tax Treatment of Puerto Rico and Other Insular Areas Varies by Area and Type of Tax> Individuals who are residents of Puerto Rico or other U.S. insular areas and who earn income only from sources outside of the states generally pay no federal income tax; however, their wages are all subject to Social Security and Medicare taxes, and wages paid to residents of Puerto Rico and the U.S. Virgin Islands also are subject to federal unemployment tax. Corporations organized in Puerto Rico, like those organized in the other U.S. insular areas, are generally treated for U.S. tax purposes as if they were organized under the laws of a foreign country. Until this year, special rules enabled corporations organized in the United States that met certain conditions to reduce the federal tax payable on income earned in and repatriated from Puerto Rico and other insular areas. <2.1. U.S. Tax Treatment of Insular Area Residents with U.S.-source Income and U.S. Residents with Insular Area-source Income Varies by Area> Individuals residing in an insular area and who earn income only from sources there file one income tax return there and are required to pay income tax only to that area. The U.S. income tax treatment of U.S.-source income of residents of an insular area (which does not include income earned in the insular areas, other than that earned by U.S.
government employees) depends on the area: Residents of American Samoa and Puerto Rico must pay U.S. income tax on all their income from sources outside American Samoa or Puerto Rico, respectively, if such income exceeds the federal filing threshold. The U.S. government retains the tax collected from residents of Puerto Rico, but is required to transfer the tax collected from residents of American Samoa to its government. Residents of Guam and CNMI owe income tax to the territory and Commonwealth, respectively, on their U.S.-source income; the governments of these Commonwealths and territories are required to transfer a portion of this tax revenue to the U.S. government if the resident s income exceeds certain income thresholds. Generally, the U.S. government does not tax, or receive any tax revenue from U.S. Virgin Island residents who have U.S.-source income so long as such residents report all of their income, identify the source of their income, and pay their income taxes to the U.S. Virgin Islands. The U.S. income tax treatment of U.S. residents with Commonwealth- or insular area source income also depends on the insular area: U.S. residents with income from Puerto Rico or American Samoa are subject to U.S. federal tax on that income. They also pay tax on that income to Puerto Rico or American Samoa, respectively, and receive a foreign tax credit against their U.S. tax liability for this amount. U.S. residents with income from Guam or CNMI owe U.S. income tax on that income; the federal government is required to transfer a portion of the tax revenue received from Guam and CNMI residents back to the respective territory and Commonwealth. U.S. residents who earn income in the U.S. Virgin Islands must file identical tax returns with both the government there and the U.S. government; each government s share of the revenues is based on where the income was earned. <2.2. While FICA Taxes Are Imposed on Wages Paid to Employees in All Insular Areas, Unemployment Insurance Tax Applies Only to Wages Paid in Puerto Rico and U.S. Virgin Islands> The Federal Insurance Contributions Act imposes wage-based taxes on employers and employees in the United States and the Commonwealths and territories to support Social Security and Medicare. The employment upon which taxes are collected includes services performed in the United States and the insular areas. Taxes collected under the act are not transferred to the treasuries of the insular areas. The Federal Unemployment Tax Act imposes a tax on wages paid to employees, based on wages paid. Puerto Rico and the U.S. Virgin Islands are the only insular areas covered by the Act. The proceeds of the tax are used to support the federal-state unemployment compensation program and are not transferred to the treasuries of either area. <2.3. Taxation of Corporations Incorporated in the United States> The federal government taxes a U.S. corporation on its worldwide income (reduced by any applicable foreign income tax credit), regardless of where the income is earned. When the tax is due depends on several factors, including whether the income is U.S.- or foreign-source and, if it is foreign income, on the structure of the corporation s business operations. However, since 1976, and through taxable years beginning prior to December 31, 2006, U.S. corporations with a domestic subsidiary conducting a trade or business in insular areas could qualify to receive significant tax benefits through the possessions tax credit. 
Prior to taxable years beginning in 1994, the credit effectively exempted from U.S. taxation all possession-source income of a qualified possessions corporation. Dividends repatriated from a wholly-owned possessions corporation to the mainland parent qualified for a 100 percent deduction, thus allowing tax-free repatriation of possession income. The credit also exempted qualified possession-source investment income (QPSII), which is certain income the possessions corporation earned from financial investments in U.S. possessions or certain foreign countries. The credit for qualified research expense was also allowed for such research conducted by a possessions corporation. Starting in taxable years beginning in 1994, the amounts of possessions tax credits that a possessions corporation could claim were capped. Under the cap, a possessions corporation had to choose between two alternatives: a percentage limitation option or an economic activity limitation option. In 1996, the possessions tax credit was fully repealed for taxable years beginning after 2005. Existing possessions corporations could continue to claim the possessions tax credit for tax years beginning prior to 2006. These existing credit claimants, however, were subject to an income cap based on the average business income that the corporation earned in a possession during a specified base period. A possessions corporation electing the percentage limitation was subject to the income cap beginning in 1998, and a possessions corporation electing the economic activity limitation was subject to the income cap beginning in 2002. Only QPSII earned before July 1, 1996, qualified for the credit for tax years beginning after December 31, 1995. <2.4. Taxation of Corporations Incorporated outside the United States> Corporations organized outside the United States, including corporations organized in Puerto Rico and the other insular areas, are generally treated as foreign corporations for U.S. tax purposes. These corporations are taxed on their U.S.-source earnings; the tax paid generally depends on whether the income is effectively connected with the conduct of a trade or business within the United States, but income from insular areas is not subject to U.S. tax. Foreign corporations pay U.S. tax at two rates: a flat 30 percent rate is withheld on certain forms of nonbusiness gross income from U.S. sources, and a tax is imposed at progressive rates on net income from a U.S. trade or business. Corporations in Puerto Rico must pay the 30 percent withholding tax; corporations in the other insular areas do not pay the withholding tax if they meet certain tests that establish close connections with the insular area in which the corporation was created. U.S.-source dividends paid to corporations organized in Puerto Rico are subject to a 10 percent withholding tax provided that the same tests mentioned above are satisfied and the withholding tax on dividends paid to the U.S. corporations is not greater than 10 percent. Corporations organized under the laws of an insular area may be treated as a controlled foreign corporation (CFC) for U.S. income tax purposes. To qualify as a CFC, the corporation must be more than 50 percent U.S.-owned, taking into account only U.S. shareholders that meet a 10 percent stock ownership test. Gross income from the active conduct of business in Puerto Rico or elsewhere outside of the United States is not taxed until it is repatriated to the U.S. shareholders in the form of dividends.
Subject to certain limitations, these shareholders are entitled to a credit for any foreign income taxes paid by the CFC with respect to the earnings distributed. Certain types of passive income, such as dividends and interest, earned by CFCs are currently includable in the income of the U.S. shareholders, under subpart F of the U.S. Tax Code, even though those amounts are not actually distributed to them. These shareholders are, subject to certain limitations, also entitled to a credit for foreign income taxes paid with respect to the amounts includible in income under subpart F. Certain kinds of income received by a CFC organized under the laws of an insular area are not considered subpart F income: income received from the sale in the insular area of personal property manufactured by the CFC in that area, dividend or interest income received from a related corporation also organized under the laws of that insular area, and rents or royalties from a related corporation received by a CFC organized under the laws of an insular area for the use of property in the insular area where the CFC is organized. The allocation of gross income, deductions, and credits between related taxpayers, such as intercompany sales from a CFC to a U.S. domestic parent, is subject to transfer pricing rules that are designed to prevent manipulation of the overall tax liability. <2.5. Deduction for Income from Domestic Production Activities> In 2004, in response to a long-running dispute with the European Union, Congress repealed the extraterritorial income (ETI) exclusion and enacted a deduction relating to income attributable to domestic production activities. For purposes of the ETI exclusion, the United States included Puerto Rico. Puerto Rico is not included, however, in the definition of U.S. for purposes of the deduction for domestic production. <2.6. Goods Imported to Insular Areas Are Generally Exempt from U.S. Excise Taxes, but a Special Tax Is Imposed on Goods Made in Puerto Rico and U.S. Virgin Islands> Merchandise imported into an insular area from the United States is exempt from U.S. excise taxes. The only U.S. excise taxes that apply to products imported into any of the insular areas from another country are those where specific language extends the tax beyond the United States, which is generally defined, for tax purposes, as only the states. This language exists for a tax on petroleum (an environmental tax), a tax on certain vaccines, a tax on certain chemicals, and a tax on certain imported substances. If any revenue from these excise taxes is collected in American Samoa, Puerto Rico, or the U.S. Virgin Islands, the U.S. government retains the revenue. The governments of Guam or CNMI receive any revenue from these taxes collected in their respective territory and Commonwealth. There is a special equalization U.S. excise tax on articles manufactured in Puerto Rico or the U.S. Virgin Islands and exported to the United States equal to the tax that would have been imposed had the articles been manufactured in the United States. Subject to the limitations described below for distilled spirits, the U.S. Treasury returns all the revenue from the tax on articles manufactured in Puerto Rico to the Treasury there except the amounts needed to pay refunds and drawbacks to manufacturers and the amount needed to cover its enforcement expenses. The return to the U.S. Virgin Islands also excludes amounts needed to pay refunds and drawbacks, plus one percent of the total tax collected. All U.S. 
excise taxes collected on articles manufactured from Guam and CNMI and exported to the United States must be transferred to their respective territory and Commonwealth governments. A special limitation applies for the U.S. excise tax on distilled spirits manufactured in Puerto Rico and the U.S. Virgin Islands and exported to the United States. The tax rate ordinarily applied to rum is $13.50 per proof gallon exported, of which $10.50 per proof gallon is returned to the appropriate insular area. Puerto Rico and the U.S. Virgin Islands also share revenue from the U.S. excise tax collected on all rum imported into the United States from a foreign country. Their respective shares are proportionate to the relative sizes of their rum exports to the United States during the prior fiscal year. Puerto Rico s share, however, cannot exceed 87.626889 percent or be less than 51 percent while the U.S. Virgin Islands share cannot exceed 49 percent nor drop below 12.373111 percent. <2.7. U.S. Government Is Responsible for Collecting Customs Duties in Puerto Rico and Helps Collect Duties in U.S. Virgin Islands> The U.S. government collects duties on goods imported into U.S. customs territory, which encompasses the states and Puerto Rico, unless they are exempt. U.S. customs duties collected in Puerto Rico are deposited in a special U.S. Treasury account. After deductions for refunds and the expenses of administering customs activities in Puerto Rico, the remaining amounts are transferred to the treasury there. Although the U.S. Virgin Islands are not in U.S. customs territory, the U.S. government helps collect local duties there. These collections are transferred to the government of the U.S. Virgin Islands after items such as operational expenses are deducted. The U.S. government has authority to administer and enforce collection of custom duties in American Samoa, upon request of the Governor. Guam and CNMI administer and enforce their own customs policies and procedures. Items imported into U.S. customs territory from American Samoa, Guam, CNMI, and the U.S. Virgin Islands are subject to U.S. customs duties unless the items are exempt. <3. Trends in Production, Income, and Other Economic Indicators for Puerto Rico> The economic well-being of Puerto Rican residents, measured in terms of either per capita or median income, remains well below that of residents of the states. The relative progress that the Puerto Rican economy has made since 1980 is difficult to measure with precision for a number of reasons, including tax-induced distortions in how U.S. corporations have reported income earned in the Commonwealth. The low rate of labor participation is a crucial issue in Puerto Rico s economic performance, and the rate of investment appears insufficient to significantly reduce the disparity between mainland and Puerto Rican incomes. <3.1. Measuring Economic Progress in Puerto Rico Is Challenging but the Income of Commonwealth Residents Remains Well Below That of U.S. Residents> As shown in figure 9, Puerto Rico s per capita GDP of about $21,000 in 2005 remained well below U.S. per capita GDP of about $41,000. GDP is a broad measure of overall income or economic activity occurring within a nation s borders in a given year. According to the Puerto Rican and U.S. national income and product accounts, this measure has grown more rapidly in Puerto Rico than in the United States since 1980, when viewed on a per capita basis after adjustments for inflation. 
However, for a number of reasons, the growth rate of real (meaning inflation-adjusted) GDP likely does not represent a very accurate measure of changes in the economic well-being of Puerto Rican residents. First, as a result of U.S. tax provisions and a development strategy pursued by successive Puerto Rican governments to use local tax incentives to attract investment by U.S. and foreign firms, a significant amount of the investment income included in GDP is paid out to U.S. and foreign investors. In figure 9, the income earned by nonresidents is approximately represented by the gap between Puerto Rican GDP and Puerto Rican GNP. GNP is a measure of the total amount of income earned by residents in a given year from sources within and from outside of the country. In contrast to Puerto Rico, GDP has been consistently about the same as GNP in the United States, which indicates that the amount of income earned abroad by U.S. residents is close to the amount of income earned by foreign owners of assets located in the United States. As of 2005, Puerto Rico s per capita GNP of about $14,000 remained well below the U.S. level of about $41,000. Second, using the possessions tax credit, U.S.-based groups of affiliated corporations (i.e., those owned by a common U.S. parent corporation) with certain types of operations in Puerto Rico have had incentives to attribute as much net income to those operations as is legally permissible, rather than to related operations in the United States. Moreover, the nature of these incentives has changed during the period covered by our review. Consequently, the income reported by these corporations to have been earned in Puerto Rico in a given year may overstate the actual economic importance of their Puerto Rican production, and changes in income over the years may reflect not only changes in the economic activity of these corporations, but also changes in how corporations have computed their Puerto Rican source income. Some of the data reported later in this chapter suggest that this so-called income shifting has taken place. This particular issue affects data on GDP and income and possibly value added for corporations owned by U.S. parent corporations; it should not affect GNP or income and value added for Puerto Rican-owned corporations. Third, as is the case for any country, the scale of the informal, or underground, economy in Puerto Rico is difficult to measure. If the informal economy in Puerto Rico is large relative to the informal economy in the United States, as some analysts believe, a relatively large amount of economic activity in Puerto Rico may not be reflected in national income and labor market statistics. As discussed below, the presence of a large informal economy may be one explanation of low reported labor force participation rates in Puerto Rico. Analysts who have recently looked at this issue disagree on the size of the informal economy and on whether it has been growing as a share of the total economy. The size and any growth in the informal economy in Puerto Rico, relative to that in the United States, would affect comparisons between levels and growth in per capita income earned in the two jurisdictions. 
Lastly, as acknowledged by the Puerto Rico Planning Board, there are problems with some Puerto Rican price indices, which cause an unknown degree of inaccuracy in the inflation adjustments to the long-term trend data on the Puerto Rican economy and, therefore, some imprecision in the real growth rates of key economic indicators that are stated in terms of dollar values. Most concerns center on the Puerto Rican consumer price index (CPI, a measure of prices on consumer goods) and the fact that the market basket of goods used to compute the index has not been updated since the 1970s. This means that the index will tend to overstate price changes. In the analysis in this chapter, we have used the Puerto Rican gross product deflator, a broad measure of how prices have changed on average for goods and services in the economy, for our inflation adjustments. Although analysts within and outside of Puerto Rico's Planning Board, which produces the deflator, consider it to be less problematic than the CPI, they still have concerns relating to the fact that the CPI is one of the components used in estimating the deflator and the fact that methodologies for other components are also outdated. Given the concerns with the Puerto Rican deflator, there is a question as to whether that measure or the U.S. gross product deflator more accurately accounts for the changes in prices in Puerto Rico. The U.S. deflator shows slower price increases over this period than does the Puerto Rican deflator. For this reason, we also report some results based on the use of the U.S. deflator in cases where they differ notably from those based on the Puerto Rican deflator. When comparing the trends in real per capita GNP in Puerto Rico and the United States from 1980 to 2005, the choice of deflators does make a difference. Over that period, inflation-adjusted per capita income increased at an average annual rate of 1.9 percent in the United States, while it rose at 1.5 percent in Puerto Rico if the Puerto Rican deflator is used. However, if the U.S. deflator is applied to Puerto Rican GNP, real per capita GNP rose by 2.5 percent annually, faster than the growth in the United States. Real per capita GDP rose more rapidly in Puerto Rico than in the United States, regardless of which deflators are used. U.S. GDP rose at an annual average rate of 1.9 percent from 1980 to 2005, while the average annual growth rate for Puerto Rico was 2.1 percent using the Puerto Rican deflator and 3.2 percent using the U.S. deflator. Figure 10 shows the composition of Puerto Rican GDP over time and the trend in net income payments abroad. GDP consists of expenditures on personal consumption, investment, government consumption of goods and services, and net exports (the value of exports minus the value of imports). The figure shows that net exports have risen substantially from 1980 to 2005 as a share of GDP, and consumption, which is largely determined by Puerto Rican income, has fallen as a share of GDP. Figure 10 also shows net income payments abroad, expressed as a share of GDP. This series represents the amount of income paid to foreign owners of capital located in Puerto Rico, minus income earned by Puerto Ricans from investments outside of Puerto Rico. GNP differs from GDP by this amount. For Puerto Rico, the net outflow of income has increased as a share of GDP over the period, increasing the gap between GDP and GNP. <3.2.
Puerto Rico Relies Heavily on Nonresidents to Finance Local Investment> Figure 11 shows the relationship between savings and investment in Puerto Rico. The components of total national saving in Puerto Rico are personal saving, government saving, business saving through retained earnings, and depreciation. The figure shows that investment in Puerto Rico has been greater than national saving, highlighting again that investment in Puerto Rico has been significantly financed by foreign sources. Since 2001, government saving has fallen and undistributed corporate profits have risen significantly. The personal saving rate as measured in the Puerto Rican national accounts has been negative since 1980. If transfers from foreigners to residents of Puerto Rico are underreported, however, the official data for income and saving would also be understated. We cannot provide a comprehensive picture of the trends in various components of U.S. and foreign investment in Puerto Rico because data are not available for one of the most important components direct foreign investment, for which corporations obtain financing from within their own affiliated groups, rather than through financial institutions. We can, however, report trends for foreign funds flowing through key types of financial institutions and the Puerto Rican government. In the next two chapters, we will also provide some information on investments by important subpopulations of corporations. Over the past decade, the amount of nonresident funds flowing into depository institutions in Puerto Rico has increased steadily. Figure 12 shows Puerto Rico s depository institutions liabilities between 1995 and 2004, and figures 52 and 53 in appendix II show the shift in deposits and debt, respectively. The composition of deposits has changed significantly with exempt investments by possessions corporations (which in the past had been encouraged by a special component of the possessions tax credit) being replaced by deposits obtained through brokers that sell certificates of deposits for the banks in the U.S. capital market. (Fig. 54 in app. II shows those offsetting trends.) Figure 13 below shows that the share of assets held by depository institutions in the United States and foreign countries has also increased over the past decade. A large part of this growth can be attributed to the increase in U.S. and foreign securities investments. Loans made by Puerto Rico s depository institutions, which we assume to be primarily local, have also increased steadily. Figures 55 and 56 in appendix II show these two trends. Puerto Rican government debt has increased steadily over the past decade. Between 1995 and 2005, Puerto Rico s real total public debt outstanding increased from $25.6 billion to $36.4 billion (see fig. 14 below). Most of Puerto Rican public debt is sold in the U.S. market, but the amount sold within Puerto Rico has increased steadily since 1999. In 2005 an estimated $31.6 billion was sold in the United States, and $4.8 billion was sold locally in Puerto Rico. In appendix II we include both the breakdown of debt payable by the government and debt issued by the government but repaid by others (such as the federal government or the private sector) because there are differences of opinion about what should be termed government debt (see figs. 58 and 59). 
An example of this type of debt is the series of bond issues linked to The Children s Trust Fund between 2001 and 2005, all of which are backed by assets from the United States Attorney General s 1999 Master Tobacco Settlement Agreement. Between 1995 and 2005, total debt issued by the Puerto Rican government, but payable by others, increased from an estimated $6.6 billion to an estimated $7.1 billion in 2005. <3.2.1. Investment Appears Insufficient to Reduce the U.S.-Puerto Rican Income Gap> Figure 15 shows the level and composition of gross investment spending in Puerto Rico from 1980 to 2005. During the recession of the early 1980s, investment fell below 10 percent of GDP by 1983. Thereafter, investment recovered and remained around 15 percent of GDP for a number of years until a period of rapid growth in largely private-sector investment in the late 1990s pushed the share close to 20 percent of GDP by 2000. Investment rates have fallen back to about 15 percent of GDP most recently. If Puerto Rico s investment rate remains at recent levels, the gap between U.S. and Puerto Rican per capita incomes is unlikely to diminish. The U.S. investment rate, including both private investment and a measure of government investment, has been about 19 percent of GDP in recent years. Continuation of these relative investment rates implies that the per capita income gap is unlikely to narrow significantly, unless capital formation is augmented by increases in employment, education, training, or other types of productivity improvements. Figure 16 shows a breakdown of Census data on capital spending in the manufacturing sector for 1987, 1992, 1997, and 2002. The data show that investment in manufacturing dipped significantly between 1992 and 1997, before rebounding by 2002. This slump in investment does not appear in the Planning Board investment data for private sector investment shown in figure 15. The Planning Board data cover more sectors than do the Census data; however, investment in manufacturing should represent a substantial portion of the investment in private structures and machinery. <3.3. Data on Value Added and Income Show That the Pharmaceuticals Industry Has Significantly Increased Its Dominance of Puerto Rican Manufacturing but Evidence Suggests That These Measures May Be Overstated> Although both Census data on value added and Puerto Rican government data on domestic income show that the pharmaceutical industry has significantly increased its already dominant position in the manufacturing sector since the early 1990s, evidence suggests that income shifting within U.S.-owned corporate groups likely has resulted in overstatements of the importance of the manufacturing sector, as a whole, and the pharmaceutical industry, in particular, when measured in terms of value added or income. Unfortunately, it is difficult to know the extent of any overstatement in these economic variables. Evidence is mixed as to whether the extent of the overstatement increased as the pharmaceutical operations of possessions corporations were shifted over to other types of businesses. Other measures of economic activity, such as employment and capital spending, should not be affected by income shifting and, therefore, can be used to either support or challenge conclusions based on measures of value added and income. Census data on value added and Puerto Rican Planning Board data on domestic income both show steady and significant growth in the pharmaceutical industry. 
Figure 17 shows that value added in the pharmaceutical industry more than doubled in real terms from 1992 to 2002, while value added in all other manufacturing industries, as a whole, declined. Figure 18 shows that the chemical industry, which consists mainly of pharmaceuticals, saw its share of net manufacturing domestic income increase from around 50 percent in 1992 to over 60 percent in 2005. The strong reported performance of the pharmaceutical sector is the reason that the manufacturing sector has been able to slightly increase its share of domestic income, while the share of income of most other manufacturing industries has declined. Manufacturing s share of income, shown in figure 19, greatly exceeds its share of employment, as shown in figures 23 and 24. Some of the difference may be attributable to a higher level of labor productivity in manufacturing than in other sectors. Recent research suggests, however, that reported levels of value added in Puerto Rican manufacturing are implausible. For example, the official data imply that labor s share of value added in manufacturing fell from an average of 50 percent from 1950 to 1970 to only 14 percent in 2004. Similar declines are not evident in data for other sectors or in U.S. manufacturing statistics. Over the years, several analysts have concluded that the incentives provided by the possessions tax credit have led U.S. corporate groups to shift income to Puerto Rican affiliates. Until the mid-1990s, the credit essentially allowed profits earned from qualified Puerto Rican operations to be returned to the mainland free of federal tax (even when largely exempted from Puerto Rican income taxes). In addition, one option under the credit allowed the U.S. corporate parent to apply a 50-50 split of their combined taxable income from the sale of products to third parties if the products were derived from an intangible asset, such as a patent, invention, formula, or trademark. Although a substantial portion of this income can be attributed to manufacturing intangibles developed and owned by the U.S. corporate parent, there is no requirement that the allocation of income from such manufacturing intangible assets reflect where costs were actually generated, or where value was actually added to the products. Consequently, corporate groups that produced pharmaceuticals, or other products whose final values are largely based on the value of intellectual property, were given flexibility under the law to shift net income to the possession corporations operating in Puerto Rico or another insular area. This shifting of income and value added to the Puerto Rican operations of possessions corporations ultimately gets reflected in economic data compiled by the Puerto Rican government, which is based heavily on data pulled from samples of corporate tax returns, and possibly in data that Census collects in its surveys of employers for the economic censuses, if the economic data the employers provide are based on their tax accounts. The nature of income shifting changed significantly after 1995, when the phaseout of the possessions tax credit began. Some of the corporate groups that owned possessions corporations in Puerto Rico began to close or reduce operations in those corporations and shift production to CFCs located on the island. Corporate groups still have some incentives to retain operations in Puerto Rico rather than shift that production to the United States. 
First, Puerto Rico responded to the phaseout of the credit by increasing the generosity of its own tax incentives. Second, manufacturing income earned from an active trade or business by the CFCs is not subject to federal tax unless it is repatriated to the United States. A change in income shifting has also occurred because the rule for arbitrarily splitting net income 50-50 between Puerto Rican and U.S. operations does not apply to CFCs. Nevertheless, corporate groups may be able to shift income to Puerto Rico through the manner in which they set prices on goods and services transferred among affiliated corporations. Data from the last four economic censuses of manufacturing in Puerto Rico, presented in figure 20, show that value added per employee in the pharmaceutical industry was already at least twice as high as the ratio for all other industries in 1987 and 1992. The difference between the pharmaceutical industry and the other industries grew larger in 1997 and then broadened dramatically by 2002. The 2002 figure of $1.5 million for value added per employee in Puerto Rican pharmaceutical manufacturing was three times as high as the ratio for the U.S. pharmaceutical industry for the same year. Moreover, while the U.S. ratio grew only 8 percent in real terms between 1997 and 2002, the Puerto Rican ratio grew by 65 percent over that same period. The data on value added per employee by type of business in figure 21 suggest that the sharp increase in that measure between 1997 and 2002 may have been a direct result of the shift in pharmaceutical operations from possessions corporations to CFCs. (These data are derived from a special research effort in which we obtained assistance from Census and IRS to aggregate data from the 2002 Economic Census of Puerto Rico by particular types of business entities, including possessions corporations and CFCs.) The value added per employee of $4.2 million for pharmaceuticals CFCs incorporated outside of Puerto Rico was dramatically higher than for any other type of business in Puerto Rico. The next highest ratio was $1.6 million for pharmaceuticals CFCs incorporated in Puerto Rico, which was still considerably higher than the ratio of $0.9 million for possessions corporations in the pharmaceutical industry. Those data, combined with the data in figure 20, suggest a significant change in transfer pricing by large pharmaceuticals groups, which makes it difficult to say how much of the strong reported growth in output and income in the Puerto Rican pharmaceutical industry, and in the manufacturing sector as a whole, represents an increase in actual economic activity. Data on rates of return on assets for possessions corporations and CFCs in the chemical industry do not confirm the conclusion that a dramatic change in income-shifting practices occurred as CFCs replaced possessions corporations in the industry. We used data from federal tax returns to compare various rates of return for CFCs and possessions corporations in the Puerto Rican chemical industry. The comparisons we were able to make for 1997 through 2001 did not show a consistent difference between the two types of corporations. The ratios of gross profits (the closest tax-data equivalent to value added) to total assets for CFCs were significantly higher than those for possessions corporations in both 1997 and 1999, but the ratios were very close together in 2001.
We also compared the gross and net operating rates of return of the two types of corporations and found that neither type dominated the other one consistently across the years. The results of our analysis are presented in appendix IV. <3.4. International Trade Plays a Large Role in Puerto Rico s Economy> International trade plays a much larger role in the Puerto Rican economy than it does in the U.S. economy. While the output of an economy (GDP) depends on the difference between exports and imports (net exports), the size of exports and imports relative to GDP are indicators of the importance of trade to the economy. For the United States, exports of goods and services averaged about 10 percent of GDP between 1980 and 2005. Imports increased from about 10 percent of GDP in the early 1980s to about 16 percent of GDP in 2005. While potential distortions in trade data should be kept in mind, the share of exports and imports has been substantially greater in Puerto Rico. For Puerto Rico, the value of exported goods and services as a percentage of GDP grew from about 70 percent of GDP in the 1980s to about 80 percent in 2005. Imports fell as a share of GDP from about 70 percent to about 63 percent in recent years. As reported in the Puerto Rican national accounts, the value of pharmaceutical imports and exports increased substantially from 1996 to 2005. The value of imported pharmaceuticals increased from about 9 percent of all merchandise imports to about 33 percent during that period. As a share of GDP, the value of imported pharmaceuticals increased from about 4 percent to about 15 percent. The value of pharmaceutical exports rose rapidly as a share of merchandise exports from about 27 percent to about 61 percent. As a percentage of GDP, the value of pharmaceutical exports rose from about 14 percent to about 42 percent. However, as noted above, a significant portion of the recorded increase in Puerto Rico s trade surplus may reflect changes in transfer pricing, with artificially low values for Puerto Rico s imports and high values for Puerto Rico s exports, rather than increased activity. While the United States is the largest trading partner for Puerto Rico for exports and is a large source of Puerto Rican imports, the foreign country share of imports to Puerto Rico has been growing since 1995. In 2005, slightly less than half of the value of imports to Puerto Rico came from foreign countries. About 80 percent of Puerto Rico s exports go to the United States. Puerto Rico s overall trade surplus reflects a trade surplus with the United States as Puerto Rico exports more to the United States than it imports from the United States, and a smaller trade deficit with the foreign countries. <3.5. Official Statistics Indicate That Unemployment Has Been Much Higher in Puerto Rico Than in the United States and Labor Force Participation Has Been Lower> Figure 22 shows the unemployment rates and labor force participation rates for the United States and Puerto Rico from 1980 to 2005. The unemployment rate has been significantly higher in Puerto Rico than in the United States, and the labor force participation rate has been much lower. Academics and economists from research institutions have offered several possible explanations for the relatively low labor force participation rate in Puerto Rico and attempted to determine which of these factors might be important. 
While the low labor force participation rate is seen as a crucial issue for the economic performance of Puerto Rico, there is no consensus on its cause. Possible explanations for the low labor force participation rate include the migration of Puerto Rican citizens with the most interest in participating in the labor force to seek higher-wage employment in the United States, leaving residents who have relatively less attachment to the labor force; the fact that government programs that are in place, such as the Nutrition Assistance Program (NAP, the Puerto Rican food stamp program) and disability insurance, can discourage work, while the U.S. program that encourages labor force participation, the Earned Income Tax Credit, is not part of the tax system in Puerto Rico; the fact that the U.S. minimum wage applies in Puerto Rico, which may discourage business demand for lower-skilled workers, who are likely to make up a larger share of the potential work force in Puerto Rico than in the United States; and the possibility that a relatively large share of Puerto Ricans work in the informal economy and that this work is not reflected in economic statistics. Regarding this last issue, analysts have raised concerns about the quality of the Puerto Rican labor force survey, which is the data source for the unemployment rate and the labor force participation rate. The survey is designed to be similar to the U.S. Current Population Survey (CPS), from which the U.S. data are derived, but the questions regarding labor market activity in the surveys differed, and the question asked by the Puerto Rico household survey may not have captured work activity in the informal sector of the economy as well as the question asked in the CPS. On the other hand, labor force participation as measured in the decennial census, which uses the same question as the CPS, has also been low, and the estimate for 2000 was lower than the household survey estimate for that year. The Bureau of Labor Statistics (BLS) has been working with the Puerto Rican government to improve the household survey in several areas. In addition, labor force data for 2005 are scheduled to be reported for Puerto Rico as a part of the Census Bureau's American Community Survey effort. Educational attainment can play an important role in developing labor market skills. Data on educational attainment in Puerto Rico are collected in the decennial census and can be compared to data for the United States. These data show that the gap in educational attainment between Puerto Rico and the United States narrowed significantly during the 1990s. Nonetheless, in 2000, 40 percent of the population over 25 in Puerto Rico had not finished high school, which is nearly double the U.S. share. At the same time, about 38 percent of adults reported having at least some college education (see table 4). Recent research concluded that there is a substantial mismatch between Puerto Rico's industry structure and the educational achievement of its population. While the mean years of schooling among Puerto Rican adults was substantially below that of any state in the last three censuses, the average years of schooling of people typically employed by the industries operating in Puerto Rico exceeds that of at least two-thirds of the states. The researchers suggest that the Puerto Rican economy has failed to generate jobs that fit the educational qualifications of the Commonwealth's population.
In some sense, therefore, Puerto Rico's missing jobs can be found in labor-intensive industries heavily reliant on less-educated workers. The authors conclude that the possessions tax credit and other federal tax incentives contributed to an industry structure that is poorly aligned with the sort of job opportunities needed by Puerto Rico's population. Annual data on employment in Puerto Rico come from two sources: the Puerto Rico household survey and the BLS establishment survey. The Puerto Rico household survey has consistent sector definitions across time and includes the self-employed. The establishment survey data are limited to employees and reflect the new North American Industry Classification System industry definitions. In the figures that follow, we aggregated some of the industry categories and show the distribution of employment by sector. Both surveys show employment in Puerto Rico generally increasing since 1991 and show manufacturing employment declining since 1995. As shown in figure 25, data from the Census of Manufacturing for Puerto Rico for 1997 and 2002 also indicate a decline in manufacturing employment. Manufacturing employment fell by about 27 percent from 1995 to 2005, according to establishment survey data. Both the household and establishment data sources show that the government sector employs a large percentage of workers, about 23 percent in the household survey and about 30 percent in the establishment survey. For the United States, manufacturing employment has been falling, both in absolute numbers of employees and as a percentage of all employees. Between 1980 and 2005, manufacturing employment fell by about 4.5 million employees (about 24 percent). From 1995 to 2005, manufacturing employment fell by about 3 million employees (about 17 percent). As of 2005, manufacturing employees represented about 10.7 percent of all employees. Government employees constituted about 16 percent of total employees in the United States, down from about 18 percent in 1980. <3.6. Since 1980, Real Per Capita Personal Income in Puerto Rico Has Not Grown Enough to Substantially Reduce the Gap between U.S. and Puerto Rican Living Standards> Although the likely imprecision of price deflators for Puerto Rico leaves the exact growth rate of real per capita personal income there difficult to determine, the rate has not been sufficient to substantially reduce the gap between U.S. and Puerto Rican living standards. Puerto Rican per capita personal income is well below that in the United States (see fig. 26). As we did in comparing U.S. and Puerto Rican GDP and GNP, we adjusted aggregate per capita personal income data using both U.S. and Puerto Rican price deflators. The growth rate in per capita personal income is somewhat higher in Puerto Rico than in the United States when the U.S. deflator is used to adjust Puerto Rican per capita personal income for inflation. In this case, the average annual percentage increase in Puerto Rican per capita personal income was 2.1 percent, while U.S. per capita personal income rose by an average of 2.0 percent per year. When the Puerto Rican deflator is used to make adjustments for inflation, Puerto Rican per capita personal income grew at a slower rate (1.1 percent) than in the United States (2.0 percent). The difference arises because the U.S. price deflator increased less than the Puerto Rico price deflator. Using both price indices serves to illustrate the sensitivity of the calculation to the index used.
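To make that sensitivity concrete, the short sketch below works through the calculation with hypothetical numbers. The income figures and deflator values are illustrative placeholders rather than the official U.S. or Puerto Rican series, and the function is our own; the point is only that the same nominal income series yields a noticeably lower measured real growth rate when it is deflated by an index that rises faster.

    # Illustrative only: hypothetical nominal incomes and deflator values, not official statistics.
    def avg_annual_real_growth(nominal_start, nominal_end, deflator_start, deflator_end, years):
        """Average annual growth rate of inflation-adjusted (real) income."""
        real_start = nominal_start / deflator_start
        real_end = nominal_end / deflator_end
        return (real_end / real_start) ** (1.0 / years) - 1.0

    # Hypothetical per capita personal income (current dollars) in 1980 and 2005.
    nominal_1980, nominal_2005, years = 3500.0, 12500.0, 25

    # Hypothetical deflators (1980 = 1.00); the Puerto Rican index rises faster than the U.S. index.
    us_deflator_2005, pr_deflator_2005 = 2.30, 2.95

    for label, deflator_2005 in [("U.S. deflator", us_deflator_2005),
                                 ("Puerto Rican deflator", pr_deflator_2005)]:
        growth = avg_annual_real_growth(nominal_1980, nominal_2005, 1.00, deflator_2005, years)
        print(f"Average annual real growth using the {label}: {growth:.1%}")
    # Prints roughly 1.8% with the U.S. deflator and 0.8% with the faster-rising
    # Puerto Rican deflator, the same qualitative pattern described in the text.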
In addition, private income transfers made to Puerto Rican residents by Puerto Rican emigrants now living in the United States may be understated, which would lead to an understatement of Puerto Rican personal income. As U.S. citizens, Puerto Ricans are free to migrate to the mainland United States and return as they wish. According to Census estimates, net migration from Puerto Rico to the United States in the 1980s totaled about 126,000. During the 1990s, net migration was estimated to be about 111,000. Census data show the distribution of income in Puerto Rico and the United States and the percentages of individuals and families with incomes below official poverty lines. The median household income in 1999 was $41,994 in the United States and $14,412 in Puerto Rico. In 1999, 48.2 percent of households in Puerto Rico had incomes below the poverty level, which was nearly four times the U.S. share, as shown in table 5. As the disparity between average incomes in the United States and Puerto Rico suggests, a much higher percentage of Puerto Rican households is in the lower income categories. In 1999, only about 10 percent of U.S. households had annual incomes below $10,000, compared to 37 percent of Puerto Rican households (see table 6). The distribution of income is more unequal in Puerto Rico than in the United States. Economies in general have a small share of households receiving a disproportionately large share of income. As a result, the ratio of mean to median household income exceeds 1.0. As an indication of the greater degree of income inequality in Puerto Rico, the ratio of mean to median household income in 1999 was 1.69 in Puerto Rico compared to 1.35 in the United States. <4. Much Possessions Corporation Activity Has Shifted to Affiliated Corporations> Possessions corporations have played an important role in the Puerto Rican economy, particularly in the manufacturing sector, where they accounted for well over half of value added throughout the 1990s. Most of the possessions tax credit and income earned by possessions corporations in Puerto Rico has been earned by corporations in the pharmaceutical industry. Once the possessions tax credit was repealed, many of the large corporate groups that owned possessions corporations in Puerto Rico began to shift their operations to other types of business entities. Although the various tax and economic census data that we present in this chapter have significant limitations, we believe that, together, they form the basis for a reasonably accurate picture of the broad changes that have occurred in Puerto Rico's manufacturing sector over the past two decades. Those data indicate that much of the decline in activity of possessions corporations in the manufacturing sector was offset by the growth in other corporations, so that some measures of aggregate activity remained close to their 1997 levels. For example, value added in manufacturing remained fairly constant between 1997 and 2002. Most of the offsetting growth was concentrated in the chemical industry, which is dominated by pharmaceuticals. <4.1. Possessions Corporations Dominated Puerto Rico's Manufacturing Sector up until the Late 1990s> Possessions corporations continued to dominate Puerto Rican manufacturing through the mid-1990s, despite the legislative changes that made the possessions tax credit significantly less generous after 1993.
According to the 1992 Economic Census of Puerto Rico Manufacturing, these corporations accounted for 42.2 percent of employment and 64.3 percent of value added in the manufacturing sector (as seen in fig. 27). By the next economic census in 1997, possessions corporations' share of value added had increased to 72 percent, while their share of employment remained little changed at 40.8 percent. This pattern of growth up to 1997 is also apparent in the data from the federal tax returns of possessions corporations shown in figure 28. The aggregate total income, gross profits, and net income of possessions corporations operating in Puerto Rico all increased slightly between 1993 and 1997 (after adjusting for inflation), although there was a small decline in the corporations' total assets. The growth in possessions corporation activity occurred despite the limitations that Congress placed on the possessions tax credit after 1993 and a decline in the number of corporations claiming the credit. Figure 29 shows that those limitations significantly reduced the generosity of the credit. Possessions corporations earned about 20 cents of credit for each dollar of income they earned in 1993, but only half that amount by 1997. Over that period, the number of corporations claiming the credit for operations in Puerto Rico fell from 378 to 291 and the amount of credit claimed declined from $5.8 billion to $3.2 billion. The decline in possessions corporation income, value added, and employment began after the Small Business Job Protection Act of 1996, which placed additional limits on the amount of credit that corporations could earn and, more importantly, repealed the credit completely for tax years beginning after 1995, subject to a 10-year phaseout. The generosity of the credit reached a low of less than 7 cents per dollar of income by 1999. The number of corporations claiming the credit fell to 124 by 2003 and the amount of credit they claimed that year fell to $1.1 billion. Moreover, in contrast to the period leading up to 1997, the aggregate total income, gross profits, and net income earned by possessions corporations all declined by more than 50 percent between 1997 and 2003, while their total assets declined by almost 30 percent. The significantly decreased importance of possessions corporations is also apparent in the most recent economic census data (fig. 27), showing that these corporations accounted for only 26.7 percent of manufacturing value added and only 31.8 percent of manufacturing employment in 2002. <4.2. The Pharmaceutical Industry Has Dominated the Use of the Possessions Tax Credit in Puerto Rico> Most of the possessions tax credit and income earned by possessions corporations in Puerto Rico has been earned by corporations in the pharmaceutical industry. Figure 31 shows that pharmaceuticals corporations earned over half of all the credit earned each year from 1995 through 2003. Figure 32 shows that these corporations earned an even larger share of the aggregate gross profit earned by possessions corporations in each of those years. Manufacturers of beverages and tobacco products, medical equipment, and computers, electronics, and electrical equipment were also heavy users of the credit during this period, though not nearly to the same extent as pharmaceuticals manufacturers. Both of these figures are based on data for possessions corporations in the 77 largest corporate groups operating in Puerto Rico. (See the following section.) <4.3.
Businesses Have a Variety of Options for Continuing Operations in Puerto Rico after They Cease Operating as Possessions Corporations> Parent corporations have a number of options for conducting business in Puerto Rico if they wish to do so after termination of the possessions tax credit. Large corporate groups are believed to have used at least four different approaches to rearranging their overall corporate structure (including the possessions corporation and their Puerto Rican operations) in anticipation of termination of the possessions tax credit. The U.S. federal tax consequences of these approaches vary as follows: The possessions corporation loses its 936 status but remains a subsidiary incorporated in the United States and is consolidated into its parent s federal tax return. The parent corporation includes the relevant income and expenses of the subsidiary when computing its own federal taxes. Tax attributes, such as carryovers of certain accumulated losses, of the former possessions corporation would be governed by applicable IRS regulations and guidance. The possessions corporation liquidates into its parent (i.e., it no longer remains a separate corporate entity). Generally, if the parent satisfies certain ownership requirements, no gain or loss would be recognized to either the parent or the subsidiary for U.S. federal income tax purposes. The domestic parent would inherit and take into account certain items of the former possessions corporation, such as earnings and profits, net operating and capital loss carryovers, and methods of accounting. No foreign tax credit is allowed for any foreign taxes paid in connection with the liquidation, and the deduction of certain losses and other tax attributes may be limited. The possessions corporation is converted into or replaced by a CFC. This change can occur if the possessions corporation reincorporates and conducts business as a CFC; if it sells or contributes most of its assets to a CFC; or if it winds down its operations as its parent corporation starts up a new CFC to operate in Puerto Rico. Any income that the replacement CFC earns from the active conduct of business in Puerto Rico or elsewhere outside of the United States generally is not taxed until it is repatriated to the U.S. shareholders in the form of dividends. A number of tax consequences arise in cases where the possessions corporation actually reincorporates as a CFC. There are also significant tax issues (discussed further below) relating to the transfer of assets (through either a contribution or a sale) from possessions corporations to CFCs. The possessions corporation is converted into or replaced by a limited liability corporation (LLC) or partnership. An LLC can elect to be treated as a corporation, as a partnership, or as a disregarded entity. If the LLC elects to be treated as a corporation, its net earnings would be included either individually or, if required to file a consolidated return, on its parent s return. If it chose partnership treatment, the LLC itself would generally not be subject to federal income tax but its income, deductions, gains, and losses would be distributed to its members, who would include such amounts in calculating their federal income tax. If the LLC is treated as a disregarded entity, its income, deductions, gains, and losses are included on the member s federal tax return. 
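To illustrate the basic federal tax difference between keeping a consolidated domestic subsidiary and operating through a CFC, the sketch below compares current-year federal tax under the two structures. It is a deliberately simplified illustration built only on the deferral rule described above: the flat 35 percent federal corporate rate is our own assumption rather than a figure from this report, the dollar amounts are hypothetical, and the sketch ignores Puerto Rican taxes, foreign tax credits, Subpart F income, and the many other rules that would apply to an actual taxpayer.

    # Simplified, hypothetical illustration of current-year federal tax under two of the
    # structures discussed above; assumes a flat 35 percent rate (our assumption) and
    # ignores foreign tax credits, Subpart F, Puerto Rican taxes, and other rules.
    ASSUMED_FEDERAL_RATE = 0.35

    def tax_consolidated_subsidiary(pr_income):
        """Domestic subsidiary consolidated into the parent's federal return:
        the Puerto Rican income is included and taxed currently."""
        return pr_income * ASSUMED_FEDERAL_RATE

    def tax_cfc(pr_income, repatriated_dividends=0.0):
        """CFC structure: active business income earned outside the United States is
        generally not taxed until it is repatriated to U.S. shareholders as dividends."""
        current_tax = repatriated_dividends * ASSUMED_FEDERAL_RATE
        deferred_income = pr_income - repatriated_dividends
        return current_tax, deferred_income

    income = 100.0  # hypothetical income earned in Puerto Rico
    print("Consolidated subsidiary, current federal tax:", tax_consolidated_subsidiary(income))
    for dividends in (0.0, 40.0):
        current_tax, deferred = tax_cfc(income, repatriated_dividends=dividends)
        print(f"CFC repatriating {dividends}: current federal tax = {current_tax}, "
              f"income on which tax is deferred = {deferred}")
    # Deferral does not eliminate the federal tax on CFC earnings; it postpones it
    # until those earnings are paid back to the U.S. shareholders.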
Parent corporations could substantially change the manner in which income from their Puerto Rican business operations was treated for federal tax purposes even without making a formal change in the legal status of their possessions corporations. The parents could simply reduce production by their possessions corporations and start up or expand production in other forms of businesses operating in Puerto Rico. We used tax return data from both IRS and the Treasury of Puerto Rico to track changes in the activity of possessions corporations, as well as to assess the extent to which declines in that activity have been offset by increases in the activity of affiliated businesses operating in Puerto Rico. In order to make this assessment for a particular group of affiliated corporations, we needed to examine data for each member of the group that had operations in Puerto Rico. Given that considerable effort was required to identify the group members that operated in Puerto Rico, we limited our review to the largest 77 groups, which included at least one possessions corporation between 1993 and 2001. These 77 large groups accounted for over 92 percent of the credit and income earned by possessions corporations in every year from 1993 through 2001 and for over 91 percent of the assets owned by such corporations in each of those years. The large groups included a total of 172 possessions corporations that we tracked between 1993 and 2003. The number of possessions corporations that these 77 large groups owned and operated in Puerto Rico declined from a high of 146 in 1995 to 58 by 2003. As of 2001, these groups also conducted operations in Puerto Rico through 49 CFCs and at least 28 other businesses. Fourteen of the groups operated both possessions corporations and CFCs in Puerto Rico in 2001. In the following section, we report on trends in the income and assets of these large corporate groups. The popular choice of replacing the operations of possessions corporations with CFCs offers long-term tax benefits but could entail high initial tax costs for some corporations. Many corporate groups have chosen to operate in Puerto Rico through CFCs, possibly to take advantage of the federal tax deferral on income earned there. Some may have rejected this choice because their possessions subsidiaries owned valuable intangible assets, such as drug patents or food recipes, and the transfer of these assets to a non-U.S. entity, such as a CFC, could have been treated as a taxable exchange, possibly resulting in a substantial, one-time tax liability. Affiliated groups can avoid this tax if they keep the intangible assets in their U.S. firms, rather than transferring them to their new CFCs. However, in order for those CFCs to use those intangibles in their production processes, they must pay royalties to the U.S. owners and those royalties would be subject to federal income tax. IRS officials have expressed concern that the repeal of section 936 has not had its intended effect. Congress repealed section 936 because it was viewed as providing an overly generous tax benefit to taxpayers with operations in Puerto Rico. However, IRS officials believe that despite the repeal of section 936, many taxpayers with operations in Puerto Rico could be incurring approximately the same or even lower tax liabilities than they did under section 936 by restructuring their activities through CFCs.
Taxpayers who converted into CFCs may have avoided the tax consequences typically associated with such a conversion, namely, tax liabilities arising from the transfer of intangibles from possessions corporations to CFCs or a significant increase in royalty payments from Puerto Rico. One private sector tax expert familiar with the practices of U.S. businesses operating in Puerto Rico could not recall any case in which a taxpayer reported a transfer of intangibles of any significant value from a possessions corporation to a CFC. The expert also told us that the reason the IRS has not seen a notable increase in royalty payments from CFCs to U.S. firms holding intangibles is that, well before the expiration of the possessions tax credit, corporate groups had their existing or newly formed CFCs enter into research cost-sharing arrangements with their possessions corporations so that they would be codevelopers of new intangibles and, thereby, would have certain ownership rights to use the technology without paying royalties. The groups also tried to involve their CFCs as much as possible in the development of new products through other arrangements, such as research partnerships with unrelated technology-developing firms. <4.4. Some Measures of Aggregate Manufacturing Activity Have Remained Constant Despite a Decline in Possessions Corporation Activity> A combination of tax return and economic census data indicates that the decline in income and value added of possessions corporations between 1997 and 2002 has been largely offset by an increase in the income and value added of affiliated corporations that left aggregate income and value added roughly constant. Although some evidence of a change in income-shifting behavior by these corporate groups makes it difficult to say how accurately trends in reported income and value-added data represent trends in actual economic activity in Puerto Rico, data on employment, capital expenditures, and total assets (which should not be distorted by income shifting) support the conclusion that a substantial amount of possessions corporation activity has been continued by other types of businesses. However, most of this continued activity is concentrated in the pharmaceutical industry and the decline in possessions corporation activity in other industries has not been offset. None of the data we present address the question of what corporate activity would have taken place during this period if the possessions tax credit had not been repealed. <4.4.1. Tax Return and Economic Census Data Indicate That Much of the Income and Value Added of Possessions Corporations Declined While That of Affiliated Businesses Increased> Tax return data on the affiliated corporate groups that have claimed almost all of the possessions tax credit indicate that between 1997 and 2001 at least a large portion (and possibly all) of the decline in reported incomes of possessions corporations operating in Puerto Rico was offset by increases in the reported incomes and total assets of affiliated corporations operating in Puerto Rico, particularly those of CFCs. The offset left the income that these groups earned in Puerto Rico roughly the same in 2001 as in 1997. This finding is consistent with data on value added in manufacturing from recent economic censuses of Puerto Rico. Gross profit, which equals income from sales minus the cost of goods sold, is the income measure from tax returns that is closest in definition to the value-added measure from census data that we presented earlier.
Both of these measures may be distorted by income shifting, as we explain in the next section; however, value added is considered to be the best measure of the economic importance of manufacturing activity. We examined data for both of these measures, as well as other measures not distorted by income shifting, to assess the extent to which possessions corporation activity has been replaced by the activity of other types of businesses. Figure 33 shows that the aggregate gross profit of the possessions corporations in our 77 large groups peaked at $28.8 billion in 1997 and then fell to $11.4 billion by 2003. The figure also presents our lower-bound estimates for the amount of gross profits from Puerto Rico that CFCs reported. These estimates include only the profits of those CFCs for which we had Puerto Rican tax returns or that appeared to have operations only in Puerto Rico because those are the cases where we can be the most confident that our figures represent profits attributable only to Puerto Rican operations. The gross profits of those CFCs grew from $2.4 billion to $7.1 billion between 1997 and 2001. These estimates are likely to represent a lower bound for the amount of CFC profits in Puerto Rico because they do not include any of the profits for CFCs whose income was difficult to allocate between Puerto Rico and other locations. We present alternative estimates, labeled CFC total if allocated by tax ratio, of the gross profits from Puerto Rico of all of the CFCs in our large groups. These more comprehensive estimates are not likely to be very precise, but they are consistent with some of the census data that we present on CFCs in chapter 5. The estimates show CFC gross profits growing from $3.0 billion to $11.5 billion between 1997 and 2001. Finally, figure 33 also shows the gross profits reported on Puerto Rican tax returns by members of the 77 large groups, other than possessions corporations and CFCs. The gross profits of these businesses increased from $3.0 billion to $7.0 billion between 1999 and 2001. The data in figure 33 indicate that much of the $10.7 billion decline in the gross profits of possessions corporations between 1997 and 2001 was offset by increases in the profits of affiliated corporations. The lower-bound estimates for CFCs grew by $4.7 billion over that period, while the profits of the other affiliates, including LLCs, grew by $3.9 billion between 1999 and 2001. The combined profits of these two sets of businesses, therefore, grew by about $8.7 billion. If we use the tax ratio estimate for all CFCs, the combined growth in profits grew by about $12.5 billion. The gross profit of the other affiliated businesses is likely to be understated relative to those of the possessions corporations because of differences in the income definitions used for federal and Puerto Rican tax purposes. For those possessions corporations for which we had both federal and Puerto Rican returns, the gross profit from the Puerto Rican return averaged about 70 percent of the gross profit on the federal return. For this reason figure 33 may understate the extent to which the decline in possessions corporations Puerto Rican operations has been offset by these other affiliates. 
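The degree of offset implied by figure 33 can be summarized with simple arithmetic. The sketch below uses only the rounded dollar changes cited above; the variable names and the percentage framing are ours, and small differences from the report's "about" totals reflect rounding in the published figures.

    # Offset arithmetic based on the rounded figures cited above (billions of dollars).
    decline_possessions = 10.7          # drop in possessions corporations' gross profits, 1997-2001
    growth_cfc_lower_bound = 4.7        # lower-bound CFC estimate, 1997-2001
    growth_cfc_tax_ratio = 11.5 - 3.0   # alternative estimate allocating all CFC profit by tax ratio
    growth_other_affiliates = 3.9       # other affiliates, including LLCs, 1999-2001

    for label, cfc_growth in [("lower-bound CFC estimate", growth_cfc_lower_bound),
                              ("tax-ratio CFC estimate", growth_cfc_tax_ratio)]:
        combined_growth = cfc_growth + growth_other_affiliates
        share_offset = combined_growth / decline_possessions
        print(f"{label}: combined affiliate growth = ${combined_growth:.1f} billion, "
              f"about {share_offset:.0%} of the possessions corporations' decline")
    # Roughly 80 percent of the decline is offset under the lower-bound estimate;
    # under the tax-ratio estimate the affiliates' growth exceeds the decline.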
Data from recent economic censuses on value added in Puerto Rican manufacturing lend additional support to the conclusion that we draw from figure 33 that much, if not all, of the decline in income of possessions corporations in Puerto Rico between 1997 and 2001 was offset by increases in the incomes of other types of businesses. Figure 34 shows that value added by possessions corporations in Puerto Rican manufacturing followed roughly the same pattern as the gross profits data presented in figure 33; it also shows that other types of businesses made up for approximately all of the possessions corporations' decline between 1997 and 2002. The extent to which the decline in income and value added of possessions corporations was offset by the growth of their affiliates varied significantly by industry. Figure 35 decomposes the last two columns of figure 34 into the chemical industry (which includes pharmaceuticals) and all other manufacturing industries. It shows that a significant drop in the value added of possessions corporations in the chemical industry was more than offset by the substantial growth in value added by other types of businesses. In contrast, the value added of both possessions corporations and all other types of businesses declined between 1997 and 2002 in the remainder of the manufacturing sector, outside of chemicals. Our tax data for large corporate groups showed similar variation across industries. The corporate groups in the chemicals and medical equipment industry group offset a larger proportion of the decline in the income of their possessions corporations between 1997 and 2002 with income from other types of affiliates operating in Puerto Rico than was the case for large corporate groups as a whole. Trends in the income of possessions corporations in the other two industrial groupings that we are able to present with our tax data (computer, electronics, and electrical equipment; and food and kindred products) were somewhat erratic between 1993 and 2001 before declining by 2003. There was negligible to no growth in the incomes of CFCs and other types of businesses in these two industrial groupings during the period we could observe between 1997 and 2002. (See tables 17 and 18 in app. IV.) <4.4.2. Data on Capital Expenditures, Total Assets, and Employment Also Indicate That a Substantial Amount of Possessions Corporation Activity Has Been Continued by Other Types of Businesses in Certain Industries> As we explained in chapter 3, the data on income and value added for members of large corporate groups operating in Puerto Rico may be distorted by changes in the income reporting practices of these groups during the late 1990s. For this reason it is difficult to know how accurately trends in reported income and value added represent trends in actual economic activity in Puerto Rico. Nevertheless, data on capital expenditures, total assets, and employment (which should not be distorted by income shifting) support the conclusion that a substantial amount of possessions corporation activity has been continued by other types of businesses. Much of this continued activity is concentrated in the chemical industry, which is dominated by pharmaceutical producers. The economic census data on capital expenditures on manufacturing plant and equipment in figure 36 show that this investment increased dramatically between 1997 and 2002 after having dropped from 1992 to 1997.
We cannot divide this time series of capital spending data between possessions corporations and other forms of business; however, figure 36 shows that most of the spending increase was in the pharmaceutical industry, which was the source of about two-thirds of total possessions corporations profits in 1997. Consequently, it appears that any overall decline in possessions corporations capital spending that may have occurred since 1997 must have been more than offset by the investment of other businesses. The tax data for our 77 large corporate groups show that the $12.1 billion decline in the total assets of the possessions corporations in these groups between 1997 and 2001 was largely offset by an increase of at least $9.4 billion in the total assets of affiliated corporations operating in Puerto Rico (see table 15 in app. IV). The decline in assets may have been more than fully offset, depending on the growth in the Puerto Rican assets of the CFCs that we were not able to include in our estimates. However, as was the case with income and value added, there were significant differences across industries behind the trends for the manufacturing sector as a whole. The decline in assets of possessions corporations in the chemical and medical equipment industries between 1997 and 2001 was more than offset by the increased assets of their affiliates even if we use just our lower-bound estimates for CFCs. In comparison, a little over half of the decline in possessions corporations assets in the computer, electronics, and electrical equipment industries between 1997 and 2001 was offset by the growth in affiliated CFCs assets. (See tables 16 and 17 in app. IV.) The economic census data on employment in Puerto Rico s manufacturing sector in figure 37 shows that the decline in employment by possessions corporations between 1997 and 2002 was not as drastic as the declines in their profits or value added over that period (shown previously in figs. 33 and 34); however, there was no offsetting increase in overall employment by other types of manufacturing firms. Figure 38, which decomposes the last two columns of figure 37 into the chemical industry and all other industries, shows that employment by possessions corporations in the chemical industry did, in fact, fall sharply between 1997 and 2002, but other types of businesses in the industry more than made up for that decline. In the remaining industries as a whole, there was a smaller percentage decrease in employment by possessions corporations but there was also a decrease, rather than an offsetting increase, in the employment by other types of businesses. The chemical industry is much less important in terms of overall employment in manufacturing than it is in terms of value added. For this reason the continued strength of that industry was not enough to prevent an overall decline in manufacturing employment. <5. U.S. Businesses Dominated Puerto Rican Manufacturing in 2002 but Played Smaller Roles in Other Sectors> U.S.-owned businesses accounted for at least 71 percent of value added and at least 54 percent of employment in Puerto Rico s manufacturing sector in 2002. CFCs produced most of this value added but possessions corporations still accounted for most of the employment by U.S. firms. The CFCs are particularly important in the pharmaceutical industry and much less so in other manufacturing industries. U.S. 
corporations appear to account for less than 25 percent of employment in Puerto Rico's wholesale and retail trade sectors, where local corporations are the most important employers. Similarly, U.S.-owned corporations are not the majority employers in any of the large Puerto Rican service industries for which data are available. <5.1. U.S. CFCs Have Become the Most Important Type of Business Entity in Puerto Rico's Manufacturing Sector in Terms of Value Added but Not in Terms of Employment> As of 2002, U.S. CFCs accounted for 42 percent of value added in Puerto Rico's manufacturing sector, a larger share than that of any other type of business entity (see fig. 39). Possessions corporations had the next largest share of value added with 27 percent, and other U.S. corporations accounted for 2 percent of the total. Together, these three types of businesses produced at least 71 percent of total manufacturing value added. A small number of U.S.-owned or U.S.-incorporated businesses may be included in the category corporations of type unknown, but we believe that most of the data for that category (in all of the figures in this chapter) are attributable to corporations that are not incorporated in the United States and are not CFCs. Possessions corporations remained the largest single type of employer, with 31 percent of the sector's total employment (see fig. 40). Despite their large share of manufacturing value added, CFCs had a relatively small share, 14 percent, of the sector's total employment, which resulted in the extraordinarily high ratios of value added per employee that we discussed earlier. In contrast, other U.S. corporations and corporations incorporated in Puerto Rico had significantly larger shares of total employment than they did of value added. A little less than two-thirds of the CFCs' value added and half of their employment are attributable to CFCs incorporated outside of Puerto Rico. This distribution of value added is similar to the estimated distribution of gross profit between the two types of CFCs, based on the tax data for our 77 large corporate groups for 2001. The estimates presented in figure 41 are based on our tax ratio approach for attributing portions of the income of multilocation CFCs to Puerto Rico. The estimates indicate that 70 percent of the gross profit and 73 percent of net income that CFCs earned in Puerto Rico in 2001 were earned by CFCs incorporated outside of Puerto Rico. Using the tax data, we estimate that more than three-quarters of the total gross and net income earned by the CFCs incorporated outside of Puerto Rico in 2001 is attributable to CFCs incorporated in the Cayman Islands, Ireland, the Netherlands, and the U.S. Virgin Islands. A comparison of figures 42 and 43 shows that the value added of CFCs in 2002 was concentrated in the pharmaceutical industry. These firms accounted for over half of the value added in that industry, or almost three times as much as the value added of possessions corporations. In contrast, CFCs accounted for only 13 percent of the value added in all of the remaining manufacturing sectors, where possessions corporations still dominated with a 48 percent share. At this more specific industry level of data, Census nondisclosure rules prevent us from providing as much detail about other forms of businesses. We needed to add pass-through entities into the all other and unknown category.
However, from table 20 in appendix V, we do know that between approximately 80 percent and 90 percent of the employees of these entities were concentrated in two industries, pharmaceuticals and medical equipment, and that between 25 percent and 63 percent of these employees were in each of these industries. If the value added of these entities was distributed across industries in approximately the same manner as their employment, then pass-through entities would have accounted for between 3 percent and 7 percent of value added in pharmaceuticals. Data in table 20 of appendix V show that possessions corporations and CFCs were approximately equal in importance in terms of employment in the pharmaceutical industry in 2002 and, together, they accounted for 61 percent of the industry's employment. The data also show that possessions corporations accounted for a little over a quarter of total employment in all other manufacturing industries, while CFCs accounted for only 9 percent. <5.2. The Role of U.S. Corporations Is Much Smaller in Puerto Rico's Wholesale and Retail Trade Than in Manufacturing> Corporations that were U.S. CFCs and businesses incorporated in the United States accounted for less than a quarter of total employment in the Puerto Rican wholesale trade sector and, as figure 44 shows, about half of their employment was in corporations other than CFCs or possessions corporations. Corporations in the unknown category, which we believe to be largely ones that are not incorporated in the United States or owned by U.S. parent corporations, were by far the largest employers in the wholesale trade in 2002, as shown in figure 44. Figure 45 indicates that this employment distribution was similar for the retail trade sector. The primary difference between the two sectors is that possessions corporations played no role at all in retail trade and sole proprietors played a more important role in that sector than in wholesale trade. The distributions of payroll across entities in these two sectors largely mirror the distributions of employment (see table 17 in app. V). <5.3. Neither Possessions Corporations nor CFCs Were Significant Employers in 2002 in Most Puerto Rican Service Industries for Which Data Are Available> In general, possessions corporations and CFCs played minor roles as employers in Puerto Rico's service sector. The 2002 Economic Census of Island Areas compiled data for 11 service industries, as well as the mining, utilities, and transportation and warehousing sectors in Puerto Rico. Table 7 shows the distribution of employment across types of businesses for the six largest services (in terms of employment) covered by the census. Tables 25 through 27 in appendix V show the distribution of employment, sales, and payroll for all 11 service industries and the three other sectors. CFCs accounted for 32.7 percent of employment in the information services industry (which includes telecommunications, broadcasting, publishing, motion pictures, and Internet services), but for no more than 5.1 percent in any of the other five large services. Possessions corporations accounted for 10 percent of employment in the accommodations industry but for no more than 2.4 percent in any of the other large services. Other U.S. corporations accounted for between 10 percent and 20 percent of employment in each of the six services. Most of the remaining employment in the large service industries is attributable to local corporations (in the type unknown group) and sole proprietors.
The category all other employers, which includes nonprofit entities, accounts for up to 22 percent of total employment in healthcare services, which is the largest service industry. <6. Taxes Per Capita in Puerto Rico Are Lower Than in the States but Are about the Same Share of Income> The taxes paid to all levels of government (federal, Commonwealth, and local) in Puerto Rico in 2002 were $3,071 per capita, considerably less than the per capita taxes of $9,426 paid in the states. However, the combined taxes paid by Puerto Rico residents amounted to 28 percent of their personal income, which was close to the 30 percent figure in the states. Puerto Rico's outstanding government debt in 2002 was much higher than that of state and local governments as a share of personal income, partly because the Commonwealth government has a wider range of responsibilities. <6.1. Taxes Paid Per Capita in Puerto Rico Are Lower Than Those in the States but the Taxes Are about the Same Share of Personal Income in Both Places> The amount of taxes that Puerto Rico residents paid per capita in fiscal year 2002 ($3,071) was about one-third of the amount paid by residents of the states ($9,426) (see fig. 46). The mix of the taxes was also quite different. While nearly 60 percent ($5,619) of the taxes paid by residents of the states were federal taxes, only about 25 percent ($760) of the total taxes paid by Puerto Rico residents were federal taxes because those residents generally are not subject to federal income tax on the income they earn in Puerto Rico. Data on federal taxes paid in the other insular areas are not available. Taxes paid by residents of the other insular areas to their own governments in 2002 amounted to $2,451 per capita, slightly higher than the $2,310 per capita that residents of Puerto Rico paid to the Commonwealth and municipal governments. The location where a tax is paid is not necessarily the same location as where the economic burden of the tax falls. The data we present in this chapter pertain to the former. Comparing the taxes Puerto Rico residents paid to the average of the five states whose residents paid the least total taxes, we found that Puerto Rico residents paid about 54 percent of the amount paid by these state residents ($5,713). The average percentage of taxes paid in these same five states that were federal taxes was nearly 47 percent ($2,705), still nearly double the percentage for Puerto Rico. The average per capita amount of taxes paid in the five highest-tax states was $15,491, five times the per capita tax in Puerto Rico. Taxes as a share of personal income are about the same in Puerto Rico and the states, which is not surprising because Puerto Rico's income per capita is so much lower. Taxes paid in Puerto Rico amounted to 28 percent of the Commonwealth's personal income, while those paid in the states amounted to 30 percent of aggregate state personal income. Taxes in the five lowest-tax states were an average of 23 percent of the states' aggregate personal income, while those in the five highest-tax states averaged 39 percent. (See table 28 in app. VI for additional detail.) <6.2. Income and Employment Taxes Account for about Two-thirds of the Taxes Paid in Both Puerto Rico and the States, but the Allocation of Those Taxes by Level of Government Differs between the Two Locations> As shown in figure 48, about 75 percent of the taxes paid in Puerto Rico are levied by the Commonwealth and municipal governments.
The property tax and gross receipts tax imposed by the municipal government accounted for a little over 17 percent of taxes paid, with the remainder going to the Commonwealth government. Commonwealth income taxes accounted for 41 percent of total taxes, with slightly more than half of that being paid by resident individuals. Sales and excise taxes represented 23 percent of the total. Data available from IRS for Puerto Rico and the states do not separate federal individual income tax payments from payments of federal employment taxes, such as those for Social Security, Medicare, and unemployment compensation; however, most of the tax shown for that combined category in figure 48 should be employment taxes because most residents of Puerto Rico pay little, if any, federal income tax. Even less federal estate, gift, or excise tax is paid in Puerto Rico (the figures for federal estate and gift taxes round to 0 percent). Federal excise taxes on goods manufactured in Puerto Rico and sold in the states are transferred to the Commonwealth and more than offset any federal excise tax on products consumed there. In contrast to the case of Puerto Rico, more than half of the taxes paid in the states go to the federal government, which provides a larger range of services to the states than it does to the Commonwealth. Federal individual income and employment taxes accounted for 56 percent of the taxes paid, while federal estate, gift, and excise taxes amounted to an additional 3 percent, resulting in a combined federal share of 59 percent (see fig. 49). When the 10 percent of taxes paid in the form of state and local income taxes are added to the 56 percent that go to federal individual income and employment taxes, the resulting 66 percent share is almost equal to the 67 percent share in Puerto Rico for this same group of taxes. Of the remaining total, state and local property taxes and other revenues (including lotteries and licenses) account for greater shares of the total taxes paid in the states than they do in Puerto Rico, while sales and excise taxes represent a smaller share. <6.3. Puerto Rico's Outstanding Government Debt in 2002 Was Much Higher Than That of State and Local Governments as a Share of Personal Income, Partly Because the Commonwealth Government Has a Wider Range of Responsibilities> The amount of Puerto Rican government-issued debt outstanding as of 2002 was slightly higher in per capita terms, but much higher as a share of personal income, than was state and local government-issued debt. As shown in figure 50, the outstanding amount of Puerto Rican government debt per capita in 2002 was about $7,580, compared to a national average of $5,820 for state and local government-issued debt. The per capita debt of the governments of the other insular areas in 2002 was about $5,690. Although all of this debt was issued by the respective governments, some of it is directed to private use and will be paid back by targeted beneficiaries. About 16 percent of Puerto Rico's government debt fell into this private use category, compared to about 23 percent for state and local government debt. <6.4. Federal Grants and Payments to Governments Per Capita Are the Same for Puerto Rico and the States but Direct Federal Payments to Individuals Per Capita Are Significantly Lower in Puerto Rico> The states and insular areas receive funds from the federal government in the form of grants, direct aid, loans, and insurance and procurement payments (see table 8).
Federal grants and payments to the Puerto Rican government in 2002 amounted to $1,242 per capita, about the same as the $1,264 per capita paid to all state and local governments in the states, but less than the $1,703 per capita paid to the other insular area governments. The $2,057 per capita of direct federal payments to individuals in Puerto Rico was well below the $3,648 per capita paid to state residents, but higher than the $1,418 per capita paid to residents of the other insular areas. The following chapter and appendix VII provide detailed information on the amount of spending for specific federal social programs in Puerto Rico, the states, and other insular areas and describe similarities and differences in the operation of these programs in the various locations. The per capita federal payments of $336 for salaries, wages, and procurement in Puerto Rico were about 20 percent of payments for those purposes in the states and the other insular areas. Some federal funds that Puerto Rico received as grants and direct payments were in the form of a rebate on customs duties and a cover over of excise taxes collected on rum. These funding sources are not available to the states, the District of Columbia, or most of the insular areas, with the exception of the U.S. Virgin Islands. On a per capita basis, the U.S. Virgin Islands received both a larger rebate payment and a larger cover-over payment than Puerto Rico (see table 9). <7. The Extent That Federal Social Programs in Puerto Rico Mirror Those in the States and Other U.S. Insular Areas Varies> <7.1. Comparison of Selected Federal Social Programs> Like the states, Puerto Rico and the other U.S. insular areas receive federal funds for a variety of social programs, including federal housing assistance, education, and health care financing programs, which provide assistance to elderly and needy families and individuals. Generally, the social programs we examined in these areas targeted similar populations and delivered similar services, although Puerto Rico and the other insular areas did not always do so through the program as it exists in the states (see table 10). For example, in lieu of the Food Stamp Program available in the states, which is an entitlement program based on the number of participants, Puerto Rico receives a capped block grant that has similar eligibility requirements. The major difference between some of the social programs we examined in the states and those in Puerto Rico and the other insular areas is how they are funded. For example, whereas federal Medicaid spending is an open-ended entitlement to the states, it is subject to a statutory cap and a limited matching rate in Puerto Rico and the other insular areas. Some of the social programs and housing programs that we examined are available in the states, but are not available in some of the insular areas. More detailed information on how each of the programs is applied in the insular areas and the states can be found in appendix VII.
Why GAO Did This Study The federal possessions tax credit, which was designed to encourage U.S. corporate investment in Puerto Rico and other insular areas, expires this year. Proponents of continued federal economic assistance to Puerto Rico have presented a variety of proposals for congressional consideration. In response to a request from the U.S. Senate Committee on Finance, this study compares trends in Puerto Rico's principal economic indicators with those for the United States; reports on changes in the activities and tax status of the corporations that have claimed the possessions tax credit; explains how fiscal relations between the federal government and Puerto Rico differ from the federal government's relations with the states and other insular areas; and compares the taxes paid to all levels of government by residents of Puerto Rico, the states, and other insular areas. GAO used the latest data available from multiple federal and Puerto Rican government agencies. Data limitations are noted where relevant. Key findings are based on multiple measures from different sources. GAO is not making any recommendations in this report. In comments on this report, the Governor of Puerto Rico said the report will be useful for evaluating policy options. What GAO Found Puerto Rico's per capita gross domestic product (GDP, a broad measure of income earned within the Commonwealth) in 2005 was a little over half of that for the United States. Puerto Rico's per capita gross national product (GNP, which covers income earned only by residents of the Commonwealth) was even lower relative to the United States. Concerns about Puerto Rico's official price indexes make it difficult to say whether the per capita GNP of Puerto Rican residents has grown more rapidly than that of U.S. residents; however, the absolute gap between the two has increased. U.S. corporations claiming the possessions tax credit dominated Puerto Rico's manufacturing sector into the late 1990s. After the tax credit was repealed in 1996, beginning a 10-year phaseout period, the activity of these corporations decreased significantly. Between 1997 and 2002 (the latest data available), value added in these corporations decreased by about two-thirds. A variety of data indicates that much of this decline was offset by growth in other corporations, so that some measures of aggregate activity remained close to their 1997 levels. For example, value added in manufacturing remained fairly constant between 1997 and 2002. Most of the offsetting growth was in the pharmaceutical industry. Residents of Puerto Rico pay considerably less total tax per capita than U.S. residents. However, because of lower incomes, they pay about the same percentage of their personal income in taxes. The composition of taxes differed between Puerto Rico and the states, with federal taxes being a larger share of the total in the states. This difference reflects the facts that (1) residents of Puerto Rico generally do not pay federal income tax on income they earn in the Commonwealth and (2) the Commonwealth government has a wider range of responsibilities than do U.S. state and local governments.
<1. Background> Hepatitis C was first recognized as a unique disease in 1989. It is the most common chronic blood-borne infection in the United States and is a leading cause of chronic liver disease. The virus causes a chronic infection in 85 percent of cases. Hepatitis C, which is the leading indication for liver transplantation, can lead to liver cancer, cirrhosis (scarring of the liver), or end-stage liver disease. Most people infected with hepatitis C are relatively free of physical symptoms. While hepatitis C antibodies generally appear in the blood within 3 months of infection, it can take 15 years or longer for the infection to develop into cirrhosis. Blood tests to detect the hepatitis C antibody, which became available in 1992, have helped to virtually eliminate the risk of infection through blood transfusions and have helped curb the spread of the virus. Many individuals were already infected, however, and because many of them have no symptoms, they are unaware of their infection. Hepatitis C continues to be spread through blood exposure, such as inadvertent needle-stick injuries in health care workers and the sharing of needles by intravenous drug abusers. Early detection of hepatitis C is important because undiagnosed persons miss opportunities to safeguard their health and may unknowingly behave in ways that could speed the progression of the disease. For example, alcohol use can hasten the onset of cirrhosis and liver failure in those infected with the hepatitis C virus. In addition, persons carrying the virus pose a public health threat because they can infect others. The Centers for Disease Control and Prevention estimates that nearly 4 million Americans are infected with the hepatitis C virus. Approximately 30,000 new infections occur annually. The prevalence of hepatitis C infection among veterans is unknown, but limited survey data suggest that hepatitis C has a higher prevalence among veterans who are currently using VA's health care system than among the general population because of veterans' higher frequency of risk factors. A 6-year study (1992-1998) of veterans who received health care at the VA Palo Alto Health Care System in Northern California reported that hepatitis C infection was much more common among veterans within a very narrow age distribution (41 to 60 years of age) and that intravenous drug use was the major risk factor. VA began a national study of the prevalence of hepatitis C in the veteran population in October 2001. Data collection for the study has been completed, but results have not been approved for release. The prevalence of hepatitis C among veterans could have a significant impact on current and future VA health care resources, because hepatitis C accounts for over half of the liver transplants needed by VA patients, costing as much as $140,000 per transplant, and the drug therapy to treat hepatitis C is costly, at about $13,000 for a 48-week treatment regimen. In the last few years, considerable research has been done concerning hepatitis C. The National Institutes of Health (NIH) held a consensus development conference on hepatitis C in 1997 to assess the methods used to diagnose, treat, and manage hepatitis C infections. In June 2002, NIH convened a second hepatitis C consensus development conference to review developments in management and treatment of the disease and identify directions for future research. This second panel concluded that substantial advances had been made in the effectiveness of drug therapy for chronic hepatitis C infection. 
VA s Public Health Strategic Healthcare Group is responsible for VA s hepatitis C program, which mandates universal screening of veterans to identify at-risk veterans when they visit VA facilities for routine medical care and testing of those with identified risk factors, or those who simply want to be tested. VA has developed guidelines intended to assist health care providers who screen, test, and counsel veterans for hepatitis C. Providers are to educate veterans about their risk of acquiring hepatitis C, notify veterans of hepatitis C test results, counsel those infected with the virus, help facilitate behavior changes to reduce veterans risk of transmitting hepatitis C, and recommend a course of action. In January 2003, we reported that VA medical facilities varied considerably in the time that veterans must wait before physician specialists evaluate their medical conditions concerning hepatitis C treatment recommendations. To assess the effectiveness of VA s implementation of its universal screening and testing policy, VA included performance measures in the fiscal year 2002 network performance plan. Network performance measures are used by VA to hold managers accountable for the quality of health care provided to veterans. For fiscal year 2002, the national goal for testing veterans identified as at risk for hepatitis C was established at 55 percent based on preliminary performance results obtained by VA. To measure compliance with the hepatitis C performance measures, VA uses data collected monthly through its External Peer Review Program, a performance measurement process under which medical record reviewers collect data from a sample of veterans computerized medical records. Development of VA s computerized medical record began in the mid-1990s when VA integrated a set of clinical applications that work together to provide clinicians with comprehensive medical information about the veterans they treat. Clinical information is readily accessible to health care providers at the point of care because the veteran s medical record is always available in VA s computer system. All VA medical facilities have computerized medical record systems. Clinical reminders are electronic alerts in veterans computerized medical records that remind providers to address specific health issues. For example, a clinical reminder would alert the provider that a veteran needs to be screened for certain types of cancer or other disease risk factors, such as hepatitis C. In July 2000, VA required the installation of hepatitis C clinical reminder software in the computerized medical record at all facilities. This reminder alerted providers when they opened a veteran s computerized medical record that the veteran needed to be screened for hepatitis C. In fiscal year 2002, VA required medical facilities to install an enhanced version of the July 2000 clinical reminder. The enhanced version alerts the provider to at-risk veterans who need hepatitis C testing, is linked directly to the entry of laboratory orders for the test, and is satisfied once the hepatitis C test is ordered. <2. Thousands of Veterans Identified as At Risk Remain Untested for Hepatitis C Despite VA Exceeding Its Testing Goal> Even though VA s fiscal year 2002 performance measurement results show that it tested 62 percent of veterans identified to be at risk for hepatitis C, exceeding its national goal of 55 percent, thousands of veterans in the sample who were identified as at risk were not tested. 
Moreover, the percentage of veterans identified as at risk who were tested varied widely among VA s 21 health care networks. Specifically, we found that VA identified in its performance measurement sample 8,501 veterans nationwide who had hepatitis C risk factors out of a sample of 40,489 veterans visiting VA medical facilities during fiscal year 2002. VA determined that tests were completed, in fiscal year 2002 or earlier, for 62 percent of the 8,501 veterans based on a review of each veteran s medical record through its performance measurement process. For the remaining 38 percent (3,269 veterans), VA did not complete hepatitis C tests when the veterans visited VA facilities. The percentage of identified at-risk veterans tested for hepatitis C ranged, as table 1 shows, from 45 to 80 percent for individual networks. Fourteen of VA s 21 health care networks exceeded VA s national testing performance goal of 55 percent, with 7 networks exceeding VA s national testing performance level of 62 percent. The remaining 7 networks that did not meet VA s national performance goal tested from 45 percent to 54 percent of at-risk veterans. VA s fiscal year 2002 testing rate for veterans identified as at risk for hepatitis C reflects tests performed in fiscal year 2002 and in prior fiscal years. Thus, a veteran who was identified as at risk and tested for hepatitis C in fiscal year 1998 and whose medical record was reviewed as part of the fiscal year 2002 sample would be counted as tested in VA s fiscal year 2002 performance measurement result. As a result of using this cumulative measurement, VA s fiscal year 2002 performance result for testing at-risk veterans who visited VA facilities in fiscal year 2002 and need hepatitis C tests is unknown. To determine if the testing rate is improving for veterans needing hepatitis C tests when they were seen at VA in fiscal year 2002, VA would also need to look at a subset of the sample of veterans currently included in its performance measure. For example, when we excluded veterans from the sample who were tested for hepatitis C prior to fiscal year 2002, and included in the performance measurement sample only those veterans who were seen by VA in fiscal year 2002 and needed to be tested for hepatitis C, we found Network 5 tested 38 percent of these veterans as compared to Network 5 s cumulative performance measurement result of 60 percent. <3. Several Factors Impeded One Network s Efforts to Test Veterans Identified as At Risk> We identified three factors that impeded the process used by our case study network, VA s Network 5 (Baltimore), for testing veterans identified as at risk for hepatitis C. The factors were tests not being ordered by the provider, ordered tests not being completed, and providers being unaware that needed tests had not been ordered or completed. More than two- thirds of the time, veterans identified as at risk were not tested because providers did not order the test, a crucial step in the process. The remainder of these untested veterans had tests ordered by providers, but the actual laboratory testing process was not completed. Moreover, veterans in need of hepatitis C testing had not been tested because providers did not always recognize during subsequent clinic visits that the hepatitis C testing process had not been completed. These factors are similar to those we identified and reported in our testimony in June 2001. <3.1. 
Hepatitis C Tests Were Not Always Ordered for Veterans Identified as At Risk> Primary care providers and clinicians in Network 5 s three facilities offered two reasons that hepatitis C tests were not ordered for over two- thirds of the veterans identified as at risk but not tested for hepatitis C in the Network 5 fiscal year 2002 performance measurement sample. First, facilities lacked a method for clear communication between nurses who identified veterans risk factors and providers who ordered hepatitis C tests. For example, in two facilities, nurses identified veterans need for testing but providers were not alerted through a reminder in the computerized medical record to order a hepatitis C test. In one of these facilities, because nursing staff were at times delayed in entering a note in the computerized medical record after screening a veteran for hepatitis C risk factors, the provider was unaware of the need to order a test for a veteran identified as at risk. The three network facilities have changed their practices for ordering tests, and as of late 2002, nursing staff in each of the facilities are ordering hepatitis C tests for at-risk veterans. The second reason for tests not being ordered, which was offered by a clinician in another one of the three Network 5 facilities, was that nursing staff did not properly complete the ordering procedure in the computer. Although nurses identified at-risk veterans using the hepatitis C screening clinical reminder in the medical record, they sometimes overlooked the chance the reminder gave them to place a test order. To correct this, nursing staff were retrained on the proper use of the reminder. <3.2. Hepatitis C Test Orders Were Not Always Completed> For the remaining 30 percent of untested veterans in Network 5, tests were not completed for veterans who visited laboratories to have blood drawn after hepatitis C tests were ordered. One reason that laboratory staff did not obtain blood samples for tests was because more than two-thirds of the veterans test orders had expired by the time they visited the laboratory. VA medical facilities consider an ordered test to be expired or inactive if the veteran s visit to the laboratory falls outside the number of days designated by the facility. For example, at two Network 5 facilities, laboratory staff considered a test order to be expired or inactive if the date of the order was more than 30 days before or after the veteran visited the laboratory. If the veteran s hepatitis C test was ordered and the veteran visited the laboratory to have the test completed 31 days later, the test would not be completed because the order would have exceeded the 30- day period and would have expired. Providers can also select future dates as effective dates. If the provider had designated a future date for the order and the veteran visited the laboratory within 30 days of that future date, the order would be considered active. Another reason for incomplete tests was that laboratory staff overlooked some active test orders when veterans visited the laboratory. VA facility officials told us that laboratory staff could miss test orders, given the many test orders some veterans have in their computerized medical records. The computer package used by laboratory staff to identify active test orders differs from the computer package used by providers to order tests. The laboratory package does not allow staff to easily identify all active test orders for a specific veteran by creating a summary of active test orders. 
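The activity window itself is a simple date rule. The sketch below illustrates, under the 30-day window described above for two Network 5 facilities, how an order's active status could be determined; the function and field names are hypothetical and do not represent VA software.

```python
from datetime import date, timedelta

# Hypothetical illustration of the 30-day activity window described above:
# an order is considered active if the veteran's laboratory visit falls
# within 30 days before or after the order's effective date (the order date
# or a provider-designated future date).
ACTIVITY_WINDOW = timedelta(days=30)

def order_is_active(effective_date: date, lab_visit_date: date) -> bool:
    """Return True if the lab visit falls inside the order's activity window."""
    return abs(lab_visit_date - effective_date) <= ACTIVITY_WINDOW

# Example: an order effective June 1 is active for a laboratory visit on
# June 25, but expired for a visit on July 10.
print(order_is_active(date(2002, 6, 1), date(2002, 6, 25)))  # True
print(order_is_active(date(2002, 6, 1), date(2002, 7, 10)))  # False
```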
According to a laboratory supervisor at one facility, the process for identifying active test orders is cumbersome because staff must scroll back and forth through a list of orders to find active laboratory test orders. Further complicating the identification of active orders for laboratory staff, veterans may have multiple laboratory test orders submitted on different dates from several providers. As a result, when the veteran visits the laboratory to have tests completed, instead of having a summary of active test orders, staff must scroll through a daily list of ordered tests (in two facilities, up to 60 days of orders) to identify the laboratory tests that need to be completed. Network and facility officials are aware of, but have not successfully addressed, this problem. VA plans to upgrade the computer package used by laboratory staff during fiscal year 2005. <3.3. Providers Often Unaware That Hepatitis C Tests Were Not Ordered or Completed> Hepatitis C tests that were not ordered or completed sometimes went undetected for long periods in Network 5, even though veterans often made multiple visits to primary care providers after their hepatitis C risk factors were identified. Our review of medical records showed that nearly two-thirds of the at-risk veterans in Network 5's performance measurement sample who did not have ordered or completed hepatitis C tests had risk factors identified primarily in fiscal years 2002 and 2001. All veterans who were identified as at risk but did not have hepatitis C test orders visited VA primary care providers at least once after having a risk factor identified during a previous primary care visit, including nearly 70 percent who visited more than three times. Further, almost all of the at-risk veterans who had hepatitis C tests ordered but not completed returned for follow-up visits for medical care. Even when the first follow-up visits were made to the same providers who originally identified these veterans as being at risk for hepatitis C, providers did not recognize that hepatitis C tests had not been ordered or completed. Providers did not follow up by checking for hepatitis C test results in the computerized medical records of these veterans. Most of these veterans subsequently visited the laboratory to have blood drawn for other tests and, therefore, could have had the hepatitis C test completed if the providers had recognized that test results were not available and reordered the hepatitis C tests. <4. Some VA Networks and Facilities Have Taken Action Intended to Improve Hepatitis C Testing of Veterans Identified as At Risk> Steps intended to improve the testing rate of veterans identified as at risk for hepatitis C have been taken in three of VA's 21 health care networks. VA network and facility officials in the three networks we reviewed, Network 5 (Baltimore), Network 2 (Albany), and Network 9 (Nashville), identified similar factors that impede hepatitis C testing and most often focused on getting tests ordered immediately following risk factor identification. Officials in two networks modified VA's required hepatitis C testing clinical reminder, which is satisfied when a hepatitis C test is ordered, to continue to alert the provider until a hepatitis C test result is in the medical record. Officials at two facilities, one in Network 5 and the other in Network 9, created a safety net for veterans at risk for hepatitis C who remain untested by developing a method that looks back through computerized medical records to identify these veterans. 
The method has been adopted in all six facilities in Network 9; the other two facilities in Network 5 have not adopted it. <4.1. Some Networks and Facilities Took Steps Intended to Improve Hepatitis C Test Ordering and Completion> VA network and facility managers in two networks we reviewed Networks 2 and 9 instituted networkwide changes intended to improve the ordering of hepatitis C tests for veterans identified as at risk. Facility officials recognized that VA s enhanced clinical reminder that facilities were required to install by the end of fiscal year 2002 only alerted providers to veterans without ordered hepatitis C tests and did not alert providers to veterans with ordered but incomplete tests. These two networks independently changed this reminder to improve compliance with the testing of veterans at risk for hepatitis C. In both networks, the clinical reminder was modified to continue to alert the provider, even after a hepatitis C test was ordered. Thus, if the laboratory has not completed the order, the reminder is intended to act as a backup system to alert the provider that a hepatitis C test still needs to be completed. Providers continue to receive alerts until a hepatitis C test result is placed in the medical record, ensuring that providers are aware that a hepatitis C test might need to be reordered. The new clinical reminder was implemented in Network 2 in January 2002, and Network 9 piloted the reminder at one facility and then implemented it in all six network facilities in November 2002. <4.2. Some Facilities Developed a Safety Net for Veterans Identified as At Risk Who Have Not Been Tested> Officials at two facilities in our review searched all records in their facilities computerized medical record systems and found several thousand untested veterans identified as at risk for hepatitis C. The process, referred to as a look back, involves searching all medical records to identify veterans who have risk factors for hepatitis C but have not been tested either because the providers did not order the tests or ordered tests were not completed. The look back serves as a safety net for these veterans. The network or facility can perform the look back with any chosen frequency and over any period of time. The population searched in a look back includes all veteran users of the VA facility and is more inclusive than the population that is sampled monthly in VA s performance measurement process. As a result of a look back, one facility manager in Network 5 identified 2,000 veterans who had hepatitis C risk factors identified since January 2001 but had not been tested as of August 2002. Facility staff began contacting the identified veterans in October 2002 to offer them the opportunity to be tested. Although officials in the other two Network 5 facilities have the technical capability to identify and contact all untested veterans determined to be at risk for hepatitis C, they have not done so. An official at one facility not currently conducting look back searches stated that the facility would need support from those with computer expertise to conduct a look back search. A facility manager in Network 9 identified, through a look back, more than 1,500 veterans who had identified risk factors for hepatitis C but were not tested from January 2001 to September 2002. The manager in this facility began identifying untested, at-risk veterans in late March 2003 and providers subsequently began contacting these veterans to arrange testing opportunities. 
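In essence, a look back is a bulk query over a facility's computerized records for veterans with an identified risk factor but no hepatitis C test result or documented refusal on file. The sketch below is a simplified, hypothetical illustration of such a query; the record structure and field names are assumed for the example and do not represent VA's systems.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical, simplified patient record; the real computerized medical
# record is far richer than this illustration.
@dataclass
class VeteranRecord:
    veteran_id: str
    risk_factor_identified_on: Optional[date]  # None if never flagged as at risk
    hcv_test_result_on: Optional[date]         # None if no test result on file
    refused_testing: bool = False

def look_back(records: List[VeteranRecord], since: date) -> List[VeteranRecord]:
    """Return veterans flagged as at risk since `since` with no hepatitis C
    test result and no documented refusal -- the group a look back targets."""
    return [
        r for r in records
        if r.risk_factor_identified_on is not None
        and r.risk_factor_identified_on >= since
        and r.hcv_test_result_on is None
        and not r.refused_testing
    ]

# Example: find untested, at-risk veterans flagged since January 2001.
sample = [
    VeteranRecord("A", date(2001, 3, 4), None),
    VeteranRecord("B", date(2002, 5, 1), date(2002, 5, 20)),
    VeteranRecord("C", None, None),
]
print([r.veteran_id for r in look_back(sample, date(2001, 1, 1))])  # ['A']
```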
Other Network 9 facility managers have also begun to identify untested, at-risk veterans. Given that two facilities in our review have identified over 3,000 at-risk veterans in need of testing through look back searches, it is likely that similar situations exist at other VA facilities. <5. Conclusions> Although VA met its goal for fiscal year 2002, thousands of veterans at risk for hepatitis C remained untested. Problems persisted with obtaining and completing hepatitis C test orders. As a result, many veterans identified as at risk did not know whether they had hepatitis C. These undiagnosed veterans risk unknowingly transmitting the disease as well as potentially developing complications resulting from delayed treatment. Some networks and facilities have upgraded VA's required hepatitis C clinical reminder to continue to alert providers until a hepatitis C test result is present in the medical record. Such a system appears to have merit, but neither the networks nor VA has evaluated its effectiveness. Network and facility managers would benefit from knowing, in addition to the cumulative results, current fiscal year performance results for hepatitis C testing to determine the effectiveness of actions taken to improve hepatitis C testing rates. Some facilities have compensated for weaknesses in hepatitis C test ordering and completion processes by conducting look backs through computerized medical record systems to identify all at-risk veterans in need of testing. If all facilities were to conduct look back searches, potentially thousands more untested, at-risk veterans would be identified. <6. Recommendations for Executive Action> To improve VA's testing of veterans identified as at risk of hepatitis C infection, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to (1) determine the effectiveness of actions taken by networks and facilities to improve the hepatitis C testing rates for veterans and, where actions have been successful, consider applying these improvements systemwide, and (2) provide local managers with information on current fiscal year performance results, using a subset of the performance measurement sample of veterans, in order for them to determine the effectiveness of actions taken to improve hepatitis C testing processes. <7. Agency Comments and Our Evaluation> In commenting on a draft of this report, VA concurred with our recommendations. VA said its agreement with the report's findings was somewhat qualified because it was based on fiscal year 2002 performance measurement results. VA stated that the use of fiscal year 2002 results does not accurately reflect the significant improvement in VA's hepatitis C testing performance, which rose from 62 percent in fiscal year 2002 to 86 percent in fiscal year 2003, according to results that became available recently. VA, however, did not include its fiscal year 2003 hepatitis C testing performance results by individual network, and as a result, we do not know if the wide variation in network results, which we found in fiscal year 2002, still exists in fiscal year 2003. We incorporated updated performance information provided by VA where appropriate. VA did report that it has, as part of its fiscal year 2003 hepatitis C performance measurement system, provided local facility managers with a tool to assess real-time performance in addition to cumulative performance. Because this tool was not available at the time we conducted our audit work, we were unable to assess its effectiveness. 
VA's written comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. We also will make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix III. Appendix I: Scope and Methodology To follow up on the Department of Veterans Affairs (VA) implementation of performance measures for hepatitis C, we (1) reviewed VA's fiscal year 2002 performance measurement results of testing veterans it identified as at risk for hepatitis C, (2) identified factors that impede VA's efforts to test veterans for hepatitis C in one VA health care network, and (3) identified actions taken by VA networks and medical facilities intended to improve the testing rate of veterans identified as at risk for hepatitis C. We reviewed VA's fiscal year 2002 hepatitis C testing performance results, the most recently available data at the time we conducted our work, for a sample of 8,501 veterans identified as at risk and compared VA's national and network results for fiscal year 2002 against VA's performance goal for hepatitis C testing. The sample of veterans identified as at risk for hepatitis C was selected from VA's performance measurement process (also referred to as the External Peer Review Process), which is based on data abstracted from medical records by a contractor. In addition, we looked at one VA health care network's testing rate for at-risk veterans visiting its clinics in fiscal year 2002. To test the reliability of VA's hepatitis C performance measurement data, we reviewed 288 medical records in Network 5 (Baltimore), compared the results against the contractor's results for the same medical records, and found that VA's data were sufficiently reliable for our purposes. To augment our understanding of VA's performance measurement process for hepatitis C testing, we reviewed VA documents and interviewed officials in VA's Office of Quality and Performance and Public Health Strategic Health Care Group. To identify the factors that impede VA's efforts to test veterans for hepatitis C, we conducted a case study of the three medical facilities located in VA's Network 5: Martinsburg, West Virginia; Washington, D.C.; and the VA Maryland Health Care System. We chose Network 5 for our case study because its hepatitis C testing performance, at 60 percent, was comparable to VA's national performance of 62 percent. As part of the case study of Network 5, we reviewed medical records for all 288 veterans identified as at risk for hepatitis C who were included in that network's sample for VA's fiscal year 2002 performance measurement process. Of the 288 veterans identified as at risk who needed hepatitis C testing, VA's performance results found that 115 veterans in VA's Network 5 were untested. We reviewed the medical records for these 115 veterans and found hepatitis C testing results or indications that the veterans refused testing in 21 cases. Eleven veterans had hepatitis C tests performed subsequent to VA's fiscal year 2002 performance measurement data collection. Hepatitis C test results or test refusals for 10 veterans were overlooked during VA's data collection. As such, we consider hepatitis C testing opportunities to have been missed for 94 veterans. 
On the basis of our medical record review, we determined if the provider ordered a hepatitis C test and, if the test was ordered, why the test was not completed. For example, if a hepatitis C test had been ordered but a test result was not available in the computerized medical record, we determined whether the veteran visited the laboratory after the test was ordered. If the veteran had visited the laboratory, we determined if the test order was active at the time of the visit and was overlooked by laboratory staff. Based on interviews with providers, we identified the reason why hepatitis C tests were not ordered. We also analyzed medical records to determine how many times veterans with identified risk factors and no hepatitis C test orders returned for primary care visits. To determine actions taken by networks and medical facilities intended to improve the testing rate of veterans identified as at risk for hepatitis C, we expanded our review beyond Network 5 to include Network 2 and Network 9. We reviewed network and facility documents and conducted interviews with network quality managers and medical facility staff primary care providers, nurses, quality managers, laboratory chiefs and supervisors, and information management staff. Our review was conducted from April 2002 through November 2003 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Acknowledgments> In addition to the contact named above, Carl S. Barden, Irene J. Barnett, Martha A. Fisher, Daniel M. Montinez, and Paul R. Reynolds made key contributions to this report. Related GAO Products VA Health Care: Improvements Needed in Hepatitis C Disease Management Practices. GAO-03-136. Washington, D.C.: January 31, 2003. Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003. Veterans Health Care: Standards and Accountability Could Improve Hepatitis C Screening and Testing Performance. GAO-01-807T. Washington, D.C.: June 14, 2001. Veterans Health Care: Observations on VA s Assessment of Hepatitis C Budgeting and Funding. GAO-01-661T. Washington, D.C.: April 25, 2001.
Why GAO Did This Study Hepatitis C is a chronic disease caused by a blood-borne virus that can lead to potentially fatal liver-related conditions. In 2001, GAO reported that VA missed opportunities to test about 50 percent of veterans identified as at risk for hepatitis C. GAO was asked to (1) review VA's fiscal year 2002 performance measurement results in testing veterans at risk for hepatitis C, (2) identify factors that impede VA's efforts to test veterans for hepatitis C, and (3) identify actions taken by VA networks and medical facilities to improve the testing rate of veterans at risk for hepatitis C. GAO reviewed VA's fiscal year 2002 hepatitis C performance results and compared them against VA's national performance goals, interviewed headquarters and field officials in three networks, and conducted a case study in one network. What GAO Found VA's performance measurement result shows that it tested, in fiscal year 2002 or earlier, 5,232 (62 percent) of the 8,501 veterans identified as at risk for hepatitis C in VA's performance measurement sample, exceeding its fiscal year 2002 national goal of 55 percent. Thousands of veterans, about one-third of those identified as at risk for hepatitis C infection in VA's performance measurement sample, were not tested. VA's hepatitis C testing result is a cumulative measure of performance over time and does not reflect only current fiscal year performance. GAO found Network 5 (Baltimore) tested 38 percent of veterans in fiscal year 2002 as compared to Network 5's cumulative performance result of 60 percent. In its case study of Network 5, which was one of the networks to exceed VA's fiscal year 2002 performance goal, GAO identified several factors that impeded the hepatitis C testing process. These factors were tests not being ordered by the provider, ordered tests not being completed, and providers being unaware that needed tests had not been ordered or completed. For more than two-thirds of the veterans identified as at risk but not tested for hepatitis C, the testing process failed because hepatitis C tests were not ordered, mostly due to poor communication between clinicians. For the remaining veterans, the testing process was not completed because orders had expired by the time veterans visited the laboratory or test orders were overlooked because laboratory staff had to scroll back and forth through daily lists, a cumbersome process, to identify active orders. Moreover, during subsequent primary care visits by these untested veterans, providers often did not recognize that hepatitis C tests had not been ordered or that test results had not been obtained. Consequently, undiagnosed veterans risk unknowingly transmitting the disease as well as potential complications resulting from delayed treatment. The three networks GAO looked at--5 (Baltimore), 2 (Albany), and 9 (Nashville)--have taken steps intended to improve the testing rate of veterans identified as at risk for hepatitis C. To do this, officials in two networks modified clinical reminders in the computerized medical record to alert providers when results were unavailable for ordered hepatitis C tests. Officials at two facilities developed a "look back" method to search computerized medical records to identify all at-risk veterans who had not yet been tested and identified approximately 3,500 untested veterans. The look back serves as a safety net for veterans identified as at risk for hepatitis C who have not been tested. 
The modified clinical reminder and look back method of searching medical records appear promising, but neither the networks nor VA has evaluated their effectiveness.
<1. Background> <1.1. ATF Roles and Responsibilities> ATF's mission is to protect communities from violent criminals, criminal organizations, and the illegal use and trafficking of firearms, among other things. To fulfill this mission, ATF has 25 field divisions located throughout the United States. To efficiently and effectively carry out its criminal enforcement responsibilities related to firearms, ATF maintains certain computerized information on firearms, firearms transactions, and firearms purchasers. To balance ATF's law enforcement responsibility with the privacy of firearms owners, Congress has required FFLs to provide ATF certain information about firearms transactions and the ownership of firearms while placing restrictions on ATF's maintenance and use of such data. In addition to its enforcement activities, ATF also regulates the firearms industry, including issuing firearms licenses to prospective FFLs and conducting FFL qualification and compliance inspections. <1.2. Firearms Tracing Process> A critical component of ATF's criminal enforcement mission is the tracing of firearms used in crimes to identify the first retail purchaser of a firearm from an FFL. The Gun Control Act of 1968, as amended, established a system requiring FFLs to record firearms transactions, maintain that information at their business premises, and make these records available to ATF for inspection and search under certain prescribed circumstances, such as during a firearms trace. The system was intended to permit law enforcement officials to trace firearms involved in crimes while allowing the records themselves to be maintained by the FFLs rather than by a governmental entity. Figure 1 shows one possible scenario in which a firearm is purchased at an FFL, the FFL maintains records on the purchase, the firearm is used in a crime, and a law enforcement agency recovers the firearm and submits it for tracing. Through the use of these records maintained by FFLs and provided to ATF in certain circumstances, ATF provides firearms tracing services to federal, state, local, and foreign law enforcement agencies. The objective of the trace is to identify the first retail purchaser of the firearm. To carry out its firearms tracing responsibilities, ATF maintains a firearms tracing operation at NTC in Martinsburg, West Virginia. As shown in figure 2, NTC traces firearms suspected of being involved in crimes to the first retail purchaser to assist law enforcement agencies in identifying suspects. NTC generally receives trace requests through eTrace, a web-based submission system, but also receives requests by fax, telephone, and mail. To conduct a trace, NTC must receive the recovered firearm's description, including manufacturer and serial number, from the law enforcement agency. NTC determines the ownership of the firearm by first conducting automated checks of data systems that are maintained at NTC. If these automated checks do not identify a matching firearm description within the systems, an NTC analyst contacts the chain of distribution for the firearm, that is, the series of businesses involved in manufacturing and selling the firearm. For example, after automated data system checks, an NTC analyst may call the manufacturer of the firearm, who informs NTC that the firearm was sold to a certain distributor. The NTC analyst will then call that distributor, and so on until the individual is identified. 
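In outline, the tracing sequence just described is an automated lookup followed by a manual walk down the chain of distribution. The short sketch below only illustrates that sequence; the function, data structures, and example data are hypothetical placeholders and are not ATF systems or records.

```python
# Illustrative sketch only: the data below are invented placeholders, not
# ATF systems or real records. It mirrors the sequence described above:
# automated checks first, then following the chain of distribution.

def trace_firearm(serial, ntc_systems, transfer_records):
    """Return the first retail purchaser for a serial number, or None."""
    # Step 1: automated checks of data systems maintained at NTC.
    for system in ntc_systems:
        hit = system.get(serial)
        if hit is not None:
            return hit  # purchaser identified without contacting FFLs

    # Step 2: walk the chain of distribution (manufacturer -> distributor ->
    # dealer), asking each FFL in turn who received the firearm, until a
    # non-licensee (the retail purchaser) is reached.
    holder = "manufacturer"
    while holder in transfer_records:
        recipient, is_licensee = transfer_records[holder].get(serial, (None, False))
        if recipient is None:
            return None          # records unavailable; trace cannot be completed
        if not is_licensee:
            return recipient     # first retail purchaser identified
        holder = recipient       # continue to the next FFL in the chain
    return None

# Hypothetical example: serial "X100" passed from manufacturer to distributor
# to dealer to a retail purchaser.
chain = {
    "manufacturer": {"X100": ("distributor", True)},
    "distributor": {"X100": ("dealer", True)},
    "dealer": {"X100": ("J. Doe (retail purchaser)", False)},
}
print(trace_firearm("X100", ntc_systems=[{}], transfer_records=chain))
```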
For many traces, an FFL in the chain of distribution has gone out of business, so an NTC analyst must consult the FFL's out-of-business records, which are also maintained by NTC. ATF documents each trace request and its results, and provides that information to the law enforcement requester. ATF considers a request completed when it traces the firearm to a retail purchaser, or when it cannot identify the purchaser for various reasons. For example, the description of the firearm as submitted by the requester may not have contained sufficient information to perform a trace. For fiscal year 2015, ATF received a total of 373,349 trace requests, completed 372,992 traces, and identified a retail FFL or a purchaser of the traced firearm in about 68 percent of the completed traces. <1.3. Statutory Data Restrictions> Since the passage of the Gun Control Act of 1968, Congress has passed provisions that place restrictions on ATF's handling of FFL records. In 1978, citing the general authorities contained in the Gun Control Act, ATF proposed regulations that would have required FFLs to report most of their firearms transactions to ATF through quarterly reports. Under the proposed regulations, these FFL reports of sales and other dispositions would not have identified a nonlicensed transferee, such as a retail purchaser, by name and address. These proposed regulations prompted concerns from those who believed that the reporting requirements would lead to the establishment of a system of firearms registration. Since then, Congress has placed restrictions on ATF's use of funds to consolidate or centralize firearms records, as discussed below. In 1978, the Treasury, Postal Service, and General Government Appropriations Act, 1979, prohibited the use of funds for administrative expenses in connection with the consolidation or centralization of FFL records at the agency, or the final issuance of the 1978 proposed regulations. This restriction was included in each of ATF's annual appropriations through fiscal year 1993. In 1993, the Treasury, Postal Service, and General Government Appropriations Act, 1994, removed the reference to the 1978 proposed rules, but expanded the prohibition to include the consolidation or centralization of portions of records, and to apply to the use of funds for salaries as well as administrative expenses. This provision was included in each of ATF's annual appropriations through fiscal year 2011. The restriction provides "that no funds appropriated herein or hereafter shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of Justice, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees." <2. ATF Has 16 Data Systems That Contain Retail Firearms Purchaser Data; Selected Systems Are Involved in Tracing Process> <2.1. ATF Has 16 Systems with Retail Purchaser Data> ATF collects and maintains data from the firearms industry to carry out its criminal and regulatory enforcement responsibilities, and has established 25 national ATF data systems relating to firearms to maintain the data it collects. Of these 25 data systems, the following 16 data systems contain retail firearms purchaser information: 1. Access 2000 (A2K) 2. ATF NICS Referral 3. Firearm Recovery Notification Program (FRNP) 4. Firearms and Explosives Import System 5. Firearms Information Reporting System 6. Firearms Tracing System 9. Multiple Sales (MS) 10. 
National Firearms Act System / National Firearms Registration and Transfer Record System 14. Out-of-Business Records Imaging System (OBRIS) 15. Suspect Person Database More details on these systems are provided in appendix II. <2.2. Four ATF Systems We Selected Are Used in the Firearms Tracing Process> From the 16 data systems that contain retail purchaser information, we selected 4 systems for an in-depth review of compliance with the appropriations act restriction on consolidation or centralization, and adherence to ATF policies: OBRIS, A2K, FRNP, and MS, including Demand Letter 3. See appendix I for our selection criteria. These systems are operated and maintained by NTC and play a significant role in the firearms tracing process as shown in figure 3. OBRIS is a repository of nonsearchable images of firearms records that allows NTC employees to manually search for and retrieve records during a firearms trace using an FFL number and a firearm description (e.g., serial number). Out-of-business records are integral to the firearms tracing process. According to ATF officials, in approximately 35 to 38 percent of trace requests, there is at least one entity in the chain of distribution that has gone out of business. Therefore, in more than one- third of firearms trace requests, NTC analysts must consult OBRIS at least once. According to ATF data, as of May 5, 2016, there were 297,468,978 images of firearms records in OBRIS. Further, in fiscal year 2015, NTC accomplished 134,226 of 372,992 total completed trace requests using OBRIS. OBRIS was developed in 2006 to assist NTC with maintaining the out-of- business FFL records that are received each year. By statute, when FFLs discontinue their businesses and there is no successor, the records required to be kept under the Gun Control Act of 1968, as amended, must be delivered within 30 days to the Attorney General. This includes all acquisition and disposition logbooks, firearms transactions records such as Form 4473 that contains purchaser information and other required records. NTC receives an average of about 1.9 million out-of-business records per month, of which a large percentage are paper-based. Since 2006, when paper records are received from an FFL that has gone out of business, NTC scans them as TIFF image files and stores them in OBRIS. By design, the files are stored as images (with no optical character recognition) so that they cannot be searched using text queries. In addition, ATF sometimes receives electronic FFL out-of- business records in the forms of computer external removable drives and hard drives. In these cases, ATF converts the data to a nonsearchable format consistent with OBRIS records. During processing of OBRIS records, NTC conducts a quality-assurance process, including document sorting, scanning, and error checks on 100 percent of the records received. Officials stated that the imaged records are maintained indefinitely in OBRIS. For more information on OBRIS, see appendix III. <2.2.1. A2K> ATF implemented A2K in 1995 at the request of firearms industry members to allow manufacturer, importer, and wholesaler FFLs to more efficiently respond to requests from NTC for firearms traces. By statute, FFLs are required to respond within 24 hours to a firearms trace a request from ATF for firearms disposition information needed for a criminal investigation. Normally, when an NTC analyst contacts an FFL in the chain of distribution during a trace, the analyst contacts the FFL by phone, fax, or e-mail. 
ATF officials reported that this can be burdensome if the FFL receives a large number of trace requests, and that such requests can number more than 100 per day. With A2K, a voluntary program, the participating industry member uploads electronic firearms disposition records (i.e., information on the FFL or, in rare cases, the individual to whom the firearm was sold) onto a server that ATF owns and maintains but that is located at the site of the industry member. A2K provides a secure user web interface to this server, through which authorized NTC personnel can search by firearm serial number only to obtain disposition data for a firearm during a trace. According to the A2K memorandum of understanding with industry members, each participating industry member maintains ownership over its data. Further, NTC access to A2K's search function is limited to analysts conducting traces for each particular industry member. NTC analysts access A2K using a different URL and login information for each participating industry member, and can only retrieve the disposition data for the particular firearm they are tracing. Participation in A2K is voluntary and, according to ATF officials and the three industry members we spoke with, can reduce an industry member's costs associated with responding to firearms trace requests. According to ATF officials, as of April 25, 2016, there are 35 industry members using A2K, which account for 66 manufacturer, importer, and wholesaler FFLs. All three of the participating industry members we spoke with agreed that A2K has been beneficial since it reduces the industry member resources necessary to respond to trace requests. A2K also benefits NTC by providing immediate access to industry member data at all times, thereby allowing tracing operations to continue outside of normal business hours, which can be crucial for urgent trace requests. According to ATF data, as of March 17, 2016, there were 290,256,532 firearms in A2K. Further, in fiscal year 2015, NTC accomplished 130,982 of 372,992 total completed trace requests using A2K. Established in 1991, FRNP (formerly known as the Suspect Gun Program) provides a criminal investigative service to ATF agents by maintaining a database of firearms that have not yet been recovered by law enforcement, but are suspected to be involved in criminal activity. An ATF agent submits firearms information to FRNP, in connection with a specific ATF criminal investigation, to flag a particular firearm so that in the event that it is recovered and traced at some future time, the requesting agent will be notified. A request to enter a firearm into FRNP could start with an ATF agent recovering another firearm during an undercover investigation of illegal sales from a firearms trafficker. By searching eTrace, the agent may discover that the recovered firearm was part of a multiple sale with three other firearms. The ATF agent then may request that the other three firearms be entered into FRNP because they are associated with the firearm the agent recovered and, therefore, are likely to also be trafficked. ATF officials stated that, in this hypothetical case, it is likely that those three firearms, if recovered and traced in the future, would support a potential firearms trafficking case. If the firearms are in FRNP, if and when they are recovered and traced, NTC would notify the requesting agent, who could then contact the agency that recovered and traced the firearms to coordinate building such a case. 
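Conceptually, FRNP functions as a watch list keyed to firearm descriptions: a flagged firearm that later surfaces in a trace triggers a notification to the submitting agent. The sketch below is a simplified, hypothetical illustration of that concept only; the class, fields, and example data are invented and do not represent ATF's system.

```python
# Hypothetical illustration of the FRNP concept described above: flagged
# firearms generate a notification to the submitting ATF agent if they are
# later recovered and traced. Names and structures are invented.

class SuspectFirearmWatchList:
    def __init__(self):
        self._entries = {}  # serial number -> (submitting agent, suppress trace?)

    def flag(self, serial, agent, suppress_trace_results=False):
        """Record a not-yet-recovered firearm suspected in criminal activity."""
        self._entries[serial] = (agent, suppress_trace_results)

    def check_trace(self, serial):
        """Called during a trace; returns (agent to notify, whether to withhold
        results from the requesting agency), or None if there is no match."""
        return self._entries.get(serial)

# Example: an agent flags three firearms associated with a suspected
# trafficking case.
watch = SuspectFirearmWatchList()
for s in ("A1", "A2", "A3"):
    watch.flag(s, agent="Agent Smith")

hit = watch.check_trace("A2")   # later, "A2" is recovered and traced
if hit:
    agent, suppress = hit
    print(f"Notify {agent}; withhold trace results from requester: {suppress}")
```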
To enter a firearm into FRNP, an ATF agent submits ATF Form 3317.1 (see app. IV) to NTC. According to ATF, no other law enforcement agencies may submit firearms to FRNP or view information in the system; only ATF agents and NTC staff have access. When a firearm is recovered in a crime and is traced, NTC conducts an automated check to determine whether the firearm description in the trace request matches a firearm description in FRNP. If so, an analyst will validate that the entries match. If they do, NTC generally notifies the ATF agent who submitted the firearm for inclusion in FRNP that the firearm has been recovered and traced. Then, the analyst completes the trace and sends the results to the requester of the trace. Occasionally, in submitting the firearm to FRNP, the agent directs NTC to not complete the trace on the firearm in the event that the firearm is recovered and traced (i.e., not provide the trace results to the law enforcement agency who requested the trace). For example, an agent might want to prevent trace information from being released to protect an undercover operation or other investigation. According to ATF data, as of May 3, 2016, there were 174,928 firearms and the names of 8,705 unique persons (e.g., criminal suspects, firearms purchasers, associates) in FRNP, making up 41,964 total FRNP records. Further, in fiscal year 2015, NTC accomplished 110 of 372,992 total completed trace requests using FRNP. Also, according to ATF data, as of May 5, 2016, there were 23,227 firearms in FRNP that had been linked to a firearms trace. Once the ATF investigation that led to the FRNP firearms submission has been closed, any FRNP entries associated with that investigation are to be labeled as inactive in FRNP. Information from inactive records is used to assist with the tracing process, but when a trace hits on an inactive FRNP record, NTC does not notify the ATF agent who submitted the firearm since the associated investigation is closed and the information would no longer be useful to the agent. According to our review of all FRNP records, as of July 2015, about 16 percent of the 41,625 records were designated active and about 84 percent were designated inactive. Inactive records remain in the system for tracing purposes. The original submission form is also preserved as a digital image. MS was developed in 1995 to collect and track reports of the purchase by one individual of two or more pistols or revolvers, or both, at one time or during any 5 consecutive business days. FFLs are required by statute to report these sales to ATF. The multiple sales reports are completed by FFLs, submitted to NTC using ATF form 3310.4 (see app. V), and entered into MS. According to ATF, these reports, when cross-referenced with firearms trace information, serve as an important indicator in the detection of potential firearms trafficking. They can also allow successful tracing of older firearms that have reentered the retail market. MS also maintains the information from Demand Letter 3 reports. In 2011, ATF issued Demand Letter 3 to dealer and pawnbroker FFLs located in Arizona, California, New Mexico and Texas. The letter requires these FFLs to prepare reports of the purchase or disposition of two or more semiautomatic rifles capable of accepting a detachable magazine and with a caliber greater than .22, at one time or during any 5 consecutive business days, to a person who is not an FFL. 
According to ATF, this information is intended to assist ATF in its efforts in investigating and combatting the illegal movement of firearms along and across the southwest border. Demand Letter 3 reports are completed by FFLs, submitted to NTC using ATF form 3310.12 (see app. VI), and entered into MS. According to ATF officials and our observations, Demand Letter 3 and multiple sales reports are managed identically within MS. During a firearms trace, MS is automatically checked for a match with the firearm serial number. If a match is found, the trace time can be substantially shortened since the retail FFL and purchaser name to complete the trace are contained within the MS record. According to ATF data, as of May 3, 2016, there were 8,950,209 firearms in MS, making up 3,848,623 total MS records. Further, in fiscal year 2015, NTC accomplished 15,164 of 372,992 total completed trace requests using MS. In November 1995, ATF implemented a policy to computerize multiple sales reports at NTC, which now also applies to Demand Letter 3 reports. The original multiple sales or Demand Letter 3 paper report received from the FFL is scanned in a nonsearchable, TIFF image format and tagged with the MS transaction number. The TIFF file is then stored in an image-only repository, and is retained indefinitely. However, as part of the computerization policy, ATF included a requirement for deleting firearms purchaser names from MS 2 years after the date of sale if such firearms are not connected to a trace. ATF preserves the remainder of the data, such as the firearm description, for the purpose of supporting investigations. In contrast, if an MS record is connected to a firearms trace, then ATF preserves the entire record, including purchaser information, in the system. MS reports are available to any ATF staff that has access to eTrace but not to outside law enforcement agencies with eTrace access. However, after the purchaser name in a MS record has been deleted in accordance with the 2-year deletion policy, only NTC officials have access to this information in the digital image of the original multiple sales or Demand Letter 3 reports. If an ATF agent needs to see the deleted information, the agent must contact NTC. <3. ATF Did Not Always Comply with the Appropriations Act Restriction and Does Not Consistently Adhere to Its Policies on Maintenance of Firearms Data> Of the four data systems we reviewed, two systems were in full compliance with the appropriations act restriction. The other two data systems did not always comply with the restriction, although ATF addressed the compliance issues during the course of our review. In addition, three data systems could better adhere to ATF policies. Specifically: OBRIS complies with the appropriations act restriction and adheres to ATF policies. A2K for in-business industry members records complies with the appropriations act restriction, but ATF s collection and maintenance of A2K out-of-business records in A2K on a server at NTC violated the appropriations act restriction. ATF deleted the records from the server in March 2016. In addition, industry members may benefit from clearer ATF guidance to ensure that they are submitting out-of-business records as required. FRNP generally complies with the appropriations act restriction. However, a regional program using FRNP from 2007 through 2009 did not comply with the restriction, and ATF removed the data it collected through this program from FRNP in March 2016. 
Further, FRNP generally adheres to ATF policies, but a technical defect allows ATF agents to view and print FRNP data beyond what ATF policy permits. MS complies with the appropriations act restriction, but ATF continues to inconsistently adhere to its own policy when deleting these records. For a more detailed legal analysis of compliance with the appropriations act restriction, see appendix VII. <3.1. Framework for Legal Analysis Set Forth in 1996 Report> We previously considered ATF s compliance with the restriction on using appropriated funds for consolidation or centralization in connection with ATF s Microfilm Retrieval System and MS in 1996. In that report, we stated that the appropriations act restriction did not preclude all information practices and data systems that involved an element of consolidation or centralization. We interpreted the restriction in light of its purpose and in the context of other statutory provisions governing ATF s acquisition and use of information on firearms. We found that the two systems complied with the appropriations act restriction on the grounds that ATF s consolidation of records in these systems was incident to carrying out specific responsibilities set forth in the Gun Control Act of 1968, as amended, and that the systems did not aggregate data on firearms transactions in a manner that went beyond these purposes. We are employing a similar analytical approach to the systems under review here: we consider whether ATF s aggregation of records in each system serves a statutory purpose, and how it relates to that purpose. <3.2. OBRIS Complies with the Appropriations Act Restriction and Adheres to ATF Data-Processing Policies> OBRIS complies with the appropriations act restriction and adheres to policies designed to help ensure that the system is in compliance with the restriction. FFLs are specifically required to submit records to ATF when going out of business, and the system limits the accessibility of key firearms records information, such as retail purchaser data. As we reported in 1996, ATF first issued regulations in 1968 requiring FFLs that permanently go out of business to deliver their firearms transaction records to the federal government within 30 days. This provided a means of accessing the records for firearms tracing purposes after an FFL went out of business. The legislative history related to ATF s fiscal year 1979 appropriation did not provide any indication that Congress intended a change in ATF s existing practice. In 1986, the Firearms Owners Protection Act (FOPA) codified this regulatory reporting requirement, affirming ATF s authority to collect this information. In 1996, we also reported that the predecessor to OBRIS the Microfilm Retrieval System as designed, complied with the statutory data restrictions and that ATF operated the system consistently with its design. We found that the Microfilm Retrieval System included in a computerized index the information necessary to assist ATF in completing a firearms trace, and did not aggregate information in a manner beyond that necessary to implement the Gun Control Act. Notably, ATF s system of microfilmed records did not capture and store certain key information, such as firearms purchaser information, in a searchable format. In response to logistical challenges and technological advances, ATF developed OBRIS in 2006 as the repository to maintain digital images of out-of-business FFL records. 
ATF transitioned from using microfilm images of records to scanning records into OBRIS as digital images not searchable through character recognition, consistent with ATF s design and use of its prior Microfilm Retrieval System. It is our view that, like its microfilm predecessor system, OBRIS also complies with the appropriations act restriction because OBRIS s statutory basis and accessibility are essentially the same as the prior system. As with the prior system, OBRIS generally allows users to identify potentially relevant individual records through manual review by searching an index using an FFL number. Other information, specifically firearms purchaser information, remains stored in nonsearchable images, and is not accessible to ATF through a text search. In OBRIS, ATF put data processing policies in place to maintain records in compliance with the appropriations act restriction. Specifically, when an FFL going out of business sends records to NTC, according to ATF policy and verified by our observations, NTC personnel follow policies to sort and scan the records in OBRIS in a manner that maintains the nonsearchability of the records. For example, NTC personnel spend extra time indexing the images by FFL number, and chronologically sorting FFL records, typically by month and by year. When tracing a firearm, according to ATF policy and verified by our observations, NTC personnel generally identify a group of FFL records through the FFL number index, then manually search the dates of the FFL records to narrow in on a group of records that might contain the firearm being traced. NTC personnel then manually skim through each record in this group until they identify the relevant firearm information. According to NTC officials, NTC staff sometimes search thousands of pages of records to find the record that matches the trace request. This policy for a manual process to maintain and use records in OBRIS helps to ensure its compliance with the appropriations act restriction. For more details on OBRIS s data processing policies, see appendix III. <3.3. A2K for Out-of-Business Records Did Not Comply with the Appropriations Act Restriction, and ATF Could Improve Guidance to Industry> ATF maintains A2K for in-business industry members who store their own A2K data and maintained A2K for certain records of out-of-business industry members at NTC. ATF s collection and maintenance of the records of out-of-business A2K industry members at NTC violated the appropriations act restriction on consolidation or centralization of firearms records. However, ATF officials transferred the records to OBRIS, and in March 2016 removed these records from A2K. In addition, industry members would benefit from clearer A2K guidance from ATF to ensure that they are submitting required out-of-business records. <3.3.1. A2K for In-Business Records Complies with the Appropriations Act Restriction> A2K for firearms records of in-business industry members complies with the appropriations act restriction on consolidation and centralization based on A2K s statutory foundation and its features. ATF believes, and we agree, that A2K for in-business records appropriately balances the restriction on consolidating and centralizing firearms records with ATF s need to access firearms information in support of its mission to enforce the Gun Control Act of 1968, as amended. 
Federal law requires FFLs to provide firearms disposition information to ATF within 24 hours in response to a trace request in the course of a criminal investigation. ATF officials told us that they developed A2K in response to industry member requests for an automated option for responding to trace requests. Prior to A2K, FFLs could only respond to trace requests by having dedicated personnel research firearms disposition information and then submit that information to ATF by phone, fax, or e-mail. In contrast, A2K provides industry members who voluntarily participate in A2K with servers to facilitate automated electronic responses to ATF trace requests. Under A2K, industry members upload their electronic firearms disposition information onto the servers located at their premises on a regular basis. Industry members not ATF retain possession and control of their disposition records and, according to ATF officials, they may withdraw from A2K and remove their records from the servers at any time. A2K includes a secure user web interface to each of the servers and ATF may only obtain A2K disposition information by searching individual industry member servers by exact firearm serial number. Through this search, ATF obtains the same information from each industry member as it would otherwise obtain by phone, fax, or e-mail, and in similar disaggregated form. <3.3.2. A2K for Out-of-Business Records Did Not Comply with the Appropriations Act Restriction, and ATF Could Provide Clearer Guidance> Beginning in 2000, ATF maintained A2K disposition data from out-of- business industry members on a single partitioned server within NTC, and removed the records from the server in March 2016. ATF s maintenance of the disposition records in this manner violated the appropriations act restriction on consolidation or centralization. This arrangement was not supported by any specific authority. As described earlier, A2K was designed as an alternative for FFLs to meet the requirement to respond promptly to ATF trace requests, which does not apply to FFLs once they go out of business. Another statutory provision requires FFLs to submit firearms records to ATF when they go out of business, and ATF has designed a separate system for this purpose OBRIS as described earlier. A2K for out-of-business records functioned differently than OBRIS and went beyond the consolidation of out-of-business records in that system incident to specific responsibilities under the Gun Control Act. As discussed earlier, out-of-business records are maintained as nonsearchable digital images in OBRIS to comply with the appropriations act restriction, while at the same time allowing ATF to perform its tracing function. ATF completed traces using A2K disposition data from out-of- business industry members through the same type of secure user web interface as used while the industry members were in business. According to ATF, this was more efficient than relying on OBRIS to complete firearms traces. Our observations of A2K out-of-business searches in August 2015 confirmed ATF officials statements that these records were accessed in the same way as in-business records. Records were only retrievable by exact serial number search, in accordance with ATF policy. However, according to ATF officials, it would have been technically possible for ATF to reconfigure the server to allow the records to be queried by any field, including fields with retail purchaser information. 
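To illustrate the distinction between this design and a broader search capability, the following is a minimal sketch, not ATF's or the contractor's implementation, of a query layer that accepts only exact serial-number lookups against a single industry member's disposition records; the field names and the a2k_lookup function are hypothetical.

    # Hypothetical sketch of a serial-number-only query layer, mirroring the A2K
    # design described above. Field names are illustrative, not A2K's actual schema.
    ALLOWED_SEARCH_FIELD = "serial_number"

    def a2k_lookup(member_records, field, value):
        # Reject any search that is not an exact serial-number match; a query on
        # another field (e.g., purchaser name) is the kind of broader capability
        # the report notes would raise concerns under the restriction.
        if field != ALLOWED_SEARCH_FIELD:
            raise ValueError("lookups are limited to exact serial number searches")
        return [r for r in member_records if r.get("serial_number") == value]

    # Example: one industry member's disposition records, searched by serial number.
    records = [
        {"serial_number": "AB1234", "disposition_date": "2015-03-02"},
        {"serial_number": "CD5678", "disposition_date": "2015-04-11"},
    ]
    print(a2k_lookup(records, "serial_number", "AB1234"))  # one matching record
    # a2k_lookup(records, "purchaser_name", "Smith")       # would raise ValueError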
ATF agreed with our assessment that treating disposition information from industry members that go out of business in the same manner as disposition information from in-business industry members would violate the appropriations act restriction. After we raised concerns about A2K out-of-business records on the server at NTC, ATF told us that they had begun a process of transferring the out-of-business A2K records from the server into OBRIS as digital images. ATF permanently deleted the records from the out-of-business A2K server in March 2016. In addition, ATF could provide clearer ATF guidance to ensure that industry members submit out-of-business records in accordance with the Gun Control Act of 1968, as amended. These industry members and their corresponding FFLs are required to provide transaction forms, acquisition records, and disposition records to ATF within 30 days of going out of business. However, it is unclear how the requirements apply to industry members A2K disposition data. A2K agreements specifically state that the A2K data belong to the industry member. Conversely, ATF requires that the ATF-owned A2K equipment be returned when industry members go out of business, which includes the hardware and software on which the data were housed at the industry member s location. The A2K memorandums of understanding and ATF guidance to industry members do not specify that industry members may retain the backup disk or how A2K data may be used to meet the out-of-business record submission requirements to ATF, if at all. All of the eight industry members that have gone out of business have provided their backup disks with data to ATF. According to ATF, six industry members separately provided their acquisition and disposition information, while the other two industry members, which were licensed importers, only provided invoices. According to ATF officials, discussions with these industry members did not include the industry member s option to keep the backup disk where the data are stored or whether submitting the backup disk to ATF would fulfill part of the industry member s submission requirement. Further, the three industry members we spoke with corroborated that ATF lacks guidance for its requirements related to industry members submitting out-of-business A2K data in accordance with the Gun Control Act, as amended. Federal internal control standards require that agencies communicate necessary quality information with external parties to achieve agency objectives, which includes providing industry members with record submission guidance so that ATF has the necessary records for firearms tracing. According to ATF officials, ATF has not provided guidance to A2K industry members on how to submit out-of-business records because industry members already have the standard requirements that apply to all FFLs, and industry members have not asked for guidance specific to A2K. Industry members that we spoke to had not contemplated the process for providing A2K equipment and records to ATF because they did not anticipate going out of business. However, if ATF does not have all required out-of-business records, the agency may not be able to locate the first purchaser of a firearm during a trace, and thus may not be able to fulfill part of its mission. ATF officials agreed that providing such guidance for example, in the A2K memorandum of understanding between an industry member and A2K would be helpful to industry members to ensure that records are submitted to ATF as required. 
Industry members could benefit from clear ATF guidance on, for example, whether they are required to submit their A2K records in electronic format; whether they are allowed to only submit hard copy records; or what to do if one part of the company goes out of business, but A2K continues at the industry member s remaining FFLs. Such ATF guidance could clarify how industry members may submit A2K data to fulfill a portion of Gun Control Act requirements. <3.4. FRNP Generally Complies with the Appropriations Act Restriction, but a Past Regional Program Did Not Comply, and ATF Agents Are Able to Access Information That Is Not Permitted by ATF Policy> FRNP generally complies with the appropriations act restriction and generally adheres to ATF policies that help ensure such compliance. However, a regional ATF program using FRNP from 2007 through 2009 was not in compliance with the appropriations act restriction. ATF deleted the data it collected through this program from FRNP in March 2016. In addition, a technical defect in one of ATF s key data systems allows ATF agents to access FRNP records in a manner that is inconsistent with ATF policy. <3.4.1. FRNP Generally Complies with the Appropriations Act Restriction and Adheres to ATF Policies That Help Ensure Compliance> ATF gathers and combines specific firearms transaction data to a limited degree in FRNP in order to implement its statutory responsibilities related to firearms criminal enforcement and, in this respect, the system complies with the appropriations act restriction. By statute, ATF is responsible for enforcing the federal statutes regarding firearms, including those related to the illegal possession, use, transfer, or trafficking of firearms. FRNP was established to provide an investigative service to ATF agents by maintaining a database of firearms suspected of being involved in criminal activity and associated with an ATF criminal investigation. As discussed earlier, the appropriations act restriction does not preclude all information practices and data systems that involve an element of consolidating or centralizing FFL records. As designed, the aggregation of firearms transaction records in FRNP is incident to carrying out specific ATF criminal enforcement responsibilities and is limited to that purpose. Therefore, FRNP when used for the purpose as a database of firearms suspected of being involved in criminal activity and associated with an ATF criminal investigation complies with the appropriations act restriction. Moreover, based on our analysis of FRNP records, virtually all records in FRNP are associated with an ATF criminal investigation, and thus are related to ATF s statutory responsibilities. ATF policies for the implementation of FRNP support the conclusion that it complies with the appropriations act restriction, when operated as designed. ATF policies specify that ATF agents may submit a firearm for entry into FRNP if the firearm is associated with an active, nongeneral ATF criminal investigation and meets certain submission criteria. ATF agents must use a designated submission form when requesting that firearms information be entered in the FRNP system, which, among other things, contains a field for the agent to include an active, nongeneral investigation number. 
The form also contains a field to indicate the additional, specific submission criteria for the firearm, which align with ATF's statutory responsibility of enforcing criminal statutes related to the illegal possession, use, transfer, or trafficking of firearms. These criteria include: (1) large quantities of firearms purchased by an individual; (2) firearms suspected in trafficking, but not stolen from an FFL dealer; (3) FFL dealers suspected of performing firearms transactions without proper documentation; (4) firearms purchased by suspected straw purchasers; and (5) Other, a category that the submitting agent is to explain on the form. According to NTC procedures, and as verified by our observations, upon receiving an FRNP submission form, an NTC analyst reviews the form for completeness and conducts several validation and verification steps. For example, the analyst uses ATF's case-management system to verify that the investigation number on the FRNP submission form is active and that at least one criterion was selected on the submission form. Once the validation and verification checks are complete, the NTC analyst either enters the firearms information into FRNP or contacts the requesting ATF agent if information is missing or does not align with the criteria required for FRNP submission. During our review of selected fields for all 41,625 FRNP records, and of a generalizable sample of records and submission forms, we found that for the vast majority of firearms entered, ATF abided by its policy that entries be associated with an active investigation. Out of the entire population of 41,625 records reviewed, less than one-tenth of 1 percent of records were not associated with an investigation number at all and, according to ATF officials, were likely data-entry errors or records entered for testing or training purposes. Moreover, based on our sample review, an estimated 96 percent of FRNP records were entered while the related criminal investigation was open. ATF officials stated that most of the remaining records, entered before the related investigation was opened or after it was closed, were the result of data-entry errors or of investigation numbers being reopened at a later date. Additional, specific submission criteria have been required on the FRNP submission form since November 2004. Based on our sample review, an estimated 97 percent of FRNP submission forms from November 2004 through July 2015 included the selection of at least one criterion. For an estimated 13 percent of these (23 submission forms in our sample), the Other criterion was selected, and all but 2 of these had an explanation for why the firearms were entered in FRNP. For example, in 1 submission form that contained an explanation for Other, business owners were suspected of selling firearms without a license. ATF officials could not definitively state why an estimated 3 percent of submissions from November 2004 through July 2015 did not contain a criteria selection. Officials speculated, for example, that an NTC analyst may have obtained the criteria selection from the requesting agent by phone or e-mail and may not have noted the conversation in the FRNP file. However, officials acknowledged that the criteria selection is an important quality control that allows ATF to audit records related to an investigation if necessary. ATF officials told us that only names associated with the criminal investigation are entered in the FRNP system.
These names are generally limited to suspects and purchasers, but ATF officials acknowledged that the names of victims or witnesses may be included in the system if they are associated with the criminal investigation, though this does not happen routinely. Based on our observations of FRNP entry procedures, an NTC analyst verifies that any names on the submission form match the names listed in the case-management system for that particular investigation, prior to entering the information in the FRNP system. <3.4.2. A Past ATF Regional Program Did Not Comply with the Appropriations Act Restriction> An ATF regional program conducted from 2007 through 2009 to enter firearms into FRNP the Southwest Border Secondary Market Weapons of Choice (SWBWOC) Program did not comply with the appropriations act restriction on consolidating or centralizing FFLs firearms records, because the individual firearms were not suspected of being involved in criminal activity associated with an ATF criminal investigation. During the course of our review, ATF reported that it planned to delete the related data from FRNP, and ATF did so in March 2016. According to ATF officials, the SWBWOC Program was in place in ATF s four southwest border field divisions in order to more effectively identify during a trace the purchasers of used firearms trafficked to Mexico. The program was implemented during routine regulatory inspections of FFLs in the region who were engaged primarily in the sale of used firearms generally pawnbrokers. According to ATF, used firearms sales, referred to as secondary market sales, played a significant role in firearms trafficking to Mexico, particularly certain firearms most sought by the Mexican drug cartels, referred to as weapons of choice. According to ATF officials, this program was developed to record certain firearms in an effort to enhance ATF s ability to trace those firearms to a retail purchaser in the event of crime-related recoveries of the firearms. As part of the program, during regulatory inspections, ATF investigators were to record any specified weapons of choice that were found in the FFLs inventory or sold or disposed of by the FFLs within the inspection period. According to ATF officials, the information recorded was limited to the serial number and description of the firearm, and was not to include any purchaser information. The firearms information was then submitted to FRNP for all of the used firearms identified during the inspection. If the firearm was subsequently recovered by law enforcement and submitted for a trace, NTC s automatic checks on the firearm description would result in a match in the FRNP system. ATF would then be able to more quickly identify the FFL pawn shop that previously had the firearm in its inventory. According to ATF officials and documentation, the program was cancelled on October 2, 2009, following ATF s legal review of the process by which the firearms information entered during the program was recorded and submitted to FRNP. ATF s legal review determined that the program was not consistent with the appropriations act restriction on consolidation or centralization. According to ATF officials, the program was not reviewed by the ATF Chief Counsel s office prior to its initiation in June 2007. They stated that the program s existence was the result of incomplete communication by ATF executives responsible for industry operations programs with ATF s Chief Counsel prior to the implementation of the program. 
Upon learning of the program, ATF Counsel determined that FFL information on a firearm, in and of itself even when unaccompanied by purchaser information is not permitted to be collected and consolidated without a specific basis in statute or regulation, or a direct nexus to a law enforcement purpose, such as a criminal investigation. The ATF Chief Counsel s office advised that the program be immediately terminated and, in October 2009, the program was cancelled and the firearms information already entered into FRNP during the program was marked as Inactive. We concur with ATF s assessment that the inclusion of firearms information from the program in FRNP did not comply with the appropriations act restriction. It is our view that information obtained from an FFL about a firearm in and of itself, and unaccompanied by purchaser information, is not permitted to be collected and consolidated within ATF without a specific basis in statute. As a result of our review, ATF officials deleted the records for the affected data from FRNP 855 records relating to 11,693 firearms in March 2016. <3.4.3. ATF Agents Are Able to Access FRNP Information beyond What Is Permitted by ATF Policy> A technical defect in eTrace 4.0 allows ATF agents to view and print FRNP data beyond what ATF policy permits. These data include purchaser names and suspect names in a summary format called a Suspect Gun Summary Report. Any ATF agent with eTrace access can view or print these reports, including up to 500 FRNP records at one time. According to ATF officials, the eTrace defect occurred when the contractor developing eTrace 4.0 included a global print function for Suspect Gun Summary Reports which can contain retail purchaser information that was accessible from the search results screen. In December 2008, prior to the release of eTrace 4.0 in 2009, ATF provided the contractor with a list of the new system s technical issues, including this FRNP printing defect. ATF officials explained that because all ATF eTrace users had the appropriate security clearances, and because there would not be a reason for ATF agents to access the Suspect Gun Summary Reports, the print issue was not considered a high-priority concern. However, ATF officials told us that no audit logs or access listings are available to determine how often ATF agents have accessed records containing purchaser information. Therefore, ATF has no assurance that the purchaser information entered in FRNP and accessible through eTrace is not being improperly accessed. eTrace is available to federal, state, and local law enforcement entities that have entered into an eTrace memorandum of understanding with ATF. ATF agents have access to information in eTrace that is unavailable to state and local law enforcement entities, such as FRNP data. However, according to eTrace system documentation, ATF agents are to be limited in their access to FRNP records. Specifically, ATF agents should only be able to view the firearm description and the name and contact information of the ATF case agent associated with the investigation, and not purchaser information or FFL information. If an ATF agent wanted further information about the FRNP data, the agent should have to contact the case agent. ATF officials told us that ATF s policy is intended to provide FRNP information to ATF agents on a need-to-know basis in order to protect the security of ATF investigations, and protect gun owner information. 
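As a rough illustration of the field-level limit this policy describes, the sketch below filters an FRNP record down to the firearm description and case agent contact information before it is shown to an ATF agent; the role names, field names, and frnp_view_for_agent function are hypothetical and are not drawn from eTrace's actual design.

    # Hypothetical sketch of the access limit described in ATF policy: ATF agents
    # see only the firearm description and the submitting case agent's contact
    # information, not purchaser or FFL details. Names are illustrative only.
    AGENT_VISIBLE_FIELDS = {"firearm_description", "case_agent_name", "case_agent_phone"}

    def frnp_view_for_agent(record, role):
        # NTC staff administer FRNP and can see the full record.
        if role == "ntc_analyst":
            return dict(record)
        # ATF agents receive only the policy-permitted fields.
        if role == "atf_agent":
            return {k: v for k, v in record.items() if k in AGENT_VISIBLE_FIELDS}
        raise PermissionError("FRNP data are not available to this role")

    record = {
        "firearm_description": "9mm pistol, serial AB1234",
        "case_agent_name": "J. Doe",
        "case_agent_phone": "555-0100",
        "purchaser_name": "(restricted)",
        "ffl_number": "(restricted)",
    }
    print(frnp_view_for_agent(record, "atf_agent"))  # purchaser and FFL fields are omitted

Enforcing the limit in the application layer, rather than relying on users not to run a particular report, is the kind of control that the global print function in eTrace 4.0 bypasses.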
Moreover, federal internal control standards specify that control activities to limit user access to information technology include restricting authorized users to the applications or functions commensurate with assigned responsibilities. According to ATF officials, options are limited for resolving the global print function defect. ATF s contract with the eTrace 4.0 developer has ended, and therefore ATF cannot contact the developer to fix the printing issue. ATF could have the issue resolved when a new version of eTrace, version 5.0, is released, but there is no timeline for the rollout of eTrace 5.0. ATF officials told us that, in the short term, one method to fix the printing issue would be to remove individuals names and identifying information from the FRNP system, so it is not available for Suspect Gun Summary Reports. The firearms information and case agent information would remain available to all ATF agents, and ATF officials indicated that they did not think that removing the identifying information would hamper ATF agents investigations. Developing and implementing short-term and long-term mechanisms to align the eTrace system capability with existing ATF policy to limit access to purchaser information for ATF agents could ensure that firearms purchaser information remains limited to those with a need to know. <3.5. MS Complies with the Appropriations Act Restriction, but ATF Continues to Inconsistently Adhere to ATF Policy When Deleting Records> MS complies with the appropriations act restriction; however, ATF lacks consistency among its MS deletion policy, system design, and policy implementation timing. Since we reported on MS in 1996, ATF has made minimal changes to the system itself, but the information contained in MS has changed with the inclusion of Demand Letter 3 reports, in addition to multiple sales reports. <3.5.1. Multiple Sales Reports and Demand Letter 3 Reports Maintained in MS Comply with the Appropriations Act Restriction> Multiple sales reports. By statute, FFLs are required to provide to ATF a multiple sales report whenever the FFL sells or otherwise disposes of, within any 5 consecutive business days, two or more pistols or revolvers, to an unlicensed person. The reports provide a means of monitoring and deterring illegal interstate commerce in pistols and revolvers by unlicensed persons. ATF s maintenance of multiple sales reports in MS complies with the appropriations act restriction because of ATF s statutory authority related to multiple sales reports, and the lack of significant changes to the maintenance of multiple sales reports in MS since we found it to be in compliance in 1996. As we reported in 1996, ATF operates MS with specific statutory authority to collect multiple sales reports. In 1975, under the authority of the Gun Control Act of 1968, ATF first issued regulations requiring FFLs to prepare multiple sales reports and submit those reports to ATF. The legislative history related to ATF s fiscal year 1979 appropriations act restriction did not provide any indication that Congress intended a change in ATF s existing practice. In 1986, a provision of FOPA codified FFLs regulatory reporting requirement, affirming ATF s authority to collect multiple sales reports. In addition, this provision required, among other things, FFLs to forward multiple sales reports to the office specified by ATF. Therefore, under this provision, ATF was given the statutory authority to specify that FFLs forward multiple sales reports to a central location. 
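The reporting trigger these provisions codify is a simple rule over the dates of handgun sales to a single purchaser. The sketch below checks whether such sales fall within any 5-consecutive-business-day window; it is an illustration only, it approximates business days as weekdays (ignoring holidays), and the function and data layout are hypothetical rather than drawn from ATF or FFL systems.

    # Hypothetical sketch of the multiple sales reporting trigger: two or more
    # pistols or revolvers sold to the same unlicensed person at one time or
    # during any 5 consecutive business days. Business days are approximated
    # here as weekdays; holidays are ignored.
    from datetime import date, timedelta

    def business_days_between(start, end):
        # Count weekdays from start through end, inclusive of both dates.
        count, day = 0, start
        while day <= end:
            if day.weekday() < 5:
                count += 1
            day += timedelta(days=1)
        return count

    def requires_multiple_sales_report(handgun_sale_dates):
        # True if any two handgun sales to the same purchaser fall within a
        # window of 5 consecutive business days.
        dates = sorted(handgun_sale_dates)
        return any(
            business_days_between(dates[i], dates[j]) <= 5
            for i in range(len(dates))
            for j in range(i + 1, len(dates))
        )

    # Two pistols sold to one purchaser on a Monday and the following Friday:
    print(requires_multiple_sales_report([date(2016, 5, 2), date(2016, 5, 6)]))   # True
    # A second sale more than 5 business days later does not, by itself, trigger a report:
    print(requires_multiple_sales_report([date(2016, 5, 2), date(2016, 5, 11)]))  # False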
In our 1996 report, we examined MS and found that it did not violate the prohibition on the consolidation or centralization of firearms records because ATF s collection and maintenance of records was incident to its specific statutory responsibility. As we noted at that time, multiple sales reports are retrievable by firearms and purchaser information, such as serial number and purchaser name. We did not identify any significant changes to the maintenance of the multiple sales reports since we last reported on ATF s compliance with the statutory restriction that would support a different conclusion in connection with this review. Demand Letter 3 reports. In 2011, in an effort to reduce gun trafficking from the United States to Mexico, ATF issued demand letters to FFLs classified as dealers or pawnbrokers in four southwest border states: Arizona, California, New Mexico, and Texas. The letter, referred to as Demand Letter 3, required these FFLs to submit a report to ATF on the sale or other disposition of two or more of a specific type of semiautomatic rifle, at one time or during any 5 consecutive business days, to an unlicensed person. Federal courts that have considered the issue have held that ATF s collection of Demand Letter 3 reports are consistent with the appropriations act restriction. It is our view that ATF s maintenance of Demand Letter 3 reports in MS is consistent with the appropriations act restriction in light of the statutory basis for Demand Letter 3, the courts decisions, and the way in which the records are maintained. ATF has specific statutory authority to collect reports like Demand Letter 3 reports. As discussed, FFLs are required to maintain certain firearms records at their places of business. By statute, FFLs may be issued letters requiring them to provide their record information or any portion of information required to be maintained by the Gun Control Act of 1968, as amended, for periods and at times specified by the letter. Some FFLs have challenged the legality of Demand Letter 3 reports for a number of reasons, including that it did not comply with the appropriations act restriction. Federal courts that have considered the issue have upheld ATF s use of Demand Letter 3 as consistent with the appropriations act restriction. In one case before the U.S. Court of Appeals for the Tenth Circuit, the FFL contended that the demand letter created a national firearms registry in violation of the restriction on consolidation or centralization. The Tenth Circuit stated that the plain meaning of consolidating or centralizing does not prohibit the mere collection of some limited information. The court went on to state that the July 2011 demand letter requested very specific information from a limited segment of FFLs. In addition, the court pointed out that Congress authorized the issuance of the letters in 1986, after passing the first appropriations act restriction, and Congress could not have intended to authorize the record collection in statute while simultaneously prohibiting it in ATF s annual appropriations act. In other similar cases, the courts have also held that ATF had the authority to issue the demand letter and that ATF s issuance of the demand letter complied with the appropriations act restriction. In addition, Demand Letter 3 reports are maintained in MS in an identical manner to multiple sales reports. <3.5.2. 
ATF's Long-Standing Struggle to Implement Its MS Deletion Policy Persists> Although not required by statute, ATF policy requires that firearms purchaser names be deleted from MS 2 years after the date of the reports if the firearm has not been connected to a firearms trace. However, ATF's method of identifying records for deletion is not comprehensive and, as a result, 10,041 names that should have been deleted remained in MS until May 2016. According to ATF officials, because of MS system design limitations, analysts must write complex queries to locate such names in MS. For example, since the information needed to identify the correct records can exist in free-form fields, the success of the queries in comprehensively identifying all appropriate records depends on consistent data entry of several text phrases throughout the history of the system. In addition, ATF's queries have not consistently aligned with its system design. For instance, as the system was modified and updated, the query text remained keyed to the outdated design, and the queries therefore identified an incomplete set of records for deletion. Changes to MS to address these query limitations would require a system-wide database enhancement, but there is currently no operations and maintenance support contract in place for this system. Moreover, even if the system could ensure that deletions capture all required records, ATF has inconsistently adhered to the timetable of deletions required by its policy. For example, according to ATF's deletion log and our verification of the log, some records entered in 1997 were not deleted until November 2009, about 10 years later than the required 2 years. As shown in table 1 below, ATF's timing for implementing deletions did not adhere to ATF policy directives, and the deletion policy itself has changed over time, including variations in the frequency of deletions (e.g., annually, monthly, weekly) and pauses in deletions because of, according to ATF officials, litigation and requests from Congress. According to NTC officials, delayed deletions occurred because deleting a large number of records at once negatively affects the system, slowing system response time or halting the larger related data system entirely. However, according to NTC's deletion log, and as verified by our observations of NTC system queries, deletions were conducted in average increments of almost 100,000 records per day, representing on average a full year's worth of records to be deleted. In addition, ATF confirmed that a single deletion of 290,942 records on one day in January 2011 did not affect the system. Therefore, system constraints do not appear to be the reason for the delayed deletions, and ATF did not identify other causes for the delays. ATF reported that the objectives of its deletion policy are primarily to delete data that may no longer be useful because of age and to safeguard privacy interests related to retaining firearms purchaser data. Federal internal control standards require control activities to help ensure that management's directives are carried out. Additionally, information systems and related control activities should be designed to achieve objectives and respond to risks. Specifically, an organization's information system should be designed by considering the processes for which the information system will be used.
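The selection rule the policy implies is straightforward to state, even though, as described above, it is difficult to implement reliably against free-form fields. The sketch below identifies records whose purchaser names are due for deletion, more than 2 years past the date of sale and not connected to a trace, using a hypothetical, well-structured record layout rather than MS's actual schema.

    # Hypothetical sketch of the selection logic implied by the 2-year deletion
    # policy: purchaser names are removed from MS records more than 2 years past
    # the date of sale that are not connected to a firearms trace. The record
    # layout is illustrative; MS itself stores some of this information in
    # free-form fields, which is what complicates the real queries.
    from datetime import date, timedelta

    RETENTION = timedelta(days=2 * 365)  # approximates the 2-year retention period

    def records_due_for_name_deletion(ms_records, today):
        return [
            rec for rec in ms_records
            if rec["purchaser_name"] is not None
            and not rec["linked_to_trace"]
            and today - rec["date_of_sale"] >= RETENTION
        ]

    records = [
        {"date_of_sale": date(2013, 4, 1), "linked_to_trace": False, "purchaser_name": "..."},
        {"date_of_sale": date(2013, 4, 1), "linked_to_trace": True,  "purchaser_name": "..."},
        {"date_of_sale": date(2015, 9, 1), "linked_to_trace": False, "purchaser_name": "..."},
    ]
    for rec in records_due_for_name_deletion(records, date(2016, 5, 1)):
        rec["purchaser_name"] = None  # firearm and FFL data in the record are preserved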
To alleviate the risk of not meeting the objectives established through the MS deletion policy, for example, ATF must ensure that the policy is consistent with the design of the MS data system and that deletions meet the policy's timeline requirements. In September 1996, we reported that ATF had not fully implemented its 2-year deletion requirement. During the course of our 1996 review, ATF provided documentation that it had subsequently deleted the required records and that it would conduct weekly deletions in the future. Similarly, as a result of our current review, according to ATF documentation, in May 2016 the agency deleted the 10,041 records that should have been deleted earlier. However, given that this has been a 20-year issue, it is critical that ATF develop consistency among its deletion policy, the design of the MS system, and the timeliness with which deletions are carried out. By aligning the MS system design and the timeliness of deletion practices with its policy, ATF could ensure that it maintains only useful purchaser information while safeguarding the privacy of firearms purchasers. <4. Conclusions> ATF has an important role in combatting the illegal use of firearms, and it must balance this role with protecting the privacy rights of law-abiding firearms owners. Of the four ATF firearms data systems we reviewed that contained firearms purchaser information, we found that certain aspects of two systems violated the appropriations act restriction on consolidating or centralizing FFL firearms records, but ATF resolved these issues during the course of our review. With regard to ATF policies on maintenance of firearms records, ATF should do more to ensure that these policies are followed and clearly communicated. Specifically, providing guidance to industry members participating in A2K on how to submit their records when they go out of business would help ensure that they submit required records to ATF. Without this clear guidance, ATF risks not being able to locate the first purchaser of a firearm during a trace, and thus may not be able to fulfill part of its mission. In addition, aligning eTrace system capability with ATF policy to limit access to firearms purchaser information in FRNP would ensure that such information is provided only to those with a need to know. Finally, aligning the MS system design and the timeliness of deletion practices with the MS deletion policy would help ATF maintain only useful purchaser data and safeguard the privacy of firearms purchasers. <5. Recommendations for Executive Action> In order to help ensure that ATF adheres to its policies and facilitates industry compliance with requirements, we recommend that the Deputy Director of ATF take the following three actions: provide guidance to FFLs participating in A2K for provision of out-of-business records to ATF, so that FFLs can better ensure that they are in compliance with statutory and regulatory requirements; develop and implement short-term and long-term mechanisms to align the eTrace system capability with existing ATF policy to limit access to FRNP purchaser information for ATF agents; and align the MS deletion policy, MS system design, and the timeliness of deletion practices to improve ATF's compliance with the policy. <6. Agency Comments and Our Evaluation> We provided a draft of this report to ATF and DOJ on May 25, 2016, for review and comment.
On June 16, 2016, ATF provided an email response, stating that the agency concurs with all three of our recommendations and is taking several actions to address them. ATF concurred with our recommendation that ATF provide guidance to FFLs participating in A2K for provision of out-of-business records to ATF. ATF stated that the agency is modifying its standard Memorandum of Understanding with A2K participants to incorporate specific guidance regarding the procedures to be followed when a participant goes out of business. ATF also stated that, as a condition of participation, all current and future A2K participants will be required to adopt the revised Memorandum of Understanding. The implementation of such guidance in the Memorandum of Understanding for A2K participants should meet the intent of our recommendation. ATF concurred with our recommendation that ATF develop and implement mechanisms to align the eTrace system capability with existing ATF policy to limit access to FRNP purchaser information for ATF agents. ATF stated that, in the short term, the agency will delete all purchaser information associated with a firearm entered into FRNP, and will no longer enter any purchaser information into FRNP. ATF stated that, in the long term, the agency will modify the Firearms Tracing System to remove the purchaser information fields from the FRNP module, and will modify eTrace as necessary to reflect this change. These short- and long-term plans, if fully implemented, should meet the intent of our recommendation. ATF concurred with our recommendation that ATF align the MS deletion policy, MS system design, and the timeliness of deletion practices to improve ATF s compliance with the policy. As we reported above, ATF stated that the agency deleted all purchaser names from MS that should have been deleted earlier. ATF also stated that the agency is implementing protocols to ensure that deleting purchaser names from MS aligns with ATF policy. If such protocols can be consistently implemented in future years, and address both the timeliness of deletions and the comprehensive identification of records for deletion, they should meet the intent of our recommendation. On June 22, 2016, DOJ requested additional time for its Justice Management Division to review our conclusions regarding ATF s compliance with the appropriations act restriction and the Antideficiency Act. As noted earlier, we solicited ATF s interpretation of the restriction on consolidation or centralization of records as applied to each of the systems under review by letter of December 21, 2015, consistent with our standard procedures for the preparation of legal opinions. ATF responded to our inquiry on January 27, 2016, and its views are reflected in the report. Nevertheless, DOJ stated that ATF and DOJ officials had not followed DOJ s own processes regarding potential violations of the Antideficiency Act, specifically promptly informing the Assistant Attorney General for Administration. As a result, DOJ requested additional time to review the appropriations law issues raised by the draft report. As explained in appendix VII, ATF s failure to comply with the prohibition on the consolidation or centralization of firearms records violated the Antideficiency Act, which requires the agency head to submit a report to the President, Congress, and the Comptroller General. 
The Office of Management and Budget (OMB) has published requirements in Circular A-11 for executive agencies to report Antideficiency Act violations, and has advised executive agencies to report violations found by GAO. OMB has further advised that "[i]f the agency does not agree that a violation has occurred, the report to the President, Congress, and the Comptroller General will explain the agency's position." We believe that the process set forth by OMB affords DOJ the opportunity to consider and express its views. ATF also provided us written technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Deputy Director of ATF, the Attorney General of the United States, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Diana C. Maurer at (202) 512-9627 or [email protected], or Helen T. Desaulniers at (202) 512-4740 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Objectives, Scope, and Methodology This report addresses the following objectives: 1. Identify the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) data systems that contain retail firearms purchaser data and describe the characteristics of selected systems. 2. Determine whether selected ATF data systems comply with the appropriations act restriction on consolidation or centralization of firearms records and with ATF policies. To calculate the estimated number of firearms in the United States in 2013, we used data from ATF's February 2000 report on Commerce in Firearms in the United States and ATF's 2015 Annual Statistical Update to this report. To calculate the approximate number of murders in which firearms were involved in 2014, we used data from the Federal Bureau of Investigation's Uniform Crime Reports from 2014. To address the first objective, we reviewed ATF policy and program documents to identify ATF data systems related to firearms. For the purposes of this report, "data systems" or "systems" refers to ATF's data systems and system components, including what ATF refers to as modules of a larger system and what ATF refers to as programs whose associated data are contained within related systems. These policy and program documents included, among other things, ATF orders, system descriptions, system user manuals, system training materials, and data submission forms. We compared this information to the systems identified in our September 1996 report, and conducted searches of publicly available information to develop a comprehensive and current list of systems. In order to identify the systems and better understand them and their contents, we spoke with ATF officials in headquarters and at ATF's National Tracing Center (NTC). We also discussed these systems with ATF investigative and regulatory officials in the Baltimore and Los Angeles field offices, who provided varying perspectives due to geographic factors. These actions enabled us to confirm a comprehensive list of systems and determine the presence of retail purchaser information within these systems.
We selected four systems for a more in-depth review: Out-of-Business Records Imaging System (OBRIS), Access 2000 (A2K), Firearm Recovery Notification Program (FRNP), and Multiple Sales (MS). Selected systems, at a minimum, contained retail purchaser information and contained original records as opposed to systems that transmitted information, such as a system that only pulls data from another system in order to print a report or fill out a form. A system was more likely to be selected if (1) it contained data unrelated to a criminal investigation, (2) a large percentage of system records contained retail purchaser information, (3) the retail purchaser information was searchable, or (4) ATF initiated the system as opposed to ATF being statutorily required to maintain the system. See table 2 for more details. For the selected systems, we reviewed ATF data on the number of system records, among other things for OBRIS and A2K for fiscal year 2015, and for FRNP and MS from fiscal years 2010 through 2015. We assessed the reliability of these data by interviewing ATF staff responsible for managing the data and reviewing relevant documentation, and concluded that these data were sufficiently reliable for the purposes of our report. We reviewed ATF policy and program documents to obtain in-depth descriptions of these selected systems, and discussed these systems with ATF officials. We visited NTC to observe the selected systems in operation. To address the second objective, we reviewed relevant laws, including statutory data restrictions, and ATF policy and program documents relating to ATF s firearms tracing operations and the selected systems. We also solicited the agency s interpretation of the restriction on consolidation or centralization of records as applied to each of the systems, and interviewed ATF officials regarding the data systems compliance with that restriction and ATF policies. We visited NTC to observe how selected systems data are collected, used, and stored. For OBRIS, A2K, FRNP, and MS, we observed NTC analysts using the systems during firearms traces and observed the extent to which the systems are searchable for retail purchaser information. For OBRIS, FRNP, and MS, we observed NTC analysts receiving and entering data into the systems and processing the original data submissions either electronically or through scanning and saving documents including quality-control checks. For A2K, we reviewed budgetary information to determine the source of funding for the system for fiscal year 2008 through fiscal year 2014. We also interviewed representatives from the contractor that manages A2K, and 3 of 35 industry members that use A2K, to better understand how the system functions. We selected industry members that had several years of experience using A2K and reflected variation in federal firearms licensee (FFL) size and type. Although our interviews with these industry members are not generalizable, they provided us with insight on the firearms industry s use of A2K. In order to evaluate the contents of FRNP for the presence of retail purchaser information and compliance with the appropriations act restriction and FRNP policies, we reviewed several fields of data for the entire population of records. During our site visit, we also reviewed additional fields of data for a generalizable sample of records and the associated submission forms that are used to populate the records. 
For this sample, we compared selected data in the system to information on the forms, and collected information from the forms. We drew a stratified random probability sample of 434 records from a total population of 41,625 FRNP records entered from June 1991 through July 2015. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. We stratified the population by active/inactive record status and new/old (based on a cutoff of Nov. 1, 2004). Each sample element was subsequently weighted in the analysis to account statistically for all the records, including those that were not selected. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates from the review of the generalizable sample of FRNP records have margins of error at the 95 percent confidence level of plus or minus 5 percentage points or less, unless otherwise noted. For our review of the submission forms associated with FRNP records, we reviewed 195 forms entered into FRNP from November 2004 through July 2015 that were sampled from the new stratum. Prior to November 2004, the submission forms did not include selection options for criteria for entry into FRNP. We therefore only reviewed the more recent forms in order to assess the presence of criteria on these forms. Our review of these forms is generalizable to submission forms entered into FRNP from November 2004 through July 2015. All percentage estimates from the review of submission forms have margins of error at the 95 percent confidence level of plus or minus 3 percentage points or less, unless otherwise noted. We assessed the reliability of the FRNP data by conducting electronic tests of the data for obvious errors and anomalies, interviewing staff responsible for managing the data, and reviewing relevant documentation, and concluded that these data were sufficiently reliable for the purposes of our report. For MS, we observed the process of querying to identify particular records. We determined the selected data systems compliance with the appropriations act restriction, and compared them to multiple ATF policies on collection and maintenance of information, and criteria in Standards for Internal Control in the Federal Government related to control activities for communication and for the access to and design of information systems. We conducted this performance audit from January 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
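To make the estimation approach described earlier in this appendix concrete, the sketch below computes a stratum-weighted proportion and a 95 percent confidence interval using the normal approximation with a finite population correction; the stratum sizes and counts shown are placeholders for illustration, not GAO's actual sample allocation.

    # Hypothetical sketch of a stratum-weighted estimate and 95 percent confidence
    # interval for a stratified random sample, as described above. The stratum
    # figures below are placeholders, not the actual FRNP sample allocation.
    import math

    def stratified_proportion(strata):
        # strata: list of (N_h, n_h, x_h) = stratum population size, sample size,
        # and number of sampled records with the attribute of interest.
        N = sum(N_h for N_h, _, _ in strata)
        estimate, variance = 0.0, 0.0
        for N_h, n_h, x_h in strata:
            p_h = x_h / n_h
            w_h = N_h / N
            estimate += w_h * p_h
            fpc = 1 - n_h / N_h  # finite population correction
            variance += (w_h ** 2) * fpc * p_h * (1 - p_h) / (n_h - 1)
        half_width = 1.96 * math.sqrt(variance)  # normal approximation, 95 percent
        return estimate, (estimate - half_width, estimate + half_width)

    # Placeholder strata: (population records, sampled records, records meeting the criterion)
    est, (low, high) = stratified_proportion([(30000, 200, 192), (10000, 150, 139)])
    print(round(est, 3), round(low, 3), round(high, 3))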
Appendix II: Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) Firearms- Related Data Systems Data sources FFLs send reports to NTC on a specified form (ATF Form 3310.4) Contents related to firearms purchaser information Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number) Who can view the information About 396 ATF Firearms Tracing System (FTS) users, primarily NTC personnel, and the 3,050 ATF users, which includes ATF agents. ATF eTrace users outside of NTC are generally to be limited to viewing firearms and requesting agent information. Exports information to eTrace; FIRES; FTS (Data related to MS are contained in FTS.) Out-of-business FFLs send firearms transaction records to NTC, specifically acquisition and disposition logbooks and a specified form (ATF Form 4473) Contents related to firearms purchaser information Retail purchaser information of prohibited individuals who attempted to purchase a firearm (e.g., name); firearms information (e.g., serial number, model) Imports information from Federal Licensing System (FLS) National Tracing Center (NTC) Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); federal firearms licensee (FFL) information (e.g., FFL name, FFL number) ATF employees; federal, state, local, and foreign law enforcement agencies. Non-ATF users have access to information on their own trace requests and those from agencies with which they have a memorandum of understanding. Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, address); FFL information (e.g., FFL name, FFL number) Firearms information (e.g., serial number, model); retail purchaser, possessor, and associates information (e.g., first and last name); FFL information (e.g., city and state) Contents related to firearms purchaser information Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number) Firearms information (e.g., serial number, model), retail purchaser information (e.g. name); FFL information (e.g., FFL name, FFL number) eTrace; FIRES; FTS (Data related to Interstate Theft are contained in FTS.) Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number). Original and subsequent purchasers are maintained as part of the system. FLS; National Firearms Act Special Occupational Tax System (NSOT) Contents related to firearms purchaser information Firearms information (e.g., serial number, model). Firearms possessor information limited to first, middle, and last name but that information is not searchable. Firearms information (e.g., serial number, model); personal information for individuals including possessors, legal owners, or individuals who recovered the firearm (e.g., first and last name) Collects information related to an individual currently under active criminal investigation who is suspected of illegally using or trafficking firearms. Suspect information (e.g., name, identification numbers such as driver s license number) <7. 
Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)
Who can view the information: ATF employees; federal, state, local, and foreign law enforcement agencies. Federal, state, local, and foreign law enforcement agencies only have access to information on their own trace requests and those from agencies with which they have a memorandum of understanding.
Exports information to: Electronic Trace Operation Workflow Reporting System; eTrace; FIRES; FTS (Data related to Trace are contained in FTS.)
Under the Brady Handgun Violence Prevention Act, Pub. L. No. 103-159, 107 Stat. 1536 (1993), and implementing regulations, the Federal Bureau of Investigation, within DOJ, and designated state and local criminal justice agencies use NICS to conduct background checks on individuals seeking to purchase firearms from FFLs or obtain permits to possess, acquire, or carry firearms. NICS was established in 1998. FTS does not contain original records; rather, it imports data from its subsystems in order to conduct analysis. NFRTR contains firearms purchaser information pursuant to title 26 of the United States Code (the Internal Revenue Code), 26 U.S.C. Chapter 53, which governs the registration and transfer of certain firearms and related taxes. Specifically, it states that there should be a central registry, called the National Firearms Registration and Transfer Record, of all firearms as defined in the code, including machine guns, destructive devices such as bazookas and mortars, and other gadget-type weapons such as firearms made to resemble pens.
Appendix III: Out-of-Business Records Imaging System (OBRIS)
Since 1968, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has received several hundred million out-of-business records. According to ATF officials, as of May 5, 2016, there are about 8,060 boxes of paper records at the National Tracing Center (NTC) awaiting scanning into digital images before they are to be destroyed. At NTC, we observed these boxes lining the walls and stacked along cubicles and file cabinets, as shown in figure 4. The officials stated that, according to the General Services Administration, the facility floor will collapse if the number of boxes in the building increases to 10,000. Therefore, when the number of boxes approaches this quantity, NTC staff move the boxes to large shipping containers outside. Currently, there are three containers of boxes on the property, which hold records awaiting destruction. Prior to digital imaging, records were housed on microfilm or in storage boxes, and the system was referred to simply as the Microfilm Retrieval System. According to NTC officials, ATF is transitioning to digital imaging because of the benefits of improved image resolution, speed in accessing images, simultaneous accessibility of images to complete urgent traces, and less voluminous storage. The digitized records also helped mitigate the challenges of deteriorating microfilm images and maintaining the obsolete technology of microfilm. According to officials, NTC has completed the process of converting the microfilm records to digital images, and officials expect that the images will become fully available to NTC analysts for tracing during fiscal year 2016. Currently, access is limited to a single workstation within NTC.
While ATF finalizes this effort, staff continue to access the records in the NTC microfilm archive in order to respond to trace requests, as shown in figure 5. Before fiscal year 1991, ATF stored the out-of-business records in boxes with an NTC file number assigned to each federal firearms licensee (FFL). If, during a trace, ATF determined that the FFL who sold the firearm was out of business and had sent in its records, ATF employees were to locate the boxes containing the records and manually search them for the appropriate serial number. According to ATF, this was a time-consuming and labor-intensive process, which also created storage problems. In 1991, ATF began a major project to microfilm the out-of- business records and destroy the originals. Instead of in boxes, the out- of-business records were stored on microfilm cartridges, with the FFL numbers assigned to them. Although this system occupied much less space than the hard copies of the records, ATF officials said it was still time-consuming to conduct firearms traces because employees had to examine up to 3,000 images on each microfilm cartridge to locate a record. The officials stated that scanning records and creating digital images in OBRIS has sped up the ability to search for out-of-business records during a trace. According to the officials, it takes roughly 20 minutes to complete a trace with digital images and roughly 45 minutes using microfilm. Appendix IV: Firearm Recovery Notification Program Submission Form Appendix V: Multiple Sales Submission Form for Multiple Sales Reports Appendix VI: Multiple Sales Submission Form for Demand Letter 3 Reports Appendix VII: Legal Analysis of Compliance with the Restriction on Consolidation or Centralization of Firearms Records A provision in the fiscal year 2012 appropriation for the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) prohibits the use of the appropriation to consolidate or centralize records on the acquisition and disposition of firearms maintained by federal firearms licensees (FFL). This statutory restriction originated in the agency s appropriation for fiscal year 1979 and, with some modification, was made permanent in fiscal year 2012. We reviewed whether ATF s collection and maintenance of acquisition and disposition records in four data systems Out-of-Business Records Imaging System (OBRIS), Access 2000 (A2K), Firearm Recovery Notification Program (FRNP), and Multiple Sales (MS) violated this restriction. As discussed below, we considered the critical characteristics of each data system and related ATF activities in light of the restriction and in the context of ATF s statutory authorities. We conclude that ATF violated the restriction when it collected and maintained the disposition records of FFL participants in A2K on a single server within the National Tracing Center (NTC) after those FFLs had discontinued their operations. We also agree with ATF s 2009 determination that the agency violated the restriction when it collected and maintained records of certain FFLs engaged primarily in the sale of used firearms as part of FRNP. ATF s failure to comply with the restriction on consolidation or centralization also violated the Antideficiency Act. Under section 1351 of title 31, United States Code, the agency is required to report these violations to the President and Congress. <8. 
Background> ATF, a criminal and regulatory enforcement agency within the Department of Justice (DOJ), is responsible for the regulation of the firearms industry and enforcement of federal statutes regarding firearms, including criminal statutes related to the illegal possession, use, transfer, or trafficking of firearms. One component of ATF s criminal enforcement mission involves the tracing of firearms used in crimes to identify the first retail purchaser of a firearm from an FFL. To conduct a trace, the requesting law enforcement agency must identify the manufacturer or importer of the firearm and its type, caliber, and serial number, as well as other information related to the recovery, crime, and possessor. According to ATF, NTC personnel must typically use the information provided by the law enforcement agency to contact the manufacturer or importer to determine when and to whom the firearm in question was sold. The manufacturer or importer may have sold the firearm to an FFL wholesaler. In that case, NTC personnel would contact the FFL wholesaler to determine when and to whom the firearm in question was sold, usually to an FFL retailer. The tracing process continues until NTC identifies the first retail purchaser who is a nonlicensee. The Gun Control Act of 1968, as amended, established a system requiring FFLs to record firearms transactions, maintain that information at their business premises, and make such records available to ATF for inspection and search under certain prescribed circumstances. This system was intended to permit law enforcement officials to trace firearms involved in crimes as described above while allowing the records themselves to be maintained by the FFLs rather than by a governmental entity. As originally enacted, the Gun Control Act required FFLs to submit such reports and information as the Secretary of the Treasury prescribed by regulation and authorized the Secretary to prescribe such rules and regulations as deemed reasonably necessary to carry out the provisions of the act. In 1978, citing the general authorities contained in the Gun Control Act, ATF proposed regulations that would have required FFLs to report most of their firearms transactions to ATF through quarterly reports. Under the proposed regulations, these FFL reports of sales and other dispositions would not have identified a nonlicensed transferee, such as a retail purchaser, by name and address. However, the proposed regulations prompted concerns from those who believed that the reporting requirements would lead to the establishment of a system of firearms registration. Congress included in ATF s fiscal year 1979 appropriation for salaries and expenses a provision prohibiting the use of funds for administrative expenses for the consolidation or centralization of certain FFL records, or the final issuance of the 1978 proposed regulations. The provision continues to apply, with some modifications as described below. hat no funds appropriated herein shall be available for administrative expenses in connection with consolidating or centralizing within the Department of the Treasury the records of receipt and disposition of firearms maintained by Federal firearms licensees or for issuing or carrying out any provisions of the proposed rules of the Department of the Treasury, Bureau of Alcohol, Tobacco and Firearms, on Firearms Regulations, as published in the Federal Register, volume 43, number 55, of March 21, 1978. 
The congressional committee report accompanying the provision explained the concerns behind it: The Bureau of Alcohol, Tobacco, and Firearms (BATF) has proposed implementation of several new regulations regarding firearms. The proposed regulations, as published in the Federal Register of March 21, 1978, would require: (1) A unique serial number on each gun manufactured or imported into the United States. (2) Reporting of all thefts and losses of guns by manufacturers, wholesalers and dealers. (3) Reporting of all commercial transactions involving guns between manufacturers, wholesalers and dealers. The Bureau would establish a centralized computer data bank to store the above information. It is important to note that the proposed regulations would create a central Federal computer record of commercial transactions involving all firearms, whether shotguns, rifles, or handguns. There are approximately 168,000 federally licensed firearms dealers, manufacturers, and importers. It is estimated that the proposed regulations would require submission of 700,000 reports annually involving 25 million to 45 million transactions. It is the view of the Committee that the proposed regulations go beyond the intent of Congress when it passed the Gun Control Act of 1968. It would appear that BATF and the Department of Treasury are attempting to exceed their statutory authority and accomplish by regulation that which Congress has declined to legislate. The reference to the 1978 proposed rules was removed from the annual provision as of the fiscal year 1994 appropriations act, but the prohibition against using funds for administrative expenses for consolidating or centralizing records was included in each of ATF's annual appropriations through fiscal year 2012 in much the same form. In fiscal year 1994, the Treasury, Postal Service, and General Government Appropriations Act, 1994, expanded the prohibition to include the consolidation or centralization of portions of records and to apply to the use of funds for salaries as well as administrative expenses, stating [t]hat no funds appropriated herein shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of the Treasury, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees (emphasis added). The fiscal year 2012 appropriations act made the restriction permanent, providing [t]hat no funds appropriated herein or hereafter shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of Justice, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees (emphasis added). The conference report accompanying the act explained that the provision had been made permanent. We previously considered ATF's compliance with the restriction on consolidation or centralization in 1996 in connection with the agency's Microfilm Retrieval System and Multiple Sales System. We stated that the restriction did not preclude all information practices and data systems that involved an element of consolidation or centralization, but that it had to be interpreted in light of its purpose and in the context of other statutory provisions governing ATF's acquisition and use of information on firearms. In this respect, our analyses reflected the well-established principle that statutory provisions should be construed harmoniously so as to give them maximum effect whenever possible, avoiding the conclusion that one statute implicitly repealed another in the absence of clear evidence to the contrary.
We found that the two systems complied with the statutory restriction on the grounds that ATF s consolidation of records was incident to carrying out specific responsibilities set forth in the Gun Control Act of 1968, as amended, and that the systems did not aggregate data on firearms transactions in a manner that went beyond these purposes. Thus, our analysis did not turn on the presence or absence of retail purchaser information in the system, but rather on the extent to which the aggregation of data corresponded to a statutory purpose. We employ a similar analytical approach, which ATF has also adopted, in assessing the four systems under review here, taking into account ATF s statutory authorities and the critical characteristics of each system. <9. Discussion> Two of the four data systems we reviewed OBRIS and MS do not consolidate or centralize firearms in violation of the restriction contained in the fiscal year 2012 appropriations act. In contrast, ATF violated the restriction when it collected and maintained disposition records of FFL participants in A2K on a single server at NTC after they had discontinued their operations. ATF also violated the restriction when it collected and maintained records of certain FFLs engaged primarily in the sale of used firearms as part of FRNP. OBRIS is ATF s repository for records submitted by FFLs that have permanently discontinued their operations, as required by the Gun Control Act of 1968, as amended. Section 923(g)(1)(A) of title 18, United States Code, requires each FFL to maintain such records of importation, production, shipment, receipt, sale, or other disposition of firearms at its place of business as prescribed by the Attorney General. Under 18 U.S.C. 923(g)(4), when a firearms business is discontinued and there is no successor, the records required to be maintained by FFLs must be delivered within 30 days to ATF. ATF s system for maintaining the records of out-of-business FFLs for its statutory tracing function has evolved over time in response to logistical challenges and technological advances. Prior to fiscal year 1991, ATF maintained out-of-business FFLs records in hard copy, with a file number assigned to each FFL. During a trace, if ATF determined that a firearm had been transferred or disposed of by an out-of-business FFL, ATF employees manually searched the FFL s records until they found the records corresponding to the serial number of the firearm being traced. According to ATF, this was a time-consuming and labor-intensive process, and the volume of records created storage problems. In 1991, ATF began a major project to microfilm these records and destroy the originals. For fiscal year 1992, Congress appropriated $650,000 solely for improvement of information retrieval systems at the National Firearms Tracing Center. In fiscal year 1992, ATF began creating a computerized index of the microfilmed records containing the information necessary to identify whether ATF had a record relating to a firearm being traced. The index contained the following information: (1) the cartridge number of the microfilm; (2) an index number; (3) the serial number of the firearm; (4) the FFL number; and (5) the type of document on microfilm, i.e., a Firearms Transaction Record form or acquisition and disposition logbook pages. This information was stored on a database in ATF s mainframe computer to allow searches. 
Other information, however, including a firearms purchaser s name or other identifying information and the manufacturer, type, and model remained stored on microfilm cartridges and was not computerized. Therefore, this information was not accessible to ATF personnel through a text search. In our 1996 report, we concluded that the Microfilm Retrieval System did not violate the restriction on consolidation or centralization due to its statutory underpinnings and design. ATF had initially required out-of- business FFLs to deliver their records to ATF through a 1968 regulation. We found no indication in its legislative history that the appropriations act restriction was intended to overturn this regulation and noted that, historically, out-of-business records had been maintained at a central location. We also explained that the Firearms Owners Protection Act of 1986 (FOPA) had codified the ATF regulation, affirming the agency s authority to collect this information, and that a subsequent appropriations act had provided funding specifically for ATF s microfilming effort. Finally, ATF s system of microfilmed records did not capture and store certain key information, such as firearms purchaser information, in an automated file. In this regard, we found that the system did not aggregate information in a manner beyond that necessary to implement the Gun Control Act of 1968, as amended by FOPA. Conversion of Records. The conferees recognize the need for the ATF to begin converting tens of thousands of existing records of out-of-business Federal firearms dealers from film to digital images at the National Tracing Center. Once the out-of- business records are fully converted, the search time for these records will be reduced to an average of 5 minutes per search from the current average of 45 minutes per search. This significant time saving will ultimately reduce overall costs and increase efficiency at the National Tracing Center. Therefore, the conference agreement includes a $4,200,000 increase for the ATF to hire additional contract personnel to begin this conversion. Similarly, the conference report accompanying the fiscal year 2006 appropriations act reflected the conferees support for ATF s transition of out-of-business records to OBRIS. Since 2006, NTC has converted records submitted by FFLs discontinuing their operations to digital images in OBRIS. Specifically, NTC sorts and scans records provided by out-of-business FFLs, converting and storing them in an image repository on an electronic server. Images stored in OBRIS are generally indexed by FFL number. The records themselves are stored as images without optical character recognition so that they cannot be searched or retrieved using text queries, but must be searched through the index, generally by FFL number. After narrowing down the possible records through an index search, an NTC analyst must manually scroll through digital images to identify the record of the particular firearm in question. The technological changes represented by OBRIS do not compel a different conclusion regarding ATF s compliance with the restriction on consolidation or centralization from the one we reached in 1996 with respect to the predecessor system. The statutory basis for OBRIS is the same as for the Microfilm Retrieval System and OBRIS makes records accessible to the same extent as that system, functioning in essentially the same manner though with enhanced technology. 
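The index-then-manual-review design described above can be sketched in a few lines of code. This is an illustration only, using hypothetical field and function names rather than ATF's schema: the searchable index holds only pointers and identifiers, while purchaser names and other details remain in the non-searchable microfilm or image files.

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    # The five computerized index fields described above; everything else
    # (including any purchaser information) stays in the non-searchable images.
    cartridge_number: str   # microfilm cartridge or, for OBRIS, an image batch
    index_number: int       # position within that cartridge or batch
    serial_number: str      # serial number of the firearm
    ffl_number: str         # federal firearms licensee number
    document_type: str      # e.g., Firearms Transaction Record or A&D logbook page

def candidate_locations(index: list[IndexEntry], ffl_number: str) -> list[IndexEntry]:
    """Return pointers to images that may hold the record being traced.

    The index only narrows the search; an analyst must still open each
    candidate image and scroll through it manually to find the firearm,
    because the image content itself cannot be text-searched.
    """
    return [entry for entry in index if entry.ffl_number == ffl_number]
```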
As with the prior microfilm system, users identify potentially relevant individual records through manual review after searching an index using an FFL number, or firearms information if available. In this regard, OBRIS, like its predecessor, does not aggregate records in a manner beyond that required to implement the Gun Control Act of 1968, as amended by FOPA. <9.1. A2K> We assessed A2K with regard to in-business records and out-of-business records. We conclude that A2K for in-business records complies with the restriction on consolidation or centralization, while A2K for out-of- business records violated the restriction. <9.1.1. A2K In-Business Records> The Gun Control Act of 1968, as amended, requires FFLs to provide firearms disposition information to ATF in response to a trace request. Specifically, section 923(g)(7) of title 18, United States Code, requires FFLs to respond within 24 hours to a request for records to determine the disposition of firearms in the course of a criminal investigation. Prior to the implementation of A2K, FFLs could only respond to such requests manually. A2K provides manufacturer, importer, and wholesaler FFLs with an automated alternative to facilitate their statutorily required response to ATF requests. he conferees are aware that the Access 2000 program was initiated by ATF to improve the efficiency and reduce the costs associated with firearms tracing incurred by Federal Firearms Licensees (FFLs). ATF and FFL importers, manufacturers, and wholesalers form a partnership in this effort. FFLs take their data from their mainframe computer and import it into a stand-alone server provided by the ATF. The National Tracing Center is connected to this server remotely by secure dial-up and obtains information on a firearm that is subject to a firearms trace. The conferees support this program, which reduces the administrative burdens of the FFL and allows the ATF around the clock access to the records. The ATF currently has 36 Access 2000 partners. The conferees encourage the ATF to place more emphasis on this program and expand the number of partners to the greatest extent possible. According to ATF, as of April 25, 2016, there are 35 industry members representing 66 individual manufacturer, importer, and wholesaler FFLs currently participating in A2K. ATF believes that A2K has appropriately balanced Congressional concerns related to the consolidation of firearm records with the necessity of being able to access firearm information in support of its underlying mission to enforce the Gun Control Act, as amended. We agree. Given the statutory underpinning and features of the system for in-business FFLs, we conclude that ATF s use of A2K for in-business records does not violate the restriction on the consolidation or centralization of firearms records. ATF s use of A2K for in-business records is rooted in the specific statutory requirement that FFLs respond promptly to ATF trace requests in connection with criminal investigations. In addition, although the system allows FFLs to respond to ATF s trace requests virtually, ATF obtains the same information as it would otherwise obtain by phone, fax, or e-mail and in similar disaggregated form, that is, through multiple servers located at individual FFLs. Moreover, industry members retain possession and control of their disposition records and, according to ATF officials, may withdraw from using A2K and remove their records from the ATF- accessible servers at any time. 
For these reasons, we do not view A2K for in-business records to constitute the type of data aggregation prohibited by the appropriations act restriction on the consolidation or centralization of records within DOJ. <9.1.2. A2K Out-of-Business Records> During the course of our review, we found that when participating industry members permanently discontinued their operations, the disposition data maintained in connection with A2K was transferred to ATF, and ATF used the data when conducting firearms traces. Specifically, when an A2K participant went out of business, an ATF contractor remotely transferred the data on the server to a backup disk and the industry member shipped the backup disk with intact disposition records, as well as the blank server, to ATF s NTC. ATF officials placed the data from the backup disk on a single partitioned server at NTC and accessed the data for firearms traces using the same type of interface and URL as used while the industry member was in business. As a result, in response to an industry member specific query using an exact firearm serial number, the A2K out-of-business server would automatically generate the disposition information related to that firearm serial number. According to ATF, records of eight industry members were placed on the server at NTC from as early as late 2000 through mid-2012. While ATF estimated that there were approximately 20 million records associated with these industry members on the server, the agency did not have a means of ascertaining the actual number of records. The number of records on the ATF server would have been expected to grow as additional A2K participants discontinued their operations and provided their backup disks to ATF. However, during the course of our review, ATF officials told us that the agency planned to move all of the A2K records into OBRIS and that, once converted to OBRIS images, the records would be searchable like other OBRIS records. In January 2016, ATF officials reported that NTC was in the process of transferring all of the records from the A2K out-of-business records server to OBRIS and a quality-control process was under way to verify the accuracy of the transfer. They subsequently deleted all records from the server in March 2016. We conclude that ATF s use of A2K with respect to out-of-business records violated the restriction on consolidation or centralization. In contrast to the discrete servers in the possession of the in-business industry members, ATF combined disposition records across industry members on the single, though partitioned, A2K server at NTC. In addition, the records were stored on the single A2K server in a manner that made them more easily searchable than other out-of-business records. Unlike OBRIS, which requires the manual review of potentially relevant records identified through an index, the A2K server within NTC generated records automatically in response to an industry member specific text query, that is, exact firearm serial number. In addition, according to NTC officials, they could have modified the structure of the NTC server to achieve further aggregation, by programming the system to allow text searches across a broader set of data fields. As a result, ATF could have searched for records by name or other personal identifier. 
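The technical distinction at issue can be illustrated with a short sketch. This is not ATF's system; the field names and records are hypothetical. It simply contrasts the behavior described above: on the consolidated out-of-business server, an exact serial-number query returned disposition information automatically, across every former participant's data, whereas OBRIS-style records require an index search followed by manual review of images.

```python
# Hypothetical consolidated store: disposition records from multiple
# out-of-business industry members combined on one server and keyed by
# exact firearm serial number.
a2k_out_of_business = {
    "AB1234": {"member": "Manufacturer X", "disposed_to_ffl": "1-23-456-78-XX-00001",
               "ship_date": "2004-05-17"},
    "ZZ9876": {"member": "Wholesaler Y", "disposed_to_ffl": "9-87-654-32-YY-00002",
               "ship_date": "2009-11-02"},
}

def trace_query(serial_number: str) -> dict | None:
    """Exact serial-number query: the matching disposition record is returned
    automatically, with no manual review and regardless of which former
    participant submitted it."""
    return a2k_out_of_business.get(serial_number)

# As noted above, the same store could have been reprogrammed to allow text
# searches across additional fields, which would aggregate the data further.
print(trace_query("AB1234"))
```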
As explained earlier, our analysis of ATF's aggregation of firearms records turns not on the presence or absence of retail purchaser information, but rather on the extent to which the aggregation of data corresponds to a statutory purpose. ATF's maintenance of out-of-business industry members' disposition records on a single server at NTC was not incident to the implementation of a specific statutory requirement. As discussed above, A2K was designed to allow in-business industry members to respond promptly to ATF trace requests as required by 18 U.S.C. 923(g)(7) without having to dedicate personnel to this function. Section 923(g)(7), however, has no applicability to FFLs once they discontinue operations. A separate statutory provision, 18 U.S.C. 923(g)(4), applies to FFLs that permanently discontinue their operations. ATF has long maintained a separate system, formerly the Microfilm Retrieval System and currently OBRIS, to hold the records submitted under that provision, and the disposition records that ATF maintained on the NTC server were among the types of records required to be submitted under section 923(g)(4) for which ATF had created that system. Therefore, we find no statutory underpinning for ATF's maintenance of out-of-business A2K participants' disposition records on the server at NTC. ATF explained its position, and its decision to remove these records from the server, as follows: "Our implementation of A2K included strict security protocols to limit ATF access to only that information to which it is statutorily required, e.g., the next step in the distribution of the traced firearm. That is, ATF would simply have access to the same information it could obtain by calling the participating FFL. However, that calculus is altered when an FFL ceases participation in A2K. At that point, that FFL's records become just like any other FFL records and, as such, must be stored in the same manner. Otherwise, records which were formerly accessible on a discrete basis under A2K would be readily accessible in a database which would, in our opinion based on the 1996 GAO Report, violate the appropriation rider. Our decision, therefore, was to ensure that A2K records have the same character and are retrievable in the same manner as any other out-of-business records." In addition to removing all data from the A2K out-of-business records server, ATF officials reported that, going forward, the agency plans to convert records of A2K participants that go out of business directly into OBRIS images. However, they said, when such records are received from out-of-business FFLs, the time frame for converting the records into OBRIS images will depend on the backlog of electronic records awaiting conversion. Similarly, ATF officials told us that they had anticipated that A2K participants would submit acquisition and disposition records together, consistent with the format provided for in ATF's regulations, for inclusion in OBRIS. They had not expected that A2K participants would satisfy any part of their statutory responsibility by providing their backup disks to the agency. However, even if industry members' submission of disposition data on the backup disks could be said to be in furtherance of the portion of the statutory requirement pertaining to disposition records, given the existence and successful functioning of OBRIS, we conclude that ATF's maintenance of those records on the NTC server went beyond the purposes of the Gun Control Act of 1968, as amended.
We conclude that FRNP complies with the restriction on consolidation and centralization of firearms records when used as a tool for ATF agents in connection with an ATF criminal investigation. However, ATF s use of FRNP to maintain information on firearms identified during regulatory inspections of FFLs under the Southwest Border Secondary Market Weapons of Choice Program (SWBWOC), as discussed below, was a violation of the restriction. <9.1.3. FRNP for Criminal Investigations> Under section 599A of title 28, United States Code, ATF is responsible for investigating criminal and regulatory violations of federal firearms laws, and for carrying out any other function related to the investigation of violent crime or domestic terrorism that is delegated to it by the Attorney General. Among other things, ATF is responsible for enforcing federal statutes regarding firearms, including those regarding illegal possession, use, transfer, or trafficking. FRNP, formerly known as the Suspect Gun Program, was established in 1991 within the Firearms Tracing System to provide an investigative service to ATF agents conducting criminal investigations. Through this program, ATF records information manufacturer, serial number, and type about firearms that have not yet been recovered by other law enforcement authorities, but are suspected of being involved in criminal activity and are associated with an ATF criminal investigation. When such firearms are recovered, ATF uses the information available through the program to notify the investigating ATF official and to coordinate the release of trace results to other law enforcement authorities with the ongoing ATF investigation. To enter firearms information into the system, ATF agents investigating potential criminal activity involving firearms must identify the firearms at issue, the number of an open ATF criminal investigation, and at least one of five specified criteria for using the system. The five criteria correspond to bases for ATF investigation. ATF agents also indicate on the submission form whether NTC should release trace results to requesters of a trace for the firearms listed on the form. Where criminal investigations are ongoing and FRNP records are designated as active, NTC will notify the investigating ATF agent when the firearm described on the form is recovered. In addition, where the ATF agent has indicated that NTC should release trace information, NTC will notify the ATF agent and the requesting law enforcement agency of trace results. Where the ATF agent has indicated that NTC should not release trace information, the ATF agent is notified of the trace results and determines when that information may be released to the requesting law enforcement agency. For criminal investigations that have been closed, the FRNP record associated with the investigation is labeled inactive, although the records may provide investigative leads, according to ATF officials. In such cases, the ATF agent associated with the investigation is not notified of the recovery of the identified firearms or related trace requests, and the release of trace results to requesting law enforcement agencies proceeds without any delay. ATF is authorized by statute to investigate violations of federal firearms laws. As described above, FRNP is designed for the limited purpose of facilitating ATF s conduct of specific criminal investigations under its jurisdiction. 
The inclusion of data in FRNP requires an open ATF investigation of an identified criminal matter, which helps to ensure that the data are maintained only as needed to support this investigative purpose. Further, ATF requires its agents to identify with specificity the firearms relevant to the investigation. As we observed in 1996, the restriction on consolidation or centralization does not preclude all data systems that involve an element of consolidation. Where ATF adheres to the limitations incorporated in the design of FRNP, the maintenance of information through FRNP is incident to ATF s exercise of its statutory authority to conduct criminal investigations and does not involve the aggregation of data in a manner that goes beyond that purpose. In this respect, we conclude that it does not represent a consolidation or centralization of records in violation of the statutory restriction. <9.1.4. Southwest Border Secondary Market Weapons of Choice Program> In response to our inquiries about FRNP data, ATF officials told us that in 2009, the ATF Chief Counsel had concluded that the agency had violated the appropriations restriction in connection with the system. Specifically, ATF officials told us that the agency had maintained records on the inventories of certain FFLs in violation of the restriction, from 2007 through 2009 under ATF s Southwest Border Secondary Market Weapons of Choice (SWBWOC) Program. We agree with the ATF Chief Counsel s conclusion that its collection and maintenance of information in connection with this program violated the restriction on the consolidation or centralization of firearms records. In October 2005, the governments of the United States and Mexico instituted a cooperative effort to address surging drug cartel driven violence in Mexico and along the southwest border of the United States. ATF s main role in this initiative was to develop strategies and programs to stem the illegal trafficking of firearms from the United States to Mexico. ATF determined that used gun sales referred to in the industry as secondary market sales played a significant role in firearms trafficking to Mexico, particularly for the types of firearms most sought by the Mexican drug cartels, known as weapons of choice. Accordingly, in June 2007, the agency developed a protocol to be used during its annual inspections of FFLs in the region engaged primarily in the sale of used firearms. This protocol, known as the SWBWOC Program was intended to enhance ATF s ability to track secondary market sales. It called for ATF investigators to record the serial number and description of all used weapons of choice in each FFL s inventory and those sold or otherwise disposed of during the period covered by the inspection. Under the protocol, the investigators forwarded the information to the relevant ATF field division, which opened a single investigative file for all submissions from the area under its jurisdiction and determined whether any of the weapons had been traced since their last retail sale. After review, the field division forwarded the information to FRNP. According to ATF, the Dallas, Houston, and Los Angeles Field Divisions began to submit records from the SWBWOC Program to FRNP in July 2007, and the Phoenix Field Division began to do so in October 2007. The SWBWOC Program was cancelled on October 2, 2009, following a review by ATF s Office of Chief Counsel of the process by which the secondary market weapons of choice information had been recorded and submitted to FRNP. 
The Office of Chief Counsel determined that the SWBWOC Program was not consistent with the consolidation or centralization restriction. It advised that information obtained from an FFL about a firearm in and of itself and unaccompanied by purchaser information could not be collected and consolidated absent a specific basis in statute or regulation, or a direct nexus to discrete law enforcement purposes such as a specific criminal investigation. The Office of Chief Counsel found that the collection of information from FFLs under the SWBWOC Program lacked these essential, individualized characteristics. We agree with ATF s conclusion that the collection and maintenance of firearms information from the SWBWOC Program in FRNP exceeded the permissible scope of the appropriations act restriction. As discussed above, our analysis of ATF s aggregation of firearms data turns not on the presence or absence of retail purchaser information, but rather on the extent to which the aggregation of data corresponds to a statutory purpose. Here, ATF collected and maintained acquisition and disposition data without a statutory foundation based on nothing more than the characteristics of the firearms. The collection and maintenance of information about a category of firearms, weapons of choice, from a category of FFLs, primarily pawnbrokers, did not pertain to a specific criminal investigation within the scope of ATF s statutory investigative authority. Nor did it fall within the scope of ATF s authority to conduct regulatory inspections. For this reason, we conclude that the program involved the type of aggregation of information contemplated by Congress when it passed the restriction on the consolidation or centralization of firearms records. ATF deleted the related data from FRNP in March 2016. <9.2. Multiple Sales and Demand Letter 3 Reports> The Gun Control Act of 1968, as amended, requires FFLs to report transactions involving the sales of multiple firearms. Specifically, under 18 U.S.C. 923(g)(3)(A), an FFL is required to report sales or other dispositions of two or more pistols or revolvers to a non-FFL at one time or during 5 consecutive business days. Under these circumstances, the FFL is required to report information about the firearms, such as type, serial number, manufacturer, and model, and the person acquiring the firearms, such as name, address, ethnicity, race, identification number, and type of identification to ATF. ATF enters data from these reports into the MS portion of its Firearms Tracing System so that it can monitor and deter illegal interstate commerce in pistols and revolvers. Our 1996 report examined the Multiple Sales System and found that it did not violate the prohibition on the consolidation or centralization of firearms records because the collection and maintenance of records was incident to a specific statutory responsibility. In connection with our current review, we observed the functioning of the present system for reports of multiple sales. We found no changes since 1996 that would suggest a different conclusion with respect to ATF s compliance with the appropriations act restriction. As we reported in 1996, a regulatory requirement for FFLs to prepare and provide multiple sales reports to ATF existed before the prohibition on consolidation or centralization of firearms records was enacted in fiscal year 1979 and there was no indication in the legislative history that the prohibition was intended to overturn ATF s existing practices with respect to multiple sales. 
In addition, we explained that the Firearms Owners Protection Act had codified the ATF regulation, affirming the agency s authority to collect this information. FOPA s requirement that FFLs send the reports to the office specified on an ATF form suggested that ATF could specify that the information be sent to a central location. Our review of FOPA s legislative history confirmed our interpretation of the statute. When considering the passage of FOPA, Congress clearly considered placing constraints on ATF s maintenance of multiple sales reports, but declined to do so. Specifically, the Senate-passed version of FOPA prohibited the Secretary of the Treasury from maintaining multiple sales reports at a centralized location and from entering them into a computer for storage or retrieval. This provision was not included in the version of the bill that was ultimately passed. In light of the above, we reach the same conclusion as we did in 1996 and find that ATF s use of MS complies with the restriction on the consolidation or centralization of firearms records. In addition, ATF has collected and maintained information on the multiple sales of firearms under a separate authority, 18 U.S.C. 923(g)(5)(A). Section 923(g)(5)(A) authorizes the Attorney General to require FFLs to submit information that they are required to maintain under the Gun Control Act of 1968, as amended. This provision was also included in FOPA. Relying on this authority, ATF issues demand letters requiring FFLs to provide ATF with specific information. In 2011, ATF issued a demand letter requiring certain FFLs in Arizona, California, New Mexico, and Texas to submit reports of multiple sales or other dispositions of particular types of semiautomatic rifles to non-FFLs (referred to as Demand Letter 3 reports). These reports are submitted to ATF and included in the MS portion of its Firearms Tracing System. According to ATF, the information was intended to assist in its efforts to investigate and combat the illegal movement of firearms along and across the southwest border. Several FFLs challenged the legality of ATF s demand letter, asserting, among other things, that it would create a national firearms registry in violation of the fiscal year 2012 appropriations act restriction. In each of the cases, the court placed ATF s initiative in its statutory context and held that the appropriations act did not prohibit ATF s issuance of the demand letter. Similar to our 1996 analyses of the Out-of-Business Records and Multiple Sales Systems, the United States Court of Appeals for the Fifth Circuit examined the enactment of ATF s authority to issue demand letters in relation to the appropriations act restriction. The court observed that ATF s demand letter authority was enacted as part of FOPA and that because FOPA clearly contemplate ATF s collection of some firearms records, the appropriations provision did not prohibit any collection of firearms transaction records. In this regard, the court further noted that the plain meaning of consolidating or centralizing did not prohibit the collection of a limited amount of information. Other courts also emphasized that the ATF 2011 demand letter required FFLs to provide only a limited subset of the information that they were required to maintain, as opposed to the substantial amount of information that they believed would characterize a consolidation or centralization. 
For example, the Court of Appeals for the District of Columbia Circuit enumerated the limitations on ATF s 2011 collection of information, noting that it applied to (1) FFLs in four states; (2) who are licensed dealers and pawnbrokers; (3) and who sell two or more rifles of a specific type; (4) to the same person; (5) in a 5-business-day period. The court found that because ATF sent the demand letter to a limited number of FFLs nationwide and required information on only a small number of transactions, the . . . demand letter does not come close to creating a national firearms registry. In light of the court decisions regarding ATF s exercise of its statutory authority in this context, we conclude that the Demand Letter 3 initiative does not violate the restriction on the consolidation or centralization of firearms records. <10. Conclusion> Two of the data systems under review, OBRIS and MS, comply with the provision in ATF s fiscal year 2012 appropriation prohibiting the use of funds for the consolidation or centralization of firearms records. ATF collects and maintains firearms transaction information in each system incident to the implementation of specific statutory authority and it does not exceed those statutory purposes. ATF s A2K system for in-business FFLs and its maintenance of certain firearms information pertinent to criminal investigations in FRNP are likewise consistent with the appropriations act restriction. However, ATF s collection and maintenance of out-of-business A2K records on the server at NTC violated the restriction, as did its collection and maintenance of data from certain FFLs as part of the SWBWOC Program. In both cases, ATF s aggregation of information was not supported by any statutory purpose. ATF s failure to comply with the prohibition on the consolidation or centralization of firearms records also violated the Antideficiency Act. The Antideficiency Act prohibits making or authorizing an expenditure or obligation that exceeds available budget authority. As a result of the statutory prohibition, ATF had no appropriation available for the salaries or administrative expenses of consolidating or centralizing records, or portions of records, of the acquisition and disposition of firearms in connection with the SWBWOC Program or A2K for out-of-business records. The Antideficiency Act requires that the agency head shall report immediately to the President and Congress all relevant facts and a statement of actions taken. In addition, the agency must send a copy of the report to the Comptroller General on the same date it transmits the report to the President and Congress. Appendix VIII: GAO Contacts and Staff Acknowledgments <11. GAO Contacts> <12. Staff Acknowledgments> In addition to the contact named above, Dawn Locke (Assistant Director) and Rebecca Kuhlmann Taylor (Analyst-in-Charge) managed this work. In addition, Willie Commons III, Susan Czachor, Michele Fejfar, Justin Fisher, Farrah Graham, Melissa Hargy, Jan Montgomery, and Michelle Serfass made significant contributions to the report. Also contributing to this report were Dominick M. Dale, Juan R. Gobel, Eric D. Hauswirth, Ramon J. Rodriguez, and Eric Winter.
Why GAO Did This Study ATF is responsible for enforcing certain criminal statutes related to firearms, and must balance its role in combatting the illegal use of firearms with protecting the privacy rights of law-abiding gun owners. As part of this balance, FFLs are required to maintain firearms transaction records, while ATF has the statutory authority to obtain these records under certain circumstances. ATF must also comply with an appropriations act provision that restricts the agency from using appropriated funds to consolidate or centralize FFL records. GAO was asked to review ATF's compliance with this restriction. This report (1) identifies the ATF data systems that contain retail firearms purchaser data and (2) determines whether selected ATF data systems comply with the appropriations act restriction and adhere to ATF policies. GAO reviewed ATF policy and program documents, observed use of data systems at NTC, reviewed a generalizable sample of one system's records, and interviewed ATF officials at headquarters and NTC. What GAO Found To carry out its criminal and regulatory enforcement responsibilities, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has 25 firearms-related data systems, 16 of which contain retail firearms purchaser information from a federal firearms licensee (FFL)—such as firearms importers and retailers. GAO selected 4 systems for review that are used in the firearms tracing process, based on factors such as the inclusion of retail purchaser information and original data. The Out-of-Business Records Imaging System (OBRIS) stores nonsearchable images of firearms records from out-of-business FFLs. Such FFLs are required by law to provide their records to ATF. Access 2000 (A2K) provides servers for National Tracing Center (NTC) personnel to electronically search participating FFLs' records at their premises for firearms disposition information during a trace. The Firearm Recovery Notification Program (FRNP) maintains information on firearms that have not yet been recovered by law enforcement, but are suspected of being involved in criminal activity and are associated with an ATF criminal investigation. Multiple Sales (MS) includes firearms information from multiple sales reports. FFLs are required by law to report to ATF sales of two or more revolvers or pistols during 5 consecutive business days. ATF policy requires that certain information in MS be deleted after 2 years if the firearm has not been connected to a trace. Of the 4 data systems, 2 fully comply and 2 did not always comply with the appropriations act restriction prohibiting consolidation or centralization of FFL records. ATF addressed these compliance issues during the course of GAO's review. ATF also does not consistently adhere to its policies. Specifically: OBRIS complies with the restriction and adheres to policy. A2K for in-business FFL records complies with the restriction. A2K for out-of-business FFL records did not comply with the restriction because ATF maintained these data on a single server at ATF. Thus, ATF deleted the records in March 2016. In addition, ATF policy does not specify how, if at all, FFLs may use A2K records to meet out-of-business record submission requirements. Such guidance would help ensure they submit such records. FRNP generally complies with the restriction. However, a 2007 through 2009 program using FRNP did not comply. ATF cancelled this program in 2009 and deleted the related data in March 2016. 
Also, a technical defect allows ATF agents to access FRNP data—including purchaser data—beyond what ATF policy permits. Aligning system capability with ATF policy would ensure that firearms purchaser data are only provided to those with a need to know. MS complies with the restriction, but ATF inconsistently adheres to its policy when deleting MS records. Specifically, until May 2016, MS contained over 10,000 names that were not consistently deleted within the required 2 years. Aligning the MS deletion policy with the timing of deletions could help ATF maintain only useful MS purchaser data and safeguard privacy. What GAO Recommends GAO recommends that ATF provide guidance to FFLs participating in A2K on the provision of records to ATF when they go out of business; align system capability with ATF policy to limit access to FRNP firearms purchaser information for ATF agents; and align timing and ATF policy for deleting MS records. ATF concurred with our recommendations.
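The 2-year retention rule summarized above lends itself to a simple illustration. The sketch below is not ATF's code, and the record fields are hypothetical; it only expresses the stated policy that purchaser information in MS should be deleted once a record is more than 2 years old unless the firearm has been connected to a trace.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=2 * 365)   # the 2-year retention period in the policy

def purge_purchaser_data(records: list[dict], today: date) -> int:
    """Blank out purchaser fields on untraced MS records older than 2 years.

    Returns the number of records purged. Field names are hypothetical.
    """
    purged = 0
    for rec in records:
        too_old = today - rec["entered_on"] > RETENTION
        if too_old and not rec["connected_to_trace"]:
            rec["purchaser_name"] = None
            rec["purchaser_address"] = None
            purged += 1
    return purged

sample = [{"entered_on": date(2013, 4, 1), "connected_to_trace": False,
           "purchaser_name": "J. Doe", "purchaser_address": "123 Main St."}]
print(purge_purchaser_data(sample, date(2016, 5, 1)))   # -> 1
```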
<1. Agencies Have Improved Sharing as They Build the ISE, but a Better Roadmap and System of Accountability Could Guide Future Development> <1.1. ISE Has Improved Sharing By Advancing Goals and Priority Programs> In our July 2011 report, we noted that the Program Manager for the ISE and key security agencies have continued to make progress in addressing issues that keep terrorism-related information sharing on our high-risk list. For example, they developed a corrective action plan or framework to implement a set of initial goals and priority programs that help to establish the ISE, partly responding to recommendations we made in 2008. Goals included reducing barriers to sharing and improving information sharing practices with federal, state, local, tribal, and foreign partners. Priority programs included developing common information sharing standards; building a national integrated network of fusion centers; implementing a system whereby state and local partners can report suspicious activity; and controlling and handling sensitive but unclassified information. Activities under the framework also included establishing information sharing incentive programs for federal employees and strengthening privacy, civil rights, and civil liberties considerations. The administration has recognized, however, that the framework was useful in promoting this initial set of programs and activities, but it did not define what the fully functioning ISE is to achieve and include. Therefore, as discussed in the following sections, the framework does not provide the comprehensive roadmap that is needed to further develop and implement the ISE going forward. <1.2. More Fully Defining the ISE, Related Costs, and What Work Remains Would Help Provide a Roadmap and Accountability for Results> <1.2.1. Defining an End State Vision> The Program Manager has acknowledged the importance of defining what the ISE is intended to achieve and include or the end state vision and noted that he is doing so as part of ongoing efforts to update the 2007 National Strategy for Information Sharing. He said that this update will drive future ISE implementation efforts and will help individual agencies adapt their information sharing policies, related business processes, architectures, standards, and systems to effectively operate within the ISE. The Program Manager also noted that after development of the end state vision is completed, supporting implementation plans will be needed to help guide achievement of the vision, including plans that define what activities and initiatives will be needed to achieve the end state and guide ISE development and implementation. Such plans would be consistent with our recommendation for a roadmap if they contain key elements such as roles, responsibilities, and time frames for these activities, among other things. Consistent with the Intelligence Reform Act, the ISE is to provide the means for sharing terrorism-related information across five communities homeland security, law enforcement, defense, foreign affairs, and intelligence in a manner that, among other things, leverages ongoing efforts. As we reported in July 2011, the ISE has primarily focused on the homeland security and law enforcement communities and related sharing between the federal government and state and local partners, in part to align with information sharing priorities outlined by the administration. 
We recognize that recent homeland security incidents and the changing nature of domestic threats make continued progress in improving sharing between federal, state, and local partners critical. However, consistent with the Intelligence Reform Act, the ISE is intended to provide the means for sharing terrorism-related information across all five communities. The Program Manager and ISE agencies have not yet ensured that initiatives within the foreign affairs, defense, and intelligence communities have been fully leveraged by the ISE to enhance information sharing within and across all communities. For example, according to Department of State (State) officials, the department shares terrorism-related information with other agencies through a variety of efforts and initiatives related to national and homeland security, but State initiated these efforts independently and not through the Office of the Program Manager. According to the Program Manager, State also possesses information about entrants to the country that could be valuable to the ISE. However, in April 2011, State officials said that the Office of the Program Manager had not contacted the department s coordinator for the ISE to request information on programs or initiatives related to people entering the country to determine if this information could be useful to the broader ISE communities. Further, intelligence agencies have technology initiatives including new ways of ensuring that authorized users have access to, and are able to search across, classified systems and networks to facilitate information sharing but it is not clear to what extent transferring this best practice to non-classified information is being considered under the ISE. The Program Manager also noted that his office has engaged all five communities in ISE activities. For example, in addition to working with the homeland security and law enforcement communities, he said his office has worked with State to standardize terrorism-related information sharing agreements with foreign governments; with the Department of Defense to develop information technology standards that allow different agencies to exchange information; and the intelligence community to develop terrorism-related information products for state, local, and tribal governments. He also noted that all five communities have been afforded opportunities to help set ISE programmatic priorities. However, the Program Manager and agencies had not yet taken actions to ensure that all relevant information sharing initiatives across the five communities are fully leveraged, which could help enhance information sharing government-wide. In our July 2011 report, we recommended that they take such actions. They generally agreed and have started to address this issue. The Program Manager and agencies have not yet identified the incremental costs necessary to implement the ISE, as envisioned by the Intelligence Reform Act. Our prior work shows that cost information can help agencies allocate resources and investments according to priorities and constraints, track costs and performance, and shift such investments and resources as appropriate. We recognize that developing accurate and reliable incremental cost estimates for the ISE is a difficult undertaking, complicated further by the fact that the Program Manager and agencies are still defining what the ISE is, is to include, and is to attain. 
In our July 2011 report, we recommended that the Program Manager, in coordination with the Office of Management and Budget, task the key ISE agencies to define, to the extent possible, the incremental costs needed to help ensure successful implementation of the ISE. The Program Manager acknowledged the importance of identifying incremental costs and noted that the Office of the Program Manager will continue to work directly with the Office of Management and Budget to provide agencies with budget guidance that calls for them to identify their costs to implement the ISE. The Intelligence Reform Act requires the Program Manager to, among other things, monitor implementation of the ISE by federal departments and agencies to ensure adequate progress is being made and regularly report the findings to Congress. In June 2008, we reported that the Office of the Program Manager was monitoring ISE implementation, as demonstrated through its annual report to Congress, but that such monitoring did not include an overall assessment of progress in implementing the ISE and how much work remained. Thus, we recommended, among other things, that the Program Manager develop a way to measure and demonstrate results and to show the extent to which the ISE had been implemented, as well as more fully define the key milestones needed to achieve the ISE. The Program Manager generally agreed, and in January 2011 the Information Sharing and Access Interagency Policy Committee (ISA IPC) and the Office of the Program Manager initiated an effort to make ISE priority programs and related goals more transparent and to better monitor progress. Specifically, according to the Deputy Program Manager, agencies that are responsible for implementing ISE priority programs are leading efforts to establish 3-, 6-, and 12-month goals for these programs. Information on progress made in reaching these goals may be included in future ISE annual reports. In addition, he explained that the Office of the Program Manager is working with agencies to develop a performance management framework that will be linked to the updated national strategy. These actions should help to provide an accurate accounting of progress to Congress and other stakeholders and would be consistent with the criteria we use to evaluate a program's risk, which call for a way to demonstrate progress and results. Our prior work on high-risk issues shows that a strong commitment from top leadership to address problems and barriers to sharing terrorism-related information is important to reducing related risks. In July 2009, the White House established the ISA IPC to subsume the role of its predecessor interagency body, the Information Sharing Council. The Program Manager at that time cited concerns about the Program Manager's authority and provided recommendations intended to help strengthen the ISE effort. For example, among other things, he recommended that the Program Manager be appointed by the President and serve as co-chair of the ISA IPC. Subsequently, both changes were implemented, which were intended to bring high-level policy decision making and oversight to the development of the ISE. At the time of our review, it was too early to tell how the new structure would affect the continued development and implementation of the ISE and whether the Program Manager's new role would provide him sufficient leverage and authority to ensure that agencies fully participate in the ISE. <1.3. 
The Enterprise Architecture Management Foundation for Supporting ISE Implementation Could Be Improved> In our July 2011 report, we noted that the process of defining an enterprise architecture (EA) for the ISE could help the Program Manager and agencies in their efforts to define the current operational and technological capabilities within the ISE, the future capabilities needed, and a plan to transition between the two. Under an EA approach, agencies are to define the business processes involved in information sharing, map out the exchange of information to be achieved, and build the technology and other resources they need to accomplish the sharing into their EA plans and budget requests, among other things. Doing so could help the government more fully define the necessary components of the ISE. We reported that agencies had begun to build ISE initiatives, such as suspicious activity reporting, into their EAs. To better define ISE EA guidance and more effectively manage the architecture, we recommended that the Program Manager, ISA IPC, and agencies establish an EA management plan for the ISE that improves ISE EA management practices and addresses missing architecture content, along with a mechanism to ensure implementation. The Program Manager and the Office of Management and Budget generally agreed and are taking steps to address the intent of this recommendation. <2. Federal Agencies Are Helping Fusion Centers Build Capabilities, but Have More Work to Help Them Sustain Operations and Measure Their Value> <2.1. Federal Agencies Have Provided Resources to Develop a National Fusion Center Network, but Centers Are Concerned about Sustaining Operations> The federal government recognizes that fusion centers represent a critical source of local information about potential threats, including homegrown terrorism, and a means to disseminate terrorism-related information and intelligence from federal sources. DHS, which has a statutory lead for state and local information sharing, in collaboration with the Department of Justice (DOJ) and the Program Manager for the ISE, has taken steps to partner with and leverage fusion centers, a top priority for the ISE. In accordance with the 9/11 Commission Act, over the years, DHS has provided centers with a variety of support, including personnel assigned to centers, access to classified and unclassified homeland security and terrorism information and systems, training and technical assistance, and federal grant funding. For instance, as of July 2010, DHS had deployed 74 intelligence officers to fusion centers. In addition, states have reported to DHS that they have used about $426 million in grant funding from fiscal year 2004 through 2009 to support fusion-related activities nationwide. In September 2010, we reported that fusion centers cited federal funding as critical to their long-term sustainability and to achieving and maintaining a set of baseline capabilities. These baseline capabilities were defined by the federal government and fusion centers as being necessary for centers to be considered capable of performing basic functions in the national information sharing network. They include, for example, capabilities related to information gathering, recognition of indicators and warnings, and intelligence and information dissemination. According to a survey of all fusion centers conducted by DHS and the Program Manager for the ISE, of the 52 fusion centers that responded, on average, over half of their 2010 budgets were supported by federal funding. 
Concerns about and challenges related to funding for sustainability are long-standing issues. Fusion centers do not have their own federal funding source but must compete each year with other state homeland security, law enforcement, and emergency management agencies and missions for a portion of the total federal homeland security grant funding awarded to each state. We and others have reported on the centers' concerns about the lack of a predictable funding source. For example, in September 2010 we reported that officials in all 14 fusion centers we contacted stated that without sustained federal funding, centers could not expand operations to close the gaps between their current operations and the baseline capabilities, negatively impacting their ability to function as part of the national network. Senior DHS officials have acknowledged the fusion centers' concerns, and in an effort to further prioritize the development of the national network of fusion centers, DHS revised fiscal year 2011 grant guidance. It now requires, among other enhancements, that (1) each state submit a fusion center investment justification and (2) the justification be related to mitigating capability gaps. Nevertheless, concerns about federal funding could be exacerbated given that overall homeland security grant funding of $2.1 billion for fiscal year 2011 is $780 million less than the previous year. <2.2. Federal Agencies Plan to Assess Centers' Capabilities and Develop Performance Metrics to Determine Centers' Value to the ISE> Consistent with efforts to develop this national network of fusion centers, federal agencies have also issued a series of guidance documents, including the baseline capabilities, to support fusion centers in establishing their operations. The baseline capabilities are intended to help ensure that a fusion center will have the necessary structures, processes, and tools in place to support the gathering, processing, analysis, and dissemination of terrorism, homeland security, and law enforcement information. As a first step, the Program Manager for the ISE, DHS, and DOJ conducted a systematic assessment of centers' capabilities in 2010 and analyzed results to identify strengths, gaps, and weaknesses across the national network of fusion centers. The assessment specifically focused on four operational capabilities identified as critical, which are generally defined as a fusion center's ability to receive, analyze, disseminate, and gather information. The assessment also focused on centers' progress in implementing privacy, civil rights, and civil liberties protections. The results of this assessment and a subsequent survey effort conducted in January 2011 showed that over half of the 72 fusion centers had developed and implemented a final written plan, policy, or standard operating procedure to achieve three of the four capabilities: receive (44 centers), disseminate (46 centers), and gather (42 centers). However, 37 centers indicated that they had not implemented a plan related to developing capabilities to analyze time-sensitive information. According to DHS officials who oversee the fusion center initiative, using the results of the 2010 assessment, along with feedback obtained from fusion center directors, DHS developed and implemented a Fusion Center Assessment Process in 2011. This process will be conducted annually to identify capability gaps, enable gap mitigation planning, and continue to drive the allocation of resources to mitigate those gaps. 
DHS expects to release the results of the 2011 assessment in January 2012, according to DHS officials. We also reported in September 2010 that if centers are to receive continued federal financial support, it is important that they are also able to demonstrate their impact and value added to the national network and the nation s overall information sharing goals. However, the federal government had not established standard performance measures that it could use across all fusion centers to assess their contributions. We recommended that DHS define the steps it needed to take to design and implement a set of measures and commit to a target timeframe for their completion. According to senior DHS officials overseeing the office, in March 2011, the State and Local Program Office and a representative group of fusion center directors began developing an overarching strategy document to define the vision, mission, goals, objectives, and specific outcomes that fusion centers will be expected to achieve, and associated performance measures for the national network of fusion centers. According to these officials, such performance measures are to be in place by the end of 2011. <2.3. DHS and DOJ Are Helping Centers Develop Privacy and Civil Liberties Policies and Protections but Monitoring Implementation Will Be Important> Because fusion centers collect, analyze, and disseminate information on potential criminal and terrorist threats, some entities, such as the American Civil Liberties Union, have raised concerns that centers are susceptible to privacy and civil liberties violations. We reported in September 2010 that consistent with federal requirements, DHS and DOJ have provided technical assistance and training to help centers develop privacy and civil liberties policies and protections. For example, DHS and DOJ provided fusion centers with guidance and technical assistance, including a template on which to base a privacy policy and a process for reviewing centers policies to ensure they are consistent with federal requirements. DHS reported that all operational fusion centers now have a final, approved privacy policy in place that is at least as comprehensive as the ISE Privacy Guidelines. With respect to training, we reported that DHS, in partnership with DOJ and other entities, has implemented a three-part training and technical assistance program in support of fusion centers efforts to provide appropriate privacy, civil rights, and civil liberties training for personnel. We also reported that DHS, in conjunction with DOJ and the Program Manager for the ISE, was taking steps to assess the implementation of centers privacy protections to ensure that the protections described in centers policies were implemented in accordance with all applicable privacy regulations, laws, and constitutional protections. Federal agencies are also encouraging centers to assess their own protections to identify any existing privacy and civil liberties risks and to develop strategies to mitigate the risks. Continuous assessment and monitoring are key steps to help ensure that fusion centers are implementing privacy and civil liberties protections and that DHS, and other federal agencies, are supporting them in their efforts. <3. 
DHS Has Enhanced Support to State and Local Partners but Could Better Define the Actions It Will Take to Meet This Mission and Measure Progress> In addition to supporting fusion centers, DHS is responsible for sharing terrorism-related information with its state and local partners, and within DHS, I&A is the designated lead component for this mission. In December 2010, we reported that I&A had initiatives underway to identify state and local information needs, develop intelligence products to meet these needs, and obtain more detailed feedback on the timeliness and usefulness of these products, among other things. I&A also provided a number of services to its state and local partners, primarily through fusion centers, that were generally well received by the state and local officials we contacted. For example, in addition to deploying personnel and providing access to networks disseminating classified and unclassified information, I&A provides training directly to state and local personnel and operates a 24-hour service to respond to state and local requests for information and other support. We also reported that a congressional committee that had been trying to hold I&A accountable for achieving its state and local mission was concerned about I&A's inability to demonstrate the priority and level of investment it is giving to this mission compared to its other functions, as evidenced by hearings conducted over the past several years. We reported that, historically, I&A had focused its state and local efforts on addressing statutory requirements and responding to I&A leadership priorities. However, I&A had not yet defined how it plans to meet its state and local information-sharing mission by identifying and documenting the specific programs and activities that are most important for executing this mission. Our prior work has found that successful organizations clearly articulate the programs and activities that are needed to achieve specified missions or results, and the organization's priorities, among other things. Further, we reported that I&A had not defined what state and local information-sharing results it expected to achieve from its program investments and the measures it would use to track the progress it is making in achieving these results. For example, all of I&A's state and local measures provided descriptive information regarding activities and services that I&A provided, such as the percentage of fusion centers with I&A personnel and the number of requests for support. However, none of these measures accounted for the actual results, effects, or impacts of programs and activities or the overall progress I&A is making in meeting its partners' needs. For example, the personnel measure did not provide information related to the effectiveness of the I&A personnel or the value they provide to their customers, such as enhanced information sharing, analytic capabilities, and operational support. To help I&A strengthen its efforts to share information with state and local partners, we recommended, among other things, that I&A (1) identify and document priority programs and activities related to its state and local mission, and (2) take actions to develop additional performance measures that gauge the results that I&A's information-sharing efforts have achieved and how they have enhanced homeland security. 
By taking these steps, I&A could potentially increase the usefulness of its products and services; the effectiveness of its investments; and the organization's accountability to Congress, key stakeholders, and the public. DHS agreed with these recommendations and expects to address them as part of new strategic planning efforts. <4. Agencies Are Addressing Watchlisting Gaps but Could Benefit from Assessing Impacts of Changes> The Executive Office of the President's review of the December 2009 attempted airline bombing found that the U.S. government had sufficient information to have uncovered and potentially disrupted the attack, but shortcomings in the nominations process resulted in the failure to nominate the attempted bomber for inclusion in the Terrorist Screening Database (TSDB). Thus, screening agencies were unable to identify him as a potential threat and take action. The Executive Office of the President tasked departments and agencies to undertake a number of corrective actions to help address such gaps. We have ongoing work to assess the changes implemented and their impacts. This work is assessing (1) the actions the federal government has taken since the attempted attack to strengthen the watchlist nominations process, as well as any resulting challenges and impacts; (2) how the composition of the TSDB changed as a result of agency actions; and (3) how screening agencies are addressing vulnerabilities exposed by the attempted attack, the outcomes of related screening, and the extent to which federal agencies are assessing the impacts of this screening. Our preliminary observations show that federal agencies have made progress in implementing corrective actions to address problems in watchlist-related processes that were exposed by the December 2009 attempted attack. These actions are intended to address problems in the way agencies share and use information to nominate individuals to the TSDB, and use the watchlist to prevent persons of concern from boarding planes to the United States or entering the United States at a port of entry. For example, according to TSA, the agency's assumption of the screening function from air carriers under the Secure Flight program has improved the government's ability to correctly determine whether passengers are on the No Fly or Selectee lists and has resulted in more individuals on these lists being identified and denied boarding an aircraft or subjected to additional physical screening before they board, as appropriate. Also, in April 2011, TSA began screening airline passengers against a broader set of TSDB information, which has helped mitigate risks. As part of its border and immigration security mission, CBP implemented the Pre-Departure Targeting Program to expand its practice of identifying high-risk and improperly documented passengers, including those in the TSDB, before they board flights bound for the United States and recommending that air carriers deny boarding to individuals that the agency would likely deem inadmissible upon arrival at a U.S. airport. This program has resulted in more known or suspected terrorists being denied boarding. Our preliminary work also suggests that the outcomes of these DHS programs demonstrate the homeland security benefits of terrorist-related screening, but such screening could have impacts on agency resources and the traveling public. 
For example, new or expanded screening programs could require agencies to dedicate more staff to check traveler information against watchlist information and take related law enforcement actions. Also, new or expanded screening programs could result in more individuals misidentified as being in the TSDB, which can cause traveler delays and other inconvenience. It will be important for agencies to monitor and address these impacts as appropriate moving forward. We plan to issue a report with the final results of our work later this year. Chairman Lieberman, Ranking Member Collins, and Members of the Committee, this concludes my statement for the record. <5. Contacts and Acknowledgments> For additional information regarding this statement, please contact Eileen R. Larence at (202) 512-6510 or [email protected]. In addition, Eric Erdman, Mary Catherine Hult, Thomas Lombardi, Victoria Miller, and Hugh Paquette made key contributions to this statement. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Related GAO Products Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011. Information Sharing Environment: Better Road Map Needed to Guide Implementation and Investments. GAO-11-455. Washington, D.C.: July 21, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Information Sharing: DHS Could Better Define How It Plans to Meet Its State and Local Mission and Improve Performance Accountability. GAO-11-223. Washington, D.C.: December 16, 2010. Information Sharing: Federal Agencies Are Helping Fusion Centers Build and Sustain Capabilities and Protect Privacy, but Could Better Measure Results. GAO-10-972. Washington, D.C.: September 29, 2010. Terrorist Watchlist Screening: FBI Has Enhanced Its Use of Information from Firearm and Explosives Background Checks to Support Counterterrorism Efforts. GAO-10-703T. Washington, D.C.: May 5, 2010. Homeland Security: Better Use of Terrorist Watchlist Information and Improvements in Deployment of Passenger Screening Checkpoint Technologies Could Further Strengthen Security. GAO-10-401T. Washington, D.C.: January 27, 2010. Information Sharing: Federal Agencies Are Sharing Border and Terrorism Information with Local and Tribal Law Enforcement Agencies, but Additional Efforts Are Needed. GAO-10-41. Washington, D.C.: December 18, 2009. Information Sharing Environment: Definition of the Results to Be Achieved in Improving Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-492. Washington, D.C.: June 25, 2008. Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007. Terrorist Watch List Screening: Efforts to Help Reduce Adverse Effects on the Public. GAO-06-1031. Washington, D.C.: September 29, 2006. Information Sharing: The Federal Government Needs to Establish Policies and Processes for Sharing Terrorism-Related and Sensitive but Unclassified Information. GAO-06-385. Washington, D.C.: March 17, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study A breakdown in information sharing was a major factor contributing to the failure to prevent the September 11, 2001, terrorist attacks. Since then, federal, state, and local governments have taken steps to improve sharing. This statement focuses on government efforts to (1) establish the Information Sharing Environment (ISE), a government-wide approach that facilitates the sharing of terrorism-related information; (2) support fusion centers, where states collaborate with federal agencies to improve sharing; (3) provide other support to state and local agencies to enhance sharing; and (4) strengthen use of the terrorist watchlist. GAO's comments are based on products issued from September 2010 through July 2011 and selected updates in September 2011. For the updates, GAO reviewed reports on the status of Department of Homeland Security (DHS) efforts to support fusion centers, and interviewed DHS officials regarding these efforts. This statement also includes preliminary observations based on GAO's ongoing watchlist work. For this work, GAO is analyzing the guidance used by agencies to nominate individuals to the watchlist and agency procedures for screening individuals against the list, and is interviewing relevant officials from law enforcement and intelligence agencies, among other things. What GAO Found The government continues to make progress in sharing terrorism-related information among its many security partners, but does not yet have a fully functioning ISE in place. In prior reports, GAO recommended that agencies take steps to develop an overall plan or roadmap to guide ISE implementation and establish measures to help gauge progress. These measures would help determine what information sharing capabilities have been accomplished and which remain to be developed, as well as what difference these capabilities have made to improve sharing and homeland security. Accomplishing these steps, as well as ensuring agencies have the necessary resources and leadership commitment, should help strengthen sharing and address issues GAO has identified that make information sharing a high-risk area. Federal agencies are helping fusion centers build analytical and operational capabilities, but have more work to complete to help these centers sustain their operations and measure their homeland security value. For example, DHS has provided resources, including personnel and grant funding, to develop a national network of centers. However, centers are concerned about their ability to sustain and expand their operations over the long term, negatively impacting their ability to function as part of the network. Federal agencies have provided guidance to centers and plan to conduct annual assessments of centers' capabilities and develop performance metrics by the end of 2011 to determine centers' value to the ISE. DHS and the Department of Justice are providing technical assistance and training to help centers develop privacy and civil liberties policies and protections, but continuous assessment and monitoring of policy implementation will be important to help ensure the policies provide effective protections. In response to its mission to share information with state and local partners, DHS's Office of Intelligence and Analysis (I&A) has taken steps to identify these partners' information needs, develop related intelligence products, and obtain more feedback on its products. 
I&A also provides a number of services to its state and local partners that were generally well received by the state and local officials we contacted. However, I&A has not yet defined how it plans to meet its state and local mission by identifying and documenting the specific programs and activities that are most important for executing this mission. The office also has not developed performance measures that would allow I&A to demonstrate the expected outcomes and effectiveness of state and local programs and activities. In December 2010, GAO recommended that I&A address these issues, which could help it make resource decisions and provide accountability over its efforts. GAO's preliminary observations indicate that federal agencies have made progress in implementing corrective actions to address problems in watchlist-related processes that were exposed by the December 25, 2009, attempted airline bombing. These actions are intended to address problems in the way agencies share and use information to nominate individuals to the watchlist, and use the list to prevent persons of concern from boarding planes to the United States or entering the country, among other things. These actions can also have impacts on agency resources and the public, such as traveler delays and other inconvenience. GAO plans to report the results of this work later this year. What GAO Recommends GAO is not making new recommendations, but has made recommendations in prior reports to federal agencies to enhance information sharing. The agencies generally agreed and are making progress, but full implementation of these recommendations is needed.
<1. Background> A safe and secure aviation system is a critical component to securing the nation s overall physical infrastructure and maintaining its economic vitality. Billions of dollars and a myriad of programs and policies have been devoted to achieving such a system. Critical to ensuring aviation security are screening checkpoints, at which screening personnel check over 2 million individuals and their baggage each day for weapons, explosives, and other dangerous articles that could pose a threat to the safety of an aircraft and those aboard it. All passengers who seek to enter secure areas at the nation s airports must pass through screening checkpoints and be cleared by screeners. In addition, many airline and airport employees, including flight crews, ground personnel, and concession vendors, have to be cleared by screeners. At the nation s 429 commercial airports that are subject to security requirements, screeners use a variety of technologies and procedures to screen individuals. These include x-ray machines to examine carry-on baggage, metal detectors to identify any hidden metallic objects, and physical searches of items, including those that cannot be scanned by x-rays, such as baby carriers or baggage that has been x-rayed and contains unidentified objects. In response to the terrorist attacks of September 11, 2001, the Federal Aviation Administration (FAA) and the air carriers implemented new security controls to improve security. These actions included increased screening of baggage and passengers at airport checkpoints with the use of explosives trace detection devices and hand-held metal detectors, the mandatory removal of laptop computers from carrying cases, and the removal of shoes. They included additional screening of randomly selected passengers at an airline s boarding gate. Although these initiatives have been a visible sign of heightened security procedures, they have also, in some instances, caused longer security delays, inconvenienced the traveling public, and raised questions about the merits of using these techniques on assumed lower-risk travelers, such as young children. Congress has also taken actions to improve aviation security. In November 2001, it passed the Aviation and Transportation Security Act, which transferred aviation security from FAA to the newly created TSA and directed TSA to take over responsibility for airport screening. The Act also left to TSA s discretion whether to establish requirements to implement trusted passenger programs and use available technologies to expedite security screening of passengers who participate in such programs, thereby allowing security screening personnel to focus on those passengers who should be subject to more extensive screening. In response to this Act, officials representing aviation and business travel groups have proposed developing a registered traveler program. Under their proposals, travelers who voluntarily provide personal information and clear a background check would be enrolled as registered travelers. These participants would receive some form of identification, such as a card that includes a unique personal characteristic like a fingerprint, which they would use at an airport to verify their identity and enrollment in the program. Because they would have been prescreened, they would be entitled to different security screening procedures at the airport. These could be as simple as designating a separate line for registered travelers, or could include less intrusive screening. 
Although TSA had initially resisted such a program because of concerns that it could weaken the airport security system, it has recently changed its position and has begun assessing the feasibility and need for such a program and considering the implementation of a test program. The concept underlying a registered traveler program is similar to one that TSA has been studying for transportation workers a Transportation Worker Identity Credential (TWIC) that could be used to positively identify transportation workers such as pilots and flight attendants and to expedite their processing at airport security checkpoints. TSA had been studying the TWIC program for several months. Initially, the agency had planned to implement the TWIC program first, saying that any registered traveler program would be implemented after establishing the TWIC program. In recent months, congressional appropriations restrictions have caused TSA to postpone TWIC s development. According to a senior agency official, however, TSA was still planning to go forward with studying the registered traveler program concept. <2. A Registered Traveler Program Is Intended to Improve Airport Security While Reducing the Inconvenience of Security Screening> Although most of the 22 stakeholders we interviewed supported a registered traveler program, several stakeholders opposed it. Our literature review and supporters of the program whom we interviewed identified two primary purposes for such a program improving the quality and efficiency of airport security and reducing the inconvenience that some travelers have experienced by reducing uncertainties about the length of delay and the level of scrutiny they are likely to encounter. The literature we reviewed and more than a half-dozen of the 22 stakeholders we contacted suggested that such a program could help improve the quality and efficiency of security by allowing security officials to target resources at potentially higher risk travelers. Several stakeholders also indicated that it could reduce the inconvenience of heightened security measures for some travelers, thus encouraging Americans to fly more often, and thereby helping to improve the economic health of the aviation industry. Representatives of air traveler groups identified other potential uses of a registered traveler program that were not directly linked to improving aviation security, such as better tracking of frequent flier miles for program participants. <2.1. Many Stakeholders We Contacted Indicated That a Registered Traveler Program Could Potentially Improve Aviation Security and More Effectively Target Resources> Many of the 22 stakeholders we contacted and much of the literature we reviewed identified the improvement of aviation security as a key purpose for implementing a registered traveler program. Such a program would allow officials to target security resources at those travelers who pose a greater security risk or about whom little is known. This concept is based on the idea that not all travelers present the same threat to aviation security, and thus not everyone requires the same level of scrutiny. Our recent work on addressing homeland security issues also highlights the need to integrate risk management into the nation s security planning and to target resources at high-priority risks. 
The concept is similar to risk-based security models that have already been used in Europe and Israel, which focus security on identifying risky travelers and more appropriately matching resources to those risks, rather than attempting to detect objects on all travelers. For example, one study suggested that individuals who had been prescreened through background checks and credentialed as registered travelers be identified as low risk and therefore subjected to less stringent security. This distinction would allow security officials to direct more resources and potentially better screening equipment at other travelers who might pose a higher security risk, presumably providing better detection and increased deterrence. In addition, several stakeholders also suggested that a registered traveler program would enable TSA to more efficiently use its limited resources. Several of these stakeholders suggested that a registered traveler program could help TSA more cost-effectively focus its equipment and personnel needs to better meet its security goals. For example, two stakeholders stated that TSA would generally not have to intensively screen registered travelers' checked baggage with explosives detection systems that cost about $1 million each. As a result, TSA could reduce its overall expenditures for such machines. In another example, a representative from a major airline suggested that because registered travelers would require less stringent scrutiny, TSA could provide a registered traveler checkpoint lane that would enable TSA to use fewer screeners at its checkpoint lanes; this would reduce the number of passenger screeners from the estimated 33,000 that it plans to hire nationwide. In contrast, several stakeholders and TSA officials said that less stringent screening for some travelers could weaken security. For example, two stakeholders expressed concerns that allowing some travelers to undergo less stringent screening could weaken overall aviation security by introducing vulnerabilities into the system. Similarly, the first head of TSA had publicly opposed the program because of the potential for members of sleeper cells (terrorists who spend time in the United States building up a law-abiding record) to become registered travelers in order to take advantage of less stringent security screening. The program manager heading TSA's Registered Traveler Task Force explained that the agency has established a baseline level of screening that all passengers and workers will be required to undergo, regardless of whether they are registered. Nevertheless, a senior TSA official told us that the agency now supports the registered traveler concept as part of developing a more risk-based security system, which would include a refined version of the current automated passenger prescreening system. While the automated prescreening system is used on all passengers, it focuses on those who are most likely to present threats. In contrast to a registered traveler program, the automated system is not readily apparent to air passengers. Moreover, the registered traveler program would focus on those who are not likely to present threats, and it would be voluntary. Some stakeholders we contacted said that a registered traveler program, if implemented, should serve to complement the automated system, rather than replace it. <2.2. 
Some Believe That a Registered Traveler Program Could Potentially Reduce the Inconvenience of Security Screening Procedures> According to the literature we reviewed and our discussions with several stakeholders, reducing the inconvenience of security screening procedures implemented after September 11, 2001, constitutes another major purpose of a registered traveler program, in addition to potentially improving security. The literature and these stakeholders indicated that participants in a registered traveler program would receive consistent, efficient, and less intrusive screening, which would reduce their inconvenience and serve as an incentive to fly more, particularly if they are business travelers. According to various representatives of aviation and business travelers groups, travelers currently face uncertainty regarding the time needed to get through security screening lines and inconsistency about the extent of screening they will encounter at various airports. For example, one stakeholder estimated that prior to September 11, 2001, it took about 5 to 8 seconds, on average, for a traveler to enter, be processed, and clear a security checkpoint; since then, it takes about 20 to 25 seconds, on average, resulting in long lines and delays for some travelers. As a result, travelers need to arrive at airports much earlier than before, which can result in wasted time at the airport if security lines are short or significant time spent in security lines if they are long. Additionally, a few stakeholders stated that travelers are inconvenienced when they are subjected to personal searches or secondary screening at the gates for no apparent reason. While some stakeholders attributed reductions in the number of passengers traveling by air to these inconveniences, others attributed it to the economic downturn. Some literature and three stakeholders indicated that travelers, particularly business travelers making shorter trips (up to 750 miles), have as a result of these inconveniences reduced the number of flights they take or stopped flying altogether, causing significant economic harm to the aviation industry. For example, according to a survey of its frequent fliers, one major airline estimates that new airport security procedures and their associated inconveniences have caused 27 percent of its former frequent fliers to stop flying. Based on this survey s data, the Air Transport Association, which represents major U.S. air carriers, estimates that security inconveniences have cost the aviation industry $2.5 billion in lost revenue since September 11, 2001. Supporters of a registered traveler program indicated that it would be a component of any industry recovery and that it is particularly needed to convince business travelers to resume flying. To the extent that registered travelers would fly more often, the program could also help revitalize related industries that are linked to air travel, including aviation-related manufacturing and such tourism-related businesses as hotels and travel agencies. However, not all stakeholders agreed that a registered traveler program would significantly improve the economic condition of the aviation industry. For example, officials from another major U.S. airline believed that the declining overall economy has played a much larger role than security inconveniences in reducing air travel. 
They also said that most of their customers currently wait 10 minutes or less in security lines, on average significantly less than immediately after September 11, 2001 and that security inconveniences are no longer a major issue for their passengers. <2.3. Other Potential Uses for a Registered Traveler Program> In addition to the two major purposes of a registered traveler program, some stakeholders and some literature we reviewed identified other potential uses. For example, we found that such a program could be part of an enhanced customer service package for travelers and could be used to expedite check-in at airports and to track frequent flier miles. Some stakeholders identified potential law enforcement uses, such as collecting information obtained during background checks to help identify individuals wanted by the police, or tracking the movement of citizens who might pose criminal risks. Finally, representatives of air traveler groups envisioned extensive marketing uses for data collected on registered travelers by selling it to such travel-related businesses as hotels and rental car companies and by providing registered travelers with discounts at these businesses. Two stakeholders envisioned that these secondary uses would evolve over time, as the program became more widespread. However, civil liberties advocates we spoke with were particularly concerned about using the program for purposes beyond aviation security, as well as about the privacy issues associated with the data collected on program participants and with tracking their movements. <3. Key Policy and Implementation Issues Associated with a Registered Traveler Program> Our literature review and discussions with stakeholders identified a number of policy and implementation issues that might need to be addressed if a registered traveler program is to be implemented. Stakeholders we spoke with held a wide range of opinions on such key policy issues as determining (1) who should be eligible to apply to the program; (2) the type and the extent of background checks needed to certify that applicants can enroll in the program, and who should perform them; (3) the security screening procedures that should apply to registered travelers, and how these would differ from those applied to other travelers; and (4) the extent to which equity, privacy, and liability issues would impede program implementation. Most stakeholders indicated that only the federal government has the resources and authority to resolve these issues. In addition to these policy questions, our research and stakeholders identified practical implementation issues that need to be considered before a program could be implemented. These include deciding (1) which technologies to use, and how to manage the data collected on travelers; (2) how many airports and how many passengers should participate in a registered traveler program; and (3) which entities would be responsible for financing the program, and how much it would cost. <3.1. Most Stakeholders We Contacted Agreed That the Federal Government Should Address Key Policy Issues When Developing a Registered Traveler Program> Most stakeholders we contacted agreed that, ultimately, the federal government should make the key policy decisions on program eligibility criteria, requirements for background checks, and specific security- screening procedures for registered travelers. In addition, the federal government should also address equity, privacy, and liability issues raised by such a program. 
Stakeholders also offered diverse suggestions as to how some of these issues could be resolved, and a few expressed eagerness to work with TSA. <3.1.1. Stakeholders Identified Differing Options for Program Eligibility> Although almost all the stakeholders we contacted agreed that a registered traveler program should be voluntary, they offered a wide variety of suggestions as to who should be eligible to apply to the program. These suggestions ranged from allowing any U.S. or foreign citizen to apply to the program to limiting it only to members of airline frequent flier programs. Although most stakeholders who discussed this issue with us favored broad participation, many of them felt it should be limited to U.S. citizens because verifying information and conducting background checks on foreigners could be very difficult. Several stakeholders said that extensive participation would be desirable from a security perspective because it would enable security officials to direct intensive and expensive resources toward unregistered travelers who might pose a higher risk. Several stakeholders indicated that it would be unfair to limit the program only to frequent fliers, while representatives from two groups indicated that such a limitation could provide airlines an incentive to help lure these travelers back to frequent air travel. <3.1.2. Stakeholders Proposed Alternatives for Background Check Requirements> We also found differing opinions as to the type and extent of background check needed to determine whether an applicant should be eligible to enroll in a registered traveler program. For example, one stakeholder suggested that the background check should primarily focus on determining whether the applicant exists under a known identity and truly is who he or she claims to be. This check could include verification that an individual has paid income taxes over a certain period of time (for example, the past 10 years), has lived at the same residence for a certain number of years, and has a sufficient credit history. Crosschecking a variety of public and private data sources, such as income tax payment records and credit histories, could verify that an applicant s name and social security number are consistent. However, access to income tax payment records would probably require an amendment to existing law. Another stakeholder said that the program s background check should be similar to what is done when issuing a U.S. passport. A passport check consists, in part, of a name check against a database that includes information from a variety of federal sources, including intelligence, immigration, and child support enforcement data. In contrast, others felt that applicants should undergo a more substantial check, such as an FBI- type background check, similar to what current airline or federal government employees must pass; or a criminal background check, to verify that the applicant does not have a criminal history. This could include interviewing associates and neighbors as well as credit and criminal history checks. In this case, applicants with criminal histories might be denied the right to participate in a registered traveler program. No matter what the extent of these checks, most stakeholders generally agreed that the federal government should perform or oversee them. 
They gave two reasons for this: (1) the federal government has access to the types of data sources necessary to complete them, and (2) airlines would be unwilling to take on the responsibility for performing them because of liability concerns. One stakeholder also suggested that the federal government could contract out responsibility for background checks to a private company, or that a third-party, nonprofit organization could be responsible for them. A majority of stakeholders also agreed that the federal government should be responsible for developing the criteria needed to determine whether an applicant is eligible to enroll and for making the final eligibility determination. Some stakeholders also stated that background checks should result in a simple yes or no determination, meaning that all applicants who passed the background check would be able to enroll in the program and the ones who did not pass would be denied. Other stakeholders alternatively recommended that all applicants be assigned a security score, determined according to the factors found during the background check. This security score would establish the level of screening given an individual at a security checkpoint. TSA has indicated that, at a minimum, the government would have to be responsible for ensuring that applicants are eligible to enroll and that the data used to verify identities and perform background checks are accurate and up-to-date. <3.1.3. Security Screening Procedures for Registered Travelers Would Differ from Procedures for Other Passengers> All the stakeholders we contacted agreed that registered travelers should be subjected to some minimum measure of security screening, and that the level of screening designated for them should generally be less extensive and less intrusive than the security screening required for all other passengers. Most stakeholders anticipated that a participant would receive a card that possessed some unique identifier, such as a fingerprint or an iris scan, to identify the participant as a registered traveler and to verify his or her identity. When arriving at an airport security checkpoint, the registered traveler would swipe the card through a reader that would authenticate the card and verify the individual s identity by matching him or her against the specific identifier on the card. If the card is authenticated and the holder is verified as a registered traveler, the traveler would proceed through security. Most stakeholders suggested that registered travelers pass through designated security lines, to decrease the total amount of time they spend waiting at the security checkpoint. If the equipment cannot read the card or verify the traveler s identity, or if that passenger is deemed to be a security risk, then the traveler would be subjected to additional security screening procedures, which might also include full-body screening and baggage searches. If the name on the registered traveler card matches a name on a watch-list or if new concerns about the traveler emerge, the card could be revoked. A common suggestion was that registered travelers would undergo pre- September 11th security-screening measures, which involved their walking through a magnetometer and the x-raying of their carry-on baggage. Moreover, they would not be subjected to random selection or additional security measures unless warranted, and they would be exempted from random secondary searches at the boarding gate. 
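The checkpoint flow stakeholders describe above (authenticate the card, verify the holder against the identifier stored on it, confirm the credential has not been revoked, and route the traveler accordingly) is essentially a short decision sequence. The following sketch is purely illustrative and reflects assumptions of our own rather than any actual TSA or vendor design; every name in it, such as authenticate_card, match_biometric, and is_revoked, is a hypothetical placeholder for a capability stakeholders assume would exist.

# Illustrative sketch only; not an actual TSA or vendor design.
# All helper names (authenticate_card, match_biometric, is_revoked) are
# hypothetical placeholders for the capabilities stakeholders describe.
from dataclasses import dataclass
from enum import Enum

class ScreeningLane(Enum):
    REGISTERED = "expedited registered-traveler lane"
    SECONDARY = "additional security screening"

@dataclass
class TravelerCard:
    card_id: str
    biometric_template: bytes  # e.g., a fingerprint or iris template stored on the card

def route_traveler(card, live_biometric, authenticate_card, match_biometric, is_revoked):
    """Decide which screening path a cardholder follows at the checkpoint."""
    # 1. The reader must authenticate the card itself (to detect forgeries).
    if not authenticate_card(card):
        return ScreeningLane.SECONDARY
    # 2. The live capture must match the identifier stored on the card.
    if not match_biometric(card.biometric_template, live_biometric):
        return ScreeningLane.SECONDARY
    # 3. A revocation check covers watch-list matches or newly emerged concerns.
    if is_revoked(card.card_id):
        return ScreeningLane.SECONDARY
    # 4. Otherwise the traveler proceeds through the designated lane, still
    #    subject to the baseline screening that all passengers receive.
    return ScreeningLane.REGISTERED

# Example use with stand-in checks: a card that authenticates, matches, and
# is not revoked routes its holder to the expedited lane.
lane = route_traveler(
    TravelerCard("RT-0001", b"template"), b"template",
    authenticate_card=lambda card: True,
    match_biometric=lambda enrolled, live: enrolled == live,
    is_revoked=lambda card_id: False,
)

In this sketch, a failure at any step simply routes the traveler to additional screening, mirroring the stakeholders' view that registered status would expedite, but never replace, a baseline level of scrutiny.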
According to TSA officials, the agency is willing to consider some differentiated security procedures for program participants. As for security procedures for those not enrolled in such a program, several stakeholders agreed that nonparticipants would have to undergo current security screening measures, at a minimum. Current security measures involve walking through a magnetometer, having carry-on baggage run through an x-ray machine, and being subjected to random searches of baggage for traces of explosives, hand searches for weapons, and the removal of shoes for examination. Travelers may also be randomly selected for rescreening in the gate area, although TSA has planned pilot programs to determine whether to eliminate this rescreening. Other stakeholders suggested that travelers who were not enrolled in the registered traveler program should be subjected to enhanced security screening, including more stringent x-rays and baggage screening than are currently in place at the airports. These stakeholders thought that because little would be known about nonparticipants, they should be subjected to enhanced security screening measures. In addition, several stakeholders mentioned that a registered traveler program might be useful in facilitating checked-baggage screening. For example, one stakeholder suggested that the x-ray screening of registered travelers baggage could be less intensive than the screening required for all other passengers, thus reducing the time it would take to screen all checked baggage. A few stakeholders even suggested that the most sophisticated baggage screening technology, such as explosives detection machines, would not be needed to screen a registered traveler s checked baggage. However, the 2001 Aviation and Transportation Security Act requires the screening of all checked baggage, and using a registered traveler program to lessen the level of the checked baggage screening would not be permissible under the requirements of the Act. <3.1.4. Stakeholders Raised Equity, Privacy, and Liability Concerns> Finally, our research and discussions with stakeholders raised nonsecurity- related policy issues, including equity, privacy, and liability concerns that could impede implementation of a registered traveler program. With respect to equity issues, some stakeholders raised concerns that the federal government should carefully develop eligibility and enrollment criteria that would avoid automatically excluding certain classes of people from participating in the program. For example, requiring applicants to pay a high application or enrollment fee could deter some applicants for financial reasons. In addition, concern was expressed that certain races and ethnicities, mainly Arab-Americans, would be systematically excluded from program participation. Most stakeholders, however, did not generally view equity issues as being a major obstacle to developing the program, and one pointed to the precedent set by existing government programs that selectively confer known status to program participants. For example, the joint U.S./Canadian NEXUS pilot program, a program for travelers who frequently cross the U.S./Canadian border, is designed to streamline the movement of low-risk travelers across this border by using designated passage lanes and immigration-inspection booths, as well as some risk- management techniques similar to those proposed for use in a registered traveler program. 
With respect to privacy issues, civil liberties advocates we spoke with expressed concerns that the program might be used for purposes beyond its initial one and that participants information would need protection. They were particularly concerned about the potential for such a program to lead to the establishment of a national identity card, or to other uses not related to air travel. For example, some suggested that there could be enormous pressure on those who are not part of the program to apply, given the advantages of the program, and this would therefore, in effect, lead to a national identity card. One stakeholder raised a concern about the card s becoming a prerequisite for obtaining a job that includes traveling responsibilities, or the collected information s being used for other purposes, such as identifying those sought by police. Others countered that because participation in a registered traveler program would be voluntary, privacy concerns should not be a significant issue. According to TSA attorneys, legal protections already in place to prevent the proliferation of private information are probably applicable, and additional safeguards for this program could be pursued. Through our review, we identified two particular liability issues potentially associated with the concept of a registered traveler program. First, it is uncertain which entity would be liable and to what extent that entity would be liable if a registered traveler were to commit a terrorist act at an airport or on a flight. Second, it is also unclear what liability issues might arise if an applicant were rejected based on false or inaccurate information, or the applicant did not meet the eligibility criteria. For the most part, stakeholders who addressed the liability issue maintained that, because the federal government is already responsible for aviation security, and because it is likely to play an integral role in developing and administering such a program, security breaches by registered travelers would not raise new liability concerns. Although the assumption of screening responsibilities has increased the federal government s potential exposure to liability for breaches of aviation security, TSA representatives were unsure what the liability ramifications would be for the federal government for security breaches or terrorist acts committed by participants of a registered traveler program. Fewer stakeholders offered views on whether there would be liability issues if an applicant were denied participation in a registered traveler program because of false or inaccurate information. However, some indicated that the federal government s participation, particularly in developing eligibility criteria, would be key to mitigating liability issues. One stakeholder said that the program must include appeal procedures to specify under what conditions an individual could appeal if denied access to the program, who or what entity would hear an appeal, and whether an individual would be able to present evidence in his or her defense. Other stakeholders, however, stressed the importance of keeping eligibility criteria and reasons for applicant rejection confidential, because they believe that confidentiality would be crucial to maintaining the security of the program. TSA maintained that if the program were voluntary, participants might have less ability to appeal than they would in a government entitlement program, in which participation might be guaranteed by statute. <3.2. 
Some Stakeholders Also Identified Practical Implementation Issues to Consider> In addition to key policy issues, some stakeholders we spoke with identified a number of key program implementation issues to consider. Specifically, they involve choosing appropriate technologies, determining how to manage data collection and security, defining the program s scope, and determining the program s costs and financing structure. <3.2.1. Stakeholders Differed on the Use of Biometric Technology in a Registered Traveler Program> Our research indicated that developing and implementing a registered traveler program would require key choices about which technologies to use. Among the criteria cited by stakeholders were a technology s ability to (1) provide accurate data about travelers, (2) function well in an airport environment, and (3) safeguard information from fraud. One of the first decisions that would have to be made in this area is whether to use biometrics to verify the identity of registered passengers and, if so, which biometric identifier to use. The term biometrics refers to a wide range of technologies that can be used to verify a person s identity by measuring and analyzing human characteristics. Identifying a person s physiological characteristics is based on data derived from scientifically measuring a part of the body. Biometrics provides a highly accurate confirmation of the identity of a specific person. While the majority of those we interviewed said that some sort of biometric identifier is critical to an effective registered traveler program, there was little agreement among stakeholders as to the most appropriate biometric for this program. Issues to consider when making decisions related to using biometric technology include the accuracy of a specific technology, user acceptance, and the costs of implementation and operation. Although there is no consensus on which biometric identifier should be used for a registered traveler program, three biometric identifiers were cited most frequently as offering the requisite capabilities for a program: iris scans (using the distinctive features of the iris), fingerprints, and hand geometry (using distinctive features of the hand). Although each of the three identifiers has been used in airport trials, there are disadvantages associated with each of them. (Appendix III outlines some of the advantages and disadvantages of each.) A few stakeholders also claimed that a biometric should not be part of a registered traveler program. Among the reasons cited were that biometric technology is expensive, does not allow for quick processing of numerous travelers, and is not foolproof. Some studies conducted have concluded that current biometric technology is not as infallible as biometric vendors claim. For example, a German technology magazine recently demonstrated that using reactivated latent images and forgeries could defeat fingerprint and iris recognition systems. In addition, one stakeholder stated that an identity card with a two-dimensional barcode that stores personal data and a picture would be sufficient to identify registered travelers. Such a card would be similar to those currently used as drivers licenses in many states. <3.2.2. Registered Traveler Program Raises Data Storage and Maintenance Issues> In addition to choosing specific technologies, stakeholders said that decisions will be needed regarding the storage and maintenance of data collected for the program. 
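To make the verification step concrete, the following sketch shows, in simplified form, how a registered traveler's live scan might be compared against an enrolled template using a similarity score and an acceptance threshold. This is a minimal illustration, not any vendor's algorithm: the feature vectors, the cosine-similarity measure, and the threshold values are all hypothetical, and real fingerprint, iris, or hand-geometry systems use far more sophisticated, proprietary matching.

```python
# Hypothetical sketch of the verification step common to the biometric
# technologies discussed above. Real systems use vendor-specific feature
# extraction; here a template is simply a list of numeric features.
import math

def similarity(template: list[float], live_sample: list[float]) -> float:
    """Cosine similarity between an enrolled template and a live capture."""
    dot = sum(a * b for a, b in zip(template, live_sample))
    norm = math.sqrt(sum(a * a for a in template)) * math.sqrt(sum(b * b for b in live_sample))
    return dot / norm if norm else 0.0

def verify(template: list[float], live_sample: list[float], threshold: float = 0.95) -> bool:
    """Accept the traveler only if the live capture is close enough to the
    enrolled template. A stricter threshold reduces false accepts (imposters
    passing) but increases false rejects (legitimate enrollees sent to
    regular screening); a looser threshold does the opposite."""
    return similarity(template, live_sample) >= threshold

# Example: an enrolled template and a slightly noisy live scan (invented values).
enrolled = [0.31, 0.87, 0.44, 0.12]
live = [0.30, 0.88, 0.43, 0.15]
print(verify(enrolled, live))          # True at the default threshold
print(verify(enrolled, live, 0.9999))  # False: a stricter threshold rejects the same scan
```

The threshold choice is one reason accuracy claims vary: a stricter setting turns away more legitimate enrollees, while a looser one admits more false matches. Decisions about where such templates would be stored and maintained raise a further set of questions.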
These include decisions regarding where a biometric or other unique identifier and personal background information should be stored. Such information could be stored either on a card embedded with a computer chip or in a central database, which would serve as a repository of information for all participants. Stakeholders thought the key things to consider in deciding how to store this information are speed of accessibility, levels of data protection, methods to update information, and protections against forgery and fraudulent use by others. One stakeholder who advocates storing passenger information directly on a smart card containing an encrypted computer chip said that this offers more privacy protections for enrollees and would permit travelers to be processed more quickly at checkpoints than would a database method. On the other hand, advocates for storing personal data in a central database said that it would facilitate the updating of participants information. Another potential advantage of storing information in a central database is that it could make it easier to detect individuals who try to enroll more than once, by checking an applicant s information against information on all enrollees in a database. In theory, this process would prevent duplication of enrollees. Another issue related to storing participant information is how to ensure that the information is kept up-to-date. If participant information is stored in a database, then any change would have to be registered in a central database. If, however, information is stored on an identification card, then the card would have to feature an embedded computer chip to which changes could be made remotely. Keeping information current is necessary to ensure that the status of a registered traveler has not changed because of that person s recent activities or world events. One stakeholder noted the possibility that a participant could do something that might cause his or her eligibility status to change. In response to that concern, he stressed that a registered traveler program should incorporate some sort of quick revoke system. When that traveler is no longer entitled to the benefits associated with the program, a notification would appear the next time the card is registered in a reader. <3.2.3. Stakeholders Had Different Opinions about the Scope of a Registered Traveler Program> Stakeholders differed in their opinions as to how many airports and how many passengers should participate in a registered traveler program. While some believe that the program should be as expansive as possible, others maintain that the program would function most efficiently and cost- effectively if it were limited to those airports with the most traffic and to those passengers who fly the most frequently. As for airports, some suggested that all 429 airports subject to security requirements in the United States should be equipped to support the program, to convince more passengers to enroll. Others contended that, because of equipment costs, the program should optimally include only the largest airports, such as the fewer than 100 airports that the FAA classifies as Category X and Category 1 airports, which the vast majority of the nation s air travelers use. There were also different opinions as to whether the program should limit enrollment to frequent travelers or should strive for wider enrollment to maximize participation. 
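The "quick revoke" capability described above can also be sketched in simple terms. The example below assumes a hypothetical central revocation list that each card reader consults before offering an expedited lane; the class and field names are illustrative only and do not reflect any proposed system design.

```python
# Hypothetical illustration of the "quick revoke" idea: each time a card is
# presented at a reader, its status is checked against a centrally maintained
# revocation list before the traveler is offered an expedited lane.

class RevocationList:
    """Central record of cards whose holders are no longer eligible."""
    def __init__(self):
        self._revoked = {}  # card_id -> reason

    def revoke(self, card_id: str, reason: str) -> None:
        self._revoked[card_id] = reason

    def is_revoked(self, card_id: str) -> bool:
        return card_id in self._revoked

def process_at_reader(card_id: str, revocations: RevocationList) -> str:
    """Decide which lane the traveler is directed to when the card is read."""
    if revocations.is_revoked(card_id):
        # The notification the stakeholder described: the card no longer confers benefits.
        return "status revoked: direct traveler to standard screening"
    return "registered traveler: expedited screening lane"

revocations = RevocationList()
revocations.revoke("RT-00123", "eligibility status changed")
print(process_at_reader("RT-00123", revocations))  # revoked card
print(process_at_reader("RT-00456", revocations))  # card still in good standing
```

How many readers, cards, and status checks such a mechanism would have to support depends in part on the program's scope.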
Representatives of a passenger group asserted that the program should be limited to passengers who fly regularly because one of the goals of the program would be to process known passengers more quickly, and that having too many enrollees would limit the time saved. Others, however, maintained that the program should enroll as many passengers as possible. This case rests largely on security grounds: the more people who register, the more information is known about a flight's passengers. <3.2.4. Views Differ on the Program's Costs and Financing> It is unclear who would fund any registered traveler program, although a majority of the stakeholders we contacted who discussed the issue expect that participants would have to fund most of its costs. Representatives of aviation traveler groups said that participants would be willing to bear almost all of the costs. One airline representative estimated that frequent passengers would be willing to pay up to $100 for initial enrollment and an additional $25 to $50 annually for renewal. Some stakeholders also suggested that the airlines bear some of the costs of the program, probably by offering subsidies and incentives for their passengers to join, since the aviation industry would also benefit. For instance, one stakeholder said that airlines might be willing to partially subsidize the cost if the airlines could have access to some of the participant information. A few stakeholders also expect that the federal government would pay for some of the cost to develop a registered traveler program. One stakeholder said the government should pay for a significant portion of the program because national security benefits would accrue from it, making its funding a federal responsibility. Others maintained that significant long-term federal funding for the program is unrealistic because of the voluntary aspect of the program, the possibility that it might be offered only to selected travelers, and TSA's current funding constraints. In addition to the uncertainty about which entity would primarily fund a registered traveler program, there are also questions about how much the program would cost. None of the stakeholders who were asked was able to offer an estimate of the total cost of the program. A technology vendor who has studied this type of program extensively identified several primary areas of cost, including but not limited to background checks, computer-chip-enabled cards, card readers, biometric readers, staff training, database development, database operations, and enrollment center staffing. Because the costs of many of these components are uncertain, estimating the overall program cost is extremely difficult. For example, one stakeholder told us that extensive background checks for enrollees could cost as much as $150 each, while another stakeholder maintained that detailed, expensive background checks would be unnecessary. The type of background check chosen, if a program is implemented, would therefore likely have a significant influence on the program's overall costs. Our research also indicated significant price ranges for computer-chip-enabled cards, biometric readers, and other components. <4. 
Key Principles to Guide Program Implementation> Regardless of the policy and program decisions made about a registered traveler program, we identified several basic principles TSA might consider if it implements such a program. We derived these principles from our discussions with stakeholders and from review of pertinent literature as well as best practices for implementing new programs. Chief among these is the principle that vulnerabilities in the aviation system be assessed in a systematic way and addressed using a comprehensive risk management plan. Accordingly, the registered traveler program must be assessed and prioritized along with other programs designed to address security vulnerabilities, such as enhancing cockpit security, controlling access to secure areas of the airport, preventing unsafe items from being shipped in cargo or checked baggage, and ensuring the integrity of critical air traffic control computer systems. TSA officials also noted that the agency is responsible for the security of all modes of transportation, not just aviation. They added that a program such as registered traveler needs to be assessed in the broader context of border security, which can include the security of ports and surface border crossings overseen by a number of federal agencies, such as Customs, Coast Guard, and INS. TSA might consider the following principles if, and when, a registered traveler program is implemented: Apply lessons learned from and experience with existing programs that share similarities with the registered traveler program. This information includes lessons related to such issues as eligibility criteria, security procedures, technology choices, and funding costs. Test the program initially on a smaller scale to demonstrate its feasibility and effectiveness, and that travelers will be willing to participate. Develop performance measures and a system for assessing whether the program meets stated mission and goals. Use technologies that are interoperable across different enrollment sites and access-control points, and select technologies that can readily be updated to keep pace with new developments in security technology, biometrics, and data sharing. At a minimum, interoperability refers to using compatible technologies at different airport checkpoints across the country and, more broadly, could be seen as including other access- control points, such as border crossings and ports of entry. <4.1. Apply Lessons Learned from Similar Programs> Using lessons learned from existing programs offers TSA an opportunity to identify key policy and implementation issues as well as possible solutions to them. Although not of the scope that a nationwide U.S. registered traveler program would likely be, several existing smaller programs, both in the United States and abroad, address some of the same issues as the registered traveler concept and still present excellent opportunities for policymakers to learn from real-life experiences. For example, in the United States, the INS already has border control programs both at airports and roadway checkpoints to expedite the entry of known border crossers. Internationally, similar programs exist at Ben Gurion Airport in Israel, Schiphol Airport in Amsterdam, and Dubai International Airport in the United Arab Emirates. In the past, similar pilot programs have also been run at London s Gatwick and Heathrow airports. 
All of these programs rely on credentialing registered travelers to expedite their processing and are candidates for further study. Finally, programs established by the Department of Defense and the General Services Administration that use cards and biometrics to control access to various parts of a building offer potential technology-related lessons that could help design a registered traveler program. (Appendix IV offers a brief description of some of the U.S. and foreign programs.) TSA s program manager for the Registered Traveler Task Force stressed that his agency has no role in these other programs, which are different in purpose and scope from the registered traveler concept. He added that these programs focus on expediting crossing at international borders, while the registered traveler concept focuses on domestic security. <4.2. Test the Program to Demonstrate Its Feasibility, Effectiveness, and Acceptance> In addition to these programs, information could also be gleaned from a registered traveler pilot program. For example, the Air Transport Association has proposed a passenger and employee pilot program. ATA s proposed program would include over 6,000 participants, covering both travelers who passed a background check and airline employees. ATA s proposal assumes that (1) the appropriate pool of registered traveler participants will be based on background checks against the FBI/TSA watch list, and (2) airlines would determine which employees could apply, and would initiate background checks for them. ATA estimates that the pilot program would initially cost about $1.2 million to implement. To allow TSA and the airlines to evaluate the effectiveness of the program s technologies and procedures and their overall impact on checkpoint efficiency, ATA plans to collect data on enrollment procedures, including: the number of individuals who applied and were accepted, the reasons for rejection, and customer interest in the program; reliability of the biometric cards and readers; and checkpoint operational issues. In our discussions, the Associate Under Secretary for Security Regulation and Policy at TSA made it clear that he thought developing a registered traveler pilot program on a small scale would be a necessary step before deciding to implement a national program. TSA officials responsible for assessing a registered traveler program said that they hope to begin a pilot program by the end of the first quarter of 2003. They also noted that much of the available information about the registered traveler concept is qualitative, rather than quantitative. They added that, because the cost- effective nature of a registered traveler program is not certain, a financial analysis is needed that considers the total cost of developing, implementing, and maintaining the technology and the program. Along these lines, they believe that a pilot program and rigorous, fact-based analysis of the costs and benefits of this program will be useful for determining (1) whether the hassle factor really exists, and if so to what extent, (2) whether a registered traveler program will effectively address the need to expedite passenger flow or to manage risk, and (3) whether such a program would be the risk-mitigation tool of choice, given the realities of limited resources. <4.3. 
Develop Performance Measures to Ensure the Program Is Achieving Its Goals> In addition to developing performance-based metrics to evaluate the effectiveness of a pilot program, TSA could consider developing similar metrics to measure the performance of a nationwide program if one is created. Our previous work on evaluating federal programs has stressed the importance of identifying goals, developing related performance measures, collecting data, analyzing data, and reporting results. Collecting such information is most useful if the data-gathering process is designed during the program s development and initiated with its implementation. Periodic assessment of the data should include comparisons with previously collected baseline data. The implementation of a registered traveler program could be helped by following those principles. For example, determining whether, and how well, the program improves aviation security and alleviates passenger inconvenience requires that measurements be developed and data collected and analyzed to demonstrate how well these goals are being met. Such information could include the success of screeners at detecting devices not allowed on airplanes for both enrollees and nonparticipants, or the average amount of time it takes for enrollees to pass through security screening. <4.4. Use Technologies That Are Interoperable and That Can Be Upgraded in the Future> An effective registered traveler program depends on using technologies that are interoperable across various sites and with other technologies, and can be readily updated to keep pace with new developments in security technology, biometrics, and data sharing. Such a program is unlikely to be airport- or airline-specific, which means that the various technologies will have to be sufficiently standardized for enrollees to use the same individual cards or biometrics at many airports and with many airlines. Consequently, the technologies supporting the nationwide system need to be interoperable so that they can communicate with one another. The FAA s experience with employee access cards offers a good lesson on the dangers of not having standards to ensure that technologies are interoperable. As we reported in 1995, different airports have installed different types of equipment to secure doors and gates. While some airports have installed magnetic stripe card readers, others have installed proximity card readers, and still another has installed hand-scanning equipment to verify employee identity. As a result, an official from one airline stated that employees who travel to numerous airports have to carry several different identity cards to gain access to specific areas. Another important interoperability issue is the way in which the personal data associated with a registered traveler program relates to other existing information on travelers, most important of which is the automated passenger prescreening system information. Some stakeholders believe it will be crucial that the registered traveler program is integrated into the automated system. Given TSA s focus on developing and launching a revised automated passenger prescreening system, such integration will likely be essential. Integrating the data depends on finding a workable technology solution. Furthermore, TSA officials added that interoperability may extend beyond aviation to passengers who enter the United States at border crossings or seaports. 
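One way to picture the interoperability principle is as a common interface, or standard, that every enrollment site and checkpoint codes against, so that a card issued at one airport works at another and a biometric back-end can later be replaced without changing checkpoint software. The sketch below is hypothetical; the interface and class names do not correspond to any actual standard or product.

```python
# Hypothetical sketch of the interoperability idea: if every enrollment site
# and checkpoint works against one agreed-on interface and template format,
# a card enrolled at one airport can be verified at another, and one biometric
# back-end can later be swapped for a newer one without changing checkpoints.
from abc import ABC, abstractmethod

class BiometricMatcher(ABC):
    """The 'standard' every vendor implementation must satisfy (hypothetical)."""
    @abstractmethod
    def verify(self, stored_template: bytes, live_capture: bytes) -> bool: ...

class FingerprintMatcher(BiometricMatcher):
    def verify(self, stored_template: bytes, live_capture: bytes) -> bool:
        # Placeholder for a vendor's fingerprint-matching algorithm.
        return stored_template == live_capture

class IrisMatcher(BiometricMatcher):
    def verify(self, stored_template: bytes, live_capture: bytes) -> bool:
        # Placeholder for a vendor's iris-recognition algorithm.
        return stored_template == live_capture

def checkpoint_verify(matcher: BiometricMatcher, stored: bytes, live: bytes) -> str:
    """Checkpoint code depends only on the interface, not on the technology."""
    return "cleared" if matcher.verify(stored, live) else "refer to standard screening"

# The same checkpoint routine works with either technology.
print(checkpoint_verify(FingerprintMatcher(), b"template-A", b"template-A"))
print(checkpoint_verify(IrisMatcher(), b"template-B", b"template-C"))
```

Agreed-on formats of this kind would matter even more if, as TSA officials suggested, interoperability extends beyond aviation.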
They noted that ensuring the interoperability of systems across modes of transportation overseen by a variety of different federal agencies will be a complex and expensive undertaking. An equally important factor to consider is how easily a technology can be upgraded as related technologies evolve and improve. As stakeholders made clear to us, because technologies surrounding identification cards and biometrics are evolving rapidly, often in unpredictable ways, the technology of choice today may not be cost-effective tomorrow. To ensure that a registered traveler program will not be dependent on outdated technologies, it is essential to design a system flexible enough to adapt to new technological developments as they emerge. For example, if fingerprints were initially chosen as the biometric, the supporting technologies should be easily adaptable to other biometrics, such as iris scans. An effective way to make them so is to use technology standards for biometrics, data storage, and operating systems, rather than to mandate specific technology solutions. <5. Concluding Observations> A registered traveler program is one possible approach for managing some of the security vulnerabilities in our nation s aviation and broader transportation systems. However, numerous unresolved policy and programmatic issues would have to be addressed before developing and implementing such a program. These issues include, for example, the central question of whether such a program will effectively enhance security or will inadvertently provide a means to circumvent and compromise new security procedures. These issues also include programmatic and administrative questions, such as how much such a program would cost and what entities would provide its financing. Our analysis of existing literature and our interviews with stakeholders helped identify some of these key issues but provide no easy answers. The information we developed should help to focus and shape the debate and to identify key issues to be addressed when TSA considers whether to implement a registered traveler program. <6. Agency Comments> We provided the Department of Transportation (DOT) with a draft of this report for review and comment. DOT provided both oral and written comments. TSA s program manager for the Registered Traveler Task Force and agency officials present with legal and other responsibilities related to this program said that the report does an excellent job of raising a number of good issues that TSA should consider as it evaluates the registered traveler concept. These officials provided a number of clarifying comments, which we have incorporated where appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies of this report to interested Members of Congress, the Secretary of Transportation, and the Under Secretary of Transportation for Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3650. I can also be reached by E-mail at [email protected]. Key contributors are listed in appendix V. 
Scope and Methodology To obtain and develop information on the purpose of a registered traveler program and the key policy and implementation issues in designing and implementing it, we conducted an extensive search of existing information and carried out interviews with key stakeholders. These interviews included officials from the federal government, the aviation industry, aviation security consultants, vendors developing and testing registered traveler applications, and organizations concerned with issues of data privacy and civil liberties. We conducted a literature search that identified existing studies, policy papers, and articles from the federal government, the aviation industry, and other organizations on numerous issues associated with designing and implementing a registered traveler program. These issues included the goals or purposes of a registered traveler program and policy and programmatic issues such as the potential costs, security procedures, and technology choices for such a program. We also identified existing studies and papers on specific items, such as the applicability of biometric technologies for use in a registered traveler program and the extent to which programs already exist in the United States and abroad (this detailed information is presented in appendix IV). This literature search also identified key stakeholders regarding designing and implementing a registered traveler program. Based on our literature search, we identified a list of 25 key stakeholders who could provide professional opinions on a wide range of issues involved in a registered traveler program. We chose these stakeholders based on their influence in the aviation industry as well as their expertise in such issues as aviation security, identification technologies, civil liberties, and the air-travel experience. In total, we conducted 22 interviews. We also visited and interviewed officials associated with registered traveler type programs in two European countries. The intent of our interviews was to gain a further understanding of the issues surrounding a registered traveler program and specific information on such items as the potential costs for implementing a registered traveler program and the technology needs of such a program. In conducting our interview process, we developed a standard series of questions on key policy and implementation issues, sent the questions to the stakeholders in advance, and conducted the interviews. We then summarized the interviews to identify any key themes and areas of consensus or difference on major issues. We did not, however, attempt to empirically validate the information provided to us by stakeholders through these interviews. To identify basic principles that TSA should consider if it decides to implement a registered traveler program, we analyzed existing studies to identify overriding themes that could impact the policy or implementation of such a program. We also analyzed the results of our interviews, to generate a list of key principles. We performed our work from July 2002 through October 2002 in accordance with generally accepted government auditing standards. Interviews Conducted Testing Results on Leading Biometrics The International Biometrics Group considers four types of biometric identifiers as the most suitable for air-travel applications. These identifiers are fingerprint recognition, iris recognition, hand geometry, and facial recognition. 
Each of these biometrics has been employed, at least on a small scale, in airports worldwide. The following information describes how each biometric works and compares their functionality. <7. Types of Biometric Technologies> Fingerprint recognition: This technology extracts features from impressions made by the distinct ridges on the fingertips. The fingerprints can be either flat or rolled. A flat print captures only an impression of the central area between the fingertip and the first knuckle; a rolled print captures ridges on both sides of the finger. The technology is one of the best known and most widely used biometric technologies. Iris recognition: This technology is based on the distinctly colored ring surrounding the pupil of the eye. The technology uses a small, high-quality camera to capture a black-and-white high-resolution image of the iris. It then defines the boundaries of the iris, establishes a coordinate system over the iris, and defines the zones for analysis within that coordinate system. Made from elastic connective tissue, the iris is a very plentiful source of biometric data, having approximately 450 distinctive characteristics. Hand geometry: This technology measures the width, height, and length of the fingers, distances between joints, and shapes of the knuckles. The technology uses an optical camera and light-emitting diodes with mirrors and reflectors to capture three-dimensional images of the back and sides of the hand. From these images, 96 measurements are extracted from the hand. Hand geometry systems have been in use for more than 10 years for access control at facilities ranging from nuclear power plants to day care centers. Facial recognition: This technology identifies people by areas of the face not easily altered, namely the upper outlines of the eye sockets, the areas around the cheekbones, and the sides of the mouth. The technology is typically used to compare a live facial scan with a stored template, but it can also be used to compare static images, such as digitized passport photographs. Facial recognition can be used in both verification and identification systems. In addition, because facial images can be captured from video cameras, facial recognition is the only biometric that can also be used for surveillance purposes.
Information about Existing Programs for Registered Travelers
To improve border security and passenger convenience. Passengers from European Union, Norway, Iceland, and Liechtenstein. In the enrollment phase, the traveler is qualified and registered. This process includes a passport review, background check, and iris scan. All collected information is encrypted and embedded on a smart card. 2,500 passengers have enrolled in the program. In the traveling phase, the passenger approaches a gated kiosk and inserts the smart card in a card reader. The system reads the card and allows valid registered travelers to enter an isolated area. The passenger then looks into an iris scan camera. If the iris scan matches the data stored on the card, the passenger is allowed to continue through the gate. If the system cannot match the iris scan to the information on the card, the passenger is directed to the regular passport check lane. As of October 1, 2002, there is a 99 to 119 euro ($97 to $118) annual fee for participating passengers. According to program officials, the entire automatic border passage procedure is typically completed in about 10 to 15 seconds. The system can process four to five people per minute. 
There are plans to expand the program so that airlines and airports can use it for passenger identification and for tracking such functions as ticketing, check-in, screening, and boarding. There are also plans to develop components of the technology to provide secure-employee and staff access to restricted areas of travel and transportation facilities. To expedite passenger processing at passport control areas. Israeli citizens and frequent international travelers. Travelers who have dual U.S./Israel citizenship can take advantage of the Ben Gurion program, as well as the INS s INSPASS program. During enrollment, applicants submit biographic information and biometric hand geometry. Applicants also receive an in-depth interview. Approximately 80,000 Israeli citizens have enrolled in the program. During arrival and departure, participants use a credit card for initial identification in one of 21 automated inspection kiosks at the airport. The participant then places his or her hand in the hand reader for identity verification. If verified, the system prints a receipt, which allows the traveler to proceed through a system-controlled gate. If the person s identity cannot be verified, the individual is referred to an inspector. $20 $25 annual membership fee for participants. According to program officials, the entire automated verification process takes 20 seconds. Passport control lines at Ben Gurion airport can take up to 1 hour. The program allows airport personnel to concentrate on high-risk travelers, reduces bottlenecks with automated kiosks, improves airport cost-effectiveness, generates new revenue for the airport authority, and expands security capabilities at other Israeli borders. To expedite passenger processing at passport control. Non-United Kingdom, non-European Union, non-visa frequent travelers (mostly American and Canadian business travelers) originating from John F. Kennedy International Airport or Dulles International Airport on Virgin Atlantic or British Airways. To enroll, participants record their iris images with EyeTicket, have their passports scanned, and submit to a background check with U.K. immigration. 900 of 1,000 applicants were approved for participation; 300 enrolled. Upon arrival in London, participants are able to bypass the regular immigration line and proceed through a designated border entry lane. Participants look into an iris scan camera, and the image is compared against the scan taken at enrollment. If the two iris images match, participants are able to proceed through immigration. There were no user fees associated with the pilot program. According to EyeTicket, the average processing time per passenger is 12 seconds. Completed. Six-month trial ran from January 31, 2002, to July 31, 2002. IP@SS (Integrated Passenger Security System) Newark International Airport, Newark, New Jersey (Continental Airlines); Gatwick Airport, London, England (Delta Airlines) To expedite and simplify the processes of passenger identification and security screening. In June 2002, 6,909 passengers were processed through IP@SS. Officials report that about 99 percent of passengers volunteered for the program. Continental Airlines has two kiosks for tourist class, one for business and first classes, and one at the Continental gate for flights between Newark and Tel Aviv. Each station is staffed with a trained security agent who asks passengers for travel documents, including the individual s passport, which is scanned by an automated reader. 
After being cleared, the passenger can enroll in a biometric program in which biometric information is transferred to a smart card. The passenger then takes the card to the boarding gate and inserts it into the card reader and inserts fingers into the reader. If the information corresponds with the information contained on the smart card, the passenger is cleared to board the plane. Cards are surrendered to program officials after each use, and the information is scrambled to prevent misuse. There were no user fees associated with the pilot programs. Ongoing. ICTS International plans to launch pilot programs at other U.S. and European airports. The pilot programs at Newark and Gatwick are technology demonstrations and are used only to aid in the departure process. ICTS may test a sister city concept, in which the participant can take the card to his or her destination to aid in the deplaning/arrival process there. To expedite border crossings for low-risk frequent commuters. CANPASS is a project of the Canada-U.S. Shared Border Accord. Citizens and permanent residents of the United States and Canada are eligible to participate in the CANPASS program. As part of the application process, an applicant provides personal identification, vehicle identification, and driver s license information. Background checks are performed on all applicants. As of October 1, 2001, there were approximately 119,743 participants in the CANPASS program. Technology varies from site to site. At Douglas, the participant receives only a letter of authorization and a windshield decal; at Windsor, a participant receives a photo ID card. A participant receives a letter of authorization and a windshield decal, which can be used only on a vehicle registered in the CANPASS system. When a vehicle enters the lane, a license plate reader reads the plate on the car. Membership in the CANPASS program is validated with data available through the license plate reader and other sources. At the applicable crossings, a participant must show the CANPASS identification card to the border inspector. There are no fees associated with the CANPASS system. The CANPASS Highway program was closed as a result of the events of September 11, 2001; however, the program is still currently available at the Whirlpool Bridge in Niagara Falls, Ontario. The CANPASS program operates in conjunction with the SENTRI/PORTPASS program. SENTRI/PORTPASS (Secure Electronic Network for Travelers Rapid Inspection/Port Passenger Accelerated Service System) Detroit, Michigan; Buffalo, New York; El Paso and Hidalgo, Texas; Otay Mesa and San Ysidro, California Citizens and permanent residents of the United States and Canada and certain citizens and non-immigrants of Mexico are eligible to apply for program participation. Applicants must undergo an FBI background check, an Interagency Border Inspection System (IBIS) check, vehicle search, and personal interview prior to participation. Applicants must provide evidence of citizenship, residence, and employment or financial support. Fingerprints and a digital photograph are taken at the time of application. If cleared for enrollment, the passenger receives an identification card and a transponder, which must be installed in the registered vehicle. During 2000, approximately 792 participants were registered for the Detroit program, and 11,700 were registered for the Otay Mesa program. Transponders and magnetic card readers recall electronic photographs of registered drivers and their passengers. 
Images are presented on a monitor for border inspectors to visually confirm participants. Participants use designated SENTRI lanes to cross the border. The system automatically identifies the vehicles and the participants authorized to use the program. Border inspectors compare digitized photographs that appear on computer screens in the inspectors booths with the vehicles passengers. There is no charge for the U.S./Canada program. The SENTRI program for the United States and Mexico is $129 ($25 enrollment fee per person, $24 fingerprinting fee, and $80 systems fee). According to an El Paso INS official, delays in border crossing are typically around 60 90 minutes, but can be more than 2 hours. The SENTRI lane at a bridge border crossing has wait times of no more than 30 minutes. According to program officials, in Otay Mesa, CA, SENTRI participants wait approximately 4 5 minutes in the inspection lane, while nonparticipants can wait up to 3 hours in a primary inspection lane. To expedite border crossings for low-risk frequent commuters. NEXUS is a pilot project of the Canada-U.S. Shared Border Accord. Canadian and U.S. lawful, national, and permanent residents are eligible to apply for program participation. Applicants complete an application that is reviewed by the U.S. Customs Service, INS, Canada Customs and Revenue Service, and Citizenship and Immigration, Canada. Applicants are required to provide proof of citizenship and residency, employment authorizations, and visas. Background checks are performed by officials of both countries. Participants must also provide a fingerprint biometric of two index fingers, which is verified against an INS database for any American immigration violations. (Unlike the CANPASS/PORTPASS programs, NEXUS is a harmonized border-crossing program with common eligibility requirements, a joint enrollment process, and a common application and identity card.) Since 2000, program administrators have issued 4,415 identification cards to participants. Enrollees must provide a two-finger print biometric. Photo identification cards are given to all participants. The NEXUS identification card allows participants to use NEXUS-designated lanes in the United States and Canada and to cross the border without routine customs and immigration questioning. A nonrefundable processing fee of $80 Canadian or $50 U.S. must be paid every 5 years. According to a study on the NEXUS Program, participants can save 20 minutes, compared with using the regular primary inspection lanes. Officials may request full fingerprints to verify identity. The two-finger print biometric or full prints may be shared with other government and law enforcement agencies. In addition, any personal information provided will also be shared with other government and law enforcement agencies. Additional crossing points are scheduled to open in 2003. INSPASS (INS Passenger Accelerated Service System)/CANPASS Airport Detroit, Michigan; Los Angeles, California; Miami, Florida; Newark, New Jersey; New York, New York; San Francisco, California; Washington, D.C.; Vancouver and Toronto, Canada To decrease immigration inspection for low-risk travelers entering the U.S. via international flights. Employed at seven airports in the United States (Detroit, Los Angeles, Miami, Newark, New York (JFK), San Francisco, Washington-Dulles) and at U.S. pre-clearance sites in Canada, in Vancouver and Toronto. 
INSPASS enrollment is open to all citizens of the United States, Canada, Bermuda, and visa-waiver countries who travel to the United States on business three or more times a year for short visits (90 days or less). INSPASS is not available to anyone with a criminal record or to aliens who are not otherwise eligible to enter the United States. The enrollment process involves capturing biographical information, hand geometry biometric data and facial picture and digital fingerprint information. A background check is done automatically for the inspector and, if approved, a machine-readable card is created for the traveler. The entire enrollment process typically takes 30 40 minutes. Over 98,000 enrollments have been performed in INSPASS, of which 37,000 are active as of September 2001. Once enrolled, the traveler is able to use an automated kiosk at passport control. A traveler is required to swipe the INSPASS card, enter flight information on a touchscreen, verify hand geometry, and complete a security check. Upon successful inspection, a receipt is printed that allows the traveler to proceed to U.S. Customs. Presently, there are no system cost fees or filing fees associated with INSPASS. The CANPASS Airport program has been suspended since September 11, 2001, and will be replaced by the Expedited Passenger Processing System in 2003. INSPASS is being reworked and plans for a new version are under way. GAO Contacts and Staff Acknowledgments <8. GAO Contacts> <9. Acknowledgments> Key contributors to this assignment were Jean Brady, David Dornisch, David Goldstein, David Hooper, Bob Kolasky, Heather Krause, David Lichtenfeld, and Cory Roman. <10. GAO s Mission> The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO s commitment to good government is reflected in its core values of accountability, integrity, and reliability. <11. Obtaining Copies of GAO Reports and Testimony> The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as Today s Reports, on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select Subscribe to daily E-mail alert for newly released products under the GAO Reports heading. <11.1. Order by Mail or Phone> <12. To Report Fraud, Waste, and Abuse in Federal Programs> <13. Public Affairs>
Why GAO Did This Study
The aviation industry and business traveler groups have proposed the registered traveler concept as a way to reduce long waits in airport security lines caused by heightened security screening measures implemented after the September 11 terrorist attacks. In addition, aviation security experts have advocated this concept as a way to better target security resources to those travelers who might pose greater security risks. The Aviation and Transportation Security Act of November 2001 allows the Transportation Security Administration (TSA) to consider developing a registered traveler program as a way to address these two issues. GAO completed this review to inform Congress and TSA of policy and implementation issues related to the concept of a registered traveler program.
What GAO Found
Under a variety of approaches related to the concept of a registered traveler program proposed by industry stakeholders, individuals who voluntarily provide personal background information and who clear background checks would be enrolled as registered travelers. Because these individuals would have been pre-screened through the program enrollment process, they would be entitled to expedited security screening procedures at the airport. Through a detailed literature review and interviews with stakeholders, GAO found that a registered traveler program is intended to reduce the inconvenience many travelers have experienced since September 11 and improve the quality and efficiency of airport security screening. Although GAO found support for this program among many stakeholders, GAO also found concerns that such a program could create new aviation security vulnerabilities. GAO also identified a series of key policy and program implementation issues that affect the program, including (1) criteria for program eligibility; (2) the level of background check required for participation; (3) security-screening procedures for registered travelers; (4) technology options, including the use of biometrics to verify participants; (5) program scope, including the numbers of participants and airports; and (6) program cost and financing options. Stakeholders offered many different options for resolving these issues. Finally, GAO identified several best practices that Congress and TSA may wish to consider in designing and implementing a registered traveler program. GAO concluded that a registered traveler program is one possible approach for managing some of the security vulnerabilities in our nation's aviation systems. However, decisions concerning key issues are needed before developing and implementing such a program. TSA felt that GAO's report offered a good overview of the potential and the challenges of a registered traveler program. The agency affirmed that there are no easy answers to some of the issues that GAO raised and that these issues need more study.
<1. Background> Through its disability compensation program, VA pays monthly benefits to veterans with service-connected disabilities. Under its disability pension program, VA pays monthly benefits to low-income veterans who have disabilities not related to their military service or are age 65 or older. VA also pays compensation to the survivors of certain veterans who had service-connected disabilities and of servicemembers who died while on active duty. Veterans and their survivors claim benefits at one of the Veterans Benefits Administration s (VBA) 57 regional offices. Once the claim is received, a service representative assists the veteran in gathering the relevant evidence to evaluate the claim. Such evidence includes the veteran s military service records, medical examinations, and treatment records from VA medical facilities and private medical service providers. Also, if necessary for reaching a decision on a claim, the regional office arranges for the veteran to receive a medical examination. Once all necessary evidence has been collected, a rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If so, the rating specialist assigns a percentage rating. Veterans with multiple disabilities receive a single composite rating. Since 2001, VBA has created 15 resource centers that are staffed exclusively to process claims or appeals from backlogged regional offices. Most of these centers focus either on making rating decisions, or on developing the information needed to evaluate claims. In addition to the traditional claims process, any member of the armed forces who has seen active duty including those in the National Guard or Reserves is eligible to apply for VA disability benefits prior to leaving military service through VA s Benefits Delivery at Discharge (BDD) program or the related Quick Start program. In 2006, VA completed its consolidation of BDD rating activity into its Salt Lake City, Utah, and Winston-Salem, North Carolina, regional offices, to increase the consistency of BDD claims decisions. Also, under the Department of Defense (DOD) VA disability evaluation system pilot program, servicemembers undergoing disability evaluations, if found medically unfit for duty, receive VA disability ratings. This rating covers both the unfitting conditions identified by the military service and conditions identified by the servicemember during the process. The rating is used by both DOD and VA to determine entitlement for disability benefits. Enacted in October 2008, the Veterans Benefits Improvement Act of 2008 required VA to contract for an independent, 3-year review of VBA s quality assurance program. This review is to include, among other items, assessments of the accuracy of disability ratings and their consistency across VA regional offices. VA contracted with the Institute for Defense Analyses (IDA) to conduct this study. According to VA, IDA will provide preliminary findings in the Summer of 2010, and VA is scheduled to report to the Congress in October 2011. <1.1. STAR Program> Under the STAR program, which was implemented in fiscal year 1999, VBA selects a random sample of completed claims decisions each month from each of its regional offices to review for accuracy. STAR reviewers assess decision accuracy using a standard checklist. 
For decisions affecting benefit entitlement, this review includes an assessment of whether (1) all issues in the claim were addressed; (2) assistance was provided to the claimant, as required by the Veterans Claims Assistance Act of 2000; and (3) the benefit entitlement decision was correct. If a claim has any error, VBA counts the entire claim as incorrect for accuracy rate computation purposes. The STAR reviewer then returns the case file and the results of the review to the regional office that made the decision. If an error was found, the regional office is required to either correct it or request reconsideration of the error determination. VBA uses the national accuracy rate from STAR reviews of compensation entitlement decisions as one of its key claims processing performance measures. VA also uses STAR data to estimate improper compensation and pension benefit payments. <1.2. Consistency Review Activities> One VA consistency review activity involves conducting studies of regional offices decisions on specific conditions such as post-traumatic stress disorder where VBA found differences, such as in benefit grant rates, across regional offices through comparative statistical analysis. VBA uses the results of these reviews to identify root causes of inconsistencies and to target training. Under another VA consistency review activity, called inter-rater reliability reviews, VBA provides rating specialists a sample case file to assess how well raters from various regional offices agree on an eligibility determination when reviewing the same body of evidence. These reviews allow VBA officials to target a single rating issue and take remedial action to ensure the consistent application of policies and procedures nationally. <2. VA Has Implemented Procedures to Address Deficiencies Identified with the STAR Program, but Continues to Face Challenges in Improving Accuracy> Over the past decade, VBA has taken several actions to improve its STAR program and to address deficiencies identified by both GAO and VA s OIG. For example, in March 1999, we found that STAR review staff lacked sufficient organizational independence because they were also responsible for making claims decisions and reported to regional office managers responsible for claims processing. In response to our findings, VBA took steps to address this by utilizing reviewers who do not process claims and who do not report to managers responsible for claims processing. More recently, in February 2008, we found that STAR was not sampling enough initial pension claims to ensure the accuracy of pension claims decisions. Because initial pension claims constituted only about 11 percent of the combined compensation and pension caseload subject to accuracy review, few were likely to be included in the STAR review sample. We recommended that VBA take steps to improve its quality assurance review of initial claims, which could include reviewing a larger sample of pension claims. According to VBA, it has addressed this issue by consolidating pension claims processing in its three Pension Management Centers and establishing a separate STAR sample for pension claims. During fiscal year 2009, VBA began reviewing more pension claim decisions and reported that, for fiscal year 2009, its pension entitlement accuracy was 95 percent, exceeding its goal. 
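The arithmetic behind the STAR accuracy rate is straightforward: a claim with any error counts as incorrect, and the accuracy rate is the share of sampled claims with no errors. The sketch below illustrates this computation on a hypothetical sample; the offices, error descriptions, and results are invented for illustration.

```python
# Minimal sketch of the accuracy computation described above, under the stated
# rule that a claim with any error counts as incorrect. The sampled records
# below are invented for illustration.
sampled_claims = [
    {"office": "Office A", "errors": []},
    {"office": "Office B", "errors": ["entitlement decision incorrect"]},
    {"office": "Office A", "errors": []},
    {"office": "Office C", "errors": ["issue not addressed", "required assistance not provided"]},
    {"office": "Office B", "errors": []},
]

def accuracy_rate(claims) -> float:
    """Share of sampled claims with no recorded errors of any kind."""
    correct = sum(1 for claim in claims if not claim["errors"])
    return correct / len(claims)

print(f"Sample accuracy rate: {accuracy_rate(sampled_claims):.0%}")  # 60% for this toy sample
```

A larger or differently drawn sample would, of course, yield a different estimate, which is why the size and composition of the STAR sample matter.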
In a September 2008 report, we noted that VA lacked sufficient and specific performance measures for assessing the accuracy of decisions on BDD claims and recommended that VA consider options for separately estimating the accuracy of such claims decisions. VA conducted an analysis of the costs of sampling pre-discharge claims as part of STAR and concluded that the costs would outweigh possible, unquantifiable benefits. VA also noted that the two sites that rate BDD claims surpassed the national average in accuracy for claims overall. While generally responsive to our recommendation, VA s analysis did not specifically review the accuracy of BDD claims relative to traditional claims. Moreover, because BDD claims do not comprise all claims reviewed at the two rating sites, we continue to believe VA s analysis was not sufficient to estimate the relative accuracy of BDD claims at these sites. While we agree that the benefits of reviewing accuracy are difficult to measure, if VA had better information on the accuracy of BDD claims, VA could use such information to inform training and focus its monitoring efforts. In contrast, VA currently performs STAR reviews that target rating decisions made by its Baltimore and Seattle offices under the DOD-VA disability evaluation system pilot program. Such a targeted review could also be conducted for BDD claims. In its March 2009 report, VA s OIG also identified several deficiencies in the STAR program and recommended corrective actions. The OIG found that (1) regional offices did not always submit all requested sample cases for review, (2) reviewers did not evaluate all documentation in sample files, and (3) reviewers were not properly recording some errors. The OIG also found that VBA was not conducting STAR reviews of redistributed cases (for example, claims assigned to resource centers for rating). The OIG reviewed a sample of redistributed claims and found that 69 percent had accurate entitlement decisions, well below VBA s reported rate of 87 percent for the 12-month period ending in February 2008. Further, the OIG found that VBA did not have minimum training requirements for STAR reviewers. As of March 2010, VBA had taken actions to respond to all of the OIG s recommendations related to STAR, including (1) implementing procedures to follow up on cases not submitted by regional offices; (2) adding a mechanism to the STAR database to remind reviewers of key decision points; (3) requiring a second-level review of STAR reviewers work; and (4) establishing a requirement that STAR reviewers receive 80 hours of training per year. In addition, during fiscal year 2009, based in part on the OIG s recommendation, VBA also began monitoring the accuracy of claims decided by rating resource centers as it does for regional offices. As we noted in our January 2010 report, VBA has significantly expanded its practice of redistributing regional offices disability claims workloads in recent years, and gathering timeliness and accuracy data on redistributed claims could help VBA assess the effectiveness of workload redistribution. In addition, as the Congress has provided more resources to VBA to increase compensation and pension staffing, VBA has devoted more resources to quality review. In fiscal year 2008, VBA more than doubled the size of the quality assurance staff, allowing it to increase the scope of quality assurance reviews. 
VA states that in the 12-month period ending in May 2009, STAR staff reviewed over 14,000 compensation and pension benefit entitlement decisions. Although VBA has taken steps to address deficiencies in the STAR program, the accuracy of its benefit entitlement decisions has not improved. The accuracy rate was 86 percent in fiscal year 2008 and 84 percent in fiscal year 2009, well short of VBA s fiscal year 2009 goal of 90 percent. VA attributed this performance to the relatively large number of newly hired personnel conducting claims development work and a general lack of training and experience. Human capital challenges associated with providing the needed training and acquiring the experience these new claims processors need to become proficient at their jobs will likely continue in the near future. According to VBA officials, it can take 3 to 5 years for rating specialists to become proficient. <3. VA Has Taken Actions to Strengthen Efforts to Monitor Consistency of Claims Decisions> VA has taken actions to address deficiencies identified with its consistency review programs, but it is still too early to determine whether these actions will be effective. In prior work, we reported that VBA did not systematically assess the consistency of decision making for any specific impairments included in veterans disability claims. We noted that if rating data identified indications of decision inconsistency, VA should systematically study and determine the extent and causes of such inconsistencies and identify ways to reduce unacceptable levels of variations among regional offices. Based on our recommendation, VBA s quality assurance staff began conducting studies to monitor the extent to which veterans with similar disabilities receive consistent ratings across regional offices and individual raters. VBA began these studies in fiscal year 2008. VBA identified 61 types of impairments for consistency review and conducted at least two inter-rater reliability reviews, which found significant error rates. In its March 2009 report, the OIG noted that, while VBA had developed an adequate rating consistency review plan, including metrics to monitor rating consistency and a method to identify variances in compensation claim ratings, it had not performed these reviews as scheduled. In fact, VBA had initiated only 2 of 22 planned consistency reviews in fiscal year 2008. The OIG reported that VBA had not conducted these reviews because STAR staffing resources were not sufficient to perform all of their assigned responsibilities and noted that VBA s quality review office had not staffed all of its authorized positions. In addition, the OIG found that inter-rater reliability reviews were not included in VBA s quality assurance plan. The OIG recommended that VBA (1) develop an annual rating consistency review schedule and complete all planned reviews as scheduled; (2) dedicate sufficient staff to conduct consistency reviews in order to complete planned workload and reviews; and (3) include inter- rater reliability reviews as a permanent component of its consistency review program. VBA reported that it has developed an annual consistency review schedule and is in the process of conducting scheduled fiscal year 2010 reviews. As of January 2010, VBA also added six staff members to perform quality assurance reviews. Further, VBA incorporated inter-rater reliability reviews into its fiscal year 2009 quality assurance plan. 
Because VBA has only recently implemented these initiatives, it is too early to determine their impact on the consistency of claims decisions. <4. Conclusion> Over the years, VA has been challenged in its efforts to ensure that veterans get the correct decisions on disability claims the first time they apply for them, regardless of where the claims are decided. Making accurate, consistent, and timely disability decisions is not easy, but it is important. Our veterans deserve timely service and accurate decisions regardless of where their claims for disability benefits are processed. To fulfill its commitment to quality service, it is imperative that VA continue to be vigilant in its quality assurance efforts, as this challenge will likely become even more difficult as aging veterans and veterans returning from ongoing conflicts add to VA s workload. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or Members of the Subcommittee may have at this time. <5. GAO Contact and Staff Acknowledgments> For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the contact named above, key contributors to this statement include Shelia Drake, Jessica Orr, Martin Scire, and Greg Whitney. Related GAO Products Veterans Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213. Washington, D.C.: January 29, 2010. Veterans Disability Benefits: Preliminary Findings on Claims Processing Trends and Improvement Efforts. GAO-09-910T. Washington, D.C.: July 29, 2009. Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process. GAO-08-1137. Washington, D.C.: September 24, 2008. Veterans Disability Benefits: Better Accountability and Access Would Improve the Benefits Delivery at Discharge Program. GAO-08-901. Washington, D.C.: September 9, 2008. Veterans Benefits: Improved Management Would Enhance VA s Pension Program. GAO-08-112. Washington, D.C.: February 14, 2008. Veterans Benefits: Further Changes in VBA s Field Office Structure Could Help Improve Disability Claims Processing. GAO-06-149. Washington, D.C.: December 9, 2005. Veterans Benefits: VA Needs Plan for Assessing Consistency of Decisions. GAO-05-99. Washington, D.C.: November 19, 2004. VA Disability Benefits: Routine Monitoring of Disability Decisions Could Improve Consistency. GAO-06-120T. Washington, D.C.: October 20, 2005. Veterans Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003. Veterans Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002. Veterans Benefits: Quality Assurance for Disability Claims Processing. GAO-01-930R. Washington, D.C.: August 23, 2001. Veterans Benefits Claims: Further Improvements Needed in Claims- Processing Accuracy. GAO/HEHS-99-35. Washington, D.C.: March 1, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study For years, in addition to experiencing challenges in making disability claims decisions more quickly and reducing its claims backlog, the Department of Veterans Affairs (VA) has faced challenges in improving the accuracy and consistency of its decisions. GAO was asked to discuss issues surrounding VA's Systematic Technical Accuracy Review (STAR) program, a disability compensation and pension quality assurance program, and possible ways, if any, this program could be improved. This statement focuses on actions VA has taken; including those in response to past GAO recommendations, to (1) address identified weaknesses with STAR and (2) improve efforts to monitor the consistency of claims decisions. This statement is based on GAO's prior work, which examined several aspects of STAR, as well as VA's consistency review activities, and on updated information GAO obtained from VA on quality assurance issues that GAO and VA's Office of Inspector General (OIG) have identified. GAO also reviewed VA's OIG March 2009 report on STAR. GAO is not making any new recommendations. What GAO Found Over the past several years, GAO has identified several deficiencies with the Veterans Benefit Administration's (VBA) STAR program, and although VBA has taken actions to address these issues, it continues to face challenges in improving claims accuracy. For example, GAO found that STAR reviewers lacked organizational independence, a basic internal control principle. In response to our finding, VA began utilizing organizationally independent reviewers that do not make claims decisions. GAO also found that sample sizes for pension claims were insufficient to provide assurance about decision accuracy. In response to GAO's recommendation, in fiscal year 2009, VA began increasing the number of pension claims decisions it reviews annually at each of its offices that process pension decisions. VA has also taken a number of other steps to address weaknesses that VA's OIG found in the STAR program, including (1) establishing minimum annual training requirements for reviewers and (2) requiring additional supervisory review of STAR reviewers' work. Although it has made or has started making these improvements, VBA remains challenged to improve its decision accuracy for disability compensation decisions, and it has not met its stated accuracy goal of 90 percent. VBA's performance has remained about the same over the past several fiscal years. In addition, VA has taken steps to address deficiencies that GAO and the VA's OIG have identified with consistency reviews--assessments of the extent to which individual raters make consistent decisions on the same claims. For example, in prior work, GAO reported that VA did not conduct systematic studies of impairments that it had identified as having potentially inconsistent decisions. In response to GAO's recommendation, in fiscal year 2008, VBA's quality assurance staff began conducting studies to monitor the extent to which veterans with similar disabilities receive consistent ratings across regional offices and individual raters. However, last year, VA's OIG reported that VA had not followed through on its plans to conduct such reviews. In response to this and other OIG findings and recommendations, VA took a number of actions, including developing an annual consistency review schedule and hiring additional quality assurance staff. 
However, VBA has only recently begun these programs to improve consistency, and it is too early to assess the effectiveness of their actions.
<1. Background> GSA follows a prescribed process for the disposal of federal properties that are reported as excess by federal agencies a process that can take years to complete. GSA first offers excess property to other federal agencies. If no federal agency needs it or homeless provider expresses an interest in it, the property becomes surplus and may be made available for other uses through a public benefit conveyance, when state and local governments, and certain nonprofits, can obtain the property at up to 100 discount of fair market value when it is used for public purposes, such as an educational facility. Ultimately, the property may be disposed of by a negotiated sale for public use or public sale based on GSA s determination of the property s highest and best use. GSA collects rent from tenant agencies, which is deposited in the Federal Buildings Fund (FBF) and serves as GSA s primary source of funding for operating and capital costs associated with federal real property. Congress exercises control over the FBF through the appropriations process, which designates how much of the fund can be obligated for new construction and maintenance each fiscal year. According to GSA, capital funding has not kept pace with GSA s need to replace and modernize buildings in its federal real property portfolio, which includes about 1,500 buildings. We have recently found that GSA and other federal agencies have pursued alternative approaches to address challenges with funding federal real property projects. One alternative approach is a swap-construct exchange between the federal government and a nonfederal entity, such as a private developer. GSA has several authorities to exchange federal property for constructed assets and, in 2005, was specifically authorized to exchange federal property for construction services. Swap-construct exchanges can be proposed by a nonfederal entity, such as a private developer or local government, or by GSA. GSA s process for proposing and conducting a swap-construct exchange includes either proposing a swap-construct exchange to a nonfederal entity that has expressed an interest in acquiring a specific federal property or soliciting market interest through an initial proposal, often an RFI, followed by more detailed proposals. These more detailed proposals include requests for qualifications (RFQ) to identify qualified developers and requests for proposals (RFP). In a swap-construct exchange, the federal government transfers the title of the federal property to a developer or other property recipient after receiving a constructed asset or the completion of construction services at a different location. Swap-construct exchanges can involve swapping property and constructed assets or construction services that are of equal value or can include cash to compensate for a difference in value between the federal property and the asset or services to be received by the government. According to GSA, highest priority is assigned to swap-construct exchanges that involve exchanges of federal property of equal or greater value than the asset or services provided by the property recipient because these scenarios do not require appropriation of federal funding. Figure 1 describes GSA s decision- making process for proposing swap-construct exchanges and the three scenarios that can result from an agreement for a swap-construct exchange. 
According to GSA, once the agency has decided to pursue an exchange for a newly constructed asset or services, it follows GSA s 1997 guidance for real property exchanges of non-excess property. The guidance lays out a number of steps, including: obtaining a property appraisal; using, if possible, one appraiser for all properties involved in an analysis and documentation of all benefits and costs of the exchange to show why the exchange is in the best interest of the government. In November 2013, the GSA Inspector General issued a memo noting that GSA s 1997 guidance is not specifically applicable to exchanges of real property for services. In responding to the memo, GSA stated that it was in the process of preparing guidance specific to exchanges for services. <2. GSA s Experience with Swap-Construct Is Limited to Two Exchanges for Parking Garages That Were Initiated by Private Sector> Since 2000, GSA has completed two swap-construct exchanges initiated by companies Emory University Hospital Midtown (then called Emory Crawford Long Hospital) and H. E. Butt Store Property Company No. One (HEBSPC) that were interested in acquiring specific federal properties in Atlanta, GA and San Antonio, TX, respectively. A now-retired representative of Emory University Hospital Midtown and representatives of HEBSPC told us that they were satisfied with the end result of the exchanges, but added that there were challenges in the process that may affect future swap-construct exchanges. Specifically, the representatives told us that the exchanges took longer than anticipated, about 3 years in Atlanta and over 5 years in San Antonio, and that, consequently, less motivated parties may avoid or withdraw from future exchanges. GSA officials told us that both exchanges were a good value for the government because the properties and services received by the government were of equal or greater value than the federal properties disposed of in the exchanges. GSA officials added that the exchanges were a good value for the government because both of the assets disposed of were underutilized. However, these officials noted their lack of experience with swap-construct exchanges at the time. Atlanta Swap-Construct In 2001, GSA exchanged a federal parking garage (the Summit Garage) in Atlanta with 1,829 spaces on a 1.53-acre parcel with Emory University Hospital Midtown for a newly constructed parking garage (the Pine Street Garage) with 1,150 spaces on .92 acres (see fig. 2). GSA also received a commitment from the hospital to lease and manage the operations and maintenance of the Pine Street Garage for 16 years and to lease spaces in it to federal employees. According to GSA, at the time of the exchange, the Summit Garage was underutilized because it included more parking spaces than GSA needed. Although GSA utilized some of the extra spaces through a lease agreement with the hospital, which is located nearby, the garage was, GSA added, in deteriorating condition and was not it compliance with the Americans with Disabilities Act (ADA). According to GSA, the swap-construct exchange was in the best interest of the government because GSA received a new ADA-compliant garage with a direct covered connection to both the Peachtree Summit Federal building and a Metropolitan Atlanta Rapid Transit Authority (MARTA) subway station in exchange for a garage that was underutilized and in deteriorating condition. 
GSA added that the exchange was beneficial to the government because it included the hospital s commitment to lease spaces not needed by the government for 16 years, with proceeds deposited into the FBF, and to cover operations and maintenance work typically covered by GSA. According to the now-retired representative of Emory University Hospital Midtown who was involved with the swap-construct exchange, the acquisition of the Summit Garage was crucial to accommodating a hospital renovation and expansion project. However, the hospital was aware that GSA needed parking spaces to accommodate federal tenants in the Peachtree Summit Federal Building, so it proposed the swap- construct approach to GSA. The representative added that, although the hospital was pleased with the end result of the transaction, the exchange took about 3 years to complete, a time that was longer than anticipated for the hospital and that may lead less motivated parties to avoid or withdraw from future exchanges. GSA officials noted that the agency had limited experience with this type of exchange, which may have contributed to the length of time required to complete it. The representative added that the exchange was also complicated in that the appraised value of the new garage and any additional services had to be equal to the appraised value of the Summit Garage. The now-retired representative added that to address concern that the new and smaller garage might appraise for less than the Summit Garage, the hospital agreed with GSA to continue leasing spaces in the new garage and cover operations and maintenance costs. As a result, the two parts of the exchange the garages and lease agreements were equally appraised at $6.6 million. According to GSA, although the hospital s lease in the new garage expires in 2017, the size of the garage allows the agency to meet continued demand for federal parking in the vicinity of the Peachtree Summit Federal Building. San Antonio Swap-Construct In 2012, GSA exchanged an approximately 5-acre federal property (the Federal Arsenal site) in San Antonio, TX, with HEBSPC for construction of a parking garage on existing federal land for the recently renovated Hipolito F. Garcia Federal Building and U.S. Courthouse (see fig. 3). According to GSA, at the time of the exchange, the Federal Arsenal site was an underutilized asset because of historical covenants limiting the ability to redevelop the land and its buildings and because it was located on the periphery of the city away from other federal assets. Although the property was partly utilized by GSA s Fleet Management and through a lease agreement with HEB Grocery Company (HEB) for parking spaces, GSA officials told us that there was no anticipated long-term government need for it. According to GSA officials, the swap-construct exchange was in the best interests of the federal government because the government received a new federal parking garage for the Hipolito F. Garcia Federal Building and U.S. Courthouse in exchange for a property that was underutilized. HEBSPC representatives told us that the company was interested in acquiring the Federal Arsenal site to accommodate existing space needs and potential expansion of HEB s corporate headquarters, near the site and expressed this interest to GSA. The representatives added that although the historic covenants on the property presented some potential challenges, the company had prior experience renovating and utilizing historic properties on HEB s headquarters property. 
The representatives also told us that the company had a long-standing interest in acquiring the Federal Arsenal site prior to 2005, but during that time, the property could not be sold because it was being partly used by GSA s Fleet Management. In 2005, however, GSA told HEBSPC about the need for an additional parking for the Hipolito Garcia Federal Building and U.S. Courthouse and, subsequently, proposed the swap-construct exchange to HEBSPC, which had experience building parking garages. An official from one of the tenant agencies in the federal building and courthouse told us that the increased availability of parking with the new garage (150 new spaces compared with 35 existing spaces) was one of the reasons the agency decided to locate in the building. GSA officials told us that the availability of the new parking spaces is critical to further attracting tenants to the building, which is not fully occupied. HEBSPC representatives told us the company was pleased with the transaction and GSA s management of the transaction. However, they added that they would have preferred it to have been completed quicker than the 5-plus years between the proposal and exchange of properties, and noted that the time it took to complete the transaction may lead less motivated parties to avoid or withdraw from such exchanges. According to GSA officials, the transaction took longer than anticipated because GSA did not have significant experience to use as a basis for completing the transaction and because of fluctuations in real estate values due to the economic recession that required additional property appraisals to be completed. After four property appraisals between 2007 and 2009, GSA and HEBSPC ultimately valued the Federal Arsenal site at $5.6 million. According to GSA, the new parking garage was constructed to fully utilize the $5.6 million value of the property that HEBSPC received. <3. GSA Is Pursuing Half of Its Six Swap- Construct Proposals> Since August 2012, GSA has proposed six swap-construct exchanges one that the agency proposed directly to the City of Lakewood, CO, and five in which GSA solicited market interest in exchanging federal property, totaling almost 8-million square feet, for construction services or newly constructed assets. After reviewing responses to these six proposals, GSA is actively pursuing three, including: (1) a potential exchange of undeveloped federal land in Denver with the City of Lakewood for construction services at the Denver Federal Center; (2) a potential exchange of the existing FBI headquarters building for a new FBI headquarters building; and (3) a potential exchange of two federal buildings in the Federal Triangle South area of Washington, D.C., for construction services to accommodate federal workers elsewhere in the city. According to GSA officials, although the agency has had authority to exchange property for construction services since fiscal year 2005 and had authority to exchange property for newly constructed assets prior to that, until recently there has been limited agency interest using non- traditional property disposal and acquisition approaches, such as swap- construct exchanges. The officials added that since 2012 the agency has more widely pursued swap-construct exchanges to address challenges such as a rising number of agency needs and limited budgetary resources. 
According to GSA officials, although the projects could involve exchanges of equal value, similar to the Atlanta and San Antonio exchanges, they could result in the government either receiving a payment or paying to cover any difference in value between the property to be exchanged and its construction projects. GSA decided to propose a swap-construct exchange to the City of Lakewood because the city had previously expressed interest in the undeveloped federal land, totaling about 60 acres, and because GSA had need for construction services at the nearby Denver Federal Center. A representative of the City of Lakewood told us that the city was supportive of the swap-construct approach because the services provided to GSA would support employment for the local population, whereas if the city were to purchase the property through a sale, the proceeds would not necessarily be spent locally. GSA told us that negotiations for a possible swap-construct exchange are ongoing. We found that respondents expressed openness or interest in the swap- construct approach regarding four of the five exchanges for which GSA solicited market interest, but generally this openness or interest was limited to the proposed consolidation of the FBI s headquarters operations into a new location in exchange for the existing FBI headquarters building and land. Several responses to GSA s RFIs did not address swap- construct and instead provided other information, such as the credentials of a particular developer and statements that GSA should ensure that affordable housing is included in the redevelopment of federal properties to be exchanged. Figure 4 describes swap-construct exchanges for which GSA solicited market interest and responses to its RFIs. For the proposed FBI headquarters swap-construct exchange, GSA officials told us that the agency anticipates identifying qualified developers by fall 2014 and awarding a contract to a developer for the transaction in summer 2015. For the proposed swap-construct exchange involving Federal Triangle South properties, GSA narrowed the scope of its proposed exchange after reviewing responses to its RFI. Specifically, in April 2014, the agency issued an RFQ to identify qualified developers for a potential exchange involving two of the five properties included in the RFI the Cotton Annex and the GSA Regional Office Building for renovations to GSA s headquarters building and construction services to support the Department of Homeland Security s headquarters consolidation in Washington, D.C. GSA officials told us that there was little or no market interest in potential swap-construct exchanges in Baltimore, MD (the Metro West building) and Miami, FL (the David W. Dyer Courthouse), and that different approaches were now being considered to address them. In addition, although GSA received some interest in a swap-construct exchange involving another property, the U.S. Courthouse at 312 N. Spring Street in Los Angeles (hereafter referred to as the Spring Street Courthouse ), GSA officials said the agency may need to pursue other approaches for this property as well. The respondents to these potential exchanges expressed various concerns. For example, 4 of 9 respondents expressed concerns about the lack of detail regarding what GSA would expect in return for the federal property and 4 of 9 respondents expressed concerns about the amount of investment needed in the federal properties to make the exchange profitable for the property s recipient. 
Three RFI respondents and representatives of one nongovernmental organization familiar with GSA s real property projects added that swap- construct may be a less viable approach in markets with a large number of alternative real estate options. According to developers and organizations familiar with GSA s swap- construct proposals, the two exchanges for which GSA solicited market interest and is still pursuing generally benefit from the inclusion of federal properties located in an area with high real-estate values and, thus, profitable redevelopment potential. Specifically, both properties are located in areas of Washington, D.C., near mass transit and prominent landmarks (see fig. 5). In addition, one of the potential projects the consolidation of the FBI headquarters operations into a new location benefits from a well defined scope with GSA s expectations for the construction priority being sought by the agency in exchange for the federal property considered for exchange in the proposal the J. Edgar Hoover Building. In 2011, GSA estimated that a new FBI headquarters built on federal land would cost about $1.9 billion. According to GSA, this estimate is out of date. <4. Swap-Construct Can Help Address GSA s Needs, but Level of Detail in GSA s Solicitations and Market Interest May Affect Future Use> <4.1. Swap-Construct Can Help Facilitate New Construction and Developer Access to Federal Properties, but at Potentially Greater Cost to Some Stakeholders Than the Traditional Disposal Approach> GSA officials told us that swap-construct exchanges can help GSA facilitate construction projects given a growing need to modernize and replace federal properties, shrinking federal budgets, and challenges getting funding appropriated from the FBF. Specifically, GSA officials noted that swap-construct exchanges allow GSA to immediately apply the value of a federal property to be used in the exchange to construction needs, rather than wait for funds to be made available from the FBF. GSA officials and a representative of a nongovernmental organization familiar with GSA s real property projects added that the exchanges can be attractive for GSA because the agency can get construction projects accomplished without having to request full upfront funding for them from Congress. In addition, because swap-construct exchanges require developers or other property recipients to address GSA s construction projects prior to the transfer of the title to the exchange property, federal agencies can continue to occupy the federal property during the construction process, eliminating the need for agencies to lease or acquire other space to occupy during the construction process. GSA officials also told us that swap-construct exchanges can help advance a government-wide goal to consolidate agencies out of leased space into federally owned space. For example, according to GSA, about half of the FBI s headquarters staff are located in the existing headquarters building and the potential swap-construct exchange for a new FBI headquarters could allow the agency to consolidate into one federally owned building. The retired Emory University Hospital Midtown representative and HEBSPC representatives added that swap-construct exchanges can help the private sector acquire federal property that it otherwise may not be able to acquire. While swap-construct can facilitate GSA s construction needs, it could come at a greater cost to some stakeholders than the traditional disposal approach. 
Specifically, because federal properties disposed of through swap-construct are not declared excess or surplus (often because they are still in use by federal tenants when the swap-construct is proposed and during the exchange process), they do not go through the traditional disposal process. Thus, the swap-construct approach may limit the participation of nonfederal entities that would have been interested in acquiring the properties through public benefit conveyance or other means. For example, in a typical property disposal, eligible public and nonprofit entities, such as institutions of higher education or homeless organizations, can receive the federal property at up to a 100 percent discount of fair market value when it is used for a variety of qualified purposes, such as education and assistance for the homeless. Two institutions of higher education that responded to GSA s solicitations for a swap-construct exchange expressed a preference for GSA to use the traditional disposal process because the universities could then obtain it by public benefit conveyance. A representative of a national advocacy group for the homeless expressed concern that swap-construct could serve as a way around the traditional disposal process and believes GSA should offer public benefit conveyances prior to proposing swap-construct exchanges. <4.2. GSA Does Not Always Clearly Identify Its Needs in Its Solicitations for Market Interest in Swap- Construct Exchanges> Swap-construct exchanges require developers to make potentially large investments in federal construction projects prior to receiving title to federal property used in the exchanges. GSA s solicitations for market interest in swap-construct projects do not always clearly identify what projects the agency is seeking in exchange for the federal property. For example, the RFIs for the potential Dyer Courthouse and Metro West swap-construct exchanges did not specify what GSA was seeking as part of an exchange. Two respondents to the Metro West RFI told us that additional details regarding what GSA expects in return for the property would be key to future consideration of a swap-construct exchange. In addition, one developer we spoke to told us that the lack of detail regarding what GSA expected in return for the Metro West property influenced his company s decision not to respond to the RFI. One of the four respondents to the Spring Street Courthouse RFI added that although GSA specified a need for a new building in exchange for the Spring Street Courthouse, it was not clear that the new building was a GSA priority. Specifically, the respondent noted that future swap- construct exchanges may benefit from additional information on GSA s needs, such as a strategic plan for a region where GSA is proposing a swap-construct exchange. GSA officials also told us that the agency does not always identify its needs prior to releasing its RFIs for swap-construct exchanges. OMB guidance notes that although federal agencies should not specify requirements too narrowly in RFIs, agencies should identify clear agency needs in the documents. Leading practices also note the importance of identifying an agency s needs and being transparent about these needs. GSA officials acknowledged that while details were not always specified in RFIs for swap-construct exchanges, details would be specified in subsequent solicitations if GSA determines there is enough market interest based on the RFI responses. 
GSA officials also stated that fewer details were included in the RFIs because the agency wanted to gauge market interest in the swap-construct transaction structure and did not want to limit the creativity of potential RFI respondents. However, by not providing some detail on the agency s needs in its RFIs, GSA risks limiting respondents ability to provide meaningful input and could miss potential swap-construct opportunities for the properties. <4.3. Various Factors May Affect the Applicability of Swap- Construct Exchanges, and GSA Lacks Criteria for Identifying Good Exchange Candidates> GSA has generated interest in swap-construct for some projects, as previously discussed, but several factors may limit the applicability of the agency s approach. Three of the four RFI respondents and one of the two nongovernmental organizations we spoke to noted that the federal property to be exchanged should have high redevelopment potential to offset the developers risk of delayed access to the property until providing GSA with its needed asset or construction services. Specifically, a developer may have to expend significant time and money addressing GSA s needs for a new building or renovating an existing federal building before receiving, redeveloping, and generating revenue from the swapped federal property. GSA officials told us that it might be possible to negotiate some early rights of access to the federal property before the transfer of the property title to conduct activities such as site preparation and demolition work, but at a developer s risk. According to representatives of the two nongovernmental organizations we spoke to, GSA should also consider local market conditions in deciding if a property is suitable for swap-construct because developers can often purchase or lease similar properties they need from the private sector and quickly access them for redevelopment. For example, a representative of a firm that advises developers noted that the FBI headquarters building is located in an area of Washington, D.C., with high potential for profitable redevelopment and that there are few other similar properties available to developers. In contrast, a Metro West RFI respondent and a Spring Street Courthouse RFI respondent expressed concern that the federal properties included in those exchanges, in Baltimore and Los Angeles, respectively, may not have sufficient redevelopment potential to offset the risks associated with delayed transfer of title under a swap-construct approach. Potential complications with exchanging property in one region for a constructed asset or construction services in another region may also limit the applicability of swap-construct exchanges. Specifically, GSA officials told us that the pool of potential bidders is smaller and community and political opposition can be higher when removing federal assets from one region for a constructed asset or construction services in another. In addition, the officials said project management can be more difficult for GSA when an exchange is executed across different regions. Consequently, the officials told us they try to locate the desired constructed asset or construction services in the same region as the federal property to be exchanged. 
GSA officials added that many underutilized federal properties are not suitable for swap-construct because they are in locations where GSA has limited needs for new assets or construction services or because the federal properties are not sufficiently desirable or would require too much investment from a developer. A representative of the firm that advises developers added that while the swap-construct approach gives GSA greater control over the proceeds from a property disposal, the federal government may get a better deal for a new asset or construction services and potentially larger proceeds for the disposed federal property if it were to use traditional acquisition and disposal methods. In particular, the representative noted that developers may be willing to pay more for federal property through a sale because the developers could gain immediate access to the property for redevelopment purposes. Similarly, the representative told us that GSA may get a better deal on a new asset or construction services it if were to pursue them through a traditional acquisition process because it would invite more developer competition into the process, unlike in a swap- construct approach where a developer would also need to be willing to receive federal property as consideration. While GSA has guidance for determining if it should continue to pursue an exchange that has already been proposed, it does not have criteria to help determine when the agency should solicit interest in a swap- construct exchange. According to GSA officials, the agency considers possible swap-construct exchanges on a case-by-case basis during its annual review of its entire federal real property portfolio, but it lacks guidance on how that case-by-case analysis should be conducted. GSA officials added that because the agency only recently started using the swap-construct approach, it does not have screening criteria for determining when a swap-construct exchange should be proposed. Moreover, we found that some proposed swap-construct exchanges have been driven by GSA s need to dispose of specific federal properties and that, as previously discussed, GSA has not given the same amount of consideration to construction projects to include in its proposed exchanges. For example, in the Metro West and Dyer Courthouse swap- construct proposals, GSA identified federal properties to be exchanged, but little or no information on construction projects it needed in a potential exchange. GSA has proposed swap-construct exchanges since 2012 to a mixed reception, as previously noted, with little or no interest in exchanges involving the Dyer Courthouse in Miami, the Spring Street Courthouse in Los Angeles, and the Metro West building Baltimore, and high level interest in an exchange only for the FBI headquarters consolidation project. Both OMB and GAO guidance emphasize the importance of using criteria to make capital-planning decisions. By not using screening criteria to identify potentially successful swap-construct exchanges, the agency may miss the best opportunities to leverage swap-construct exchange or select properties for exchange that are better suited to the traditional property disposal process and construction projects that are better suited to traditional funding processes. GSA may also waste time and money pursuing a potential swap-construct exchange that could be better spent pursuing these traditional approaches. <5. 
Conclusions> GSA faces some key challenges in managing its federal real property portfolio, especially in disposing of unneeded federal property and financing the replacement or modernization of aging and underutilized properties. In some cases, the swap-construct approach discussed in this report might be a useful means through which GSA can more readily achieve these property-related goals. However, GSA s recent solicitations for market interest in swap-construct have not always been well received by potential bidders. Specifically, of the five swap-construct exchanges GSA for which GSA solicited market interest since 2012, only two are being actively pursued; the others generated little market interest. One concern for potential bidders was the lack of detail regarding the construction services that GSA hoped to gain in return for an asset it would cede to the bidder. We found that in developing initial proposals for a swap-construct exchange GSA often focused on identifying assets to dispose of and gave less attention to what it needed in exchange for those assets. Construction services or a newly constructed asset are fully half of any swap-construct exchange, yet GSA has not always clearly identified its needs when requesting feedback from potential bidders. The agency s intent may be to provide greater details at later stages of the proposal process, but this approach may limit the ability of respondents to provide meaningful input and lead to missed swap-construct opportunities for GSA. At present GSA does not have criteria for identifying viable exchanges in the sense that both sides of the potential transaction are fully defined and communicated to potential interested parties. OMB and GAO have previously identified the importance of criteria in making agency decisions. By not using screening criteria to make its choices, GSA may be pursuing swap-construct exchanges with less potential for success, and potentially delaying time that it could be spending on traditional disposal and appropriation processes. Similarly, GSA may also miss opportunities to leverage swap-construct more widely moving forward, which could be crucial given ongoing budgetary challenges. <6. Recommendations> In order to identify potentially successful swap-construct exchanges during GSA s review of its federal real property portfolio and reduce uncertainty for those responding to GSA s solicitations for possible swap- construct exchanges, we recommend that the Administrator of GSA take the following two actions: 1. include, to the extent possible, details on what GSA is seeking in exchange for federal property in its solicitations, including requests for information, for potential swap-construct exchanges and 2. develop criteria for determining when to solicit market interest in a swap-construct exchange. <7. Agency Comments> We provided a draft of this report for review and comment to GSA. GSA concurred with the report s recommendations and provided additional information on the proposed swap-construct exchange with the City of Lakewood, Colorado, which we incorporated. GSA s letter is reprinted in appendix II. As arranged with your offices, unless you publicly disclose the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Administrator of GSA. Additional copies will be sent to interested congressional committees. 
We will also make copies available to others upon request, and the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine (1) GSA s experiences with completed swap-construct exchanges; (2) the status of GSA s proposed swap- construct exchanges; and (3) the potential benefits of swap-construct exchanges and the factors that can influence their future use. We described GSA s swap-construct process using information gathered from GSA guidance and interviews with GSA officials. In addition, we reviewed related laws that facilitate GSA s swap-construct exchanges. To determine GSA s experience with swap-construct exchanges, we identified and reviewed the two swap-construct exchanges (Atlanta, GA, and San Antonio, TX) completed by GSA since 2000 through GSA exchange agreement documentation, appraisal reports, and property descriptions, and through interviews with GSA officials. We conducted site visits to Atlanta and San Antonio, examined the properties involved in the exchanges, and interviewed GSA officials and nonfederal participants H. E. Butt Store Property Company No. One (HEBSPC) and Emory University Hospital Midtown about their experience with the transactions. To determine the status of GSA s proposed swap-construct exchanges, we identified and reviewed the six proposed swap construct exchanges two in Washington, D.C., and one each in Miami, FL; Los Angeles, CA; Baltimore, MD; and Lakewood, CO using GSA documentation, including GSA solicitations for possible exchanges, known as requests for information (RFI), and through interviews with GSA officials. We conducted site visits to three of the properties involved in the proposed exchanges (the Cotton Annex and Regional Office Building in Washington, D.C and the Metro West building in Baltimore, MD), examined the properties, and spoke with GSA officials about the RFIs that included these properties. We selected these properties based on nearby proximity (within a 50-mile radius) and to include a site visit to both a location where the property or properties in the RFI generated 10 or more responses and to a location were the property or properties in the RFI generated fewer than 10 responses. To further identify a property or properties to visit, we then limited our selection to property or properties that were furthest along in GSA s proposed swap-construct process. In addition, to better understand the status of these proposed exchanges, we analyzed the responses GSA received to its solicitations for these swap-construct exchanges and discussed the proposed exchanges with four of the seven respondents to the Metro West and Spring Street Courthouse RFIs. We did not interview RFI respondents to the proposed swap-construct exchanges that involved the FBI headquarters and Federal Triangle South properties since GSA is actively in discussions or negotiations with these respondents. We selected our sample of the respondents to include a variety of respondents, including a development company, firm that advises developers, a university, and a company that provides property management services to the government. 
Because the RFI respondents were selected as a nonprobability sample, the information gained in these interviews cannot be generalized to make conclusions about all of GSA s swap-construct exchanges. However, they illustrate the views of a diverse set of respondents with experience related to these exchanges. To understand the possible exchange in Lakewood, CO, we analyzed GSA documents, including agency property descriptions and tentative plans for the swap-construct exchange, and interviewed GSA officials and a local government official involved with the negotiations with GSA. To identify the potential benefits of swap-construct exchanges and factors that can influence GSA s future use these exchanges, we evaluated GSA s approach to identifying potentially successful swap-construct exchanges to propose against the OMB Capital Programming Guide and the GAO Executive Guide on Leading Practices in Capital Decision- Making, and interviewed GSA officials; nonfederal participants in completed swap-construct exchanges (HEBSPC and Emory University Hospital Midtown); stakeholders in federal property acquisition and disposal processes (the National Capital Planning Commission and the National Law Center for Homelessness and Poverty, respectively); and nongovernmental organizations familiar with GSA s swap-construct exchanges (the National Council for Public-Private Partnerships and the Urban Land Institute). In addition, we analyzed written responses GSA received to its solicitations for proposed swap-constructs exchanges and information from interviews we conducted with the four respondents, described above, to identify any factors that may affect GSA s future use of swap-construct exchanges. We conducted this performance audit from September 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the General Services Administration Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Keith Cunningham, Assistant Director; Amy Abramowitz; Dawn Bidne; Timothy Guinane; James Leonard; Sara Ann Moessbauer; Josh Ormond; and Crystal Wesco made key contributions to this report. Related GAO Products Capital Financing: Alternative Approaches to Budgeting for Federal Real Property. GAO-14-239. Washington, D.C.: March 12, 2014. Federal Real Property: Excess and Underutilized Property Is an Ongoing Challenge. GAO-13-573T. Washington, D.C.: April 25, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Federal Courthouses: Recommended Construction Projects Should Be Evaluated under New Capital-Planning Process. GAO-13-263. Washington, D.C.: April 11, 2013. Federal Buildings Fund: Improved Transparency and Long-term Plan Needed to Clarify Capital Funding Priorities. GAO-12-646. Washington, D.C.: July 12, 2012. Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. Washington, D.C.: June 20, 2012. Federal Real Property: The Government Faces Challenges to Disposing of Unneeded Buildings. GAO-11-370T. Washington, D.C.: February 10, 2011. 
Federal Courthouse Construction: Estimated Costs to House the L.A. District Court Have Tripled and There Is No Consensus on How to Proceed. GAO-08-889. Washington, D.C.: September 12, 2008. Federal Real Property: Most Public Benefit Conveyances Used as Intended, but Opportunities Exist to Enhance Federal Oversight. GAO-06-511. Washington, D.C.: June 21, 2006. Executive Guide: Leading Practices in Capital Decision-Making. GAO/AIMD 99-32. Washington, D.C.: December 1, 1998.
Why GAO Did This Study To help address challenges in federal real-property management, including the growing need to replace and modernize federal buildings, GSA has proposed expanding its use of swap-construct exchanges. GSA has proposed this approach for some potentially large projects, including replacing the FBI's headquarters. GAO was asked to review issues related to these exchanges. This report addresses: (1) GSA's experience with completed swap-construct exchanges; (2) the status of GSA's proposed swap-construct exchanges; and (3) the potential benefits of these exchanges and factors that can influence their future use. GAO reviewed documents, including GSA's solicitations for swap-construct exchanges, appraisals of completed exchanges, and OMB and GAO guidance. GAO conducted site visits to the completed swap-construct sites and three proposed swap-construct sites, selected based on location, number of responses to GSA's solicitation, and stage in the swap-construct process, and interviewed GSA officials and nonfederal participants in the exchanges. What GAO Found Since 2000, the General Services Administration (GSA) has completed two “swap-construct” exchanges—transactions in which the agency exchanges title to federal property for constructed assets or construction services, such as renovation work—in response to private sector interest in specific federal properties. In both completed exchanges, GSA used the value of federal properties it determined were underutilized to acquire new parking garages. The recipients of the federal properties told us that the exchanges took longer than expected (about 3 years for one of the exchanges and 5 years for the other). In response, GSA noted its lack of experience with swap-construct exchanges at the time. Since 2012, GSA has proposed six swap-construct exchanges. After reviewing responses to its solicitations, GSA is actively pursuing three, including a potential exchange of the existing Federal Bureau of Investigation's (FBI) headquarters for construction of a new FBI headquarters building. Respondents to the three solicitations that GSA is not actively pursuing noted concerns, including the amount of investment needed in the federal properties and the lack of detail regarding GSA's construction needs in an exchange. Swap-construct can result in an exchange of equally valued assets or services or can result in the government or a property recipient paying for a difference in value. The swap-construct approach can help GSA address the challenges of disposing of unneeded property and modernizing or replacing federal buildings, but various factors could affect future use of the approach. For example, swap-construct can require developers to spend large sums on GSA's construction needs before receiving title to the federal property used in the exchanges. GSA's solicitations have not always specified these construction needs. Consequently, developers may be unable to provide meaningful input, and GSA could miss swap-construct opportunities. Further, the viability of swap-construct exchanges may be affected by specific market factors, such as the availability of alternative properties. However, GSA lacks criteria to help determine if the agency should solicit interest in a swap-construct exchange. As a result, GSA could miss opportunities to use swap-construct or select properties and construction projects better suited to traditional disposal and funding processes. 
Office of Management and Budget (OMB) and GAO guidance emphasize the importance of criteria in making capital-planning decisions and providing clarity on construction needs. What GAO Recommends GAO recommends that GSA (1) include, to the extent possible, details on what GSA is seeking in exchange for federal property in these solicitations and (2) develop criteria for determining when to solicit market interest in swap-construct exchanges. GSA agreed with GAO's recommendations.
<1. OJJDP Established the Girls Study Group to Assess the Effectiveness of Girls Delinquency Programs> With an overall goal of developing research that communities need to make sound decisions about how best to prevent and reduce girls delinquency, OJJDP established the Girls Study Group (Study Group) in 2004 under a $2.6 million multiyear cooperative agreement with a research institute. OJJDP s objectives for the group, among others, included identifying effective or promising programs, program elements, and implementation principles (i.e., guidelines for developing programs). Objectives also included developing program models to help inform communities of what works in preventing or reducing girls delinquency, identifying gaps in girls delinquency research and developing recommendations for future research, and disseminating findings to the girls delinquency field about effective or promising programs. To meet OJJDP s objectives, among other activities, the Study Group identified studies of delinquency programs that specifically targeted girls by reviewing over 1,000 documents in relevant research areas. These included criminological and feminist explanations for girls delinquency, patterns of delinquency, and the justice system s response to girls delinquency. As a result, the group identified 61 programs that specifically targeted preventing or responding to girls delinquency. Then, the group assessed the methodological quality of the studies of the programs that had been evaluated using a set of criteria developed by DOJ s Office of Justice Programs (OJP) called What Works to determine whether the studies provided credible evidence that the programs were effective at preventing or responding to girls delinquency. The results of the group s assessment are discussed in the following sections. <2. OJJDP Efforts to Assess Program Effectiveness Were Consistent with Social Science Practices and Standards, and OJJDP Has Taken Action to Enhance Communication about the Study Group with External Stakeholders> OJJDP s effort to assess girls delinquency programs through the use of a study group and the group s methods for assessing studies were consistent with generally accepted social science research practices and standards. In addition, OJJDP s efforts to involve practitioners in Study Group activities and disseminate findings were also consistent with the internal control standard to communicate with external stakeholders, such as practitioners operating programs. According to OJJDP research and program officials, they formed the Study Group rather than funding individual studies of programs because study groups provide a cost-effective method of gaining an overview of the available research in an issue area. As part of its work, the group collected, reviewed, and analyzed the methodological quality of research on girls delinquency programs. The use of such a group, including its review, is an acceptable approach for systematically identifying and reviewing research conducted in a field of study. This review helped consolidate the research and provide information to OJJDP for determining evaluation priorities. Further, we reviewed the criteria the group used to assess the studies and found that they adhere to generally accepted social science standards for evaluation research. We also generally concurred with the group s assessments of the programs based on these criteria. 
According to the group's former principal investigator, the Study Group decided to use OJP's "What Works" criteria to ensure that its assessment of program effectiveness would be based on highly rigorous evaluation standards, thus eliminating the potential that the group would endorse a program that might do harm. However, 8 of the 18 experts we interviewed said that the criteria created an unrealistically high standard, which caused the group to overlook potentially promising programs. OJJDP officials stated that despite such concerns, they approved the group's use of the criteria because of the methodological rigor of the framework and their goal for the group to identify effective programs. In accordance with the internal control standard to communicate with external stakeholders, OJJDP sought to ensure a range of stakeholder perspectives related to girls' delinquency by requiring that Study Group members possess knowledge and experience with girls' delinquency and demonstrate expertise in relevant social science disciplines. The initial Study Group, which was convened by the research institute and approved by OJJDP, included 12 academic researchers and 1 practitioner, someone with experience implementing girls' delinquency programs. However, 11 of the 18 experts we interviewed stated that this composition was imbalanced in favor of academic researchers. In addition, 6 of the 11 said that the composition led the group to focus its efforts on researching theories of girls' delinquency rather than gathering and disseminating actionable information for practitioners. According to OJJDP research and program officials, they acted to address this issue by adding a second practitioner as a member and involving two other practitioners in Study Group activities. OJJDP officials stated that they plan to more fully involve practitioners from the beginning when they organize study groups in the future and to include practitioners in the remaining activities of the Study Group, such as presenting successful girls' delinquency program practices at a national conference. Also, in accordance with the internal control standard, OJJDP and the Study Group have disseminated findings to the research community, practitioners in the girls' delinquency field, and the public through conference presentations, Web site postings, and published bulletins. The group plans to issue a final report on all of its activities by spring 2010. <3. The Study Group Found No Evidence of Effective Girls' Delinquency Programs; in Response, OJJDP Plans to Assist Programs in Preparing for Evaluations but Could Strengthen Its Plans for Supporting Such Evaluations> The Study Group found that few girls' delinquency programs had been studied and that the available studies lacked conclusive evidence of effective programs; as a result, OJJDP plans to provide technical assistance to help programs be better prepared for evaluations of their effectiveness. However, OJJDP could better address its girls' delinquency goals by more fully developing plans for supporting such evaluations. In its review, the Study Group found that the majority of the girls' delinquency programs it identified (44 of the 61) had not been studied by researchers. For the 17 programs that had been studied, the Study Group reported that none of the studies provided conclusive evidence with which to determine whether the programs were effective at preventing or reducing girls' delinquency. 
For example, according to the Study Group, the studies provided insufficient evidence of the effectiveness of 11 of the 17 programs because, for instance, the studies involved research designs that could not demonstrate whether any positive outcomes, such as reduced delinquency, were due to program participation rather than other factors. Based on the results of this review, the Study Group reported that, among other things, there is a need for additional, methodologically rigorous evaluations of girls' delinquency programs; training and technical assistance to help programs prepare for evaluations; and funding to support girls' delinquency programs found to be promising. According to OJJDP officials, in response to the Study Group's finding about the need to better prepare programs for evaluation, the office plans to work with the group and use the remaining funding from the effort (approximately $300,000) to provide a technical assistance workshop by the end of October 2009. The workshop is intended to help approximately 10 girls' delinquency programs prepare for evaluation by providing information about how evaluations are designed and conducted and how to collect data that will be useful for program evaluators in assessing outcomes, among other things. In addition, OJJDP officials stated that as a result of the Study Group's findings, along with feedback they received from members of the girls' delinquency field, OJJDP plans to issue a solicitation in fiscal year 2010 for funding to support evaluations of girls' delinquency programs. OJJDP has also reported that the Study Group's findings are to provide a foundation for moving ahead on a comprehensive program related to girls' delinquency. However, OJJDP has not developed a plan that is documented, is shared with key stakeholders, and includes specific funding requirements and commitments and time frames for meeting its girls' delinquency goals. Standard practices for program and project management state that specific desired outcomes or results should be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate projects needed to achieve those results, supporting resources, and milestones. In addition, government internal control standards call for policies and procedures that establish adequate communication with stakeholders as essential for achieving desired program goals. According to OJJDP officials, they have not developed a plan for meeting their girls' delinquency goals because the office is in transition and is in the process of developing a plan for its juvenile justice programs, but the office is taking steps to address its girls' delinquency goals, for example, through the technical assistance workshop. Developing a plan for girls' delinquency would help OJJDP to demonstrate leadership to the girls' delinquency field by clearly articulating the actions it intends to take to meet its goals and would also help the office to ensure that the goals are met. In our July report, we recommended that to help ensure that OJJDP meets its goals to identify effective or promising girls' delinquency programs and supports the development of program models, the Administrator of OJJDP develop and document a plan that (1) articulates how the office intends to respond to the findings of the Study Group, (2) includes time frames and specific funding requirements and commitments, and (3) is shared with key stakeholders. 
OJP agreed with our recommendation and outlined efforts that OJJDP plans to undertake in response to these findings. For example, OJJDP stated that it anticipates publishing its proposed juvenile justice program plan, which is to include how it plans to address girls delinquency issues, in the Federal Register to solicit public feedback and comments, which will enable the office to publish a final plan in the Federal Register by the end of the year (December 31, 2009). Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. <4. Contacts and Acknowledgements> For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Mary Catherine Hult, Assistant Director; Kevin Copping; and Katherine Davis. Additionally, key contributors to our July 2009 report include David Alexander, Elizabeth Blair, and Janet Temko. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study This testimony discusses issues related to girls' delinquency--a topic that has attracted the attention of federal, state, and local policymakers for more than a decade as girls have increasingly become involved in the juvenile justice system. For example, from 1995 through 2005, delinquency caseloads for girls in juvenile justice courts nationwide increased 15 percent while boys' caseloads decreased by 12 percent. More recently, in 2007, 29 percent of juvenile arrests--about 641,000 arrests--involved girls, who accounted for 17 percent of juvenile violent crime arrests and 35 percent of juvenile property crime arrests. Further, research on girls has highlighted that delinquent girls have higher rates of mental health problems than delinquent boys, receive fewer special services, and are more likely to abandon treatment programs. The Office of Juvenile Justice and Delinquency Prevention (OJJDP) is the Department of Justice (DOJ) office charged with providing national leadership, coordination, and resources to prevent and respond to juvenile delinquency and victimization. OJJDP supports states and communities in their efforts to develop and implement effective programs to, among other things, prevent delinquency and intervene after a juvenile has offended. For example, from fiscal years 2007 through 2009, Congress provided OJJDP almost $1.1 billion to use for grants to states, localities, and organizations for a variety of juvenile justice programs, including programs for girls. Also, in support of this mission, the office funds research and program evaluations related to a variety of juvenile justice issues. As programs have been developed at the state and local levels in recent years that specifically target preventing girls' delinquency or intervening after girls have become involved in the juvenile justice system, it is important that agencies providing grants and practitioners operating the programs have information about which of these programs are effective. In this way, agencies can help to ensure that limited federal, state, and local funds are well spent. In general, effectiveness is determined through program evaluations, which are systematic studies conducted to assess how well a program is working--that is, whether a program produced its intended effects. To help ensure that grant funds are being used effectively, you asked us to review OJJDP's efforts related to studying and promoting effective girls' delinquency programs. We issued a report on the results of that review on July 24, 2009. This testimony highlights findings from that report and addresses (1) efforts OJJDP has made to assess the effectiveness of girls' delinquency programs, (2) the extent to which these efforts are consistent with generally accepted social science standards and federal standards to communicate with stakeholders, and (3) the findings from OJJDP's efforts and how the office plans to address the findings. This statement is based on our July report and selected updates made in October 2009. What GAO Found With an overall goal of developing research that communities need to make sound decisions about how best to prevent and reduce girls' delinquency, OJJDP established the Girls Study Group (Study Group) in 2004 under a $2.6 million multiyear cooperative agreement with a research institute. 
OJJDP's objectives for the group, among others, included identifying effective or promising programs, program elements, and implementation principles (i.e., guidelines for developing programs). Objectives also included developing program models to help inform communities of what works in preventing or reducing girls' delinquency, identifying gaps in girls' delinquency research and developing recommendations for future research, and disseminating findings to the girls' delinquency field about effective or promising programs. OJJDP's effort to assess girls' delinquency programs through the use of a study group and the group's methods for assessing studies were consistent with generally accepted social science research practices and standards. In addition, OJJDP's efforts to involve practitioners in Study Group activities and disseminate findings were also consistent with the internal control standard to communicate with external stakeholders, such as practitioners operating programs. The Study Group found that few girls' delinquency programs had been studied and that the available studies lacked conclusive evidence of effective programs; as a result, OJJDP plans to provide technical assistance to help programs be better prepared for evaluations of their effectiveness. However, OJJDP could better address its girls' delinquency goals by more fully developing plans for supporting such evaluations.
<1. Background> FOIA establishes a legal right of access to government records and information, on the basis of the principles of openness and accountability in government. Before the act (originally enacted in 1966), an individual seeking access to federal records faced the burden of establishing a right to examine them. FOIA established a "right to know" standard for access, instead of a "need to know," and shifted the burden of proof from the individual to the government agency seeking to deny access. FOIA provides the public with access to government information either through affirmative agency disclosure (publishing information in the Federal Register or on the Internet, or making it available in reading rooms) or in response to public requests for disclosure. Public requests for disclosure of records are the best-known type of FOIA disclosure. Any member of the public may request access to information held by federal agencies, without showing a need or reason for seeking the information. Not all information held by the government is subject to FOIA. The act prescribes nine specific categories of information that are exempt from disclosure: for example, trade secrets and certain privileged commercial or financial information, certain personnel and medical files, and certain law enforcement records or information (attachment II provides the complete list). In denying access to material, agencies may cite these exemptions. The act requires agencies to notify requesters of the reasons for any adverse determination (that is, a determination not to provide records) and grants requesters the right to appeal agency decisions to deny access. In addition, agencies are required to meet certain time frames for making key determinations: whether to comply with requests (20 business days from receipt of the request), responses to appeals of adverse determinations (20 business days from receipt of the appeal), and whether to provide expedited processing of requests (10 calendar days from receipt of the request). Congress did not establish a statutory deadline for making releasable records available, but instead required agencies to make them available "promptly." <1.1. The FOIA Process at Federal Agencies> Although the specific details of processes for handling FOIA requests vary among agencies, the major steps in handling a request are similar across the government. Agencies receive requests, usually in writing (although they may accept requests by telephone or electronically), which can come from any organization or member of the public. Once received, the request goes through several phases, which include initial processing, searching for and retrieving responsive records, preparing responsive records for release, approving the release of the records, and releasing the records to the requester. Figure 1 is an overview of the process, from the receipt of a request to the release of records. During the initial processing phase, a request is logged into the agency's FOIA system, and a case file is started. The request is then reviewed to determine its scope, estimate fees, and provide an initial response to the requester (in general, this simply acknowledges receipt of the request). After this point, the FOIA staff begins its search to retrieve responsive records. This step may include searching for records from multiple locations and program offices. After potentially responsive records are located, the documents are reviewed to ensure that they are within the scope of the request. 
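Before turning to the remaining phases, the statutory time frames described above mix business-day and calendar-day clocks. The following minimal Python sketch shows how such deadlines can be computed from a receipt date; it uses illustrative dates, ignores federal holidays (which would also pause a business-day clock), and is not any agency's actual tracking logic.

from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    # Count forward the given number of business days (Monday-Friday),
    # ignoring federal holidays for simplicity.
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Monday through Friday
            remaining -= 1
    return current

received = date(2005, 3, 1)                       # illustrative receipt date
print(add_business_days(received, 20))            # determination deadline: 20 business days
print(received + timedelta(days=10))              # expedited-processing decision: 10 calendar days
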
During the next two phases, the agency ensures that appropriate information is to be released under the provisions of the act. First, the agency reviews the responsive records to make any redactions based on the statutory exemptions. Once the exemption review is complete, the final set of responsive records is turned over to the FOIA office, which calculates appropriate fees, if applicable. Before release, the redacted responsive records are then given a final review, possibly by the agency s general counsel, and then a response letter is generated, summarizing the agency s actions regarding the request. Finally, the responsive records are released to the requester. Some requests are relatively simple to process, such as requests for specific pieces of information that the requester sends directly to the appropriate office. Other requests may require more extensive processing, depending on their complexity, the volume of information involved, the need for the agency FOIA office to work with offices that have relevant subject-matter expertise to find and obtain information, the need for a FOIA officer to review and redact information in the responsive material, the need to communicate with the requester about the scope of the request, and the need to communicate with the requester about the fees that will be charged for fulfilling the request (or whether fees will be waived). Specific details of agency processes for handling requests vary, depending on the agency s organizational structure and the complexity of the requests received. While some agencies centralize processing in one main office, other agencies have separate FOIA offices for each agency component and field office. Agencies also vary in how they allow requests to be made. Depending on the agency, requesters can submit requests by telephone, fax, letter, or e-mail or through the Web. In addition, agencies may process requests in two ways, known as multitrack and single track. Multitrack processing involves dividing requests into two groups: (1) simple requests requiring relatively minimal review, which are placed in one processing track, and (2) more voluminous and complex requests, which are placed in another track. In contrast, single-track processing does not distinguish between simple and complex requests. With single-track processing, agencies process all requests on a first-in/first-out basis. Agencies can also process FOIA requests on an expedited basis when a requester has shown a compelling need or urgency for the information. As agencies process FOIA requests, they generally place them in one of four possible disposition categories: grants, partial grants, denials, and not disclosed for other reasons. These categories are defined as follows: Grants: Agency decisions to disclose all requested records in full. Partial grants: Agency decisions to withhold some records in whole or in part, because such information was determined to fall within one or more exemptions. Denials: Agency decisions not to release any part of the requested records because all information in the records is determined to be exempt under one or more statutory exemptions. Not disclosed for other reasons: Agency decisions not to release requested information for any of a variety of reasons other than statutory exemptions from disclosing records. The categories and definitions of these other reasons for nondisclosure are shown in table 1. 
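The track assignment and disposition categories described above can be illustrated with a short Python sketch. The field names, page threshold, and routing rule are assumptions for illustration only, not an agency policy or an actual FOIA tracking system.

from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    # The four disposition categories used in annual FOIA reports.
    GRANT = "granted in full"
    PARTIAL_GRANT = "partially granted"
    DENIAL = "denied under a statutory exemption"
    OTHER = "not disclosed for other reasons"

@dataclass
class Request:
    request_id: str
    estimated_pages: int
    needs_program_office_search: bool

def assign_track(req: Request, multitrack: bool = True) -> str:
    # Under multitrack processing, requests needing minimal review go to a
    # simple queue and voluminous or complex requests to a complex queue;
    # single-track agencies handle everything first-in/first-out.
    if not multitrack:
        return "single"
    if req.estimated_pages <= 100 and not req.needs_program_office_search:
        return "simple"
    return "complex"

print(assign_track(Request("2005-0001", 12, False)))                    # simple
print(assign_track(Request("2005-0002", 5000, True)))                   # complex
print(assign_track(Request("2005-0003", 12, False), multitrack=False))  # single
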
When a FOIA request is denied in full or in part, or the requested records are not disclosed for other reasons, the requester is entitled to be told the reason for the denial, to appeal the denial, and to challenge it in court. <1.2. The Privacy Act Also Provides Individuals with Access Rights> In addition to FOIA, the Privacy Act of 1974 includes provisions granting individuals the right to gain access to and correct information about themselves held by federal agencies. Thus the Privacy Act serves as a second major legal basis, in addition to FOIA, for the public to use in obtaining government information. The Privacy Act also places limitations on agencies' collection, disclosure, and use of personal information. Although the two laws differ in scope, procedures in both FOIA and the Privacy Act permit individuals to seek access to records about themselves, known as first-party access. Depending on the individual circumstances, one law may allow broader access or more extensive procedural rights than the other, or access may be denied under one act and allowed under the other. Consequently, the Department of Justice's Office of Information and Privacy issued guidance that it is "good policy" for agencies to treat all first-party access requests as FOIA requests (as well as possibly Privacy Act requests), regardless of whether the FOIA is cited in a requester's letter. This guidance was intended to help ensure that requesters receive the fullest possible response to their inquiries, regardless of which law they cite. In addition, Justice guidance for the annual FOIA report directs agencies to include Privacy Act requests (that is, first-party requests) in the statistics reported. According to the guidance, "A Privacy Act request is a request for records concerning oneself; such requests are also treated as FOIA requests. (All requests for access to records, regardless of which law is cited by the requester, are included in this report.)" Although FOIA and the Privacy Act can both apply to first-party requests, these may not always be processed in the same way as described earlier for FOIA requests. In some cases, little review and redaction (see fig. 1) is required, for example, for a request for one's own Social Security benefits records. In contrast, various degrees of review and redaction could be required for other types of first-party requests: for example, files on security background checks would need review and redaction before being provided to the person who was the subject of the investigation. <1.3. Roles of OMB and Justice in FOIA Implementation> OMB and the Department of Justice both have roles in the implementation of FOIA. Under various statutes, including the Paperwork Reduction Act, OMB exercises broad authority for coordinating and administering various aspects of governmentwide information policy. FOIA specifically requires OMB to issue guidelines to provide for a uniform schedule of fees for all agencies. OMB issued this guidance in April 1987. The Department of Justice oversees agencies' compliance with FOIA and is the primary source of policy guidance for agencies. 
Specifically, Justice s requirements under the act are to make agencies annual FOIA reports available through a single electronic access point and notify Congress as to their availability; in consultation with OMB, develop guidelines for the required annual agency reports, so that all reports use common terminology and follow a similar format; and submit an annual report on FOIA litigation and the efforts undertaken by Justice to encourage agency compliance. Within the Department of Justice, the Office of Information and Privacy has lead responsibility for providing guidance and support to federal agencies on FOIA issues. This office first issued guidelines for agency preparation and submission of annual reports in the spring of 1997. It also periodically issues additional guidance on annual reports as well as on compliance, provides training, and maintains a counselors service to provide expert, one-on-one assistance to agency FOIA staff. Further, the Office of Information and Privacy also makes a variety of FOIA and Privacy Act resources available to agencies and the public via the Justice Web site and on- line bulletins (available at www.usdoj.gov/oip/index.html). <1.4. Annual FOIA Reports Were Established by 1996 Amendments> In 1996, the Congress amended FOIA to provide for public access to information in an electronic format (among other purposes). These amendments, referred to as e-FOIA, also required that agencies submit a report to the Attorney General on or before February 1 of each year that covers the preceding fiscal year and includes information about agencies FOIA operations. The following are examples of information that is to be included in these reports: number of requests received, processed, and pending; median number of days taken by the agency to process different types of requests; determinations made by the agency not to disclose information and the reasons for not disclosing the information; disposition of administrative appeals by requesters; information on the costs associated with handling of FOIA requests; and full-time-equivalent staffing information. In addition to providing their annual reports to the Attorney General, agencies are to make them available to the public in electronic form. The Attorney General is required to make all agency reports available on line at a single electronic access point and report to Congress no later than April 1 of each year that these reports are available in electronic form. (This electronic access point is www.usdoj.gov/oip/04_6.html.) In 2001, in response to a congressional request, we prepared the first in a series of reports on the implementation of the 1996 amendments to FOIA, starting from fiscal year 1999. In these reviews, we examined the contents of the annual reports for 25 major agencies (shown in table 2). They include the 24 major agencies covered by the Chief Financial Officers Act, as well as the Central Intelligence Agency and, until 2003, the Federal Emergency Management Agency (FEMA). In 2003, the creation of the Department of Homeland Security (DHS), which incorporated FEMA, led to a shift in some FOIA requests from agencies affected by the creation of the new department, but the same major component entities are reflected in all the years reviewed. Our previous reports included descriptions of the status of reported FOIA implementation, including any trends revealed by comparison with earlier years. 
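The reporting elements required by the 1996 amendments, listed above, map naturally onto a simple per-agency record for each fiscal year. The Python sketch below is illustrative only; the field names and example values are assumptions and do not reflect Justice's actual reporting template or any agency's real figures.

from dataclasses import dataclass, field

@dataclass
class AnnualFoiaReport:
    # One agency's core annual-report statistics for a fiscal year.
    fiscal_year: int
    requests_received: int
    requests_processed: int
    requests_pending: int                          # carried into the next fiscal year
    median_days_by_track: dict = field(default_factory=dict)
    nondisclosure_reasons: dict = field(default_factory=dict)
    appeals_received: int = 0
    processing_costs_dollars: float = 0.0
    full_time_equivalents: float = 0.0

report = AnnualFoiaReport(
    fiscal_year=2005,
    requests_received=21_500,
    requests_processed=20_900,
    requests_pending=1_800,
    median_days_by_track={"simple": 18, "complex": 95},
    nondisclosure_reasons={"exemption cited": 300, "no records": 450},
    appeals_received=40,
    processing_costs_dollars=1_250_000.0,
    full_time_equivalents=14.5,
)
# Net change in pending requests implied by this year's workload:
print(report.requests_received - report.requests_processed)
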
We noted general increases in requests received and processed, as well as growing numbers of pending requests carried over from year to year. In addition, our 2001 report disclosed that data quality issues limited the usefulness of agencies annual FOIA reports and that agencies had not provided online access to all the information required by the act as amended in 1996. We therefore recommended that the Attorney General direct the Department of Justice to improve the reliability of data in the agencies annual reports by providing guidance addressing the data quality issues we identified and by reviewing agencies report data for completeness and consistency. We further recommended that the Attorney General direct the department to enhance the public s access to government records and information by encouraging agencies to make all required materials available electronically. In response, the Department of Justice issued supplemental guidance, addressed reporting requirements in its training programs, and continued reviewing agencies annual reports for data quality. Justice also worked with agencies to improve the quality of data in FOIA annual reports. <1.5. Executive Order Required Agencies to Take Several Actions to Improve FOIA Operations> On December 14, 2005, the President issued an Executive Order setting forth a policy of citizen-centered and results-oriented FOIA administration. Briefly, FOIA requesters are to receive courteous and appropriate services, including ways to learn about the status of their requests and the agency s response, and agencies are to provide ways for requesters and the public to learn about the FOIA process and publicly available agency records (such as those on Web sites). In addition, agency FOIA operations are to be results oriented: agencies are to process requests efficiently, achieve measurable improvements in FOIA processing, and reform programs that do not produce appropriate results. To carry out this policy, the order required, among other things, that agency heads designate Chief FOIA Officers to oversee their FOIA programs, and that agencies establish Requester Service Centers and Public Liaisons to ensure appropriate communication with requesters. The Chief FOIA Officers were directed to conduct reviews of the agencies FOIA operations and develop improvement plans to ensure that FOIA administration was in accordance with applicable law as well as with the policy set forth in the order. By June 2006, agencies were to submit reports that included the results of their reviews and copies of their improvement plans. The order also instructed the Attorney General to issue guidance on implementation of the order s requirements for agencies to conduct reviews and develop plans. Finally, the order instructed agencies to report on their progress in implementing their plans and meeting milestones as part of their annual reports for fiscal years 2006 and 2007, and required agencies to account for any milestones missed. In April 2006, the Department of Justice posted guidance on implementation of the order s requirements for FOIA reviews and improvement plans. This guidance suggested a number of areas of FOIA administration that agencies might consider in conducting their reviews and developing improvement plans. (Examples of some of these areas are automated tracking capabilities, automated processing, receiving/responding to requests electronically, forms of communication with requesters, and systems for handling referrals to other agencies.) 
To encourage consistency, the guidance also included a template for agencies to use to structure the plans and to report on their reviews and plans. The improvement plans are posted on the Justice Web site at www.usdoj.gov/oip/agency_improvement.html. In a July 2006 testimony, we provided preliminary results of our analyses of the improvement plans for the 25 agencies in our review that were submitted as of the end of June; in our testimony we focused on how the plans addressed reducing or eliminating backlog. We testified that a substantial number of plans did not include measurable goals and timetables that would allow agencies to measure and evaluate the success of their plans. Several of the plans were revised in light of our testimony, as well as in response to feedback to agencies from the Department of Justice in its FOIA oversight role. <2. Status of FOIA Processing Appears Similar to Previous Years, but Limitations in Annual Report Data Present Challenges> The data reported by 24 major agencies in annual FOIA reports from 2002 to 2005 reveal a number of general trends. (Data from USDA are omitted from our statistical analysis, because we determined that data from a major USDA component were not reliable.) For example, the public continued to submit more requests for information from the federal government through FOIA, but many agencies, despite increasing the numbers of requests processed, did not keep pace with this increased volume. As a result, the number of pending requests carried over from year to year has been steadily increasing. However, our ability to make generalizations about processing time is limited by the type of statistic reported (that is, the median). Taking steps to improve the accuracy and form of annual report data could provide more insight into FOIA processing. <2.1. Not All Data from USDA s Farm Service Agency Are Reliable, but Its Improvement Plan Provides Opportunity to Address This Weakness> We omitted data from USDA s annual FOIA report because we determined that not all these data were reliable. Although some USDA components expressed confidence in their data, one component, the Farm Service Agency, did not. According to this agency s FOIA Officer, portions of the agency s data in annual reports were not accurate or complete. This is a significant deficiency, because the Farm Service Agency reportedly processes over 80 percent of the department s total FOIA requests. Currently, FOIA processing for the Farm Service Agency is highly decentralized, taking place in staff offices in Washington, D.C., and Kansas City, 50 state offices, and about 2,350 county offices. The agency FOIA officer told us that she questioned the completeness and accuracy of data supplied by the county offices. This official stated that some of the field office data supplied for the annual report were clearly wrong, leading her to question the systems used to record workload data at field offices and the field office staff s understanding of FOIA requirements. She attributed this condition to the agency s decentralized organization and to lack of management attention, resources, and training. Lacking accurate data hinders the Farm Service Agency from effectively monitoring and managing its FOIA program. The Executive Order s requirement to develop an improvement plan provides an opportunity for the Farm Service Agency to address its data reliability problems. 
More specifically, Justice s guidance on implementing the Executive Order refers to the need for agencies to explore improvements in their monitoring and tracking systems and staff training. USDA has developed an improvement plan that includes activities to improve FOIA processing at the Farm Service Agency that are relevant to the issues raised by the Farm Service Agency s FOIA Officer, including both automation and training. The plan sets goals for ensuring that all agency employees who process or retrieve responsive records are trained in the necessary FOIA duties, as well as for determining the type of automated tracking to be implemented. According to the plan, an electronic tracking system is needed to track requests, handle public inquiries regarding request status, and prepare a more accurate annual FOIA report. In addition, the Farm Service Agency plans to determine the benefit of increased centralization of FOIA request processing. However, the plan does not directly address improvements to data reliability. If USDA does not also plan for activities, measures, and milestones to improve data reliability, it increases the risk that the Farm Service Agency will not produce reliable FOIA statistics, which are important for program oversight and meeting the act s goal of providing visibility into government FOIA operations. <2.2. Except for SSA, Increases in Requests Received and Processed Are Generally Slowing> The numbers of FOIA requests received and processed continue to rise, but except for one case SSA the rate of increase has flattened in recent years. For SSA, we present statistics separately because the agency reported an additional 16 million requests in 2005, dwarfing those for all other agencies combined, which together total about 2.6 million. SSA attributed this rise to an improvement in its method of counting requests and stated that in previous years, these requests were undercounted. Further, all but about 38,000 of SSA s over 17 million requests are simple requests for personal information by or on behalf of individuals. Figure 2 shows total requests reported governmentwide for fiscal years 2002 through 2005, with SSA s share shown separately. This figure shows the magnitude of SSA s contribution to the whole FOIA picture, as well as the scale of the jump from 2004 to 2005. Figure 3 presents statistics omitting SSA on a scale that allows a clearer view of the rate of increase in FOIA requests received and processed in the rest of the government. As this figure shows, when SSA s numbers are excluded, the rate of increase is modest and has been flattening: For the whole period (fiscal years 2002 to 2005), requests received increased by about 29 percent, and requests processed increased by about 27 percent. Most of this rise occurred from fiscal years 2002 to 2003: about 28 percent for requests received, and about 27 percent for requests processed. In contrast, from fiscal year 2004 to 2005, the rise was much less: about 3 percent for requests received, and about 2 percent for requests processed. According to SSA, the increases that the agency reported in fiscal year 2005 can be attributed to an improvement in its method of counting a category of requests it calls simple requests handled by non-FOIA staff. From fiscal year 2002 to 2005, SSA s FOIA reports have consistently shown significant growth in this category, which has accounted for the major portion of all SSA requests reported (see table 3). 
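The growth comparisons above are simple percentage changes in yearly totals, computed with and without SSA's simple requests handled by non-FOIA staff. The short sketch below uses made-up yearly counts chosen only to illustrate the calculation; the report's actual figures are those cited in the text and tables.

def pct_change(earlier: int, later: int) -> float:
    # Percentage change from an earlier yearly total to a later one.
    return (later - earlier) / earlier * 100

# Illustrative totals of requests received (not the reported figures).
non_ssa  = {2004: 2_520_000, 2005: 2_600_000}    # all agencies excluding SSA's simple requests
ssa_only = {2004: 1_400_000, 2005: 17_400_000}   # SSA's simple requests handled by non-FOIA staff

print(round(pct_change(non_ssa[2004], non_ssa[2005]), 1))     # about 3 percent: the modest underlying growth
print(round(pct_change(non_ssa[2004] + ssa_only[2004],
                       non_ssa[2005] + ssa_only[2005])))      # the counting change at SSA swamps the trend
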
In each of these years, SSA has attributed the increases in this category largely to better reporting, as well as actual increases in requests. SSA describes requests in this category as typically being requests by individuals for access to their own records, as well as requests in which individuals consent for SSA to supply information about themselves to third parties (such as insurance and mortgage companies) so that they can receive housing assistance, mortgages, disability insurance, and so on. According to SSA s FOIA report, these requests are handled by personnel in about 1,500 locations in SSA, including field and district offices and teleservice centers. Such requests are almost always granted, according to SSA, and most receive immediate responses. SSA has stated that it does not keep processing statistics (such as median days to process) on these requests, which it reports separately from other FOIA requests (for which processing statistics are kept). However, officials say that these are typically processed in a day or less. According to SSA officials, they included information on these requests in their annual reports because Justice guidance instructs agencies to treat Privacy Act requests (requests for records concerning oneself) as FOIA requests and report them in their annual reports. In addition, SSA officials said that their automated systems make it straightforward to capture and report on these simple requests. According to SSA, in fiscal year 2005, the agency began to use automated systems to capture the numbers of requests processed by non-FOIA staff, generating statistics automatically as requests were processed; the result, according to SSA, is a much more accurate count. Besides SSA, agencies reporting large numbers of requests received were the Departments of Defense, Health and Human Services, Homeland Security, Justice, the Treasury, and Veterans Affairs, as shown in table 4. The rest of agencies combined account for only about 5 percent of the total requests received (if SSA s simple requests handled by non-FOIA staff are excluded). Table 4 presents, in descending order of request totals, the numbers of requests received and percentages of the total (calculated with and without SSA s statistics on simple requests handled by non-FOIA staff). <2.3. Most Requests Are Granted in Full> Most FOIA requests in 2005 were granted in full, with relatively few being partially granted, denied, or not disclosed for other reasons (statistics are shown in table 5). This generalization holds with or without SSA s inclusion. The percentage of requests granted in full was about 87 percent, which is about the same as in previous years. However, if SSA s numbers are included, the proportion of grants dominates the other categories raising this number from 87 percent of the total to 98 percent. This is to be expected, since SSA reports that it grants the great majority of its simple requests handled by non-FOIA staff, which make up the bulk of SSA s statistics. Three of the seven agencies that handled the largest numbers of requests (HHS, SSA, and VA; see table 4) also granted the largest percentages of requests in full, as shown in figure 4. Figure 4 shows, by agency, the disposition of requests processed: that is, whether granted in full, partially granted, denied, or not disclosed for other reasons (see table 1 for a list of these reasons). As the figure shows, the numbers of fully granted requests varied widely among agencies in fiscal year 2005. 
Six agencies made full grants of requested records in over 80 percent of the cases they processed (besides the three already mentioned, these include Energy, OPM, and SBA). In contrast, 13 of 24 made full grants of requested records in less than 40 percent of their cases, including 3 agencies (CIA, NSF, and State) that made full grants in less than 20 percent of cases processed. This variance among agencies in the disposition of requests has been evident in prior years as well. In many cases, the variance can be accounted for by the types of requests that different agencies process. For example, as discussed earlier, SSA grants a very high proportion of requests because they are requests for personal information about individuals that are routinely made available to or for the individuals concerned. Similarly, VA routinely makes medical records available to individual veterans, and HHS also handles large numbers of Privacy Act requests. Such requests are generally granted in full. Other agencies, on the other hand, receive numerous requests whose responses must routinely be redacted. For example, NSF reported in its annual report that most of its requests (an estimated 90 percent) are for copies of funded grant proposals. The responsive documents are routinely redacted to remove personal information on individual principal investigators (such as salaries, home addresses, and so on), which results in high numbers of partial grants compared to full grants. <2.4. Processing Times Vary, but Broad Generalizations Are Limited> For 2005, the reported time required to process requests (by track) varied considerably among agencies. Table 6 presents data on median processing times for fiscal year 2005. For agencies that reported processing times by component rather than for the agency as a whole, the table indicates the range of median times reported by the agency s components. As the table shows, seven agencies had components that reported processing simple requests in less than 10 days (these components are parts of the CIA, Energy, the Interior, Justice, Labor, Transportation, and the Treasury); for each of these agencies, the lower value of the reported ranges is less than 10. On the other hand, median time to process simple requests is relatively long at some organizations (for example, components of Energy and Justice, as shown by median ranges whose upper end values are greater than 100 days). For complex requests, the picture is similarly mixed. Components of four agencies (EPA, DHS, the Treasury, and VA) reported processing complex requests quickly with a median of less than 10 days. In contrast, other components of several agencies (DHS, Energy, EPA, HHS, HUD, Justice, State, Transportation, and the Treasury) reported relatively long median times to process complex requests, with median days greater than 100. Six agencies (AID, HHS, NSF, OPM, SBA, and SSA) reported using single-track processing. The median processing times for single- track processing varied from 5 days (at an HHS component) to 173 days (at another HHS component). Our ability to make further generalizations about FOIA processing times is limited by the fact that, as required by the act, agencies report median processing times only and not, for example, arithmetic means (the usual meaning of average in everyday language). To find an arithmetic mean, one adds all the members of a list of numbers and divides the result by the number of items in the list. 
To find the median, one arranges all the values in the list from lowest to highest and finds the middle one (or the average of the middle two if there is no single middle number). Thus, although using medians provides representative numbers that are not skewed by a few outliers, they cannot be summed. Deriving a median for two sets of numbers, for example, requires knowing all numbers in both sets. Only the source data for the medians can be used to derive a new median, not the medians themselves. As a result, with only medians it is not statistically possible to combine results from different agencies to develop broader generalizations, such as a governmentwide statistic based on all agency reports, statistics from sets of comparable agencies, or an agencywide statistic based on separate reports from all components of the agency. In rewriting the FOIA reporting requirements in 1996, legislators declared an interest in making them more useful to the public and to Congress, and the information in them more accessible. However, the limitation on aggregating data imposed by the use of medians alone impedes the development of broader pictures of FOIA operations. A more complete picture would be given by the inclusion of other statistics based on the same data that are used to derive medians, such as means and ranges. Providing means along with the median would allow more generalizations to be drawn, and providing ranges would complete the picture by adding information on the outliers in agency statistics. More complete information would be useful for public accountability and for effectively managing agency FOIA programs, as well as for meeting the act's goal of providing visibility into government FOIA operations. <2.5. Agency Pending Cases Continue to Increase> In addition to processing greater numbers of requests, many agencies (10 of 24) also reported that their numbers of pending cases (requests carried over from one year to the next) have increased since 2002. In 2002, pending requests governmentwide were reported to number about 138,000, whereas in 2005, about 200,000 (45 percent more) were reported. In addition, the rate of increase grew in fiscal year 2005, rising 24 percent from fiscal year 2004, compared to 13 percent from 2003 to 2004. Figure 5 shows these results, illustrating the accelerating rate at which pending cases have been increasing. These statistics include pending cases reported by SSA, because SSA's pending cases do not include simple requests handled by non-FOIA staff (for which SSA does not track pending cases). As the figure shows, these pending cases do not change the governmentwide picture significantly. Trends for individual agencies show mixed progress in reducing the number of pending requests reported from 2002 to 2005: some agencies have decreased their numbers of pending cases, while others' numbers have increased. Figure 6 shows processing rates at the 24 agencies (that is, the number of requests that an agency processes relative to the number it receives). Eight of the 24 agencies (AID, DHS, the Interior, Education, HHS, HUD, NSF, and OPM) reported processing fewer requests than they received each year for fiscal years 2003, 2004, and 2005; 8 additional agencies processed fewer requests than they received in two of these three years (Defense, Justice, Transportation, GSA, NASA, NRC, SSA, and VA). In contrast, two agencies (CIA and Energy) had processing rates above 100 percent in all 3 years, meaning that each made continued progress in reducing their numbers of pending cases. 
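To make the aggregation limitation on medians discussed above concrete, the following minimal Python sketch uses made-up processing times for two components of a hypothetical agency: means can be pooled exactly from summary data (weighted by case counts), but there is no comparable formula for medians.

from statistics import mean, median

# Hypothetical processing times (days) reported by two components of one agency.
component_a = [5, 7, 9, 12, 200]     # median 9, mean 46.6 (one old case skews the mean)
component_b = [15, 20, 25, 30, 35]   # median 25, mean 25.0

# Pooling the raw data gives the true agencywide statistics.
combined = component_a + component_b
print(median(combined), round(mean(combined), 1))

# From summary data alone, means can be pooled exactly, weighted by case counts...
pooled_mean = (mean(component_a) * len(component_a) +
               mean(component_b) * len(component_b)) / (len(component_a) + len(component_b))
print(round(pooled_mean, 1))          # matches mean(combined)

# ...but the median of the two reported medians does not equal the true combined median.
print(median([median(component_a), median(component_b)]), median(combined))
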
Fourteen additional agencies were able to make at least a small reduction in their numbers of pending requests in 1 or more years between fiscal years 2003 and 2005. <2.6. No Regular Mechanism Is in Place for Aggregating Annual Report Data> Legislators noted in 1996 that the FOIA reporting requirements were rewritten to make them more useful to the public and to Congress, and to make the information in them more accessible. The Congress also gave the Department of Justice the responsibility to provide policy guidance and oversee agencies compliance with FOIA. In its oversight and guidance role, Justice s Office of Information and Privacy (OIP) created summaries of the annual FOIA reports and made these available through its FOIA Post Web page (www.usdoj.gov/oip/foiapost/mainpage.htm). In 2003, Justice described its summary as a major guidance tool. It pointed out that although it was not required to do so under the law, the office had initiated the practice of compiling aggregate summaries of all agencies annual FOIA report data as soon as these were filed by all agencies. These summaries did not contain aggregated statistical tables, but they did provide prose descriptions that included statistics on major governmentwide results. However, the most recent of these summaries is for fiscal year 2003. According to the Acting Director of OIP, she was not certain why such summaries had not been made available since then. According to this official, internally the agency found the summaries useful and was considering making them available again. She also stated that these summaries gave a good overall picture of governmentwide processing. Aggregating and summarizing the information in the annual reports serves to maximize their usefulness and accessibility, in accordance with congressional intent, as well as potentially providing Justice with insight into FOIA implementation governmentwide and valuable benchmarks for use in overseeing the FOIA program. Such information would also be valuable for others interested in gauging governmentwide performance. The absence of such summaries reduces the ability of the public and the Congress to consistently obtain a governmentwide picture of FOIA processing. In providing agency views for this testimony, the Acting Director of OIP told us that the department would resume providing summaries, and that these would generally be available by the summer following the issuance of the annual reports. <3. Agency Improvement Plans Generally Included Areas of Improvement Emphasized by the Executive Order> As required by the Executive Order, all the 25 agencies submitted improvement plans based on the results of reviews of their respective FOIA operations, as well as on the areas emphasized by the order. The plans generally addressed these four areas, with 20 of 25 plans addressing all four. In particular, for all but 2 agencies with reported backlog, plans included both measurable goals and timetables for backlog reduction. Further, to increase reliance on dissemination, improve communications on the status of requests, and increase public awareness of FOIA processing, agencies generally set milestones to accomplish activities promoting these aims. In some cases, agencies did not set goals for a given area because they determined that they were already strong in that area. <3.1. 
All Agencies Addressed Reducing Backlog, and Most Set Measurable Goals and Milestones> The Executive Order states that improvement plans shall include specific activities that the agency will implement to eliminate or reduce the agency's FOIA backlog, including (as applicable) changes that will make the processing of FOIA requests more streamlined and effective. It further states that plans were to include concrete milestones, with specific timetables and outcomes to be achieved, to allow the plan's success to be measured and evaluated. In addition, the Justice guidance suggested a number of process improvement areas for agencies to consider, such as receiving or responding to requests electronically, automated FOIA processing, automated tracking capabilities, and multitrack processing. It also gave agencies considerable leeway in choosing means of measurement of success for improving timeliness and thus reducing backlog. All agency plans discussed avoiding or reducing backlog, and most (22 out of 25) established measurable goals and timetables for this area of focus. One agency, SBA, reported that it had no backlog, so it set no goals. A second agency, NSF, set no specific numerical goals for backlog reduction, but in fiscal year 2005 its backlog was minimal, and its median processing time was 14.26 days. In addition, its plan includes activities to increase efficiency and to monitor and analyze backlogged requests to determine whether systemic changes are warranted in its processes. A third agency, HUD, set a measurable goal for reducing backlog, but did not include a date by which it planned to achieve this goal. However, it achieved this goal, according to agency officials, by November 2006. In choosing backlog goals and measures, agencies had to carefully determine which ones best fit their individual circumstances, which can vary greatly from one agency to another. The goals agencies chose varied accordingly; some, for example, set goals for reductions in the number of pending FOIA cases that were over 1 year old. NRC chose to focus on improving processing times, setting percentage goals for completion of different types of requests (for example, completing 75 percent of simple requests within 20 days). Labor's plan sets goals that aim for larger percentages of reduction for the oldest categories of pending requests (75 percent reduction for the oldest, 50 percent reduction for the next oldest, and so on). A number of agencies included goals to close their oldest 5 to 10 requests (Justice, the Treasury, Education, Commerce, Defense, GSA, NASA, SSA, and VA). Other agencies planned to eliminate their backlogs (for example, OPM and DHS) or to eliminate fiscal year 2005 backlog (Transportation), and several agencies chose goals based on a percentage of reduction of existing backlog (for example, CIA, Commerce, Education, Defense, the Interior, Justice, SSA, the Treasury, and USDA). Some agencies also described plans to perform analyses that would measure their backlogs so that they could then establish the necessary baselines against which to measure progress. In addition to setting backlog targets, agencies also described activities that contribute to reducing backlog. For example, the Treasury plan, which states that backlog reduction is the main challenge facing the department and the focus of its plan, includes such activities (with associated milestones) as reengineering its multitrack FOIA process, monitoring monthly reports, and establishing a FOIA council. 
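The measurable goals just described (percentage reductions in pending cases and shares of requests completed within a set number of days) are straightforward to check against an agency's own counts. The Python sketch below uses made-up figures purely for illustration and does not reflect any agency's actual baseline or targets.

def goal_met(baseline: int, current: int, target_reduction_pct: float) -> bool:
    # True if pending requests have fallen by at least the target percentage.
    return (baseline - current) / baseline * 100 >= target_reduction_pct

def share_within_days(processing_days, limit):
    # Share (percent) of completed requests processed within a day limit, e.g., 20 days.
    return sum(1 for d in processing_days if d <= limit) / len(processing_days) * 100

print(goal_met(baseline=400, current=90, target_reduction_pct=75))   # True: a 77.5 percent reduction
print(round(share_within_days([5, 12, 18, 25, 40, 19], limit=20), 1))  # 66.7, short of a 75 percent goal
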
The agency plans thus provide a variety of activities and measures of improvement that should permit agency heads, the Congress, and the public to assess the agencies success in implementing their plans to reduce backlog. <3.2. Most Agencies Plan to Increase Public Dissemination of Records through Web Sites> The Executive Order calls for increased reliance on the dissemination of records that can be made available to the public without the necessity of a FOIA request, such as through posting on Web sites. In its guidance, Justice notes that agencies are required by FOIA to post frequently requested records, policy statements, staff manuals and instructions to staff, and final agency opinions. It encourages agencies not only to review their activities to meet this requirement, but also to make other public information available that might reduce the need to make FOIA requests. It also suggests that agencies consider improving FOIA Web sites to ensure that they are user friendly and up to date. Agency plans generally established goals and timetables for increasing reliance on public dissemination of records, including through Web sites. Of 25 agencies, 24 included plans to revise agency Web sites and add information to them, and 12 of these are making additional efforts to ensure that frequently requested documents are posted on their Web sites. For example, Defense is planning to increase the number of its components that have Web sites as well as posting frequently requested documents. Interior is planning to facilitate the posting of frequently requested documents by using scanning and redaction equipment to make electronic versions readily available. Agencies planned other related activities, such as making posted documents easier to find, improving navigation, and adding other helpful information. For example, AID plans to establish an information/searching decision tree to assist Web site visitors by directing them to agency public affairs staff who may be able to locate information and avoid the need for visitors to file FOIA requests. HUD plans activities to anticipate topics that may produce numerous FOIA requests ( hot button issues) and post relevant documents. Education is planning to use its automated tracking technology to determine when it is receiving multiple requests for similar information and then post such information on its Web site. The Treasury plan does not address increasing public dissemination of records. The Treasury s plan, as mentioned earlier, is focused on backlog reduction. It does not mention the other areas emphasized in the Executive Order, list them among the areas it selected for review, or explain the decision to omit them from the review and plan. Treasury officials told us that they concentrated in their plan on areas where they determined the department had a deficiency: namely, a backlog consisting of numerous requests, some of which were very old (dating as far back as 1991). By comparison, they did not consider they had deficiencies in the other areas. They also stated that neither Justice nor OMB had suggested that they revise the plan to include these areas. With regard to dissemination, they told us that they did not consider increasing dissemination to be mandatory, and they noted that their Web sites currently provide frequently requested records and other public documents, as required by the act. 
However, without a careful review of the department s current dissemination practices or a plan to take actions to increase dissemination, the Treasury does not have assurance that it has identified and exploited available opportunities to increase dissemination of records in such a way as to reduce the need for the public to make FOIA requests, as stressed by the Executive Order. <3.3. Most Agency Plans Included Improving Status Communications with FOIA Requesters> The Executive Order sets as policy that agencies shall provide FOIA requesters ways to learn about the status of their FOIA requests and states that agency improvement plans shall ensure that FOIA administration is in accordance with this policy. In its implementation guidance, Justice reiterated the order s emphasis on providing status information to requesters and discussed the need for agencies to examine, among other things, their capabilities for tracking status and the forms of communication used with requesters. Most agencies (22 of 25) established goals and timetables for improving communications with FOIA requesters about the status of their requests. Goals set by these agencies included planned changes to communications, including sending acknowledgement letters, standardizing letters to requesters, including information on elements of a proper FOIA request in response letters, and posting contact information on Web pages. Other activities included establishing toll free numbers for requesters to obtain status information, acquiring software to allow requesters to track the status of their requests, and holding public forums. Three agencies did not include improvement goals because they considered them unnecessary. In two cases (Defense and EPA), agencies considered that status communications were already an area of strength. Defense considered that it was strong in both customer responsiveness and communications. Defense s Web site provides instructions for requesters on how to get information about the status of requests, as well as information on Requester Service Centers and Public Liaisons. Officials also told us that this information is included in acknowledgement letters to requesters, and that the department is working to implement an Interactive Customer Collection tool that would enable requesters to provide feedback. Similarly, EPA officials told us that they considered the agency s activities to communicate with requesters on the status of their requests to be already effective, noting that many of the improvements planned by other agencies were already in effect at EPA. Officials also stated that EPA holds regular FOIA requester forums (the last in November 2006), and that EPA s requester community had expressed satisfaction with EPA s responsiveness. EPA s response to the Executive Order describes its FOIA hotline for requesters and its enterprise FOIA management system, deployed in 2005, that provides cradle to grave tracking of incoming requests and responses. The third agency, the Treasury, did not address improving status communications, as its plan is focused on backlog reduction. As required by the Executive Order, the Treasury did set up Requester Service Centers and Public Liaisons, which are among the mechanisms envisioned to improve status communications. However, because the Treasury omitted status communications from the areas of improvement that it selected for review, it is not clear that this area received attention commensurate with the emphasis it was given in the Executive Order. 
Without attention to communication with requesters, the Treasury increases the risk that its FOIA operations will not be responsive and citizen centered, as envisioned by the Executive Order. <3.4. Agencies Generally Plan to Rely on FOIA Reference Guides to Increase Public Awareness of FOIA Processing> The Executive Order states that improvement plans shall include activities to increase public awareness of FOIA processing, including (as appropriate) expanded use of Requester Service Centers and FOIA Public Liaisons, which agencies were required to establish by the order. In Justice s guidance, it linked this requirement to the FOIA Reference Guide that agencies are required to maintain as an aid to potential FOIA requesters, because such guides can be an effective means for increasing public awareness. Accordingly, the Justice guidance advised agencies to double-check these guides to ensure that they remain comprehensive and up to date. Most agencies (23 of 25) defined goals and timetables for increasing public awareness of FOIA processing, generally including ensuring that FOIA reference guides were up to date. In addition, all 25 agencies established requester service centers and public liaisons as required by the Executive Order. Besides these activities, certain agencies planned other types of outreach: for example, the Department of State reported taking steps to obtain feedback from the public on how to improve FOIA processes; the Department of the Interior plans to initiate feedback surveys on requesters FOIA experience; and the Department of Labor is planning to hold public forums and solicit suggestions from the requester community. Defense did not set specific goals and milestones in this area; according to Defense, it did not do so because its FOIA handbook had already been updated in the fall of 2005. Department officials told us that in meeting their goals and milestones for revising FOIA Web sites, they expect to improve awareness of Defense s FOIA process, as well as improving public access and other objectives. As mentioned earlier, the Treasury did not address this area in its review or plan. However, Treasury has established Requester Service Centers and FOIA Public Liaisons, as required. The Treasury s Director of Disclosure Services also told us that the Treasury provides on its Web site a FOIA handbook, a Privacy Act handbook, and a citizen s guide for requesters. In addition, this official told us that the Treasury had updated its FOIA handbook in 2005 and conducted staff training based on the update. However, at the time of our review, the FOIA handbook on the Web site was a version dated January 2000. When we pointed out that this earlier version was posted, the official indicated that he would arrange for the most recent version to be posted. Because the Treasury did not review its efforts to increase public awareness, it missed an opportunity to discover that the handbook on the Web site was outdated and thus had reduced effectiveness as a tool to explain the agency s FOIA processing to the public. Without further attention to increasing public awareness, the Treasury lacks assurance that it has taken all appropriate steps to ensure that the public has the means of understanding the agency s FOIA processing. <4. Annual Reporting and Selected Improvement Plans Could Be Further Enhanced> The annual FOIA reports continue to provide valuable information about citizens use of this important tool for obtaining information about government operation and decisions. 
The value of this information is enhanced when it can be used to reveal trends and support generalizations, but our ability to generalize about processing times whether from agency to agency or year to year is limited because only median times are reported. Given that processing times are an important gauge of government responsiveness to citizen inquiries, this limitation impedes the development of broader pictures of FOIA operations, which could be useful in monitoring efforts to improve processing and reduce the increasing backlog of requests, as intended by the Executive Order. Finally, having aggregated statistics and summaries could increase the value of the annual reporting process for assessing the performance of the FOIA program as a whole. In the draft report on which my statement today is based, we suggest that the Congress consider amending the act to require agencies to report additional statistics on processing time, which at a minimum should include average times and ranges. We also recommend that Justice provide aggregated statistics and summaries of the annual reports. The Executive Order provided a useful impetus for agencies to review their FOIA operations and ensure that they are appropriately responsive to the public generally and requesters specifically. Our draft report makes recommendations aimed at improving selected agency improvement plans. Nonetheless, all the plans show a commendable focus on making measurable improvements and form a reasonable basis for carrying out the order s goals. In summary, increasing the requirements for annual reporting would further improve the public visibility of the government s implementation of FOIA. In addition, implementing the improvement plans and reporting on their progress should serve to keep management attention on FOIA and its role in keeping citizens well informed about the operations of their government. However, to realize the goals of the Executive Order, it will be important for Justice and the agencies to continue to refine the improvement plans and monitor progress in their implementation. Mr. Chairman, this completes my statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. <5. Contact and Acknowledgments> If you should have questions about this testimony, please contact me at (202) 512-6240 or [email protected]. Other major contributors included Barbara Collier, Kelly Shaw, and Elizabeth Zhao. Attachment I: Scope and Methodology For the draft report on which this testimony is based, we gauged agencies progress in processing requests by analyzing the workload data (from fiscal year 2002 through 2005) included in the 25 agencies annual FOIA reports to assess trends in volume of requests received and processed, median processing times, and the number of pending cases. All agency workload data were self- reported in annual reports submitted to the Attorney General. To assess the reliability of the information contained in agency annual reports, we interviewed officials from selected agencies and assessed quality control processes agencies had in place. We selected 10 agencies to assess data reliability: the Departments of Agriculture (USDA), Defense, Education, the Interior, Labor, and Veterans Affairs, as well as the National Aeronautics and Space Administration, National Science Foundation, Small Business Administration, and Social Security Administration. 
We chose the Social Security Administration and Veterans Affairs because they processed a majority of the requests. To ensure that we selected agencies of varying size, we chose the remaining 8 agencies by ordering them according to the number of requests they received, from smallest to largest, and choosing every third agency. These 10 agencies account for 97 percent of the received requests that were reported in the 25 agencies annual reports. Of the 10 agencies that were assessed for data reliability, we determined that the data for USDA s Farm Service Agency were not reliable; these data account for over 80 percent of the reported USDA data. We therefore eliminated USDA s data from our analysis. Because of this elimination, our analysis was of 24 major agencies (herein we refer to this scope as governmentwide). Table 7 shows the 25 agencies and their reliability assessment status. To determine to what extent the agency improvement plans contain the elements emphasized by the order, we first analyzed the Executive Order to determine how it described the contents of the improvement plans. We determined that the order emphasized the following areas to be addressed by the plans: (1) reducing the backlog of FOIA requests, (2) increasing reliance on public dissemination of records (affirmative and proactive), including through Web sites, (3) improving communications with FOIA requesters about the status of their requests, and (4) increasing public awareness of FOIA processing, including updating an agency s FOIA Reference Guide. We then analyzed the 25 agencies (including USDA) plans to determine whether they contained specific outcome-oriented goals and timetables for each of these four elements. We evaluated the versions of agency plans available as of December 15, 2006. We also reviewed the Executive Order itself, implementing guidance issued by OMB and the Department of Justice, other FOIA guidance issued by Justice, and our past work in this area. We conducted our review in accordance with generally accepted government auditing standards. We performed our work from May 2006 to February 2007 in Washington, D.C.
Attachment II: Freedom of Information Act Exemptions
The following matters are exempt from FOIA:
Exemption 1: (A) Specifically authorized under criteria established by an Executive Order to be kept secret in the interest of national defense or foreign policy and (B) are in fact properly classified pursuant to such Executive Order.
Exemption 2: Related solely to the internal personnel rules and practices of an agency.
Exemption 3: Specifically exempted from disclosure by statute (other than section 552b of this title), provided that such statute (A) requires that matters be withheld from the public in such a manner as to leave no discretion on the issue, or (B) establishes particular criteria for withholding or refers to particular types of matters to be withheld.
Exemption 4: Trade secrets and commercial or financial information obtained from a person and privileged or confidential.
Exemption 5: Inter-agency or intra-agency memorandums or letters which would not be available by law to a party other than an agency in litigation with the agency.
Exemption 6: Personnel and medical files and similar files the disclosure of which would constitute a clearly unwarranted invasion of personal privacy.
Exemption 7: Records or information compiled for law enforcement purposes, but only to the extent that the production of such law enforcement records or information could reasonably be expected to interfere with enforcement proceedings; would deprive a person of a right to a fair trial or impartial adjudication; could reasonably be expected to constitute an unwarranted invasion of personal privacy; could reasonably be expected to disclose the identity of a confidential source, including a State, local, or foreign agency or authority or any private institution which furnished information on a confidential basis, and, in the case of a record or information compiled by a criminal law enforcement authority in the course of a criminal investigation or by an agency conducting a lawful national security intelligence investigation, information furnished by a confidential source; would disclose techniques and procedures for law enforcement investigations or prosecutions, or would disclose guidelines for law enforcement investigations or prosecutions if such disclosure could reasonably be expected to risk circumvention of the law; or could reasonably be expected to endanger the life or physical safety of an individual.
Exemption 8: Contained in or related to examination, operating, or condition reports prepared by, on behalf of, or for the use of an agency responsible for the regulation or supervision of financial institutions.
Exemption 9: Geological and geophysical information and data, including maps, concerning wells.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The Freedom of Information Act (FOIA) establishes that federal agencies must provide the public with access to government information, enabling them to learn about government operations and decisions. To help ensure proper implementation, the act requires that agencies annually report specific information about their FOIA operations, such as numbers of requests received and processed and median processing times. In addition, a recent Executive Order directs agencies to develop plans to improve their FOIA operations, including decreasing backlogs. GAO was asked to testify on the results of its study on FOIA processing and agencies' improvement plans. The draft report on the study is currently out for comment at the agencies involved (and is thus subject to change). For the study, GAO reviewed status and trends of FOIA processing at 25 major agencies as reflected in annual reports, as well as the extent to which improvement plans contain the elements emphasized by the Executive Order. To do so, GAO analyzed the 25 agencies' annual reports and improvement plans. What GAO Found Based on data in annual reports from 2002 to 2005, the public continued to submit more requests for information from the federal government through FOIA. Despite increasing the numbers of requests processed, many agencies did not keep pace with the volume of requests that they received. As a result, the number of pending requests carried over from year to year has been steadily increasing. Agency reports also show great variations in the median times to process requests (less than 10 days for some agency components to more than 100 days at others). However, the ability to determine trends in processing times is limited by the form in which these times are reported: that is, in medians only, without averages (that is, arithmetical means) or ranges. Although medians have the advantage of providing representative numbers that are not skewed by a few outliers, it is not statistically possible to combine several medians to develop broader generalizations (as can be done with arithmetical means). This limitation on aggregating data impedes the development of broader pictures of FOIA operations, which could be useful in monitoring efforts to improve processing and reduce the increasing backlog of requests, as intended by the Executive Order. The improvement plans submitted by the 25 agencies mostly included goals and timetables addressing the four areas of improvement emphasized by the Executive Order: eliminating or reducing any backlog of FOIA requests; increasing reliance on dissemination of records that can be made available to the public without the need for a FOIA request, such as through posting on Web sites; improving communications with requesters about the status of their requests; and increasing public awareness of FOIA processing. Most of the plans (20 of 25) provided goals and timetables in all four areas; some agencies omitted goals in areas where they considered they were already strong. Although details of a few plans could be improved (for example, one agency did not explicitly address areas of improvement other than backlog), all the plans focus on making measurable improvements and form a reasonable basis for carrying out the goals of the Executive Order.
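To make the statistical point above concrete, the short Python sketch below uses invented processing times (not data from the agency reports) to show why reported means can be rolled up into a broader figure while reported medians cannot: a weighted average of component means exactly reproduces the combined mean, but the combined median must be computed from the underlying requests.

```python
# Illustrative sketch only: the processing times below are invented, not GAO data.
# It shows why reported means can be aggregated across components but medians cannot.
import statistics

# Hypothetical per-request processing times (in days) for two agency components.
component_a = [5, 6, 7, 8, 200]          # a few very slow requests skew the mean
component_b = [30, 35, 40, 45, 50, 55]

for name, times in (("A", component_a), ("B", component_b)):
    print(f"Component {name}: mean={statistics.mean(times):.1f}, "
          f"median={statistics.median(times):.1f}, n={len(times)}")

# Means combine exactly using only each component's reported mean and request count...
combined_mean = (
    statistics.mean(component_a) * len(component_a)
    + statistics.mean(component_b) * len(component_b)
) / (len(component_a) + len(component_b))
assert abs(combined_mean - statistics.mean(component_a + component_b)) < 1e-9

# ...but the combined median has to be computed from the underlying requests;
# it cannot, in general, be derived from the component medians alone.
combined_median = statistics.median(component_a + component_b)
print(f"Combined: mean={combined_mean:.1f}, median={combined_median:.1f}")
```

In this hypothetical example, the two components report medians of 7 and 42.5 days, yet the combined median is 35 days, a figure that cannot be recovered from the reported medians alone, while the combined mean (about 43.7 days) follows exactly from the component means and request counts.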
<1. Background> There are two approaches for reorganizing or terminating a large financial company. Large financial companies may be reorganized or liquidated under a judicial bankruptcy process or resolved under special legal and regulatory resolution regimes that have been created to address insolvent financial entities such as insured depository institutions and insurance companies. <1.1. Bankruptcy Proceedings> Bankruptcy is a federal court procedure, the goal of which is to help individuals and businesses eliminate or restructure debts they cannot repay and help creditors receive some payment in an equitable manner. Generally the filing of a bankruptcy petition operates as an automatic stay; that is, it stops most lawsuits, foreclosures, and other collection activities against the debtor. Equitable treatment of creditors means all creditors with substantially similar claims are classified similarly and receive the same treatment. For example, a class of secured creditors those with liens or other secured claims against the debtor s property will receive similar treatment as to their secured claims. Business debtors may seek liquidation, governed primarily by Chapter 7 of the Code, or reorganization, governed by Chapter 11. Proceedings under Chapters 7 and 11 can be voluntary (initiated by the debtor) or involuntary (generally initiated by at least three creditors holding at least a certain minimum amount of claims against the debtor). In an involuntary proceeding, the debtor can defend against the proceeding, including presenting objections. The judge subsequently decides whether to grant the creditors request and permit the bankruptcy to proceed, dismiss the request, or enter any other appropriate order. A Chapter 7 proceeding is a court-supervised procedure by which a trustee takes over the assets of the debtor s estate subject to limited exemptions, reduces them to cash, and makes distributions to creditors, subject to the rights of secured creditors to the collateral securing their loans to the debtor. A reorganization proceeding under Chapter 11 allows debtors to continue some or all of their operations subject to court supervision as a way to satisfy creditor claims. The debtor typically remains in control of its assets, and is called a debtor-in-possession (DIP). Under certain circumstances, the court can direct the U.S. Trustee to appoint a Chapter 11 trustee to take over the affairs of the debtor. As shown in figure 1, a firm going through a Chapter 11 bankruptcy generally will pass through several stages. Among these are: First-day motions. The most common first-day motions relate to the continued operation of the debtor s business and involve matters such as requests to use cash collateral liquid assets on which secured creditors have a lien or claim and obtaining financing, if any. Disclosure. The disclosure statement must include information on the debtor s assets, liabilities, and business affairs sufficient to enable creditors to make informed judgments about how to vote on the debtor s reorganization plan and must be approved by the bankruptcy court. Plan of reorganization. A debtor has an exclusive right to file a plan of reorganization within the first 120 days of bankruptcy. The plan describes how the debtor intends to reorganize and treat its creditors. The plan divides claims against the debtor into separate classes and specifies the treatment each class will receive. 
The court may confirm the plan if, among other things, each class of allowed creditors has accepted the plan or the class is not impaired by the plan. If not all classes of impaired creditors vote to accept the plan, the court can still confirm the plan if it is shown that it is fair to all impaired creditors. Reorganization. Possible outcomes, which can be used in combination, include (1) distribution under a plan of the proceeds of a pre-plan sale of the assets of the company (in whole or in part), sometimes called a section 363 sale. Section 363 of the Code permits sales of property of the estate that are free and clear of creditor claims; (2) liquidation of the company s assets with approval of the court, through means other than a 363 sale; and (3) reorganization of the company, in which it emerges from bankruptcy with new contractual rights and obligations that replace or supersede those it had before filing for bankruptcy protection. The debtor, creditors, trustee, or other interested parties may initiate adversary proceedings (in effect, a lawsuit within the bankruptcy case) to preserve or recover money or property, to subordinate a claim of another creditor to their own claims, or for similar reasons. A preference action, for example, can be asserted for payments made to an insider within a year prior to the bankruptcy filing. The U.S. bankruptcy system involves multiple federal entities. Bankruptcy courts are located in 90 federal judicial districts; however, as we reported in 2011, the Southern District of New York and the District of Delaware adjudicate a majority of larger corporate or business bankruptcy cases. The Judicial Conference of the United States serves as the judiciary s principal policymaking body and recommends national policies on all aspects of federal judicial administration. In addition, AOUSC serves as the central administrative support entity for the Judicial Conference and the federal courts, including bankruptcy courts. The Federal Judicial Center is the education and research agency for the federal courts and assists bankruptcy courts with reports and assessments relating to the administration and management of bankruptcy cases. Finally, the Department of Justice s U.S. Trustee Program and the judiciary s Bankruptcy Administrator Program oversee bankruptcy trustees and promote integrity and efficiency in the bankruptcy system by overseeing the administration of bankruptcy estates. <1.2. Financial Companies and the Bankruptcy Code> Large, complex financial companies that are eligible to file for bankruptcy generally file under Chapter 11 of the Code. Such companies operating in the United States engage in a range of financial services activities. Many are organized under both U.S. and foreign laws. The U.S. legal structure is frequently premised on a parent holding company owning regulated subsidiaries (such as depository institutions, insurance companies, broker-dealers, and commodity brokers) and nonregulated subsidiaries that engage in financial activities. Certain financial institutions may not file as debtors under the Code and other entities face special restrictions in using the Code: Insured depository institutions. Under the Federal Deposit Insurance Act, FDIC serves as the conservator or receiver for insured depository institutions placed into conservatorship or receivership under applicable law. Insurance companies.
Insurers generally are subject to oversight by state insurance commissioners, who have the authority to place them into conservatorship, rehabilitation, or receivership. Broker-dealers. Broker-dealers can be liquidated under the Securities Investor Protection Act (SIPA) or under a special subchapter of Chapter 7 of the Code. However, broker-dealers may not file for reorganization under Chapter 11. Commodity brokers. Commodity brokers, which include futures commission merchants, foreign futures commission merchants, clearing organizations, and certain other entities in the derivatives industry, can only use a special subchapter of Chapter 7 for bankruptcy relief. <1.3. Current Role of Financial Regulators in Bankruptcy Proceedings> Regulators often play a role in financial company bankruptcies. With the exception of CFTC and SEC, the Code does not explicitly name federal financial regulators as a party of interest with a right to be heard before the court. In practice, regulators frequently appear before the court in financial company bankruptcies. For example, as receiver of failed insured depository institutions, FDIC s role in bankruptcies of bank holding companies is typically limited to that of creditor. CFTC has the express right to be heard and raise any issues in a case under Chapter 7. SEC has the same rights in a case under Chapter 11. SEC may become involved in a bankruptcy particularly if there are issues related to disclosure or the issuance of new securities. SEC and CFTC are, in particular, involved in Chapter 7 bankruptcies of broker-dealers and commodity brokers. In the event of a broker-dealer liquidation, pursuant to SIPA the bankruptcy court retains jurisdiction over the case and a trustee, selected by the Securities Investor Protection Corporation (SIPC), typically administers the case. SEC may participate in any SIPA proceeding as a party. The Code does not restrict the federal government from providing DIP financing to a firm in bankruptcy, and in certain cases it has provided such funding for example, financing under the Troubled Asset Relief Program (TARP) in the bankruptcies of General Motors and Chrysler. The authority to make new financial commitments under TARP terminated on October 3, 2010. In July 2010, the Dodd-Frank Act amended section 13(3) of the Federal Reserve Act to prohibit the establishment of an emergency lending program or facility for the purpose of assisting a single and specific company to avoid bankruptcy. Nevertheless, the Federal Reserve may design emergency lending programs or facilities for the purpose of providing liquidity to the financial system. <1.4. Current Safe-Harbor Treatment for Financial Contracts under the Code> Although the automatic stay generally preserves assets and prevents creditors from taking company assets in payment of debts before a case is resolved and assets are systematically distributed, the stay is subject to exceptions, one of which can be particularly important in a financial institution bankruptcy. These exceptions commonly referred to as the safe harbor provisions pertain to certain financial and derivative contracts, often referred to as qualified financial contracts (QFC). The types of contracts eligible for the safe harbors are defined in the Code. 
They include derivative financial products, such as forward contracts and swap agreements that financial companies (and certain individuals and nonfinancial companies) use to hedge against losses from other transactions or speculate on the likelihood of future economic developments. Repurchase agreements, which are collateralized instruments that provide short-term financing for financial companies and others, also generally receive safe-harbor treatment. Under the safe-harbor provisions, most counterparties that entered into a qualifying transaction with the debtor may exercise certain contractual rights even if doing so otherwise would violate the automatic stay. In the event of insolvency or the commencement of bankruptcy proceedings, the nondefaulting party in a QFC may liquidate, terminate, or accelerate the contract, and may offset (net) any termination value, payment amount, or other transfer obligation arising under the contract when the debtor files for bankruptcy. That is, generally nondefaulting counterparties subtract what they owe the bankrupt counterparty from what that counterparty owes them (netting), often across multiple contracts. If the result is positive, the nondefaulting counterparties can sell any collateral they are holding to offset what the bankrupt entity owes them. If that does not fully settle what they are owed, the nondefaulting counterparties are treated as unsecured creditors in any final liquidation or reorganization. <1.5. Orderly Liquidation Authority> OLA gives FDIC the authority, subject to certain constraints, to resolve large financial companies, including a bank holding company or a nonbank financial company designated for supervision by the Federal Reserve, outside of the bankruptcy process. This regulatory resolution authority allows for FDIC to be appointed receiver for a financial company if the Secretary of the Treasury, in consultation with the President, determines, upon the recommendation of two-thirds of the Board of Governors of the Federal Reserve and (depending on the nature of the financial firm) FDIC, SEC, or the Director of the Federal Insurance Office, among other things, that the firm s failure and its resolution under applicable law, including bankruptcy, would have serious adverse effects on U.S. financial stability and no viable private-sector alternative is available to prevent the default. In December 2013, FDIC released for public comment a notice detailing a proposed single-point-of-entry (SPOE) approach to resolving a systemically important financial institution under OLA. Under the SPOE approach, as outlined, FDIC would be appointed receiver of the top-tier U.S. parent holding company of a covered financial company determined to be in default or in danger of default pursuant to the appointment process set forth in the Dodd-Frank Act. Immediately after placing the parent holding company into receivership, FDIC would transfer assets (primarily the equity and investments in subsidiaries) from the receivership estate to a bridge financial company. By allowing FDIC to take control of the firm at the parent holding company level, this approach could allow subsidiaries (domestic and foreign) carrying out critical services to remain open and operating. In a SPOE resolution, at the parent holding company level, shareholders would be wiped out, and unsecured debt holders would have their claims written down to reflect any losses that shareholders cannot cover. <1.6. 
Challenges of Resolving Failing Cross Border Financial Companies> The resolution of globally active large financial firms is often associated with complex international, legal, and operational challenges. The resolution of failed financial companies is subject to different national frameworks. During the recent financial crisis, these structural challenges led to government rescues or disorderly liquidations of systemic firms. Insolvency laws vary widely across countries. The legal authorities of some countries are not designed to resolve problems in financial groups operating through multiple legal entities that span borders. Some resolution authorities may not encourage cooperative solutions with foreign resolution authorities. Regulatory and legal regimes may conflict. Depositor preference, wholesale funding arrangements, derivatives, and repurchase agreements are often treated differently among countries when a firm enters bankruptcy. Some resolution authorities may lack the legal tools or authority to share information with relevant foreign authorities about the financial group as a whole or subsidiaries or branches. Country resolution authorities may have as their first responsibility the protection of domestic financial stability and minimization of any risk to public funds. For instance, if foreign authorities did not have full confidence that national and local interests would be protected, the assets of affiliates or branches of a U.S.-based financial institution chartered in other countries could be ring fenced or isolated and wound down separately under the insolvency laws of other countries thus complicating home-country resolution efforts. <1.7. Chapter 15 of the Bankruptcy Code Governs Judicial Cross-Border Coordination in Limited Circumstances> In 2005, the United States adopted Chapter 15 of the U.S. Bankruptcy Code. Chapter 15 is based on the Model Law on Cross-Border Insolvency of the United Nations Commission on International Trade Law (UNCITRAL). The model law is intended to promote coordination between courts in different countries during insolvencies and has been adopted in 21 jurisdictions. More than 450 Chapter 15 cases have been filed since its adoption, with more than half filed in the Southern District of New York and the District of Delaware. Among the stated objectives of Chapter 15 are promoting cooperation between U.S. and foreign parties involved in a cross-border insolvency case, providing for a fair process that protects all creditors, and facilitating the rescue of a distressed firm. In pursuit of these goals, Chapter 15 authorizes several types of coordination, including U.S. case trustees or other authorized entities operating in foreign countries on behalf of a U.S. bankruptcy estate; foreign representatives having direct access to U.S. courts, including the right to commence a proceeding or seek recognition of a foreign proceeding; and U.S. courts communicating information they deem important, coordinating the oversight of debtors activities, and coordinating proceedings. Chapter 15 excludes the same financial institutions that are generally not eligible to file as debtors under the Code (such as insured depository institutions and U.S. insurance companies), with the exception of foreign insurance companies. It also excludes broker-dealers that can be liquidated under SIPA or a special provision of Chapter 7 of the Code and commodity brokers that can be liquidated under a different special provision of Chapter 7. 
Based on the UNCITRAL model law, Chapter 15 contains a public policy exception that allows a U.S. court to refuse cooperation and coordination if doing so would be manifestly contrary to the public policy of the United States. <2. No Changes Have Been Made to the Bankruptcy Code but Proposals Were Introduced in the Previous Congress> <2.1. Two Proposals Would Have Made Broad Changes Relating to Complex Financial Institutions> Since we last reported on financial company bankruptcies in July 2013, no changes have been made to Chapters 7, 11, or 15 of the Bankruptcy Code relating to large financial companies, although two bills were introduced in the 113th Congress that would have attempted to address challenges associated with the reorganization of large financial firms as governed by Chapter 11 of the Code. Neither bill was signed into law nor re-introduced in the current Congress, as of March 12, 2015. The Taxpayer Protection and Responsible Resolution Act (S. 1861) was introduced in the Senate on December 19, 2013. The bill would have added a new chapter to the Code Chapter 14: Liquidation, Reorganization, or Recapitalization of a Covered Financial Corporation that would have generally applied to bank holding companies or corporations predominantly engaged in activities that the Federal Reserve Board has determined are financial in nature. Its provisions would have made changes to the role of regulators, changed the treatment of QFCs, and specifically designated judges to hear Chapter 14 cases, as the following examples illustrate. The proposal would have repealed the regulatory resolution regime in Title II of the Dodd-Frank Act revoking FDIC s role as a receiver of a failed or failing financial company under OLA and returned all laws changed by Title II to their pre-Title II state. The proposal would have allowed the Federal Reserve Board to commence an involuntary bankruptcy and granted the Federal Reserve Board the right to be heard before the court. The proposal would have allowed the court to transfer assets of the estate to a bridge company (on request of the Federal Reserve Board or the trustee and after notice and hearing and not less than 24 hours after the start of the case). The court would have been able to order transfer of assets to a bridge company only under certain conditions (including that a preponderance of evidence indicated the transfer was necessary to prevent imminent substantial harm to U.S. financial stability). FDIC also would have been granted the right to be heard before the court on matters related to the transfer of property to the bridge company. However, this proposal would have explicitly prohibited the Federal Reserve Board from providing DIP financing to a company in bankruptcy or to a bridge company and provided no specific alternative non-market source of funding. The Taxpayer Protection and Responsible Resolution Act (S. 1861) also would have changed the treatment of QFCs in bankruptcy. The rights to liquidate, terminate, offset, or net QFCs would have been stayed for up to 48 hours after bankruptcy filing (or the approval of the petition from the Federal Reserve Board). During the stay, the trustee would have been able to perform all payment and delivery obligations under the QFC that became due after the case commenced. The stay would have been terminated if the trustee failed to perform any payment or delivery obligation. 
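To make the proposed mechanics more concrete, the short Python sketch below models, in highly simplified form, the temporary stay on QFC termination rights described above and the close-out netting described in section 1.4. It is illustrative only: the function names, the flat 48-hour window, and the toy numbers are assumptions for exposition, not provisions drawn verbatim from S. 1861, H.R. 5421, or the Code.

```python
# Illustrative sketch of a temporary stay on QFC termination rights and of simple
# close-out netting. All names, numbers, and rules here are hypothetical.
from dataclasses import dataclass

STAY_HOURS = 48  # assumed maximum stay after the bankruptcy filing

@dataclass
class QFCPosition:
    owed_to_counterparty: float   # what the debtor owes the nondefaulting party
    owed_to_debtor: float         # what the nondefaulting party owes the debtor
    collateral_held: float        # collateral the nondefaulting party holds

def may_terminate(hours_since_filing: float, trustee_performing: bool) -> bool:
    """Counterparty may exercise termination/netting rights once the stay lapses
    (48 hours pass) or as soon as the trustee misses a payment or delivery."""
    return hours_since_filing >= STAY_HOURS or not trustee_performing

def close_out(positions: list[QFCPosition]) -> tuple[float, float]:
    """Net all contracts with one counterparty, then apply collateral to any shortfall.
    Returns (amount recovered from collateral, residual unsecured claim)."""
    net_claim = sum(p.owed_to_counterparty - p.owed_to_debtor for p in positions)
    if net_claim <= 0:
        return 0.0, 0.0           # counterparty owes the estate on a net basis
    collateral = sum(p.collateral_held for p in positions)
    recovered = min(net_claim, collateral)
    return recovered, net_claim - recovered  # shortfall becomes an unsecured claim

# Example: 24 hours into the stay, termination depends on trustee performance.
print(may_terminate(24, trustee_performing=True))    # False
print(may_terminate(24, trustee_performing=False))   # True
print(close_out([QFCPosition(100.0, 30.0, 50.0)]))   # (50.0, 20.0)
```

In this toy example, the counterparty cannot terminate 24 hours into the stay while the trustee performs, but termination and netting become available as soon as performance lapses; netting the single contract leaves a 20-unit shortfall that would be treated as an unsecured claim, consistent with the general description of close-out netting earlier in the background.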
Furthermore, QFCs would not have been able to be transferred to the bridge company unless the bridge assumed all contracts with a counterparty. If transferred to the bridge company, the QFCs could not have been terminated or modified for certain reasons, including the fact that a bankruptcy filing occurred. Aside from the limited exceptions, QFC counterparties would have been free to exercise all of their pre-existing contractual rights, including termination. Finally, the Taxpayer Protection and Responsible Resolution Act (S. 1861) would have required the Chief Justice to designate no fewer than 10 bankruptcy judges with expertise in cases under Title 11 in which a financial institution is a debtor to be available to hear a Chapter 14 case. Additionally, the Chief Justice would have been required to designate at least one district judge from each circuit to hear bankruptcy appeals under Title 11 concerning a covered financial corporation. A second bankruptcy reform proposal, the Financial Institution Bankruptcy Act of 2014 (H.R. 5421), was passed by voice vote by the House of Representatives on December 1, 2014, and would have added a new Subchapter V under Chapter 11. Generally, the proposed subchapter would have applied to bank holding companies or corporations with $50 billion or greater in total assets and whose activities, along with its subsidiaries, are primarily financial in nature. The Financial Institution Bankruptcy Act (H.R. 5421) contained provisions similar or identical to those in the Taxpayer Protection and Responsible Resolution Act (S. 1861) that would have affected the role of regulators, treatment of QFCs, and designation of judges. For example, this proposal would have allowed an involuntary bankruptcy to be commenced by the Federal Reserve Board and allowed for the creation of a bridge company to which assets of the debtor holding company could be transferred. This proposal also would have granted the Federal Reserve Board and FDIC the right to be heard before the court, as well as the Office of the Comptroller of the Currency and SEC (which are not granted this right under the Taxpayer Protection and Responsible Resolution Act). The changes to the treatment of QFCs under this proposal were substantively similar to those under the Taxpayer Protection and Responsible Resolution Act (S. 1861). In addition, the Financial Institution Bankruptcy Act (H.R. 5421) would have required that the Chief Justice would designate no fewer than 10 bankruptcy judges to be available to hear a Subchapter V case. The Chief Justice also would have been required to designate not fewer than three judges of the court of appeals in not fewer than four circuits to serve on an appellate panel. Although the two bills have similarities, there are significant differences. For example, the Financial Institution Bankruptcy Act (H.R. 5421) would not have repealed Title II of the Dodd-Frank Act. Instead, Title II would have remained an alternative to resolving a firm under the Bankruptcy Code. Also, the Financial Institution Bankruptcy Act (H.R. 5421) would not have restricted the Federal Reserve Board from providing DIP financing to a financial firm under the proposed subchapter. Furthermore, the Financial Institution Bankruptcy Act (H.R. 5421) would have given the court broad power in the confirmation of the bankruptcy plan to consider the serious adverse effect that any decision in connection with Subchapter V might have on financial stability in the United States. 
By contrast, the Taxpayer Protection and Responsible Resolution Act (S. 1861) mentioned financial stability as a consideration in specific circumstances, such as whether the Federal Reserve Board could initiate an involuntary bankruptcy under Chapter 14, or whether the court could order a transfer of the debtor s property to the bridge company. Certain provisions in these bills resembled those in OLA and may have facilitated a resolution strategy similar to FDIC s SPOE strategy under OLA. For example, each of the bankruptcy reform bills and FDIC s SPOE strategy under OLA would have allowed for the creation of a bridge company, in which assets, financial contracts, and some legal entities of the holding company would have been transferred, allowing certain subsidiaries to have maintained operations. In addition, OLA, like the bills, included a temporary stay for QFCs. OLA uses a regulatory approach to resolution, while the bankruptcy reform bills in the 113th Congress would have maintained a judicial approach to resolution. Some experts have expressed concern that a regulatory resolution may not adequately ensure the creditors rights to due process. For example, experts attending GAO s 2013 bankruptcy reform roundtables noted that if preferences were given to some counterparties or creditors during a temporary stay, other counterparties or creditors would have the right to take action to recover value later in the process, as opposed to having a judge consider the views of all of the parties prior to making any decisions. However, as we reported in July 2013, other experts have stated that the judicial process of bankruptcy does not contemplate systemic risk, or have some of the tools available for minimizing the systemic risk associated with the failure of a systemically important financial institution. For example, to act quickly in cases involving large and complex financial companies, courts might need to shorten notice periods and limit parties right to be heard, which could compromise due process and creditor rights. In the United States, the judicial process under bankruptcy remains the presumptive method for resolving financial institutions, even those designated as systemically important. <2.2. Another Proposal Would Have Removed Safe Harbor Treatment of QFCs in Bankruptcy> A third proposal would have more narrowly amended the Code. The 21st Century Glass-Steagall Act of 2013 (S. 1282 in the Senate and H.R. 3711 in the House) contained a provision that would have repealed all safe- harbor provisions for QFCs. This legislative proposal was neither signed into law nor re-introduced in the current Congress, as of March 12, 2015. Some experts have identified the safe-harbor treatment of QFCs under the Code as a challenge to an orderly resolution in bankruptcy. For example, safe-harbor treatment can create significant losses to the debtor s estate, particularly for financial institution debtors that often are principal users of these financial products. As we previously reported in July 2011, some experts we interviewed suggested that modifying the safe harbor provisions might help to avoid or mitigate the precipitous decline of the asset values typical in financial institution bankruptcies. For example, these experts suggested that the treatment of QFCs in the Lehman bankruptcy contributed to a significant and rapid loss of asset values to the estate. Other experts we spoke with in 2011 suggested that safe-harbor treatment might lessen market discipline. 
Because counterparties entered into QFCs may close out their contracts even if doing so would otherwise violate the automatic stay, the incentive to monitor the risk of each other could be reduced. Additionally, as we reported in July 2013, attendees of our roundtable discussions on bankruptcy reform noted that the safe harbors lead to a larger derivatives market and greater reliance on short-term funding because QFCs would not be subject to a stay, which could increase systemic risk in the financial system. However, others argue that a repeal of the safe-harbor provisions could have adverse effects. As we previously reported in July 2011, these experts assert that subjecting any QFCs to the automatic stay in bankruptcy would freeze many assets of the counterparties of the failed financial institution, causing a chain reaction and a subsequent systemic financial crisis. In January 2011, regulatory officials we spoke with also told us that the safe harbor provisions uphold market discipline through margin, capital, and collateral requirements. They said that the requirement for posting collateral limits the amount of risk counterparties are willing to undertake. In addition, during the 2013 expert roundtable on financial company bankruptcies, one expert noted that one of the goals of safe harbors is to limit market turmoil during a bankruptcy that is, they are to prevent the insolvency of one firm from spreading to other firms. <3. Recent Efforts to Enhance International Coordination to Resolve Failing Financial Companies under Bankruptcy> In the United States the presumptive mechanism to resolve a failed cross- border large financial company continues to be through the judicial bankruptcy process, though no statutory changes have been made to Chapter 15 of the Code or the U.S. judicial bankruptcy process to address impediments to an orderly resolution of a large, multinational financial institution. However, while some structural challenges discussed earlier remain, others, such as conflicting regulatory regimes and the treatment of cross-border derivatives, are being addressed through various efforts. For example, the Federal Reserve and FDIC have taken certain regulatory actions mandated by the Dodd-Frank Act authorities toward facilitating orderly resolution, including efforts that could contribute to cross-border coordination. Specifically, certain large financial companies must provide the Federal Reserve and FDIC with periodic reports of their plans for rapid and orderly resolution in the event of material financial distress or failure under the Code. The resolution plans or living wills are to demonstrate how a company could be resolved in a rapid manner under the Code. FDIC and the Federal Reserve have said that the plans were expected to address potential obstacles to global cooperation, among others. In 2014, FDIC and the Federal Reserve sent letters to a number of large financial companies identifying specific shortcomings with the resolution plans that those firms will need to address in their 2015 submissions, due on or before July 1, 2015, for the first group of filers. International bodies have also focused on strengthening their regulatory structures to enable the orderly resolution of a failing large financial firm and have taken additional actions to facilitate cross-border resolutions. 
In October 2011, the Financial Stability Board (FSB) an international body that monitors and makes recommendations about the global financial system issued a set of principles to guide the development of resolution regimes for financial firms active in multiple countries. For example, each jurisdiction should have the authority to exercise resolution powers over firms, jurisdictions should have policies in place so that authorities are not reliant on public bailout funds, and statutory mandates should encourage a cooperative solution with foreign authorities. In addition, in December 2013 the European Parliament and European Council reached agreement on the European Union s (EU) Bank Recovery and Resolution Directive, which establishes requirements for national resolution frameworks for all EU member states and provides for resolution powers and tools. For example, member states are to appoint a resolution authority, institutions must prepare and maintain recovery plans, resolution authorities are to assess the extent to which firms are resolvable without the assumption of extraordinary financial support, and authorities are to cooperate effectively when dealing with the failure of cross-border banks. Unlike the United States, EU and FSB do not direct resolution authorities to use the bankruptcy process developed for corporate insolvency situations. In a letter to the International Swaps and Derivatives Association (ISDA) in 2013, FDIC, the Bank of England, BaFin in Germany, and the Swiss Financial Market Supervisory Authority called for changes in the exercise of termination rights and other remedies in derivatives contracts following commencement of an insolvency or resolution action. In October 2014, 18 major global financial firms agreed to sign a new ISDA Resolution Stay Protocol to facilitate the cross-border resolution of a large, complex institution. This protocol was published and these 18 financial firms agreed to it on November 12, 2014, and certain provisions of which became effective in January 2015. Generally, parties adhering to this protocol have agreed to be bound by certain limitations on their termination rights and other remedies in the event one of them becomes subject to certain resolution proceedings, including OLA. These stays are intended to give resolution authorities and insolvency administrators time to facilitate an orderly resolution of a troubled financial firm. The Protocol also incorporates certain restrictions on creditor contractual rights that would apply when a U.S. financial holding company becomes subject to U.S. bankruptcy proceedings, including a stay on cross-default rights that would restrict the counterparty of a non-bankrupt affiliate of an insolvent U.S. financial holding company from immediately terminating its derivatives contracts with that affiliate. Finally, a United Nations working group (tasked with furthering adoption of the UNCITRAL Model Law) included the insolvency of large and complex financial institutions as part of its focus on cross-border insolvency. In 2010, Switzerland proposed that the working group study the feasibility of developing an international instrument for the cross- border resolution of large and complex financial institutions. The working group has acknowledged and has been monitoring the work undertaken by FSB, Basel Committee on Banking Supervision, the International Monetary Fund, and EU. <4. 
Agency Comments> We provided a draft of this report to AOUSC, CFTC, Departments of Justice and the Treasury, FDIC, Federal Reserve, and SEC for review and comment. The agencies did not provide written comments. We received technical comments from the Department of the Treasury, FDIC, Federal Reserve, and SEC, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, Director of the Administrative Office of the U.S. Courts, the Chairman of the Commodity Futures Trading Commission, Attorney General, the Secretary of the Treasury, the Chairman of the Federal Deposit Insurance Corporation, the Director of the Federal Judicial Center, the Chair of the Board of Governors of the Federal Reserve System, the Chair of the Securities and Exchange Commission, and other interested parties. The report also is available at no charge on the GAO web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Cindy Brown Barnes at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Appendix I: Updates on the Bankruptcy Proceedings for Lehman Brothers Holdings, Inc., MF Global, and Washington Mutual In our July 2011 and July 2012 reports on the bankruptcy of financial companies, we reported on the status of the bankruptcy proceedings of, among other financial companies, Lehman Brothers Holdings Inc., MF Global, and Washington Mutual. In the 2011 report, we found that comprehensive data on the number of financial companies in bankruptcies were not readily available. We collected information to update the status of the bankruptcy proceedings for Lehman Brothers Holdings Inc., MF Global, and Washington Mutual. Since we last reported in July 2012, in each case, additional payments to creditors have been distributed and litigation with various parties is ongoing. <5. Update on the Lehman Bankruptcy> Lehman Brothers Holdings Inc. (Lehman) was an investment banking institution that offered equity, fixed-income, trading, asset management, and other financial services. In 2008, Lehman was the fourth largest U.S. investment bank and had been in operation since 1850. It had 209 registered subsidiaries in 21 countries. On September 15, 2008, Lehman filed Chapter 11 cases in the U.S. Bankruptcy Court. Its affiliates filed for bankruptcy over subsequent months. Some of Lehman s affiliates also filed bankruptcy or insolvency proceedings in foreign jurisdictions. There are three different legal proceedings involving (1) the holding company or LBHI, (2) the U.S. broker dealer or LBI, and (3) the U.K. broker dealer or LBIE. On September 19, 2008, Lehman s broker-dealer was placed into liquidation under the Securities Investor Protection Act (SIPA). The bankruptcy court approved the sale of LBI s assets to Barclays PLC on September 20, 2008 5 days after the filing of the LBHI Chapter 11 case. In March 2010, LBHI debtors filed their proposed Chapter 11 plan. In December 2010, a group of senior creditors filed an alternative plan. Since then, various plan amendments and counter plans were filed. In December 2011, the U.S. Bankruptcy Court for the Southern District of New York confirmed a reorganization plan for LBHI and the plan took effect in March 2012. <5.1. Payments to Creditors Continue> LBHI had more than 100,000 creditors. 
As of October 2, 2014, some $8.6 billion had been distributed to LBHI creditors in the nonpriority unsecured claims class. The Trustee of LBI has distributed more than $106 billion to 111,000 customers. As of September 2014, 34 billion had been distributed by the LBIE Administrator to counterparties in the House Estate (general unsecured estate) and the Trust Estate (Client Assets, Client Money, and Omnibus Trust). In February 2015, the bankruptcy court approved a second interim distribution of $2.2 billion to general unsecured creditors with allowed claims. This would bring the total distributions to allowed general unsecured creditors to approximately 27 percent of their allowed claims. <5.2. Litigation Continues> There is ongoing litigation involving a breach of a swap with Giants Stadium, the payment of creditor committee members' legal fees, and transactions with foreign entities, according to an official of the U.S. Trustees Program. Litigation concerning issues surrounding the sale of LBI assets to Barclays PLC also continues. On December 15, 2014, the SIPA Trustee filed a petition for a writ of certiorari with the U.S. Supreme Court seeking review of the lower court rulings that awarded $4 billion of margin cash assets to Barclays. <6. Update on the MF Global Bankruptcy> MF Global Holdings Ltd. (MFGH) was one of the world's leading brokers in markets for commodities and listed derivatives. The firm was based in the United States and had operations in Australia, Canada, Hong Kong, India, Japan, Singapore, and the U.K. On October 31, 2011, MFGH and one of its affiliates filed Chapter 11 cases in the U.S. Bankruptcy Court for the Southern District of New York. In the months following, four other affiliates filed for relief in the Bankruptcy Court. Also on October 31, 2011, the Securities Investor Protection Corporation (SIPC) commenced a SIPA case against MF Global's broker-dealer subsidiary (MFGI). The SIPA trustee has been liquidating the firm's assets and distributing payments to its customers on a rolling basis pursuant to a claims resolution procedure approved by the bankruptcy court overseeing the case. MFGI was required to pay $1.2 billion in restitution to its customers as well as a $100 million penalty. In December 2014, CFTC obtained a federal court consent order against MFGH requiring it to pay $1.2 billion, or the amount necessary, in restitution to ensure that the claims of MFGI customers are paid in full. The bankruptcy court confirmed a liquidation plan for MFGH on April 22, 2013, which became effective in June 2013. As of the end of 2013, the SIPA trustee reported the probability of a 100 percent recovery of allowed net equity claims for all commodities and securities customers of MFGI. <6.1. Payments to Creditors Continue> As of mid-December 2014, the SIPA trustee had completed 100 percent distributions to substantially all categories of commodities and securities customers and a first interim distribution covering 39 percent of allowed unsecured claims. The trustee started to make $551 million in distributions to general creditors on October 30, 2014. An interim payment of $518.7 million went to unsecured general claimants and covered 39 percent of their allowed claims. A reserve fund of $289.8 million was to be held for unresolved unsecured claims, and a reserve fund of $9.9 million was to be held for unresolved priority claims. In April 2014, the SIPA trustee began final distributions to all public customers.
With this distribution, a total of $6.7 billion was to have been returned to more than 26,000 securities and commodities futures customers. General creditor claims totaling more than $23 billion in asserted amounts, as well as substantial unliquidated claims, were filed in this proceeding as of the end of June 2014. As of December 2014, the SIPA trustee reported that, of the 7,687 general creditor claims asserted or reclassified from customer status, only 23 claims remained unresolved. <6.2. Litigation Continues> Current litigation involves a malpractice complaint against PricewaterhouseCoopers (the company's former auditor) and an investigation of the company's officers, according to an official of the U.S. Trustees Program. <7. Update on Washington Mutual Bankruptcy> Washington Mutual Inc. was a thrift holding company that had 133 subsidiaries. Its subsidiary Washington Mutual Bank was the largest savings and loan association in the United States prior to its failure. In the 9 days prior to the bank's receivership by the Federal Deposit Insurance Corporation (FDIC), there were more than $16.7 billion in depositor withdrawals. At the time of its filing, Washington Mutual had about $32.9 billion in total assets and total debt of about $8.1 billion. The bank's failure was the largest bank failure in U.S. history. On September 25, 2008, the Office of Thrift Supervision found Washington Mutual Bank to be unsafe and unsound, closed the bank, and appointed FDIC as the receiver. FDIC, as receiver, then took possession of the bank's assets and liabilities and transferred substantially all of them to JPMorgan Chase for $1.9 billion. On September 26, 2008, Washington Mutual and its subsidiary WMI Investment Corporation filed Chapter 11 cases in the U.S. Bankruptcy Court for the District of Delaware. On March 12, 2010, Washington Mutual, FDIC, and JPMorgan Chase announced that they had reached a settlement on disputed property and claims. This was called the global settlement. On July 28, 2010, the bankruptcy court approved the appointment of an examiner, selected by the U.S. Trustee's office, to investigate the claims of various parties addressed by the global settlement. The seventh amended plan was confirmed by the court on February 24, 2012. The plan established a liquidating trust, the Washington Mutual Liquidating Trust (WMILT), to make subsequent distributions to creditors on account of their allowed claims. Upon the effective date of the plan, Washington Mutual became a newly reorganized company, WMI Holdings Corp., consisting primarily of its subsidiary WMI Mortgage Reinsurance Company, Inc. <7.1. Payments to Creditors Continue> In 2012, there was an initial distribution of $6.5 billion. Since that initial distribution, an additional $660 million has been distributed to creditors, according to officials at the U.S. Trustees Program, including a distribution of $78.4 million paid on August 1, 2014. <7.2. Litigation Continues> In August 2013, WMILT, pursuant to an order by the U.S. Bankruptcy Court for the District of Delaware, filed a declaratory judgment action in the U.S. District Court for the Western District of Washington against FDIC, the Board of Governors of the Federal Reserve System (Federal Reserve), and 90 former employees who were also claimants in the bankruptcy proceeding.
Certain employee claimants have asserted cross-claims against FDIC and the Federal Reserve, contending that the banking agencies are without authority to assert, over WMILT, limits on payments from troubled institutions that are contingent on the termination of a person's employment, because WMILT is a liquidating trust. After the case was transferred to the U.S. Bankruptcy Court for the District of Delaware in July 2014 and all pending motions terminated, most of the parties stipulated to withdraw the reference to the bankruptcy court. FDIC moved to dismiss the complaint on September 5, 2014. The proposed order to withdraw the reference and the briefing on the motion to dismiss remain pending. Appendix II: Objectives, Scope, and Methodology Section 202(e) of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) mandated that we report on the orderliness and efficiency of financial company bankruptcies every year for 3 years after passage of the act, in the fifth year, and every 5 years thereafter. This report, the fourth in the series, examines (1) recent changes to the U.S. Bankruptcy Code (Code) and (2) efforts to improve cross-border coordination to facilitate the liquidation and reorganization of failed large financial companies under bankruptcy. For each of our objectives, we reviewed relevant regulations and laws, including the Code and the Dodd-Frank Act, as well as GAO reports that addressed bankruptcy issues and financial institution failures. We specifically reviewed the reports we issued during the first 3 years of the mandate as well as reports written under the same or similar mandates by the Administrative Office of the United States Courts (AOUSC) and the Board of Governors of the Federal Reserve System (Federal Reserve). We interviewed officials from the following federal agencies because of their roles in financial regulation and bankruptcy proceedings: AOUSC; the Commodity Futures Trading Commission (CFTC); Federal Deposit Insurance Corporation (FDIC); Department of Justice; Department of the Treasury (Treasury), including officials who support the Financial Stability Oversight Council (FSOC); Federal Reserve; and Securities and Exchange Commission (SEC). We also updated the review of published economic and legal research on financial company bankruptcies that we originally completed during the first year of the mandate (see appendix I). For the original search, we relied on Internet search databases (including EconLit and Proquest) to identify studies published or issued after 2000 and through 2010. To address our first objective, we reviewed Chapters 7, 11, and 15 of the Bankruptcy Code for any changes. In addition, we reviewed legislation proposed in the 113th Congress that would change the Code for financial company bankruptcies. We also reviewed academic literature on financial company bankruptcies and regulatory resolution, transcripts of congressional hearings on bankruptcy reform, and transcripts from expert roundtables on bankruptcy reform that were hosted by GAO in 2013. To address our second objective, we reviewed Chapter 15 of the Bankruptcy Code, which relates to coordination between U.S. and foreign jurisdictions in bankruptcy cases in which the debtor is a company with foreign operations, for any changes. In addition, we sought information on U.S. and international efforts to improve coordination of cross-border resolutions from the federal agencies we interviewed.
We also reviewed and analyzed documentary information from the Bank of England, Basel Committee on Banking Supervision, European Union, the Financial Stability Board, BaFin in Germany, International Monetary Fund, Swiss Financial Market Supervisory Authority, and the United Nations. To update the three bankruptcy cases of Lehman Brothers Holdings, Inc.; MF Global Holdings, Ltd.; and Washington Mutual, Inc. discussed in our July 2011 and July 2012 reports, we sought available information for example, trustee reports and reorganization plans on these cases from CFTC, FDIC, Federal Reserve, and SEC; AOUSC, the Department of Justice, and Treasury. In addition, we collected information from prior GAO reports, bankruptcy court documents, and the trustees in each case. To determine whether there were new bankruptcy filings of large financial companies such as those in our case studies, we inquired of AOUSC, CFTC, FDIC, Department of Justice, Treasury, Federal Reserve, and SEC. We also conducted a literature review, which did not show evidence of any new bankruptcy cases filed by large financial companies. We conducted this performance audit from June 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the individual named above, Karen Tremba, Assistant Director; Nancy S. Barry; Patrick Dynes; Risto Laboski; Marc Molino; Barbara Roesmann; Jessica Sandler; and Jason Wildhagen made key contributions to this report. Technical assistance was provided by JoAnna Berry.
Why GAO Did This Study The challenges associated with the bankruptcies of large financial companies during the 2007-2009 financial crisis raised questions about the effectiveness of the U.S. Bankruptcy Code and international coordination for resolving complex financial institutions with cross-border activities. The Dodd-Frank Act mandates that GAO report on an ongoing basis on ways to make the U.S. Bankruptcy Code more effective in resolving certain failed financial companies. GAO has issued three reports on this issue. This fourth report addresses (1) recent changes to the U.S. Bankruptcy Code and (2) efforts to improve cross-border coordination to facilitate the liquidation or reorganization of failed large financial companies under bankruptcy. GAO reviewed laws, court documents, regulations, prior GAO reports, and academic literature on financial company bankruptcies and regulatory resolution. GAO also reviewed documentation from foreign financial regulators and international bodies such as the Financial Stability Board. GAO interviewed officials from the Administrative Office of the United States Courts, Department of Justice, Department of the Treasury, and financial regulators with a role in bankruptcy proceedings. GAO makes no recommendations in this report. The Department of the Treasury, Federal Reserve, FDIC, and the Securities and Exchange Commission provided technical comments on a draft of the report that GAO incorporated as appropriate. What GAO Found The U.S. Bankruptcy Code (Code) chapters dealing with the liquidation or reorganization of a financial company have not been changed since GAO last reported on financial company bankruptcies in July 2013. However, bills introduced in the previous Congress would, if re-introduced and passed, make broad changes to the Code relevant to financial company bankruptcies. The Financial Institution Bankruptcy Act of 2014 (H.R. 5421) and the Taxpayer Protection and Responsible Resolution Act (S. 1861) would have expanded to varying degrees the powers of the Board of Governors of the Federal Reserve System (Federal Reserve) and Federal Deposit Insurance Corporation (FDIC) and would have imposed a temporary stay on financial derivatives (securities whose value is based on one or more underlying assets) that are exempt from the automatic stay under the Code. That stay would prohibit a creditor from seizing or taking other action to collect what the creditor is owed under the financial derivative. The bills also would have added to the Code processes for the resolution of large, complex financial companies similar in some ways to provisions currently in the Orderly Liquidation Authority (OLA) in Title II of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), which grants FDIC the authority to resolve failed systemically important financial institutions under its receivership. For example, each bill would have allowed for the creation of a bridge company, to which certain assets and financial contracts of the holding company would be transferred, allowing certain subsidiaries to continue their operations. The 21st Century Glass-Steagall Act of 2013, a bill introduced in the House of Representatives (H.R. 3711) and the Senate (S. 1282), would have repealed safe-harbor provisions that allow most counterparties in a qualifying transaction with the debtor to exercise certain contractual rights even if doing so would otherwise violate the automatic stay.
As of March 12, 2015, these legislative proposals had not been re-introduced in Congress. In the United States, the presumptive mechanism to resolve a failed large financial company with cross-border operations is through the judicial bankruptcy process. Since GAO's 2013 report, no changes have been made to the chapter of the Code that relates to coordination between U.S. and foreign jurisdictions in bankruptcy cases in which the debtor has foreign operations. Some structural challenges remain, such as conflicting regulatory regimes related to the treatment of financial contracts between parties in different countries when a firm enters bankruptcy, but efforts are underway to address them. Regulators have implemented a Dodd-Frank Act provision that requires certain large financial firms to submit a resolution plan to assist with an orderly bankruptcy process, which regulators expect to help address potential problems with international cooperation, among others. However, in 2014, FDIC and the Federal Reserve identified shortcomings with the plans for a number of large financial companies that those firms are to address in their 2015 submissions. Further, international bodies, such as the Financial Stability Board—an international body that monitors and makes recommendations about the global financial system—have focused on having countries adopt a regulatory approach to resolutions. Other recent actions include a January 2015 stay protocol for derivatives contracts developed by the International Swaps and Derivatives Association that is intended to give regulators time to facilitate an orderly resolution of a troubled firm.
<1. Background> Air bags are one part of a vehicle's occupant protection system, which also includes the structure of the vehicle and seat belts. Seat belts are the primary restraint for an occupant during a crash, and air bags are intended to supplement this protection. In 1999, NHTSA reported that seat belt use alone (lap and shoulder belts) reduces fatalities by 45 percent in crashes involving an impact to the front of the vehicle, frontal air bags without seat belts reduce fatalities by 14 percent, and the combination of seat belts and air bags reduces fatalities by 50 percent. Between 1986 and April 2001, frontal air bags saved an estimated 6,856 lives but caused 175 fatalities confirmed by NHTSA: 19 infants in a rear-facing child seat, 85 children (not in a rear-facing child seat), 64 drivers, and 7 adult passengers in relatively low-speed crashes. NHTSA investigators have found that people who were killed by deploying air bags were typically in close proximity to the air bag in one of two ways: The occupant was thrown forward by events that occurred before the air bag deployed, such as sudden braking immediately before the crash or multiple impacts. This usually occurred because the occupant was unbelted or improperly belted. The occupant's initial seating position placed them close to the air bag. According to NHTSA, these fatalities included shorter drivers who were belted but had moved the seat forward in order to more easily reach the steering wheel and pedals, infants in rear-facing child seats, and children sitting on the lap of another passenger. The majority of people who were killed by deploying air bags in low-speed crashes were unbelted or improperly restrained, which made them more susceptible to being thrown into the path of the deploying air bag than belted occupants. (See fig. 1.) The reported number of air bag-related fatalities increased from 1 in 1990 to 58 in 1997, as the installation of air bags in vehicles increased. Since 1997, the number of fatalities has decreased; 17 fatalities were reported in 2000. NHTSA attributes the decrease in part to actions that resulted from its November 1996 plan to address the risk of air bag-related fatalities. These actions included a public education effort to persuade people to properly restrain infants and children under 12 in the rear seat and a March 1997 rule that made it possible for manufacturers to quickly reduce the inflation power of the air bags installed in new vehicles. From model year 1997 through 1998, manufacturers lowered the inflation power by an average of 22 percent in driver-side air bags and 14 percent in passenger-side air bags. NHTSA's 1996 plan also anticipated the need for long-term technological improvements, namely advanced air bag systems that control or prevent deployment of the air bag as appropriate. In June 1998, the Transportation Equity Act for the 21st Century directed the Secretary of Transportation to issue a rule requiring vehicle manufacturers to install advanced air bag systems. The act specified that these systems should achieve two goals: provide improved protection for occupants of different sizes (belted and unbelted) and minimize the risk of injury or death from air bags for infants, young children, and other occupants. On May 12, 2000, NHTSA issued a rule specifying the requirements for such a system.
Under the previous requirements, vehicle manufacturers performed tests that involved crashing vehicles into a rigid barrier with crash dummies belted and unbelted that represented average-sized males in the driver and passenger seats. To provide improved crash protection for occupants of different sizes, the rule adds new crash tests that simulate different types of crashes and include the use of crash dummies that represent small adults (defined as a 5th percentile female). To reduce the risk of injury or death to children and small adults, the rule requires a new battery of static tests using dummies representing infants, young children, and 5th percentile females. These tests involve placing the dummy in various positions in the seat to determine if the air bag system suppresses or activates the air bag, or placing the dummy against the air bag module and deploying the air bag to determine if the bag deploys in a low-risk manner that does not cause severe injury. Starting in the production year beginning September 1, 2003 (approximately model year 2004) and continuing over a 3-year phase-in period, increasing percentages of each manufacturer s vehicles must comply with the requirements of the rule. (See app. II for a more comprehensive discussion of the rule s requirements.) <2. Some Advanced Technologies Are Available; Others Are Being Developed> Manufacturers have installed some of the advanced technologies that will be needed to comply with the advanced air bag rule in certain vehicles that are on the market today. (See table 1.) Manufacturers and companies that produce air bags are working on the development of other needed advanced technologies, with the aim of having them ready for installation in vehicles by September 2003, as required. <2.1. Components of Conventional and Advanced Air Bag Systems> Advanced air bag systems installed in future vehicles will be much more sophisticated than the conventional air bag systems in today s vehicles, because they will be capable of tailoring air bag deployment to characteristics of the front seat occupants as well as crash severity. Conventional frontal air bag systems deploy the air bags with a single level of inflation output for all crashes that exceed a predetermined severity threshold. These systems generally consist of separate components designed to work together: crash sensors, a control module, and a driver and passenger inflator and air bag. (See fig. 2.) The crash sensors and control module are typically located in one unit within the passenger compartment; the unit is often mounted within the floor between the driver and the passenger. The crash sensors detect the occurrence and severity of crashes and provide this input to the control module. The control module evaluates inputs from the sensors. If the control module determines that a crash has occurred that exceeds the severity threshold, it then sends a triggering signal to the inflators to deploy the air bags. The inflators and air bags are packaged together in air bag modules, which are located in the steering wheel on the driver side and in the instrument panel on the passenger side. Upon receiving a triggering signal from the control module, inflators generate or release gases that rapidly fill the air bags, generally within 1/20 of a second after impact. The purpose of the inflated air bags is to provide protective cushioning between the occupants and the steering wheel, instrument panel, and windshield. 
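To make this sequence concrete, the sketch below outlines the decision logic of a conventional single-stage system in simplified form. It is an illustration only; the function names and the severity threshold are assumptions for this example and are not drawn from any manufacturer's or supplier's actual design.

```python
# Illustrative sketch of a conventional (single-stage) frontal air bag
# decision loop. The function names and the crash-severity threshold are
# hypothetical; actual control modules use proprietary algorithms.

DEPLOY_THRESHOLD_MPH = 15.0  # assumed severity threshold (delta-V, in mph)


def should_deploy(crash_severity_mph: float) -> bool:
    """Single comparison: deploy whenever severity meets the threshold."""
    return crash_severity_mph >= DEPLOY_THRESHOLD_MPH


def trigger_inflators() -> None:
    # Single level of inflation output for both driver and passenger bags.
    print("Deploy driver and passenger air bags at full output")


def on_crash_sensor_reading(crash_severity_mph: float) -> None:
    if should_deploy(crash_severity_mph):
        trigger_inflators()


# Example: a minor impact below the threshold does not deploy the bags;
# a more severe impact does.
on_crash_sensor_reading(12.0)
on_crash_sensor_reading(22.0)
```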
However, the single-stage inflators in most vehicles today, in some cases, provide more inflation power than necessary because they fill the air bags with one level of output when deployed, regardless of the types of occupants requiring protection or the degree of severity of the crash. Future frontal air bag systems designed to meet the performance requirements of NHTSA's advanced air bag rule will have additional features that will allow the deployment of the air bags to adapt to characteristics of the front seat occupants as well as different crash situations. Auto manufacturers anticipate that two new components will be needed to meet the rule's requirements: occupant classification sensors and multistage inflators. Occupant classification sensors will provide an additional input to the control module to detect different types of occupants and whether or not they are belted. For example, manufacturers anticipate installing sensors that will be able to identify whether the front passenger seat is occupied by an infant in a rear-facing child seat, a child, or an adult. Multistage inflators, which will replace single-stage inflators, will provide varying levels of inflation output that can be tailored to characteristics of the driver and front seat passenger as well as different crash scenarios. Deployment options could include no deployment, low-level output, and high-level output, as well as additional levels of deployment between the low- and high-output stages. While the occupant classification sensors and multistage inflators are the key new features of the advanced air bag systems envisioned by auto manufacturers, other components will also be improved. For example, manufacturers anticipate that these systems will include crash sensors that can more precisely discriminate among different types of crashes (such as a crash into a rigid concrete wall versus a crash with another car), control modules that can process the additional inputs provided by crash and occupant sensors and make more accurate and timely deployment decisions, and air bag designs that will allow the bag to deploy less aggressively. These advanced air bag systems will be designed to reduce the likelihood of the types of fatalities previously caused by air bag deployments. For example, such systems would deactivate the passenger air bag or deploy it at a low level if the passenger seat is occupied by an infant or small child. These systems may also adjust air bag deployment if the driver or passenger is a small adult. <2.2. Some Advanced Air Bag Technologies Are Currently Available> Some vehicles on the U.S. market today have frontal air bag systems with multistage inflators and some other advanced features, such as seat belt usage sensors and improved air bag designs. However, no vehicles currently on the market have air bag systems with all the features manufacturers believe are needed to fulfill the requirements of the advanced air bag rule. In particular, no vehicles currently have frontal air bag systems with occupant classification sensors that can distinguish among child seats, children, and adults. Manufacturers are not required to produce vehicles that can meet the requirements of the advanced air bag rule until the production year starting in September 2003 (approximately model year 2004). Frontal air bag systems with multistage inflators started appearing on the market in some model year 1999 and 2000 vehicles and became more widely available in model year 2001 vehicles.
While three of the eight manufacturers we talked to installed multistage air bag systems in some or all of their model year 1999 vehicles, seven of the manufacturers installed this technology in some or all of their model year 2001 vehicles. Four of these seven manufacturers BMW, DaimlerChrysler, Ford, and Honda installed multistage air bag systems in at least one-quarter of their model year 2001 fleets. While most of the multistage air bag systems installed in these model year 2001 vehicles have two stages of inflation, some have three stages. Manufacturers are planning to further increase the number of vehicles with multistage air bag systems in their model year 2002 fleets. (See app. III for more detailed information on the availability and features of multistage frontal air bag systems in U.S. market vehicles.) Most of the multistage air bag systems installed in vehicles on the market today have one or more types of sensors that provide information about the front seat occupants, such as the presence of an occupant in the passenger seat, driver seat position, and driver and passenger seat belt use. In air bag systems with these occupant sensors installed, the control module utilizes input from these sensors, in addition to input from the crash sensors, in making deployment decisions. Three manufacturers BMW, DaimlerChrysler (Mercedes-Benz), and Ford have offered some model year 2001 vehicles equipped with weight- based occupant presence sensors on the passenger side. In these vehicles, the control module deactivates the passenger air bag if the sensor detects that the passenger seat is unoccupied. The main purpose of these sensors is to prevent unnecessary deployment of the passenger air bag and save on repair costs. The sensors are not capable of identifying what type of occupant is in the passenger seat. One manufacturer Ford has offered model year 2001 vehicles equipped with sensors that detect whether the driver s seat is positioned forward or rearward on the seat track. When the sensor detects that the seat is positioned forward, indicating that the driver is seated close to the air bag module, the control module deactivates the high-output stage of the driver s air bag. Four manufacturers BMW, DaimlerChrysler (Mercedes-Benz), Ford, and Honda have offered some model year 2001 vehicles that contain, as part of their multistage air bag systems, sensors that detect whether the occupants are wearing seat belts. The control module deploys the air bags at a higher crash severity threshold if the occupant is belted and a lower threshold if the occupant is unbelted. In addition to installing the new air bag technologies described above, manufacturers have also made improvements to crash sensors, control modules, and air bags. In currently available multistage air bag systems, the level of air bag deployment in a crash is based on the level of crash severity, although the occupant sensors described above also affect deployment decisions in some vehicles. The crash sensors in these systems have been refined to better discriminate crash severity levels. These crash sensors are generally arranged in one of two ways. In the first type of arrangement, which is typically used in conventional air bag systems, a single-point electronic crash sensor is located within the control module in the passenger compartment. 
In the second type of arrangement, called a multipoint electronic crash sensing system, one sensor is located within the control module and one or more sensors are located in the front (crush zone) of the vehicle. In all of the multistage air bag systems installed in vehicles on the market today, the control modules contain more complex computational systems designed to make timely decisions about the appropriate level of air bag deployment. In multistage air bag systems that include occupant sensors and/or multipoint crash sensing systems, the control modules must process the additional inputs provided by these sensors in making deployment decisions. Manufacturers have made a variety of improvements in their air bag designs aimed at reducing the aggressivity of the deploying air bag and, therefore, the risk of injury caused by deployment. One major area of improvement has been to change the location of the air bag module or the size, shape, and folding of the bag to increase the distance between the occupant and the deploying air bag. For example, on the driver side, manufacturers now often recess the air bag into the steering wheel and employ a fold and shape that allows the bag to deploy laterally rather than rearward toward the driver. Some passenger air bags in use today contain a device that directs the initial inflation of the bag away from the occupant if he or she is in close proximity to the bag at the time of deployment. Other improvements in bag design that are used in some vehicles include vents that can make the bag deploy more softly if it is obstructed by the occupant during deployment and the use of tethers within the bag to reduce extension when deployed. (For further information on advanced technologies currently installed in vehicles, see app. IV.) <2.3. Significant Improvements Are Under Development> Vehicle manufacturers, along with companies that supply them with air bag systems, are working now on developing frontal air bag systems that are intended to meet the requirements in the advanced air bag rule and be ready to be installed in model year 2004 vehicles, as required. The advanced air bag systems envisioned by manufacturers for meeting the rule s requirements include new technologies that have not previously been installed in vehicles as well as significant improvements in existing technologies. The key new technologies that manufacturers anticipate will be needed to comply with the advanced air bag rule are occupant classification sensors that can identify whether the passenger seat is occupied by an infant in a child seat, a small child in or out of a child seat, or a small adult. Air bag suppliers have been working on the development of a number of such sensor technologies, and manufacturers are currently considering these technologies. The primary technologies under consideration are weight- based sensors and pattern-based sensors, which would be installed within or under the passenger seat. Weight-based sensors attempt to classify the occupant through various means of determining the amount of force or pressure applied to the seat. Pattern-based sensors attempt to classify an occupant using a mat, installed directly under the seat cover, which senses the occupant s applied pressure and imprint. Manufacturers are also considering augmenting some of these technologies with seat belt tension sensors to identify whether the amount of force applied to the seat is due in part to the seat belt rather than the occupant s weight. 
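The preceding discussion describes control modules that combine crash severity with additional inputs such as occupant type and seat belt status. A simplified sketch of how such an adaptive, multistage deployment decision might look follows; the occupant categories mirror those discussed above, but the thresholds, stage choices, and names are hypothetical illustrations, not any manufacturer's or supplier's actual logic.

```python
# Illustrative sketch of an adaptive (multistage) deployment decision that
# combines occupant classification, seat belt status, and crash severity.
# All thresholds and stage assignments are assumed values for illustration.
from enum import Enum


class Occupant(Enum):
    EMPTY = "empty seat"
    REAR_FACING_INFANT = "infant in rear-facing child seat"
    CHILD = "child"
    SMALL_ADULT = "small adult (5th percentile female)"
    ADULT = "adult"


def passenger_deployment_stage(occupant: Occupant,
                               belted: bool,
                               crash_severity_mph: float) -> str:
    """Return 'none', 'low', or 'high' for the passenger-side inflator."""
    # One permissible strategy is suppression for an empty seat, an infant
    # in a rear-facing child seat, or a child (low-risk deployment is an
    # alternative strategy under the rule).
    if occupant in (Occupant.EMPTY, Occupant.REAR_FACING_INFANT, Occupant.CHILD):
        return "none"

    # Unbelted occupants get a lower deployment threshold than belted ones
    # (assumed values).
    threshold = 14.0 if not belted else 18.0
    if crash_severity_mph < threshold:
        return "none"

    # Small adults and moderate crashes get the low-output stage; severe
    # crashes with a full-size adult get the high-output stage.
    if occupant is Occupant.SMALL_ADULT or crash_severity_mph < 25.0:
        return "low"
    return "high"


print(passenger_deployment_stage(Occupant.CHILD, belted=True, crash_severity_mph=30.0))   # none
print(passenger_deployment_stage(Occupant.ADULT, belted=False, crash_severity_mph=16.0))  # low
print(passenger_deployment_stage(Occupant.ADULT, belted=True, crash_severity_mph=30.0))   # high
```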
In addition to developing new sensors for identifying the type of occupant in the passenger seat, manufacturers plan to increase the use of driver and passenger seat belt use sensors and driver seat position sensors. As described in the previous section, these occupant sensor technologies are already developed and available in some vehicles. Manufacturers also plan to continue making improvements in existing technologies for crash sensors, control modules, inflators, and air bags to comply with the advanced air bag rule. Manufacturers and suppliers are working on improving the ability of crash sensing systems to differentiate levels of crash severity and types of crashes. As part of this effort, manufacturers plan to increase the use of multipoint crash sensing systems. Manufacturers and suppliers are also developing more complex computational systems to be incorporated into control modules, in order to allow them to process the additional inputs in advanced air bag systems and to make accurate and timely decisions regarding deployment outputs. Manufacturers will use multistage inflators that have two or more stages of inflation output in their advanced air bag systems. Some manufacturers have already installed inflators with more than two stages of inflation on a limited basis, but other manufacturers have told us that they do not plan to use them until occupant classification and control module technologies are more fully developed. Finally, manufacturers and suppliers continue to work on improvements in air bag design, such as venting and bag shapes, in order to enhance the ability of vehicles to comply with the advanced air bag rule. Further improvements may include increased use of innovative bag designs as well as new designs that will enhance the ability of the deploying air bag to adapt to characteristics of the occupant. Vehicle manufacturers and air bag suppliers are also researching some other advanced air bag technologies that are not considered necessary for complying with the advanced air bag rule but that may be used in the longer term to enhance the performance of air bag systems. Some manufacturers and air bag suppliers are researching dynamic occupant position sensing, which would continuously track the proximity of the occupant to the air bag. Inputs from these sensors, which would be installed in the passenger compartment, would be used by the control module to determine when the occupant is in close proximity to the air bag and, when this is the case, to deactivate the bag. Static sensors that periodically determine the occupant s position may be installed on a limited basis in the near term to augment occupant classification sensors. Although researchers are examining various technologies for achieving dynamic occupant position sensing, it is not yet clear whether or when this technology will become widely used. Precrash sensing is another area of technology currently in the research stage. These sensors would identify the position, speed, and mass of objects prior to a collision and allow more time for the air bag system to respond. The feasibility of this concept has not yet been determined; therefore, it is not yet clear when this technology might become available. Some suppliers are researching inflator technologies that may produce continuous variation in inflation, rather than inflation in discrete stages, allowing air bag deployment to be more adaptive to inputs from crash and occupant sensors. 
These may be introduced by some manufacturers during the initial 3-year phase-in period for complying with the advanced air bag rule. (For further details on anticipated advancements in air bag technologies, see app. IV.) <3. Occupant Sensing Is the Primary Challenge in Meeting the Advanced Air Bag Rule s Requirements> According to representatives of vehicle manufacturers and air bag suppliers, the primary challenge in meeting the requirements of the advanced air bag rule is developing occupant classification sensors for the passenger side that are accurate, durable, and suitable for mass production before September 2003. The rule requires manufacturers to install advanced air bag systems that either suppress the air bag if an infant or child is seated in the passenger seat or deploy it in a low-risk manner that does not cause severe or fatal injury, even if the infant or child is out of position. If the system is designed to suppress the air bag in the presence of an infant or child, it must deploy if the passenger is a small adult (defined as a 5th percentile woman). To test whether a sensor accurately classifies an occupant so the air bag can deploy appropriately, the rule specifies tests using dummies representing infants, 3-year-old and 6-year-old children, and 5th percentile women. The dummies have fixed weights, heights, and stature that are easily distinguishable from each other. However, the rule also requires that some tests be conducted using child seats, variable seat belt tension, blankets, and with the dummies in various positions. These added factors make it more difficult for sensors to distinguish among the different occupants. In addition to the requirements in the rule, manufacturer and supplier representatives told us that they are designing occupant classification sensors for additional real-world situations that further challenge the ability of sensors to perform accurately. Such real-world situations could include variation in the actual weight of humans, changes in weight detected by sensors as the occupant moves forward, backward, and side-to-side, or increased weight from objects held on laps. Manufacturers generally require that technologies perform accurately over 99 percent of the time before being installed in vehicles. However, manufacturer representatives told us that technologies that are currently being developed for occupant classification sensors, such as weight-based or pattern-based sensors, have not demonstrated the ability to consistently distinguish among various sizes of occupants. For example, weight-based sensors in seats have difficulty distinguishing between 6-year-old children and small adults because a 6-year-old child can appear heavier from additional weight (such as a booster seat and increased tension from a tightly cinched seat belt); additionally, small adults can appear lighter because a portion of the occupant s weight is borne by the legs resting on the floor. Pattern-based sensors must first be programmed to recognize various seating positions. If a child or a small adult sits in a position that was not previously anticipated and programmed for the sensor, the system could mistake the child for an adult or vice versa. Incorrect classification of an occupant could result in the system mistakenly deploying the air bag in the presence of a child, not deploying in the presence of an adult, or deploying the air bag with greater or less force than intended. 
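The classification difficulty can be illustrated with a simple weight-based check. In the sketch below, the weight bands, the belt-tension correction factor, and the labels are assumptions chosen for illustration rather than an actual calibration; the point is only to show how a tightly cinched belt or a booster seat can shift an occupant's apparent weight across a classification boundary.

```python
# Illustrative sketch of a weight-based occupant classification check with
# a seat belt tension adjustment. Weight bands and the correction factor
# are hypothetical, not a supplier calibration.

def classify_occupant(seat_load_lbs: float, belt_tension_lbs: float = 0.0) -> str:
    # Subtract an assumed share of the belt tension that is transmitted
    # into the seat rather than reflecting the occupant's weight.
    adjusted = seat_load_lbs - 0.5 * belt_tension_lbs

    if adjusted < 25:
        return "infant or empty seat"
    if adjusted < 80:
        return "child"
    return "adult"


# A 6-year-old on a booster seat with a tightly cinched belt can register a
# seat load near the adult band; the tension correction pulls the estimate
# back toward the child range.
print(classify_occupant(seat_load_lbs=90.0))                         # adult (misclassified)
print(classify_occupant(seat_load_lbs=90.0, belt_tension_lbs=30.0))  # child
```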
In addition to performing accurately, occupant classification sensors must also be durable and capable of being consistently produced and integrated into vehicles in large quantities. Air bag systems are expected to operate reliably over the life span of a vehicle, which could be up to 15 years. However, sensors are susceptible to aging and environmental influences over that time. For example, the performance of pattern-based sensors that are installed directly under the seat cover could be affected by deterioration of the seat cover. Sensor performance is also affected by variations in the manufacturing process that can affect the construction of the sensor or how easily it can be integrated into the vehicle. The parts of a sensor must be precisely constructed because inconsistencies in the parts can cause the sensor to malfunction. Sensors that are integrated into a seat are also subject to variations in how the seat is constructed. According to vehicle manufacturer representatives, companies that produce vehicle seats will have to significantly redesign seats and decrease the variation in the production of seats before occupant classification sensors can consistently function properly. Vehicle manufacturers are working with the companies that supply air bag systems to find solutions to these accuracy, durability, and manufacturing issues. For example, to address the influence of seat belt tension on weight-based sensors, some manufacturers and suppliers are developing seat belt tension sensors that would detect when the seat belt is cinched tightly and causing the occupant to appear heavier. Individual manufacturers are simultaneously developing multiple occupant classification technologies with different suppliers to increase the likelihood of finding a solution by the deadline. Some manufacturers told us they have also postponed research on occupant position sensors so they can focus on occupant classification sensors. According to representatives of some vehicle manufacturers, their goal is to install occupant classification sensors 1 year before the September 2003 deadline in order to get real-world experience with the performance of the sensors. However, a number of vehicle manufacturers have expressed concerns about their ability to develop occupant classification sensors that comply with the advanced air bag rule by the deadline slightly more than 2 years away. Despite the fact that manufacturers have been working on technologies for occupant classification sensors for several years, the development of these technologies has not yet reached the level that a new technology would normally have reached to be ready for installation within that time frame. In 1998 and 1999, NHTSA reported that vehicle manufacturers anticipated having occupant classification sensors installed in model year 2000 vehicles. However, accuracy, durability, and manufacturing issues were more difficult to overcome than anticipated. For example, General Motors anticipated installing a pattern-based sensor in its model year 2000 Cadillac Seville but abandoned this plan in part because the sensor did not perform accurately under different humidity and temperature settings, and the production process was so variable that only 10 percent of the sensors that were produced were suitable for installation in vehicles. 
More recently, in March 2001, after 2 years of work, a company that was to be the primary supplier of a weight-sensing system for DaimlerChrysler decided to abandon work on the project for technical reasons. As a result, DaimlerChrysler is reevaluating its options for occupant classification sensors. NHTSA officials have met with industry representatives to discuss their efforts to develop advanced air bag systems. According to NHTSA officials, although vehicle manufacturers have stated that it will be difficult to develop occupant classification sensors by September 2003, none of the manufacturers have indicated that they will not be able to meet the deadline. Because of the uncertainty associated with developing occupant classification sensors, NHTSA plans to stay abreast of manufacturers' progress by holding periodic meetings with them. These meetings may be informal, occurring as NHTSA gathers information about technologies, or more formal sessions in which manufacturers provide an update on the status of their progress. NHTSA also plans to conduct research on the feasibility of occupant classification technologies, including laboratory research on specific technologies and monitoring of their performance as they are installed in vehicles. Other technological challenges described by manufacturers include designing an air bag that can generate enough power to protect an average adult male yet deploy in a manner that does not severely injure a smaller occupant, and developing crash sensors that can distinguish among the various types of crash tests required in the rule. In addition to the technological challenges of developing an advanced air bag system, manufacturers and suppliers are concerned about the accuracy and repeatability of some of the test procedures in the rule and about the use of humans rather than dummies to test suppression systems. These concerns were highlighted in petitions for reconsideration of the rule filed by the manufacturers. (See app. II for further information on these petitions for reconsideration and NHTSA's response.) <4. Federal and Industry Expenditures on Advanced Air Bag Research and Development Have Increased Since 1998> NHTSA's reported expenditures on advanced air bag R&D increased from about $6.3 million in fiscal year 1998 to nearly $7.0 million in fiscal year 2000. (See table 2.) According to NHTSA officials, these expenditures were primarily for activities related to the development of the advanced air bag rule, such as investigations of crashes involving an air bag-related injury or fatality, evaluations of the performance and characteristics of air bag systems, and studies to determine how people are injured or killed by air bags. NHTSA officials estimate that expenditures on advanced air bag R&D will increase to $7.2 million in fiscal year 2001. According to NHTSA officials, future expenditures will focus on monitoring the performance of advanced air bags and continuing the R&D of specific technologies. NHTSA's planned activities include analyzing the protection provided by advanced air bags in real-world crashes, conducting crash tests (including tests at various speeds and angles with belted and unbelted crash dummies) to evaluate the performance of advanced air bags, and researching advanced air bag technologies that are anticipated to be ready for installation in vehicles in the next 3 to 5 years.
NHTSA plans to conduct some of this research through cooperative agreements with air bag suppliers. Vehicle manufacturers did not provide information on their individual expenditures for advanced air bag R&D because they consider this information confidential. Instead, an industry association the Alliance of Automobile Manufacturers coordinated with manufacturers to provide aggregated information on the extent to which expenditures have changed and are anticipated to change compared to calendar year 1998. Four manufacturers Ford, General Motors, Nissan, and Toyota provided information on expenditures for resources related to one or more of the following categories: staffing (including expenditures to support supplier staffing); technology development and testing; prototype parts; engineering resources; in-house testing and data analysis, analytical performance assessment (computer modeling), physical test properties and test costs; and implementation and integration of technologies into vehicles. According to the Alliance, these expenditures generally total between $20 million and $30 million per vehicle platform (a group of vehicles that utilize the same basic design). The aggregated information from the four manufacturers shows that their expenditures increased by about 275 percent from 1998 to 2000 and are anticipated to increase overall by nearly 375 percent from 1998 through 2003. (See fig. 3.) According to the Alliance, the estimated increase from 1998 through 2003 is due to the cost of designing and installing advanced air bag systems for an increasing number of vehicle platforms to meet the phase-in requirements in the advanced air bag rule. These expenditures are estimated to decrease after advanced air bag systems have been installed in vehicles. <5. Agency and Industry Comments and Our Evaluation> We provided a draft of this report to the Department of Transportation for its review and comment. The Department did not provide an overall assessment of our draft report. Rather, Department representatives, including the Director of NHTSA s Office of Vehicle Safety Research, provided one technical comment through e-mail. Specifically, the Director suggested that several manufacturers may have the necessary technologies for occupant classification sensors that can distinguish among various sizes of occupants, even though they may not have installed them in vehicles on a large scale. We verified with auto manufacturers that they have not installed occupant classification sensors that can distinguish among various sizes of occupants and are still developing such sensors for frontal air bag systems that are intended to meet the requirements of the advanced air bag rule. We provided portions of our draft report to vehicle manufacturers and air bag suppliers for review to verify the accuracy of our descriptions of advanced air bag technologies and challenges in meeting the requirements of the advanced air bag rule. The manufacturers and suppliers generally agreed with our draft report and offered several technical corrections, which we incorporated as appropriate. We are sending copies of this report to congressional committees and subcommittees responsible for transportation safety issues; the Secretary of Transportation; the Executive Director, National Highway Traffic Safety Administration; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request and on GAO s home page at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-2834. Key contributors to this report were Judy Guilliams- Tapia, Bert Japikse, James Ratzenberger, Phyllis Scheinberg, and Sara Vermillion. Appendix I: Scope and Methodology To determine the current availability of and planned improvements to advanced air bag technologies, we collected and analyzed information from eight vehicle manufacturers (BMW, DaimlerChrysler, Ford, General Motors, Honda, Nissan, Toyota, and Volkswagen) and the five companies that are the primary suppliers of air bag systems in the United States (Autoliv, Breed, Delphi, Takata, and TRW). According to the Alliance for Automobile Manufacturers, the combined sales for the eight manufacturers account for over 90 percent of vehicles sold in the United States. We did not independently verify the information we received from manufacturers and suppliers. We reviewed literature on automotive technology for descriptions of the technologies used in advanced air bag development. We also met with officials from NHTSA, representatives from the Insurance Institute for Highway Safety, the National Transportation Safety Board, university researchers, and consumer groups. To identify the challenges, if any, that the industry faces in complying with the advanced air bag rule, we reviewed the requirements of the rule and discussed these requirements with representatives of vehicle manufacturers and companies that supply air bags. We also reviewed comments on the rule submitted by manufacturers and suppliers. To identify the changes in federal expenditures on advanced air bag R&D, we collected data on NHTSA s expenditures for fiscal years 1998 through 2001 and analyzed the changes in the individual categories of expenditures. Vehicle manufacturers did not provide information on their individual expenditures for advanced air bag R&D because they consider this information confidential. Therefore, to identify the changes in industry expenditures on advanced air bag R&D, we collected aggregated information from an industry association on the extent to which four manufacturers expenditures have changed since 1998. We did not independently verify this aggregated information. We conducted our work from July 2000 through May 2001 in accordance with generally accepted government auditing standards. Appendix II: Selected Aspects and Status of the Advanced Air Bag Rule The advanced air bag rule requires that future air bags be designed to create less risk of serious injury from air bags particularly for small women and young children and to improve frontal crash protection for all occupants. To achieve these goals, the rule includes requirements for additional test procedures using different sizes of dummies than were included in previous requirements. These new requirements will be phased in during two stages. During the first stage phase-in from September 1, 2003, to August 31, 2006 an increasing number of each manufacturer s vehicles must be certified each year as meeting the requirements in the advanced air bag rule. During the second stage phase-in from September 1, 2007, to August 31, 2010 the speed for one of the tests (the belted test for the 50th percentile adult male dummy) will be increased from 30 to 35 miles per hour (mph) and, similar to the first phase-in period, an increasing number of each manufacturer s vehicles must be certified each year. 
In comments to the supplemental notice of proposed rulemaking on advanced air bags, there was a difference of opinion on whether the maximum speed for the unbelted rigid barrier crash test should be set at 25 or 30 mph. In the final rule, NHTSA set the maximum speed at 25 mph on an interim basis while the agency continues to investigate whether the higher speed is more appropriate. After the rule was issued in May 2000, consumer safety groups, vehicle manufacturers, and air bag suppliers filed petitions for NHTSA to consider changing certain provisions in the rule. NHTSA plans to respond to these petitions in July 2001. <6. Selected Requirements in the Advanced Air Bag Rule> To minimize risk to infants and children on the passenger side, the rule includes provisions for the air bag to be suppressed or deployed in a low- risk manner that is much less likely to cause serious or fatal injury. (See fig. 4.) For newborn infants in car beds, the rule requires that the air bag be suppressed. For 1-year-old infants in child seats, 3-year-old children, and 6-year-old children, manufacturers are allowed to install systems designed for suppression or low-risk deployment. Manufacturers may choose different strategies for different occupants. For example, a manufacturer could design an air bag system that would suppress the air bag for infants and deploy the bag in a low-risk manner for 3- and 6-year- old children. To test for suppression on the passenger side, the dummies are placed in their appropriate child seats that are, in turn, placed on the passenger seat. These tests may be conducted under various scenarios: using any of several models of safety seats; with the passenger seat in the forward, middle, or rear position; unbelted or belted with up to 30 pounds of tension on the belt; with any handles and sunshields on infant safety seats in fully open and fully closed positions; or with a towel or blanket on the infant safety seats. For the 3- and 6-year-old dummies, tests will also be conducted with the dummies unbelted and in various positions, such as sitting back or sitting on the front edge of the seat. The rule requires that the car have a telltale light that, after the dummy is in place, indicates whether the air bag is suppressed or activated. Following each suppression test with an infant or child, a dummy representing a small (5th percentile) adult female will be placed in the passenger seat to ensure that the air bag is not suppressed for small adults. To test for low-risk deployment on the passenger side, a 1-year-old dummy is placed in one of several models of rear- or forward-facing child seats on the passenger seat in the forward position on the seat track. The seat belt may be cinched with up to 30 pounds of tension. For the 3- and 6-year-old dummies, the unbelted dummy is placed out of position with their head or chest on the air bag module to simulate the situation where an unbelted child is close to the instrument panel due to sudden braking immediately before a crash. After the dummy (infant or child) is in place, the air bag is deployed. The amount of injury that occurs to the head and neck of the dummies (and the chest of the 3- and 6-year-old child dummies) must be below criteria specified in the rule. To minimize risk to small drivers, the rule includes provisions to deploy the air bag in a low-risk manner, similar to the low-risk deployment tests for 3- and 6-year-old children. 
The tests are conducted by placing the 5th percentile adult female dummy out of position, with the chin on the steering wheel rim or on the air bag module. The air bag is then deployed and the resulting injury to the head, neck, chest, and legs is measured. NHTSA has determined that, when all of the combinations of the various testing scenarios are considered, there are 129 tests for suppression and low-risk deployment: 95 suppression tests for infants in a car bed or child seat, 28 suppression tests for 3- and 6-year-old children, 4 low-risk deployment tests for 3- and 6-year-old children, and 2 low-risk deployment tests for 5th percentile female drivers. To improve protection in frontal crashes for occupants of different sizes, the rule includes seven tests that involve crashing vehicles into barriers at different speeds and angles and with dummies representing average (50th percentile) adult males and 5th percentile adult females, belted and unbelted. (See fig. 5.) Four of the tests are conducted with dummies that represent 50th percentile adult males and three are conducted with dummies that represent 5th percentile adult females. After the crash test, the resulting injury to the head, neck, chest, and legs of the dummies must not exceed the limits specified in the rule. The offset deformable barrier test was included in the requirements to ensure that manufacturers upgrade their crash sensors as necessary to prevent late air bag deployments in crashes that are less abrupt than those into rigid barriers. NHTSA did not include a requirement for an unbelted crash test at an oblique angle using a 5th percentile female dummy because the agency determined that the requirement for this type of crash using a 50th percentile male dummy would result in an air bag that is sufficient to protect smaller occupants as well. The rule will be phased in during two stages. The first-stage phase-in, from September 1, 2003, to August 31, 2006, requires an increasing number of vehicles to be certified as passing all of the above tests each year. (See fig. 6.) During the second-stage phase-in, from September 1, 2007, to August 31, 2010, the speed for the belted test for the 50th percentile adult male dummy will be increased from 30 to 35 mph. As with the earlier requirements, an increasing percentage of vehicles must comply with the new test speed each year. <7. Comments on the Maximum Speed for the Unbelted Rigid Barrier Test> In the comments to the supplementary notice of proposed rulemaking for the advanced air bag rule, there was a significant difference of opinion on whether the top speed for the unbelted rigid barrier crash test should be set at 30 mph or 25 mph.
Comments from those who favored setting the maximum test speed at 30 mph (safety groups such as Public Citizen, Center for Auto Safety, Consumers Union, and Parents for Safer Air Bags) included the following: half of all fatalities in frontal crashes involve a change in velocity greater than 30 mph, so a maximum test speed of 30 mph represents significantly more potentially fatal crashes than a test speed of 25 mph; in crash tests conducted by NHTSA, almost all vehicles with redesigned air bags passed the 30 mph unbelted rigid barrier test with the 50th percentile male dummy, so air bags would not have to be more aggressive (and potentially more risky to small occupants) to meet a 30 mph test; lowering the test speed to 25 mph would not offer improved protection, as required in the Transportation Equity Act for the 21st Century; advanced technologies can be used to enable all vehicles to meet requirements for high-speed protection and risk reduction; and a 25 mph test speed would not encourage the use of advanced technologies. Comments from proponents of a 25 mph maximum test speed (such as vehicle manufacturers, air bag suppliers, the Insurance Institute for Highway Safety, and the National Transportation Safety Board) included the following: redesigned air bags work well and there has been no loss in protection; a 25 mph test speed allows flexibility to design air bags for all occupants; a return to a 30 mph test would require a return to overly powerful air bags; there are significant technological challenges in meeting a 30 mph requirement for both the 50th percentile adult male dummy and the 5th percentile adult female dummy; and advanced technologies are not currently available that address problems posed by air bags designed to a 30 mph test. NHTSA concluded that, given the uncertainty associated with simultaneously achieving improved protection for occupants of all sizes without compromising efforts to reduce the risks of injury to smaller occupants, a conservative approach should be taken. Consequently, NHTSA set the maximum speed for the unbelted rigid barrier test at 25 mph. However, the agency issued that part of the rule as an interim final rule and announced that it would issue a final rule after it monitors the performance of advanced air bags and determines whether increasing the maximum test speed to 30 mph would offer any advantages over a test speed of 25 mph. To monitor the performance of advanced air bags, NHTSA plans to, among other things, evaluate real-world crash data, perform compliance testing and publish an annual report on the extent to which advanced air bags comply with requirements, conduct crash tests, and conduct research on specific advanced air bag technologies.
Specifically, the petition requests NHTSA to require the industry to meet a 30 mph unbelted rigid barrier test for passenger cars and a 25 mph test for sport utility vehicles, to be increased to 30 mph at a later date. The petition also requests NHTSA to: (1) add requirements for tests to simulate lower-speed, softer crashes in which the air bag deploys late and strikes an occupant who has moved forward before the air bag deploys, (2) require manufacturers to meet a 35 mph belted barrier test with the 5th percentile female dummy as well as the 50th percentile male dummy, and (3) require that manufacturers conduct all barrier tests in both the perpendicular and oblique modes, including tests using the 5th percentile female dummies. Petitions from some vehicle manufacturers and air bag suppliers state that the directions for some tests in the rule, particularly those related to the positioning of dummies in suppression and low-risk deployment tests, need to be clarified. For example, some petitioners claimed that the procedure for positioning the child dummies for the low-risk deployment test does not always result in the dummies being against the air bag module, as intended. Other issues raised in petitions from manufacturers include: (1) limiting the amount of time required to collect data on the dummies' injuries during a low-risk deployment test in order to minimize inclusion of injury from interior components other than the air bag, (2) requesting that a generic child restraint test device be developed so that humans will not have to be used to test air bag suppression systems, and (3) reducing the upper limit on the amount of tension that can be applied to a seat belt. According to NHTSA officials, NHTSA is drafting a final response to these petitions and plans to issue the response in July 2001. Some consumer safety groups and vehicle manufacturers have told us that they are concerned about the timeliness of NHTSA's response to the petitions. Some manufacturers have raised concerns that the issues with the compliance test procedures may not be resolved in time for them to finalize their advanced air bag system designs and perform the required testing to certify that the vehicles meet the requirements in the advanced air bag rule. Appendix III: Multistage Frontal Air Bag Systems in Model Year 2001 Vehicles and Model Year 2002 Vehicles in Production as of April 1, 2001 [The appendix table, which lists each vehicle line's multistage frontal air bag features, including belt use sensors on the driver and passenger sides, is not reproduced here. The figures in the table are the approximate percentage that each specified vehicle line represents of the company's light vehicles produced during the model year for the U.S. market.] Appendix IV: Current Availability of and Anticipated Improvements in Advanced Air Bag Technologies <9. Technologies Currently Available and Planned Enhancements to Meet Requirements> Crash Sensors Description: These sensors detect a crash and differentiate levels of crash severity. They provide this input to the control module, which uses it to make decisions about whether the air bag should be deployed and, if so, what level of deployment is appropriate. Advanced technologies currently available: The crash sensors in vehicles equipped with multistage frontal air bags have been refined to better discriminate crash severity levels, so that the appropriate level of air bag deployment can be determined. Some of these vehicles have a single-point electronic crash sensor, which is generally located within the control module in the passenger compartment.
Others have multipoint crash sensing systems, in which one sensor is located within the control module and one or more sensors are located in the front (crush zone) of the vehicle. In some of these multipoint systems all of the sensors are electronic while in others the sensor in the control module is electronic and the sensors in the front of the vehicle are mechanical. Some manufacturers have placed additional sensors in the front of the vehicle in order to produce more information on the crash earlier in the crash event, allowing additional time to determine crash severity and make the appropriate deployment decision. However, others have not yet installed up-front crash sensors in their multistage air bag systems, because the performance of these sensors in real-world conditions can be affected by irrelevant background noise, such as extraneous vibrations that occur during a crash event. (See app. III for information on the types of crash sensing systems installed in model year 2001 and early model year 2002 vehicles with multistage air bags.) Anticipated improvements: In order to enhance the ability of their vehicles to comply with the requirements in the advanced air bag rule, manufacturers plan to improve the ability of their crash sensing systems to distinguish among levels of crash severity as well as identify the type of crash, such as a frontal rigid barrier crash, a pole crash, or an offset deformable barrier crash. As part of this effort, manufacturers plan to refine and increase the use of multipoint crash sensing systems. <9.1. Occupant Classification Sensors> Description: These sensing devices installed in the interior of the vehicle are intended to identify characteristics of the occupants, such as their belted status and size. They provide this input to the control module, which uses it to make decisions about whether the air bag should be deployed and, if so, what level of deployment is appropriate. Advanced technologies currently available: Some vehicles currently equipped with multistage frontal air bag systems have occupant sensors that provide information such as seat position, occupant presence, seat belt use, and identification of a child seat. However, occupant sensors currently available in U.S. market vehicles are not capable of distinguishing among different sizes of occupants, such as whether the passenger is a child or an adult. A limited number of vehicles with multistage air bags contain seat position sensors on the driver side that identify whether the seat is forward or rearward on the seat track. If the seat is positioned forward, indicating that the driver is seated close to the steering wheel and the air bag module, the system deploys the air bag with reduced force. Some vehicle models contain weight-based sensors on the passenger side that identify whether the seat is occupied. If the sensor does not detect an occupant over a specified weight, the system deactivates the passenger air bag. These sensors are intended to prevent unnecessary deployment of the passenger air bag. Some vehicle models also contain, as part of their multistage air bag systems, seat belt use sensors on the driver and passenger sides that identify if the occupant is wearing the seat belt. The system deploys the air bags at a higher crash velocity threshold if the occupant is buckled and a lower threshold if the occupant is unbuckled. 
Finally, a limited number of vehicles with multistage air bags contain child seat sensors that identify a tag in the bottom of a compatible child seat. The system deactivates the passenger air bag when it detects the tag. NHTSA considers these child seat sensors to be an excellent supplement to other occupant classification systems. However, NHTSA will not allow manufacturers to use this sensing system alone to comply with the rule s requirements because it would be difficult to ensure that tags would be properly installed on the wide variety of child seats used by the general public. (See app. III for information on the types of occupant classification sensors installed in model year 2001 and early model year 2002 vehicles with multistage air bags.) Anticipated improvements: To comply with the advanced air bag rule, manufacturers anticipate increasing the use of driver seat position sensors and driver and passenger seat belt use sensors. In addition, sensors based on weight classification and/or pattern recognition will be installed on the passenger side to distinguish among adults, children, and child seats. Technologies being considered primarily include load cells, pressure bladders, and pattern/pressure mats. Load cells are electro-mechanical devices located at each attachment point of the seat frame to the vehicle. They estimate the force applied to the seat, allowing the system to classify an occupant based on their seated weight. A pressure bladder is a fluid- or air-filled bladder located under the seat cushion and above the seat frame. The system classifies an occupant based on the amount of pressure applied to the bladder. A pattern/pressure mat contains multiple sensor elements and is located between the seat cushion and upholstery. The system classifies an occupant based on the amount of pressure applied to the mat and the pattern of the occupant s imprint on the seat. Some of these technologies may need to be augmented by seat belt tension sensors to identify whether the amount of force applied to the seat is due in part to the seat belt rather than the occupant s weight. This is important information for identifying child seats. Occupant position sensors, which are described in the section below, may also be used to enhance occupant classification sensors. <9.2. Control Module> Description: This central processing unit stores the vehicle s sensing algorithms, computational systems that interpret and analyze inputs from the crash sensors and occupant sensors to determine whether the air bag should be deployed and, if so, what level of deployment is appropriate. In order to deploy the air bags in time to restrain the occupants, the control module must predict during the initial part of the crash whether a crash is occurring that exceeds a predetermined severity threshold. The control module generally triggers deployment from 10 to 100 milliseconds after the start of the crash, depending on the type of crash. Advanced technologies currently available: Multistage air bag systems contain control modules with sensing algorithms of increased complexity that can determine the appropriate level of air bag deployment, based on available inputs. In multistage air bag systems that include occupant sensors and/or multipoint crash sensing systems, the algorithms process the additional inputs provided by these sensors in making deployment decisions. 
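To make the deployment logic described above concrete, the sketch below shows, in simplified form, how a control module might combine crash-severity and occupant inputs to choose among suppression, low-level deployment, and high-level deployment. It is a minimal illustration only, not an actual supplier algorithm: the input names (crash_severity, occupant_class, seat_forward, belted) and the thresholds are hypothetical, and production sensing algorithms weigh many more inputs in real time.

# Illustrative sketch only; inputs and thresholds are hypothetical.
def deployment_decision(crash_severity, occupant_class, seat_forward, belted):
    """Return 'suppress', 'low', or 'high' for a multistage frontal air bag."""
    # Suppress for an empty seat, a detected child seat, or a child occupant.
    if occupant_class in ("empty", "child_seat", "child"):
        return "suppress"
    # Belted occupants are given a higher crash-severity threshold before
    # deployment; unbelted occupants a lower one (as described above).
    threshold = 2 if belted else 1
    if crash_severity < threshold:
        return "suppress"  # crash below the deployment threshold
    # A seat positioned far forward (occupant close to the module) or a
    # moderate crash calls for the low-level stage only.
    if seat_forward or crash_severity == threshold:
        return "low"
    # Severe crash with a normally seated adult: fire both stages.
    return "high"

# Example: unbelted adult, seat rearward, severe crash (severity 3 of 3).
print(deployment_decision(3, "adult", seat_forward=False, belted=False))  # prints "high"

In this simplified form, a detected child seat always suppresses the bag, while an unbelted adult in a severe crash receives a full-power deployment, mirroring the suppression and staged-deployment strategies described in the report.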
Anticipated improvements: To comply with the advanced air bag rule, control modules will require algorithms of greater complexity that will be able to interpret and analyze additional inputs concerning crash scenarios and types of occupants and use this information in making appropriate deployment decisions. Manufacturers also intend to make further improvements in control modules to increase the speed of processing of inputs and the accuracy of deployment decisions. As algorithms become more complex, it may be necessary in the longer term to move from a centralized control module to a system in which the processing and decision-making functions are decentralized, because of dramatic increases in the amount of information being input and in the computations needed. <9.3. Multistage Inflators> Description: Multistage inflators have two charges that can generate two or more stages of inflation. Firing one charge generates low-level deployment; firing both charges simultaneously or in sequence generates higher levels of deployment. Advanced technologies currently available: A number of vehicle models currently have frontal air bag systems with multistage inflators. Most of these vehicles have inflators with two stages of inflation (low- and high- level deployment), while a limited number have inflators with three stages of inflation (low-, medium-, and high-level deployment). In most cases, the multistage inflators are on both the driver and passenger sides, but in some cases only the passenger side has a multistage inflator. In currently available multistage air bag systems, the deployment level is triggered based on crash severity and, in some cases, driver seat position. In addition, as previously explained, some of these systems deploy the air bags at different crash severity thresholds for belted or unbelted occupants and/or deactivate the passenger air bag if a sensor detects that the passenger seat is empty or contains a child seat. (See app. III for information on the multistage air bag systems installed in model year 2001 and early model year 2002 vehicles.) Anticipated improvements: To comply with the advanced air bag rule, manufacturers will use multistage inflators with two or more stages of inflation. Some manufacturers have told us that their introduction of inflators with more than two stages of inflation depends on further advancements in crash sensors, occupant sensors, and the control module in order to be able to reliably determine the appropriate level of inflation. Various inflation technologies are under development that may provide continuous variation in inflation, rather than inflation in discrete stages, allowing greater adaptiveness to inputs provided by crash and occupant sensors. For example, one such technology would use a variable electric current to continuously control the rate of gas generation during inflation of the air bag. Inflators with these technologies may be introduced by some manufacturers during the 3-year phase-in period for complying with the advanced air bag rule. <9.4. Air Bag Features That Minimize Risk to Occupants> Description: In addition to characteristics of the inflators, some air bag design features can reduce the aggressivity of the deploying air bag and, therefore, the likelihood of serious injury caused by deployment. These features include the location of the air bag module (which contains the inflator and the air bag) and characteristics of the bag itself, such as folding, shape, compartments, tether straps, and venting. 
Advanced technologies currently available: Manufacturers have made a variety of changes in bag design in order to make deployment less likely to cause injury. The location, folding, and shape of frontal air bags have been major areas of design change. On the driver s side, air bag modules have been recessed into the steering wheel in many vehicles to add space between the driver and the deploying air bag. Also, many driver air bags now have a fold pattern and shape that allows the bag to deploy in a radial manner, so that the initial burst out inflation force will inflate the bag laterally rather than rearward toward the driver. On the passenger s side, manufacturers often locate the bag module in a top-mount position on the instrument panel, to increase the distance between the occupant and the deploying bag, or use a smaller-sized bag if it is located in a mid- mount position in front of the passenger on the instrument panel. Some newer fold and shape designs and venting schemes can make the deploying air bag adaptive to the position of the occupant. For example, some passenger air bags in use today contain a fabric flap attached to the bag, known as a bias flap, which directs the initial burst out inflation of the bag to the side of and away from the occupant if he or she is out of position (in close proximity to the bag) at the time of deployment. Some bags have variable venting designs that inflate the bag more softly if it is obstructed during deployment, indicating that the occupant is out of position. Tethers, which are strips of fabric connecting the front and back panels of the bag, have been incorporated into some bag designs on both the driver and passenger sides to reduce extension of the bag and help position it more quickly when it is deployed. Anticipated improvements: Although air bag design is a relatively mature technology, manufacturers and air bag suppliers continue to work on improvements in this area in order to enhance the ability of air bag systems to comply with the advanced air bag rule. Concepts under development that may become available in the longer term include venting systems that will work with multistage inflators to increase the adaptability of deployment (by controlling inflator output based on input from sensors) and bags with multiple compartments that inflate sequentially. <10. Additional Enhancements to Advanced Air Bag Systems Anticipated for the Longer Term> <10.1. Occupant Position Sensors> Description: These sensors are intended to determine the proximity of an occupant to the air bag. Sensing devices installed in the interior of the vehicle would enable the system to suppress the air bag if an occupant is out of position and too close to the air bag. Anticipated technologies: Infrared, ultrasonic, capacitive, and optical technologies are being researched to develop dynamic sensors that can continuously track an occupant s position with respect to the air bag module. Infrared sensors utilize an array of invisible infrared light beams projected across the passenger compartment to identify the position of an occupant. For ultrasonic sensors, ultrasonic transducers emit sound waves and the sensors monitor the sound waves that are reflected by an occupant. Capacitive sensors utilize an electric field to identify the position of an occupant by detecting moisture in the body and optical sensors monitor the position of an occupant. 
While static ultrasonic sensors that periodically determine the occupant's position may be installed on a limited basis to augment occupant classification sensors before 2003, researchers are not yet certain whether or when dynamic occupant position sensing will become widely used. <10.2. Precrash Sensors> Description: These sensors would identify the position, approach angle, velocity, and mass of objects prior to a collision and allow more time for the air bag system to respond.
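As a rough illustration of why precrash sensing could give the air bag system more time to respond, the sketch below computes a simple time-to-collision estimate from an object's range and closing speed. The function and the values are hypothetical and are not drawn from the report; a real precrash system would also have to judge whether the object's mass and path actually warrant a response.

# Illustrative sketch only; names and values are hypothetical.
def time_to_collision_ms(range_m, closing_speed_mph):
    """Rough time to collision, in milliseconds, from range and closing speed."""
    closing_speed_mps = closing_speed_mph * 0.44704  # convert mph to meters/second
    return (range_m / closing_speed_mps) * 1000.0

# An object detected 10 meters ahead at a 30 mph closing speed would reach the
# vehicle in about 746 milliseconds, compared with the 10 to 100 milliseconds
# after impact within which the control module must otherwise decide to deploy.
print(round(time_to_collision_ms(10, 30)))  # prints 746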
Why GAO Did This Study The National Highway Traffic Safety Administration (NHTSA) has issued a rule requiring vehicle manufacturers to install advanced air bag systems in an increasing number of cars beginning in 2003. This report reviews the development of technologies that vehicle manufacturers plan to use to comply with the advanced air bag rule. What GAO Found GAO found that some advanced air bag technologies are now being installed in vehicles and others are still being developed. The principal advanced technology being installed in some vehicles is an air bag that can inflate with lower or higher levels of power--rather than a single level--depending on the severity of the crash. Although frontal air bag systems with these advanced technologies are an improvement over previous systems, they do not contain all of the features that manufacturers believe are needed to meet the requirements of the advanced air bag rule, such as sensors that can distinguish among different types of occupants. To meet the requirements, manufacturers plan to introduce new technologies as well as continue to make further improvements in current technologies. The key new technologies that manufacturers plan to introduce are occupant classification sensors that can distinguish among infants and children (as well as their safety seats) and adults on the passenger side. The addition of these sensors is necessary to allow the air bag system to provide the appropriate deployment level--such as no deployment, low power, or high power--depending on the type of occupant. The primary challenge in meeting the requirements in the advanced air bag rule is the development of occupant classification sensors that are accurate, durable, and suitable for mass production. Expenditures on advanced air bag research and development by NHTSA and vehicle manufacturers have increased since 1998, when Congress mandated the installation of advanced air bags in future vehicles. The information aggregated from four manufacturers shows that these expenditures rose by about 275 percent from 1998 through 2000 and are anticipated to increase overall by about 375 percent from 1998 through 2003, when the requirements in the advanced air bag rule take effect.
<1. Background> GPRAMA requires OMB to coordinate with agencies to develop long-term, outcome-oriented federal government priority goals for a limited number of crosscutting policy areas and management improvement areas every 4 years. Furthermore, with the submission of the fiscal year 2013 budget, GPRAMA required OMB to identify a set of interim priority goals. The President's 2013 budget submission included a list of 14 interim CAP goals, 9 of which were related to crosscutting policy areas and 5 of which were management improvement goals. The CAP Goal Leader. As required by GPRAMA, each of the interim CAP goals had a goal leader responsible for coordinating efforts to achieve each goal. CAP goal leaders were given flexibility in how to manage these efforts, and were encouraged by OMB to engage officials from contributing agencies by leveraging existing inter-agency working groups, policy committees, and councils. For information on the position of the goal leader and the interagency groups used to engage officials from agencies contributing to each interim CAP goal, see figure 1. For more information on the interagency groups used to engage agency officials in efforts related to each goal, see appendix III. According to OMB and PIC staff, because CAP goal leaders were responsible for managing efforts related to the achievement of the goals as part of a larger portfolio of responsibilities, staff from the PIC, OMB, and in some cases from agencies with project management responsibilities, provided additional capacity for coordinating interagency efforts and overseeing the process of collecting, analyzing, and reporting data. Specifically, PIC staff provided logistical support, assisting with the regular collection of data, updates to Performance.gov, and the development of CAP goal governance structures and working groups. They also provided support in the area of performance measurement and analysis. For example, PIC staff supported the Exports goal leader by informing discussions of how to measure the success and impact of export promotion efforts, providing expertise in the development and selection of appropriate performance measures, and assisting in the collection and analysis of relevant data. Progress Reviews. GPRAMA also requires that the Director of OMB, with the support of the PIC, review progress towards each CAP goal with the appropriate lead government official at least quarterly. Specifically, the law requires that these reviews should include a review of progress during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance. As part of these reviews, OMB is to assess whether relevant agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities are contributing as planned to the goal. The law also requires that OMB categorize the goals by risk of not achieving the planned level of performance and, for those at greatest risk of not meeting the planned level of performance, identify strategies for performance improvement. In an earlier evaluation of the implementation of quarterly performance reviews at the agency level, we found that regular, in-person review meetings provide a critical opportunity for leaders to use current data and information to analyze performance, provide feedback to managers and staff, follow up on previous decisions or commitments, learn from efforts to improve performance, and identify and solve performance problems.
As part of this work we also identified nine leading practices that can be used to promote successful performance reviews at the federal level. To identify these practices, we conducted a review of relevant academic and policy literature, including our previous reports. We refined these practices with additional information obtained from practitioners at the local, state, and federal level who shared their experiences and lessons learned. Nine Leading Practices That Can Be Used to Promote Successful Performance Reviews: (1) Leaders use data-driven reviews as a leadership strategy to drive performance improvement. (2) Key players attend reviews to facilitate problem solving. (3) Reviews ensure alignment between goals, program activities, and resources. (4) Leaders hold managers accountable for diagnosing performance problems and identifying strategies for improvement. (5) There is capacity to collect accurate, useful, and timely performance data. (6) Staff have skills to analyze and clearly communicate complex data for decision making. (7) Rigorous preparations enable meaningful performance discussions. (8) Reviews are conducted on a frequent and regularly scheduled basis. (9) Participants engage in rigorous and sustained follow-up on issues identified during reviews. Reporting Requirements. In addition to requiring quarterly reviews, GPRAMA requires that OMB make information available on a single website (now known as Performance.gov) for each CAP goal on the results achieved during the most recent quarter, and overall trend data compared to the planned level of performance. In addition, information on Performance.gov is to include an assessment of whether relevant federal organizations, programs, and activities are contributing as planned, and, for those CAP goals at risk of not achieving the planned level of performance, information on strategies for performance improvement. New CAP Goals. As required by GPRAMA, in March 2014, OMB announced the creation of a new set of CAP goals in the fiscal year 2015 budget. It then identified 15 CAP goals with 4-year time frames on Performance.gov: 7 mission-oriented goals and 8 management-focused goals. Five goal areas (Cybersecurity; Open Data; Science, Technology, Engineering, and Mathematics (STEM) Education; Strategic Sourcing; and Sustainability, renamed Climate Change (Federal Actions)) were carried over from the set of interim CAP goals, while the other 10 are new goal areas. OMB stated on Performance.gov that more detailed action plans for each of the goals, including specific metrics and milestones that will be used to gauge progress, will subsequently be released. The new CAP goals will also have co-leaders: one from an office within the Executive Office of the President (EOP) and one or more from federal agencies. According to OMB staff, this change was made to ensure that CAP goal leaders can leverage the convening authority of officials from the EOP while also drawing upon expertise and resources from the agency level.
GPRAMA Requirements for Reporting CAP Goal Performance Information GPRAMA requires the Director of OMB to publish on Performance.gov information about the results achieved during the most recent quarter and trend data compared to the planned level of performance for each CAP goal. OMB released the federal government performance plan on Performance.gov concurrently with the fiscal year 2013 budget submission that identified the 14 interim CAP goals. The information on Performance.gov included a goal statement for each of the interim goals that established an overall planned level of performance. During the two- year interim goal period, OMB addressed the requirement to report on results achieved during the most recent quarter for each of the CAP goals by publishing 5 sets of quarterly updates to the interim CAP goals on Performance.gov. The first set of updates, for the fourth quarter of fiscal year 2012, was published in December 2012 and the final set of updates, for the fourth quarter of fiscal year 2013, was published in February 2014. These documents described general accomplishments made to date, specific actions completed, or both. The updates to the Broadband CAP goal, for instance, included short descriptions of general progress made towards each of the five strategies identified for achieving the goal, as well as specific milestones accomplished. The quarterly updates did not, however, consistently identify required interim planned levels of performance and data necessary to indicate progress being made toward the CAP goals. Updates to eight of the goals included quarterly, biannual, or annual data that indicated performance achieved to date toward the target identified in the goal statement. Three of the eight goals (Cybersecurity, Energy Efficiency, and Strategic Sourcing) also contained the required annual or quarterly targets that defined planned levels of performance, which allowed for an assessment of interim progress. For example, the Cybersecurity goal s updates stated that the goal would not be met within its established time frame, and provided quarterly performance data compared to quarterly targets for the entirety of the goal period to support the statement. In contrast, the updates for the other five goals did not contain annual or quarterly targets, which made it difficult to determine whether interim progress towards the goals overall planned levels of performance was being made. For example, updates to the Exports goal included data on the total amount of U.S. exports by quarter for calendar years 2012 and 2013 but did not include a target level of performance for those years or quarters. Therefore, it was unclear whether the goal s overall planned level of performance of doubling U.S. exports by the end of 2014 is on track to be met. Furthermore, updates to six interim CAP goals did not include trend data to indicate progress being made towards the goals overall planned levels of performance. Figure 2 below identifies the frequency with which data on CAP goal performance were reported, as well as the overall performance CAP goal leaders reported making compared to the goal s planned level of performance through the fourth quarter of fiscal year 2013. Through our review of information on Performance.gov and interviews with managers of the six interim CAP goals that did not report any data on progress towards the stated goal, we identified reasons that included: Lack of quantitative planned level of performance (targets). 
The Entrepreneurship and Small Business CAP goal lacked a quantitative performance target. The quarterly updates to the goal explained that efforts were focused on the goal's 10 sub-goals. Most of these sub-goals, however, also lacked quantitative performance targets. The deputy goal leader told us that some of the sub-goals did not have quantitative targets by design, as goal managers thought it more appropriate to use qualitative milestones to track progress towards them. The quarterly updates to the "Streamline immigration pathways for immigrant entrepreneurs" sub-goal lacked a quantitative target but had a range of qualitative milestones. For example, the Department of Homeland Security and the Department of State established a milestone to identify reforms needed to ease the application and adjudication processes for visas available to certain immigrant entrepreneurs. Unavailable data. Some CAP goal managers told us that the data needed to assess and report progress toward their goals' performance targets were unavailable or not yet being collected. For example, a manager of the Job Training CAP goal told us that staff had not established a baseline number of participants served by federal job training programs against which progress towards the goal could be tracked. In addition, managers of the Real Property CAP goal told us that they did not have data available for tracking progress toward the goal of holding the federal real property footprint at its fiscal year 2012 baseline level. Where key data were not reported, some goal managers took actions to obtain previously unavailable data or developed an alternative approach for assessing progress. Job Training CAP Goal. The first quarterly update for the Job Training CAP goal, published on Performance.gov in December 2012, stated that federal agencies were surveyed to compile a list of all job training programs in the federal government, including the number of participants served by those programs, and that a working group was developing a baseline for measuring progress towards the goal of preparing 2 million workers with skills training by 2015. A goal manager told us that the deputy goal leader and staff from the PIC gathered baseline information for most of the programs within the scope of the CAP goal, but that they were unable to complete the efforts by the end of the goal period. Real Property CAP Goal. Managers of the Real Property CAP goal told us that they worked to establish a baseline and metrics for measuring future performance and would be able to report on progress after the goal period ended. Closing Skills Gaps CAP Goal. A manager of the Closing Skills Gaps goal told us that the goal's managers decided early on that it did not make sense for each of the goal's identified mission-critical occupations to have the same skills gaps reduction target. Instead, managers of the goal's sub-goals identified efforts to reduce skills gaps in their specific occupations. They identified an individual targeted level of performance for that effort and collected and reported data on progress made towards the target. For instance, managers of the Acquisitions sub-goal established a target for increasing the certification rate of GS-1102 contract specialists to 80 percent. The final quarterly status update to the Closing Skills Gaps CAP goal reported that the target was met and the certification rate increased to 81 percent. Veteran Career Readiness CAP Goal.
The leader of the Veteran Career Readiness CAP goal told us that efforts were made to collect data to assess the veteran employment situation. For instance, she said that an interagency data-gathering working group reviewed sources of available data, integrated those data, such as the unemployment rate for various sub-populations of veterans, into dashboards for senior leadership review, and made proposals to improve data availability. In addition, the Army led a working group to develop a more complete picture of veterans receiving unemployment compensation. She said that these and other efforts led to a concerted effort to improve the availability of data, and to develop and implement metrics measuring career readiness and attendance in a veteran career transition assistance program. However, no data to track progress towards the overall goal were reported during the interim goal period. As we have previously reported, no picture of what the federal government is accomplishing can be complete without adequate performance information. However, OMB and CAP goal leaders did not identify interim planned levels of performance or targets for most of the interim CAP goals. Furthermore, they established a number of CAP goals for which data necessary to indicate progress towards the goal could not be reported. In so doing, they limited their ability to demonstrate progress being made towards most of the CAP goals and ensure accountability for results from those who helped to manage the goals. <2.1. CAP Goal Leaders Described What Contributed to Goal Achievement, but in Some Cases Information Was Incomplete> GPRAMA Requirement for Establishing Milestones GPRAMA requires the Director of OMB to establish, in the federal government performance plan, clearly defined quarterly milestones for the CAP goals. GPRAMA Requirement for Reporting on Contributions towards Cross-Agency Priority Goals GPRAMA requires that OMB identify the agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each CAP goal on Performance.gov. It also requires OMB to make available on the website an assessment of whether relevant agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities are contributing as planned. In the status updates that were published on Performance.gov, managers of each of the CAP goals reported the general approaches, strategies, or specific initiatives being employed to make progress towards the achievement of the goal, as well as the departments, agencies, and programs that were expected to contribute to goal achievement. For example, the leader of the Science, Technology, Engineering, and Mathematics (STEM) Education CAP goal identified a number of general strategies for making progress towards the achievement of its goal of increasing the number of graduates in STEM subjects by 1 million over the next 10 years, such as "Address the mathematics preparation gap that students face when they arrive at college" and "Identifying and supporting the role of technology and innovation in higher education." In addition, the goal leader identified a number of programs and goals within four departments and agencies that were likely to contribute in part or in whole to the goal. Figure 3 below illustrates how this information was presented in the update to the STEM Education CAP goal for the fourth quarter of fiscal year 2013.
In a May 2012 report on our work related to the CAP goals, we noted that information on Performance.gov indicated additional programs with the potential to contribute to each of the CAP goals may be identified over time. We then recommended that OMB review and consider adding to the list of CAP goal contributors the additional departments, agencies, and programs that we identified, as appropriate. OMB agreed with the recommendation, and in the quarterly updates to the CAP goals published in December 2012 and March 2013, OMB added some of the departments, agencies, and programs we identified in our work to some CAP goals lists of contributors. For example, we had noted that 12 member agencies of the Trade Promotion Coordinating Committee had not been identified as contributors to the Exports CAP goal. OMB added additional information about contributors to the Exports goal in the update published in December 2012. During our review, in some cases CAP goal managers told us about additional organizations and program types that contributed to their goals, but which were not identified on Performance.gov or in our previous report. For example, the leader of the STEM Education CAP goal told us that representatives from the Smithsonian Institution led an interagency working group that contributed to key efforts towards achieving the goal. Although the CAP goal updates indicate that the Smithsonian Institution is involved in federal STEM education efforts, it was not identified in a dedicated list of contributors to the goal. We have previously found that federal STEM education programs are fragmented across a number of agencies. We continue to believe that the federal government s efforts to ensure STEM education programs are effectively coordinated must include all relevant efforts. Furthermore, the leader of the Broadband CAP goal told us that he is aware that tax deductions available to businesses making capital investments contributed to the goal by incentivizing investments in broadband. We have long referred to such deductions, along with other reductions in a taxpayer s liability that result from special exemptions and exclusions from taxation, credits, deferrals of tax liability, or preferential tax rates, as tax expenditures. As we have previously reported, as with spending programs, tax expenditures represent a substantial federal commitment to a wide range of mission areas. We have recommended greater scrutiny of tax expenditures. Periodic reviews could help determine how well specific tax expenditures work to achieve their goals and how their benefits and costs compare to those of programs with similar goals. As previously mentioned, GPRAMA also requires OMB to identify tax expenditures that contribute to CAP goals. However, tax expenditures were not reported as contributors to the Broadband CAP goal in the quarterly status updates published on Performance.gov. <2.2. CAP Goal Leaders Identified Milestones for Tracking Progress, but in Some Cases Milestones Were Missing Key Information> Leading practices state that a clear connection between goals and day-to- day activities can help organizations better articulate how they plan to accomplish their goals. In addition, a clear connection between goals and the programs that contribute to them helps to reinforce accountability and ensure that managers keep in mind the results their organizations are striving to achieve. 
Milestones, scheduled events signifying the completion of a major deliverable or a set of related deliverables or a phase of work, can help organizations demonstrate the connection between their goals and day-to-day activities and that they are tracking progress to accomplish their goals. Organizations, by describing the strategies to be used to achieve results, including clearly defined milestones, can provide information that would help key stakeholders better understand the relationship between resources and results. (See GAO-13-174; GAO-13-228; and GAO, Managing for Results: Critical Issues for Improving Federal Agencies' Strategic Plans, GAO/GGD-97-180 (Washington, D.C.: Sept. 16, 1997).) Many of the planned actions identified as next steps in the CAP goal updates, however, lacked clear time frames for completion. Figure 4 below illustrates the next steps identified for the Strategic Sourcing CAP goal in the update for the third quarter of fiscal year 2013. Completion status: The Real Property CAP goal update for the second quarter of fiscal year 2013 identified two planned actions as next steps: "After agencies submit their Revised Cost Savings and Innovation Plans to OMB, OMB will evaluate agency plans to maintain their square footage baselines, while balancing mission requirements," and "Updates on agency square footage baselines and projects are forthcoming and will be posted on Performance.gov." These two actions were again identified as next steps in the update for the third quarter of fiscal year 2013, but no update was provided on the status of the actions. By establishing planned activities that, in many of the CAP goal updates, did not have information about their alignment with the strategies they supported, their time frames for completion, or their completion status, CAP goal leaders did not fully demonstrate that they had effectively planned to support goal achievement or were tracking progress toward the goal or identified milestones. OMB did not issue formal guidance to CAP goal leaders on the types of information that were to be included in the CAP goal updates, including information about contributors and milestones. Standards for internal control in the federal government emphasize the importance of documenting policies and procedures to provide a reasonable assurance that activities comply with applicable laws and regulations, and that managers review performance and compare actual performance to planned or expected results and analyze significant differences. OMB staff told us they provided an implementation plan template to goal leaders, which outlined the data elements to be reported in the quarterly status updates. The template was also used to collect information for internal and public reporting. Some CAP goal managers told us that OMB or PIC staff, in their role supporting the collection, analysis, and presentation of data on CAP goal performance, occasionally provided feedback on the information that the individuals submitted in draft updates that OMB reviewed before they were published on Performance.gov. For example, one CAP goal manager told us that during a review of an update submission, PIC staff told him that he should develop additional milestones to be completed during a specific future fiscal year quarter. This is in contrast to the detailed guidance that OMB issued on the types of information that agencies must provide for the updates for agency priority goals (APG), which are also published quarterly on Performance.gov.
The APG guidance includes explicit instructions for agencies to identify, as appropriate, the organizations, regulations, tax expenditures, policies, and other activities within and external to the agency that contribute to each APG, as well as key milestones with planned completion dates for the remainder of the goal period. Because guidance for the types of information that should have been included in the CAP goal updates was never formally established, CAP goal leaders were at a heightened risk of failing to take into account important contributors to the goal and providing incomplete information about milestones that could help demonstrate progress being made. <3. OMB and Goal Leaders Established Processes for Reviewing Cross- Agency Priority Goal Progress, but Not All Review Processes Were Consistent with Requirements and Leading Practices> <3.1. OMB Established a Quarterly Process for Reviewing Progress on CAP Goals, but Did Not Consistently Outline Improvement Strategies Where Goal Achievement Was at Risk> GPRAMA Requirement for OMB Progress Reviews GPRAMA requires that, not less than quarterly, the Director of OMB, with the support of the PIC, shall review progress on the CAP goals, including progress during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance. GPRAMA also requires that, as part of these reviews, OMB categorize goals by their risk of not achieving the planned level of performance and, for those goals most at risk of not meeting the planned level of performance, identify strategies for performance improvement. As required by GPRAMA, OMB reviewed progress on CAP goals each quarter, beginning with the quarter ending June 30, 2012. This review process consisted of the collection of updated information for each CAP goal by OMB or PIC staff, and the development of a memorandum for the Director of OMB with information on the status of the CAP goals. To develop these memorandums, OMB staff told us that approximately 6 weeks after the end of each quarter, OMB and PIC staff worked with CAP goal leaders to collect updated data and information on goal metrics and milestones, and to update the narratives supporting the data. CAP goal leaders, or staff assisting leaders with the management of efforts related to the goal, would provide this information to OMB using a template for the status updates ultimately published on Performance.gov. In addition to the memorandums developed for the Director of OMB, OMB published more detailed information through the quarterly status updates available on Performance.gov. OMB and PIC staff told us that to support OMB s quarterly review efforts, PIC staff were to conduct assessments rating the overall health of implementation efforts and goal leader engagement. They were also to assess the execution status of each goal, including the quality and trend of performance indicators. One purpose of these assessments was to identify areas where risks, such as goal leader turnover, could affect the ability to achieve the planned level of performance. Consistent with this intent, several of the quarterly OMB review memorandums we examined highlighted turnover in goal leader or deputy goal leader positions as risks, and suggested the need to find or approve replacements. 
Although PIC staff have been tasked with assessing these elements of CAP goal implementation, and said that there was a shared understanding between involved staff as to how these assessments would be carried out, the PIC has not documented its procedures or criteria for conducting these assessments. Standards for internal control in the federal government emphasize the importance of documenting procedures, including those for assessing performance. Without clearly established criteria and procedures, PIC staff lack a means to: consistently assess implementation efforts and execution across all goals; bring any deficiencies, risks, and recommended improvements identified to the attention of leadership; and ensure consistent application of criteria over time. While these quarterly review memorandums identified one goal as being at risk of not achieving the planned level of performance, and identified other instances where progress on goals had been slower than planned, the memorandums did not consistently outline the strategies that were being used to improve performance or address identified risks. For example, the Cybersecurity CAP goal was the one goal specifically described as being at risk of not achieving the planned level of performance, both in these memorandums and in the status updates on Performance.gov. Specifically, the memorandum for the third quarter of fiscal year 2012 identified the risk of not achieving the planned level of performance, and outlined seven specific risks facing the goal and the steps being taken to mitigate them. Similarly, the memorandum for the second quarter of fiscal year 2013 also acknowledged that some agencies were at risk of not meeting their Cybersecurity CAP goal targets. However, in contrast to the earlier memorandums, no information was included about the specific steps that were being taken to mitigate these risks, although information on planned and ongoing actions to improve government-wide implementation was included in the milestones section of the status update for that quarter on Performance.gov. The memorandum for the fourth quarter of fiscal year 2012 also acknowledged that the pace of progress on the STEM Education and Closing Skills Gaps goals had been slower than expected. While the memorandum stated that additional OMB attention was needed to support implementation and assure sufficient progress, no information on the specific strategies being employed to improve performance was mentioned. According to OMB staff, however, these memorandums were used to inform subsequent conversations with OMB leadership, which would build on the information presented in the memorandums. Furthermore, because the data necessary to track progress for some goals were unavailable, the Director of OMB would not have been able to consistently review progress for all CAP goals, or make a determination about whether some CAP goals were at risk of meeting their planned levels of performance. This fact was acknowledged in the quarterly review memorandums for quarters one and two of fiscal year 2013, which acknowledged that progress on three goals (Entrepreneurship and Small Business, Job Training, and STEM Education) was difficult to track, and that additional work was needed on data collection. However, no information on the specific steps that were being taken to address these shortcomings was included. 
A lack of specific information about the steps being taken to mitigate identified risk areas and improve performance could hinder the ability of OMB leadership and others to adequately track the status of efforts to address identified deficiencies or risks and to hold officials accountable for taking necessary actions. <3.2. CAP Goal Leaders Established Processes to Review Progress, but Their Consistency with Leading Practices and Their Effects on Performance and Collaboration Varied> GPRAMA Requirement for Goal Leader and Agency Involvement in Progress Reviews As part of the quarterly review process, GPRAMA requires that the Director of OMB review each priority goal with the appropriate lead government official, and include in these reviews officials from the agencies, organizations, and program activities that contribute to the achievement of the goal. According to OMB staff, to encourage goal leaders and contributing agencies to take ownership of efforts to achieve the goals, OMB gave goal leaders flexibility to use different approaches to engage agency officials and review progress at the CAP-goal level. While guidance released by OMB in August 2012 encouraged goal leaders to leverage existing interagency working groups, committees, and councils in the management of the goals as much as practicable, it did not include information on the purpose of reviews, expectations for how reviews should be conducted to maximize their effectiveness as a tool for performance management and accountability, or the roles that CAP goal leaders and agency officials should play in the review process. Again, standards for internal control in the federal government emphasize the importance of documenting procedures for reviewing performance against established goals and objectives. This is in contrast to the detailed guidance that OMB released for agency priority goal and agency strategic objective reviews, which outlined the specific purposes of the reviews, how frequently they should be conducted, the roles and responsibilities of agency leaders involved in the review process, and how the reviews should be conducted. We also found that this guidance for reviews at the agency level was broadly consistent with the leading practices for performance reviews that we previously identified. While no official guidance was published to guide how reviews involving goal leaders and staff from contributing agencies could be conducted for the CAP goals, OMB staff said the principles of the guidance released for agency reviews, which reflected many of the leading practices, were referenced in conversations with CAP goal leaders and teams. OMB has emphasized that flexibility is needed to ensure that goal leaders can use review processes that are appropriate given the scope of interagency efforts, the number of people involved, and the maturity of existing reporting and review processes. The guidance for agency reviews gave agencies flexibility to design their performance review processes in a way that would fit the agency's mission, leadership preferences, organizational structure, culture, and existing decision-making processes. In our previous work, we detailed how several federal agencies had implemented quarterly performance reviews in a manner consistent with leading practices, but which were also tailored to the structures, processes, and needs of each agency. In this way, flexible implementation of review processes is possible within a framework that encourages the application of leading practices.
A lack of clear expectations for how progress should be reviewed at the CAP-goal level resulted in a number of different approaches being used by goal leaders to engage officials from contributing agencies to review progress on identified goals and milestones, ranging from regular in-person review meetings led by the CAP goal leader to the review of written updates provided to the goal leader by officials from contributing agencies. See appendix IV for more detailed information on the various processes used by goal leaders to collect data on, and review progress towards, identified goals. <3.2.1. Some Goal Leaders Used Review Processes Broadly Consistent with Leading Practices, and Noted Their Positive Effects on Performance, Accountability, and Collaboration> Instituting review processes consistent with the leading practices we previously identified can help ensure that reviews include meaningful performance discussions, provide opportunities for oversight and accountability, and drive performance improvement. Taken together, these leading practices emphasize the importance of leadership involvement in the review process, data collection and review meeting preparation, participation by key officials, and rigorous follow-up. Through our evaluation of how goal leaders and contributing agency officials reviewed progress towards the interim goals, we identified two CAP goals, Cybersecurity and Closing Skills Gaps, and one sub-goal, the Entrepreneurship and Small Business sub-goal on improving access to government services and information (BusinessUSA sub-goal), where goal managers instituted in-person review processes with officials from contributing agencies that were broadly consistent with the full range of leading practices for reviews, which we have summarized in four categories below. The processes used by other CAP goal leaders to engage agency officials in the review of progress did not reflect the full range of leading practices. Leadership Involvement. Leading practices indicate that leaders should use frequent and regular progress reviews as a leadership strategy to drive performance improvement and as an opportunity to hold people accountable for diagnosing performance problems and identifying strategies for improvement. The direct and visible engagement of leadership is vital to the success of such reviews. Leadership involvement helps ensure that participants take the review process seriously and that decisions and commitments can be made. The goal leaders managing the Cybersecurity and Closing Skills Gaps goals, as well as the BusinessUSA sub-goal, were directly involved in leading in-person reviews for these goals, and in using them as opportunities to review progress, identify and address performance problems, and hold agency officials accountable for progress on identified goals and milestones, as detailed in table 1. Data Collection and Review Meeting Preparation. Leading practices also indicate that those managing review processes should have the capacity to collect, analyze, and communicate accurate, useful, and timely performance data, and should rigorously prepare for reviews to enable meaningful performance discussions. The collection of current, reliable data on the status of activities and progress towards goals and milestones is critical so that those involved can determine whether performance is improving, identify performance problems, ensure accountability for fulfilling commitments, and learn from efforts to improve performance.
The ability to assess data to identify key trends and areas of strong or weak performance, and to communicate this to managers and staff effectively through materials prepared for reviews, is also critical. As detailed in table 2, those supporting the Cybersecurity and Closing Skills Gaps goals, and the BusinessUSA sub-goal, instituted processes to regularly collect and analyze data on progress towards identified goals and milestones, and to ensure these data would be communicated through materials prepared for review meetings. Participation by Key Officials. Leading practices indicate that key players involved in efforts to achieve a goal should attend reviews to facilitate problem solving. This is critical as their participation enables those involved to break down information silos, and to use the forum provided by the review to communicate with each other, identify improvement strategies, and agree on specific next steps. Reviews for both the Cybersecurity and Closing Skills Gaps CAP goals, and the BusinessUSA sub-goal, were structured so that relevant agency officials playing a key role in efforts to carry out the goal were included, as detailed in table 3. Review Follow-Up. Leading practices indicate that participants should engage in sustained follow-up on issues identified during reviews, which is critical to ensure the success of the reviews as a performance improvement tool. Important follow-up activities include identifying and documenting specific follow-up actions stemming from reviews, those responsible for each action item, and who will be responsible for monitoring and follow-up. Follow-up actions should also be included as agenda items for subsequent reviews to hold responsible officials accountable for addressing issues raised and communicating what was done. Goal managers for the Cybersecurity and Closing Skills Gaps CAP goals, as well as the BusinessUSA sub-goal, took steps to follow up on action items identified in these meetings, and to ensure that steps were taken towards their completion, as detailed in table 4. Review Effects. Goal leaders and managers we interviewed said that these review processes were valuable in driving improved performance, establishing a greater sense of accountability for progress on the part of contributors, and in providing a forum for interagency communication and collaboration. For example, according to DHS staff involved in the management of the Cybersecurity CAP goal, implementation of Personal Identity Verification (PIV) requirements across the federal government had been stagnant for several years prior to the introduction of cybersecurity as a CAP goal. The review process was used to hold agencies accountable for improved PIV implementation, which helped bring an increased focus on the issue and drive recent progress. Since the reviews were instituted in 2012, DHS has reported improved PIV adoption in civilian agencies, which has increased from 1.24 percent in fiscal year 2010, to 7.45 percent in fiscal year 2012, to 19.67 percent in the fourth quarter of fiscal year 2013. According to data from DHS, while still falling short of the target, this has contributed to the overall increase in PIV adoption across the federal government, including both civilian agencies and the Department of Defense, from 57.26 percent in fiscal year 2012 to 66.61 percent in the fourth quarter of fiscal year 2013. DHS staff also added that agencies generally had not previously collaborated on cybersecurity issues or worked to identify best practices.
According to DHS staff, the reviews have created an important point of collaboration between DHS, OMB, National Security Staff, and agencies, and provided an opportunity to inform agencies of best practices and connect them with other agencies that are meeting their targets to learn from them. Similarly, OPM officials and sub-goal leaders involved in the management of the Closing Skills Gaps CAP goal said that the quarterly review meetings were a critical means to ensure sub-goal leaders and staff were demonstrating progress. Having sub-goal leaders report out on progress, and hear about the progress made in other sub-goal areas, provided additional pressure for continuous improvement and the need to remain focused on driving progress towards their goals. Having the goal leader lead the review was also a way to demonstrate leadership commitment to the achievement of each sub-goal. According to OPM officials and sub-goal leaders, the review meetings also served as an important forum for discussing innovative approaches being taken to address skills gaps in different areas, opportunities for collaboration to address challenges shared by different sub-goals, and how leaders could leverage the efforts of other sub-goals to drive progress on their own. The BusinessUSA sub-goal leader said that having the initiative serve as the basis for a CAP sub-goal elevated its cross-cutting nature. In addition to reviewing performance information and the status of deliverables, discussions at inter-agency Steering Committee meetings were used to discuss how contributors could work together to meet the initiative's performance goals. This communication and coordination led to connections between agencies and to discussions about how programs could be working in a more integrated way. For example, these discussions were used to identify ways that programs could more effectively integrate program information on the BusinessUSA website to increase customer satisfaction. <3.2.2. Other Goal Leaders Did Not Use Review Processes Consistent with the Full Range of Leading Practices for Reviews> We found that the processes used by other CAP goal leaders to engage agency officials in the review of progress, which are summarized in appendix IV, did not reflect the full range of leading practices. For example, the process for reviewing progress on the Job Training CAP goal involved staff from the PIC collecting updates on recent milestones from agencies, which were then compiled in the quarterly status update and reviewed by the goal leader. This approach was used by the goal leaders for the Broadband and STEM Education CAP goals to review progress as well. While goal leaders and managers for these goals indicated that they used the collection and review of information as an opportunity to communicate with officials from contributing agencies, this approach contrasts with OMB guidance for reviews of agency priority goals, which states explicitly that performance reviews should not be conducted solely through the sharing of written communications. As OMB noted in its guidance, in-person engagement of leaders in performance reviews greatly accelerates learning and performance improvement, and personal engagement can demonstrate commitment to improvement, ensure coordination across agency silos, and enable rapid decision making. While not employing the full range, goal leaders for a number of goals did use processes that reflected one or more leading practices.
For example, many CAP goal leaders led or participated in interagency meetings with representatives of contributing agencies. While these were used to facilitate interagency communication and collaboration on the development of plans and policies, it was unclear whether many of these meetings were consistently used to review progress on identified CAP goals and milestones. The goal leader for the Strategic Sourcing CAP goal used processes that reflected leadership involvement, participation by key officials, and the collection and analysis of relevant data. Specifically, according to goal managers, the goal leader led regular meetings of the Strategic Sourcing Leadership Council (SSLC), which were attended by senior procurement officials from eight agencies that combine to make up almost all of the federal government s total procurement spending. To prepare for each SSLC meeting, staff from OMB s Office of Federal Procurement Policy (OFPP) held a meeting for supporting staff from each agency, who would then prepare the SSLC member from their agency for the issues to be discussed in the SSLC meeting. OFPP also established a regular data collection process where each agency would report on its adoption and spending rates for two strategic sourcing options, which would then be used for the purposes of reporting on the CAP goal. However, it was unclear how regularly, if at all, SSLC meetings were used to engage agency officials in the review of data on agency adoption of, and spending on, strategic sourcing options, or how regularly meetings were used to review progress that was being made towards the CAP goal. It was also unclear what mechanisms, if any, were used to ensure rigorous follow-up on issues raised in these meetings, a key leading practice, as there were no official meeting minutes maintained. The lack of an official record could hinder follow-up and accountability for any identified actions that need to be taken. <3.2.3. Some Goal Leaders Reported Minimal Effects on Performance and Collaboration> Representatives of some goals stated that it was difficult to isolate the impact of the CAP goal designation, and its associated reporting and review requirements, on performance and collaboration. According to some goal managers, because their interim goals were based on initiatives that had been previously established in executive orders or Presidential memorandums, much of the interagency activity supporting their efforts would have happened without the CAP goal designation and its reporting and review requirements. For example, a manager for the Data Center Consolidation CAP goal told us that the previously established Federal Data Center Consolidation Initiative was used to drive progress and that the CAP goal designation and quarterly reporting and review requirements had little impact. Similarly, Job Training CAP goal managers said that interagency collaboration on job training issues had been established prior to the creation of the CAP goal, that the goal s reporting and review requirements were incidental to the contributors ongoing work, and that it did not add an additional level of accountability for the completion of job training initiatives. 
However, this is a goal where no data were reported to demonstrate its impact on federal job training programs, and which was identified in multiple OMB reviews as having slower than anticipated progress due, in part, to extended periods of time in which there was no deputy CAP goal leader to provide support necessary to improve coordination and collaboration. While many CAP goal leaders and staff we interviewed noted the progress they had made with their existing interagency meetings and approaches, a lack of clear expectations or guidance for how review processes at the CAP goal level should be carried out can lead to a situation where reviews are implemented in a manner that is not informed by, or fully consistent with, leading practices. This could result in missed opportunities to realize the positive effects on performance and accountability that can stem from the implementation of review processes that regularly and consistently involve leaders and agency officials in the analysis of performance data to identify and address performance deficiencies, and use rigorous follow-up to ensure accountability for commitments. <4. Conclusions> Many of the meaningful results that the federal government seeks to achieve require the coordinated efforts of more than one federal agency. GPRAMA s requirement that OMB establish CAP goals offers a unique opportunity to coordinate cross-agency efforts to drive progress in priority areas. That opportunity will not be realized, however, if the CAP goal reporting and review requirements and leading review practices are not followed. The reporting and review requirements for the CAP goals, and leading practices for the reviews, are designed to ensure that relevant performance information is used to improve performance and results, and that OMB and goal leaders actively lead efforts to engage all relevant participants in collaborative performance improvement initiatives and hold them accountable for progress on identified goals and milestones. OMB reported performance information in the quarterly CAP goal status updates it published on Performance.gov. While updates for most goals reported data on performance towards the identified planned level of performance, the information in the updates did not always present a complete picture of progress towards identified goals and milestones. For example, while updates for 8 of the 14 goals included data that indicated performance towards the identified overall planned level of performance, only 3 also contained annual or quarterly targets that allowed for an assessment of interim progress. Updates for the other 6 of the 14 goals did not report on performance towards the goal s primary performance target because the goal was established without a quantitative target or because goal managers were unable to collect the data needed to track performance. In other cases, planned activities that were identified as contributing to the goal were sometimes missing important elements, including alignment with the strategies for goal achievement they supported, a time frame for completion, or information on their implementation status. The incomplete picture of progress that many of the updates gave limited the ability of goal leaders and others to ensure accountability for the achievement of targets and milestones. Holding regular progress reviews that are consistent with GPRAMA requirements and the full range of leading practices can produce positive effects on performance and collaboration. 
Engaging contributors in regular reviews of data on performance can help ensure interagency efforts are informed by information on progress towards identified goals and milestones, which can be used to identify and address areas where goal or milestone achievement is at risk. Reviews can also be used to reinforce agency and collective accountability for the achievement of individual and shared outcomes, helping to ensure that efforts to improve performance or address identified risks are implemented. Lastly, reviews can be used to foster greater collaboration, ensuring opportunities for communication and coordination between officials involved in efforts to achieve shared outcomes. While OMB and CAP goal leaders instituted processes for reviewing progress on the interim CAP goals, if GPRAMA requirements and leading practices for reviews are not consistently followed, it may result in missed opportunities to improve performance, hold officials accountable for achieving identified goals and milestones, and ensure agency officials are coordinating their activities in a way that is directed towards the achievement of shared goals and milestones. <5. Recommendations for Executive Action> We recommend that the Director of OMB take the following three actions: Include the following in the quarterly reviews of CAP goal progress, as required by GPRAMA: a consistent set of information on progress made during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance; goals at risk of not achieving the planned level of performance; and the strategies being employed to improve performance. Work with the PIC to establish and document procedures and criteria to assess CAP goal implementation efforts and the status of goal execution, to ensure that the PIC can conduct these assessments consistently across all goals and over time. Develop guidance similar to what exists for agency priority goal and strategic objective reviews, outlining the purposes of CAP goal progress reviews, expectations for how the reviews should be carried out, and the roles and responsibilities of CAP goal leaders, agency officials, and OMB and PIC staff in the review process. To ensure that OMB and CAP goal leaders include all key contributors and can track and report fully on progress being made towards CAP goals overall and each quarter, we recommend that the Director of OMB direct CAP goal leaders to take the following four actions: Identify all key contributors to the achievement of their goals; Identify annual planned levels of performance and quarterly targets for each CAP goal; Develop plans to identify, collect, and report data necessary to demonstrate progress being made towards each CAP goal or develop an alternative approach for tracking and reporting on progress quarterly; and Report the time frames for the completion of milestones; the status of milestones; and how milestones are aligned with strategies or initiatives that support the achievement of the goal. <6. 
Agency Comments> We provided a draft of this report for review and comment to the Director of OMB, the Secretaries of Commerce and Homeland Security, the Director of the Office of Personnel Management, the Administrator of the Small Business Administration, as well as the officials we interviewed to collect information on the interim CAP goals from the Council on Environmental Quality, Department of Education, Department of Labor, Department of Veterans Affairs, National Science Foundation, and the Office of Science and Technology Policy. OMB and PIC staff provided oral comments on the draft, and we made technical changes as appropriate. OMB staff generally agreed to consider our recommendations. For example, while they said that OMB and PIC staff will continue to work directly with CAP goal leaders to convey suggested practices for reviewing performance, they will consider referencing principles and practices for data-driven performance reviews in future Circular A-11 guidance related to the management of CAP goals. Furthermore, while they noted that quantitative performance data for some key measures may not be available on a quarterly basis, they said that they will continue to work to develop more robust quarterly targets. Officials or staff from the Departments of Commerce and Veterans Affairs, and the Office of Science and Technology Policy provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Director of OMB as well as appropriate congressional committees and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report is part of our response to a mandate that we evaluate the implementation of the federal government priority goals under the GPRA Modernization Act of 2010 (GPRAMA). Due to the timing of our work, we focused on the implementation of the reporting and review requirements for the 14 interim cross-agency priority (CAP) goals established in February 2012. Specifically, this report assesses (1) what is known about progress made towards the interim CAP goals; and (2) how, if at all, quarterly progress reviews reflected GPRAMA requirements and leading practices for data-driven reviews, as well as how they contributed to improved cross-agency performance and collaboration. To address these objectives, we interviewed representatives of 13 of the 14 interim goals. For 8 of the 13 goals we spoke directly with the goal leader or deputy goal leader, along with, in some cases, staff from the Office of Management and Budget (OMB) and agencies involved in supporting efforts related to the goals. For the other five goals (Closing Skills Gaps, Cybersecurity, Data Center Consolidation, Exports, and Job Training) we met with agency officials or OMB staff playing a key role in the management of interagency efforts related to the CAP goal.
During these interviews, we asked officials questions concerning how the goal leader and officials from contributing agencies reviewed progress on the goal; the interagency groups used to engage agency officials and manage efforts related to the goal; the role that staff from OMB and the Performance Improvement Council (PIC) played in the review process; and any impact the CAP goal designation and review processes had on performance, collaboration, and accountability. We also participated in interviews with the goal leaders of 11 agency priority goals that were aligned with, or identified as a contributor to, a CAP goal. To further address the first objective, and assess what is known about progress made toward the interim CAP goals, we analyzed information on identified performance metrics and milestones included in the quarterly status updates for each CAP goal published on Performance.gov. We also analyzed relevant information collected through our interviews with CAP goal leaders, deputies, and supporting staff. We compared the data and information made available through the quarterly status updates with requirements in GPRAMA that Performance.gov include information for each goal on results achieved during the most recent quarter and overall trend data. To assess the reliability of performance data and information available through Performance.gov, we collected information from OMB and PIC staff, and CAP goal representatives, about data quality control procedures. We determined that the data and information were sufficiently reliable for our analysis of what was reported on Performance.gov about progress towards identified goals and milestones. To address the second objective, we reviewed quarterly review memorandums developed for OMB leadership for five quarters, from the third quarter of fiscal year 2012 to the third quarter of fiscal year 2013. We compared the contents of these review memorandums with requirements for the OMB quarterly reviews established in GPRAMA. We also interviewed staff from OMB and the PIC to discuss the various approaches being used to review progress at the CAP-goal level, the data collection and review process, and the role of the PIC in supporting the quarterly review process. To further address the second objective we reviewed (where available) documents created for interagency meetings, such as meeting agendas, presentation materials, meeting notes, and attendee lists. We also observed one quarterly review meeting held for the Closing Skills Gaps goal, and conducted interviews with sub-goal leaders from the Closing Skills Gaps and Entrepreneurship and Small Business CAP goals. These interviews were used to learn more about the involvement of officials from contributing agencies in the quarterly review process for each CAP goal, the processes that had been established to review progress at the sub-goal level, and to gain a more complete picture of participating agency officials' perceptions of the impact of the CAP goals and review processes. We selected these sub-goals through a two-part process. Of the eight CAP goals for which we had completed interviews through the end of 2013, the team selected one goal for which the goal leader held quarterly meetings dedicated to reviewing progress toward the CAP goal with the goal's contributors (Closing Skills Gaps).
The team also selected a second goal for which the goal leader used a review process that did not rely on quarterly meetings between the goal leader and contributing agencies (Entrepreneurship and Small Business). To ensure that the team would have at least one goal representing each type of goal, the team also ensured that one goal would be an outcome-oriented policy goal and one goal would be a management goal. For both the Closing Skills Gaps and Entrepreneurship and Small Business CAP goals, the team then selected four sub-goals for interviews. For the Closing Skills Gaps CAP goal, the team interviewed the sub-goal leaders for the Economist; Information Technology/Cybersecurity; Science, Technology, Engineering, and Mathematics (STEM) Education; and Human Resources sub-goals. For the Entrepreneurship and Small Business CAP goal, the team held interviews with the sub-goal leaders for the sub-goals to "Accelerate commercialization of Federal research grants," "Advance federal small business procurement goals," "Improve access to government services and information," and "Streamline immigration pathways for immigrant entrepreneurs." These were selected to ensure that the team would capture sub-goals in which a range of approaches for measuring and reviewing progress were being used. Specifically, sub-goals were selected to ensure the team would have some that did, and did not, hold regular meetings, and some that did, and did not, track quantitative measures of performance or milestones with time frames. Our selection of these sub-goals was nonstatistical and therefore our findings from these interviews are not generalizable to the other CAP goals. We compared what we learned, through interviews and the collection of documentation, about the review processes used by leaders at the CAP goal and sub-goal levels against leading practices for performance reviews previously identified by GAO. Because the scope of our review was to examine the implementation of quarterly progress reviews, we did not evaluate whether these goals were appropriate indicators of performance, sufficiently ambitious, or met other dimensions of quality. We conducted our work from May 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Interim Cross-Agency Priority Goals and Goal Statements <7. Broadband> As part of expanding all broadband capabilities, ensure 4G wireless broadband coverage for 98 percent of Americans by 2016. <8. Closing Skills Gaps> Close critical skills gaps in the federal workforce to improve mission performance. By September 30, 2013, close the skills gaps by 50 percent for three to five critical federal government occupations or competencies, and close additional agency-specific high-risk occupation and competency gaps. <9. Cybersecurity> Executive branch departments and agencies will achieve 95 percent implementation of the administration's priority cybersecurity capabilities by the end of FY 2014. These capabilities include strong authentication, trusted Internet connections, and continuous monitoring.
<10. Data Center Consolidation> Improve information technology service delivery, reduce waste, and save $3 billion in taxpayer dollars by closing at least 2,500 data centers by fiscal year 2015. <11. Energy Efficiency> Increase energy productivity (amount of real gross domestic product in dollars/energy demand) 50 percent by 2030. <12. Entrepreneurship and Small Business> Increase federal services to entrepreneurs and small businesses with an emphasis on 1) startups and growing firms and 2) underserved markets. <13. Exports and Improper Payments> Exports: Double U.S. exports by the end of 2014. Improper Payments: The federal government will achieve a payment accuracy rate of 97 percent by the end of 2016. <14. Job Training> Ensure our country has one of the most skilled workforces in the world by preparing 2 million workers with skills training by 2015 and improving the coordination and delivery of job training services. <15. Real Property> The federal government will maintain the fiscal year 2012 square footage baseline of its office and warehouse inventory. <16. Science, Technology, Engineering, and Mathematics (STEM) Education> In support of the president's goal that the U.S. have the highest proportion of college graduates in the world by 2020, the federal government will work with education partners to improve the quality of STEM education at all levels to help increase the number of well-prepared graduates with STEM degrees by one-third over the next 10 years, resulting in an additional 1 million graduates with degrees in STEM subjects. <17. Strategic Sourcing> Reduce the costs of acquiring common products and services by agencies' strategic sourcing of at least two new commodities or services in both 2013 and 2014 that yield at least a 10 percent savings. In addition, agencies must increase their use of Federal Strategic Sourcing Initiative vehicles by at least 10 percent in both fiscal years 2013 and 2014. <18. Sustainability> By 2020, the federal government will reduce its direct greenhouse gas emissions by 28 percent and its indirect greenhouse gas emissions by 13 percent (from a 2008 baseline). <19. Veteran Career Readiness> By September 30, 2013, increase the percent of eligible service members who will be served by career readiness and preparedness programs from 50 percent to 90 percent in order to improve their competitiveness in the job market. Appendix III: Interagency Group Membership and Meeting Frequency and Purpose Goal leaders for 13 of 14 cross-agency priority (CAP) goals leveraged interagency groups for the purposes of coordinating efforts designed to contribute to progress on the cross-agency priority goal. This appendix includes information on the membership of these interagency groups, the frequency with which they met, and the purposes of those meetings. Broadband: One group's membership consisted of fourteen agencies with federal property management or transportation funding responsibilities, and broadband or other related expertise; it met to discuss best practices on broadband-related land management issues, and actions to implement an executive order on accelerating broadband infrastructure deployment. A second group's membership consisted of senior officials from agencies considered major spectrum stakeholders and users of spectrum, including the Departments of Defense, Justice, Homeland Security (DHS), Commerce, and the National Aeronautics and Space Administration (NASA); it met to provide advice on spectrum policy and strategic plans, discuss commercial transfer of federal agency spectrum, and resolve issues affecting federal and non-federal users.
<20. Closing Skills Gaps> To review progress on performance metrics and actions taken to close skills gaps in each of the six sub-goal areas. <21. Cybersecurity> Officials from the National Institute of Standards and Technology, General Services Administration (GSA), DHS, National Security Staff, Office of Management and Budget (OMB), and Performance Improvement Council. Twice each quarter. Beginning in 2013, a meeting was held each quarter prior to the collection of data on agency progress on cybersecurity metrics. Another was held after data had been collected and analyzed to review and discuss agency progress. <22. Data Center Consolidation> Data center consolidation program managers from 24 federal agencies. To identify and disseminate key information about solutions and processes to help agencies make progress towards data center consolidation goals. <23. Energy Efficiency; Entrepreneurship and Small Business (Improve Access to Government Information and Services Sub-Goal)> No interagency groups were used to manage efforts related to the Energy Efficiency goal. For the Entrepreneurship and Small Business goal, interagency groups were used to manage efforts at the sub-goal level. For the Improve Access to Government Information and Services (BusinessUSA) sub-goal, one group consisted of senior-level representatives from 24 participating agencies, which met to oversee strategy, resources, and timetables for the development of the BusinessUSA website, resolve interagency issues, and ensure department and agency viewpoints are represented. A second group consisted of mid-to-senior level program, technology, and customer service managers from 24 participating agencies, which met to assist the BusinessUSA program management office in coordinating the design, development, and operation of the BusinessUSA website, and to track and monitor performance metrics on customer service and outcomes. <24. Commercialization of Federal Research Grants Sub-Goal> SBIR/STTR program managers from 11 agencies, and coordinating officials from the Small Business Administration (SBA) and Office of Science and Technology Policy (OSTP). To discuss the development of SBIR/STTR program policy directives, the implementation of requirements, outreach and access to the programs, and program best practices. <25. Streamlining Pathways for Immigrant Entrepreneurs Sub-Goal> To provide updates on relevant agency activities and identify opportunities for interagency collaboration. <26. Small Business Procurement Sub-Goal> To share best practices for expanding contracting to small and disadvantaged businesses, and to review progress on agency simplified-acquisition threshold goals. To provide officials from the White House, SBA, Commerce, and OMB with an opportunity to meet with senior agency leaders and discuss the steps agencies are taking to increase small business contracting. <27. Exports> Principals (cabinet secretaries and deputies) and staff from 20 agencies involved in export policy, service, finance, and oversight. To review progress on deliverables supporting the National Export Strategy, communications, and the status of individual export promotion initiatives. <28. Improper Payments> Bi-weekly to monthly. To review the status of agency implementation of Do Not Pay requirements and milestones, and guidance for implementation. Officials from agencies with high-priority programs, as designated by OMB. To discuss the government-wide improper payment initiative and overall strategy. <29. Job Training> To discuss expanding access to job training performance data, and opportunities to promote its use at the local, state, and federal levels.
<30. Real Property> Among other policy discussions, to discuss the development of agency "Freeze the Footprint" plans. <31. Sustainability> To discuss policy to guide the federal government on sustainability issues, and to discuss sustainability goals. <32. Science, Technology, Engineering, and Mathematics (STEM) Education> Every 4-6 weeks, during the development of the 5-year strategic plan. To develop a 5-year strategic plan for federal support for STEM education. <33. Strategic Sourcing> Representatives from the Departments of Defense, Energy, and Veterans Affairs (VA), DHS, HHS, GSA, NASA, and SBA. To discuss the development and adoption of strategic sourcing options. <34. Veteran Career Readiness> To review ongoing policy initiatives and opportunities for collaboration between agencies. To develop and implement a redesigned veterans transition program. Appendix IV: Description of Interim Cross-Agency Priority Goal Review Processes <35. Broadband> <36. Energy Efficiency> meetings with officials from each agency. According to OMB staff, during these reviews participants reviewed metrics from across the agency's information technology portfolio, which included, in some cases, those related to data center consolidation. Each quarter staff supporting the goal leader would collect updated information on contributing agency priority goals for the purposes of updating the quarterly status update. Each quarter the deputy goal leader would collect updated information on goals and milestones from the leaders of each of 10 sub-goals for the purposes of developing the quarterly status update. The deputy goal leader would follow up with sub-goal leaders or agency officials, as necessary, to address issues or questions about the status of efforts. The goal leader would then review and approve the quarterly status update. Some sub-goal leaders would hold in-person meetings with officials from contributing agencies to, among other things, review progress on identified goals and milestones. See appendix III for information on interagency groups that were used to manage efforts for four of the sub-goals. Each quarter the goal leader, with the assistance of staff from Commerce and the PIC, would collect updated information on relevant agency metrics and activities for the purposes of updating the quarterly status update. Periodic meetings of the Export Promotion Cabinet/Trade Promotion Coordinating Committee, and its Small Business and other working groups, were also used to discuss the status of export promotion efforts and progress on specific deliverables. Each year OMB would collect and report data on agency improper payment rates. Staff from the OMB Office of Federal Financial Management led monthly meetings with agency representatives to discuss the implementation of the Do Not Pay initiative, which was designed to contribute to the reduction of improper payments. The Department of Treasury, as the agency leading implementation of the Do Not Pay initiative, would track agency progress on implementation milestones. <37. Science, Technology, Engineering, and Mathematics (STEM) Education> Each quarter staff from the PIC would collect updated information on progress towards agency milestones, and work with the goal leader on the development of the quarterly status update. After this goal was revised in the second quarter of 2013, a new review process to track agency adherence to the goal was under development by OMB.
Twice a year the Council on Environmental Quality (CEQ) would collect and review quantitative and qualitative data on agency progress towards established sustainability goals, including the reduction of agency greenhouse gas emissions. Following the collection of these data, the goal leader hosted meetings of the Steering Committee on Federal Sustainability, which were used to discuss federal sustainability policy and progress on sustainability goals. According to CEQ staff, the goal leader and CEQ staff would meet with representatives from agencies about sustainability issues on an ad hoc basis. In instances where there was a gap between an agency's actual performance and the target established in that area, the goal leader, or other staff from CEQ, would meet with officials from that agency to discuss ways to address the performance gap. Each quarter the goal leader would collect updated information on agency milestones for inclusion in the quarterly status updates. Progress on some identified strategies to achieve the goal, such as the National Science Foundation's efforts to improve undergraduate STEM education, was reviewed at the agency level. After progress was reviewed at the agency level, the information was passed on to the goal leader and reported publicly in the quarterly status update. Each quarter the General Services Administration would collect data on agency adoption and spending rates for the Federal Strategic Sourcing Initiative (FSSI) solutions for domestic delivery and office supplies. The Strategic Sourcing Leadership Council met bi-monthly to guide the creation and adoption of new FSSI options, and, as part of that effort, might review quarterly data on agency adoption and spending rates. According to the goal leader, each month staff from the Departments of Defense and Veterans Affairs, and the PIC, would provide data for one-pagers and other status update documents with key pieces of relevant information, such as the veterans' unemployment rate and the number of active employers on the Veterans Job Bank. These one-pagers would be used to inform regular Interagency Policy Council (IPC) discussions, along with more specific briefing memorandums, which were used to cover the latest issues, keep stakeholders focused on overall outcomes, and inform discussion around specific outliers. Some of the data in these one-pagers would also be incorporated into the quarterly status updates. More frequently, issue papers and data analysis were provided to Veterans Employment Initiative (VEI) Task Force and IPC members as needed to address topical issues. Ongoing milestone reviews held by the VEI Task Force and its associated working groups on Education, Employment, Transition, and Entrepreneurship provided an opportunity to discuss strategies being employed to improve performance. Appendix V: Full Text for Interactive Figure 2 on Frequency of Data Reporting for Cross-Agency Priority Goals' Overall Planned Levels of Performance This appendix includes the print version of the text and rollover graphics contained in interactive figure 2. Cybersecurity: The overall planned level of performance is to achieve 95 percent implementation of the Administration's priority cybersecurity capabilities by the end of fiscal year 2014. Data Center Consolidation: The overall planned level of performance is to save $3 billion in taxpayer dollars by closing at least 2,500 data centers by fiscal year 2015; data reported for the primary performance goal indicate that agencies have already closed 640 data centers. Exports: The overall planned level of performance is to double U.S. exports by the end of 2014; data for the overall goal are reported quarterly. Strategic Sourcing: The overall planned level of performance is agencies' strategic sourcing of at least two new commodities or services in both 2013 and 2014 that yield at least a 10 percent savings; in addition, agencies must increase their use of Federal Strategic Sourcing Initiative vehicles by at least 10 percent in both fiscal years 2013 and 2014. Broadband: The overall planned level of performance is to ensure 4G wireless broadband coverage for 98 percent of Americans by 2016. Improper Payments: The overall planned level of performance is to achieve a payment accuracy rate of 97 percent by the end of 2016; data for the primary performance goal were not reported. Entrepreneurship and Small Businesses: The overall planned level of performance is to increase federal services to entrepreneurs and small businesses with an emphasis on 1) startups and growing firms and 2) underserved markets; data for the primary performance goal were not reported. Job Training: Data for the primary performance goal were not reported. Real Property: The overall planned level of performance is that the federal government will maintain the fiscal year 2012 square footage baseline of its office and warehouse inventory; data for the primary performance goal were not reported. Science, Technology, Engineering, and Math (STEM) Education: The overall planned level of performance is to increase the number of well-prepared graduates with STEM degrees by one-third over the next 10 years, resulting in an additional 1 million graduates with degrees in STEM subjects; data for the primary performance goal were not reported. Veteran Career Readiness: Data for the primary performance goal were not reported. Appendix VI: GAO Contact and Staff Acknowledgments <53. GAO Contact> <54. Staff Acknowledgments> In addition to the contact named above, Elizabeth Curda (Assistant Director) and Adam Miles supervised the development of this report. Virginia Chanley, Jehan Chase, Steven Putansu, Stacy Ann Spence, and Dan Webb made significant contributions to this report. Deirdre Duffy and Robert Robinson also made key contributions.
Why GAO Did This Study The federal government faces complex, high-risk challenges, such as protecting our nation's critical information systems. Effectively managing these challenges is essential for national and economic security and public health and safety. However, responsibility for addressing these challenges often rests with multiple agencies. To effectively address them, shared goals and cross-agency collaboration are fundamental. This report responds to GAO's mandate to evaluate the implementation of GPRAMA. It assesses (1) what is known about progress made towards the interim CAP goals; and (2) how, if at all, quarterly progress reviews reflected GPRAMA requirements and leading practices for reviews, as well as how reviews contributed to improved cross-agency performance and collaboration. To address these objectives, GAO analyzed CAP goal status updates and other documents from OMB and CAP goal progress-review meetings, and interviewed OMB staff and CAP goal representatives. GAO compared this information to GPRAMA requirements and to leading practices for performance reviews previously reported on by GAO. What GAO Found CAP Goal Progress. The GPRA Modernization Act of 2010 (GPRAMA) requires the Office of Management and Budget (OMB) to coordinate with agencies to: (1) establish outcome-oriented, federal government priority goals (known as cross-agency priority, or CAP, goals) with annual and quarterly performance targets and milestones; and (2) report quarterly on a single website now known as Performance.gov the results achieved for each CAP goal compared to the targets. In February 2012, OMB identified 14 interim CAP goals and subsequently published five quarterly updates on the status of the interim CAP goals on Performance.gov. While updates for eight of the goals included data that indicated performance towards an overall planned level of performance, only three also contained annual or quarterly targets that allowed for an assessment of interim progress. Updates for the other six goals did not report on progress towards a planned level of performance because the goals lacked either a quantitative target or the data needed to track progress. The updates on Performance.gov also listed planned activities and milestones contributing to each goal, but some did not include relevant information, including time frames for the completion of specific actions and the status of ongoing efforts. The incomplete information in the updates provided a limited basis for ensuring accountability for the achievement of targets and milestones. OMB Quarterly Progress Reviews. GPRAMA also requires that OMB—with the support of the Performance Improvement Council (PIC)—review CAP goal progress quarterly with goal leaders. OMB instituted processes for reviewing progress on the goals each quarter, which involved the collection of data from goal leaders and the development of a memorandum for the OMB Director. However, the information included in these memorandums was not fully consistent with GPRAMA requirements. For example, GPRAMA requires OMB to identify strategies for improving the performance of goals at risk of not being met, but this was not consistently done. Without this information, OMB leadership and others may not be able to adequately track whether corrective actions are being taken, thereby limiting their ability to hold officials accountable for addressing identified risks and improving performance. Leading Practices for Reviews. 
At the CAP-goal level, goal leaders for two CAP goals and one sub-goal instituted in-person progress reviews with officials from contributing agencies that were broadly consistent with the full range of leading practices for reviews, such as leadership involvement in reviews of progress on identified goals and milestones, and rigorous follow-up on issues identified through these reviews. In these cases, goal managers reported there were positive effects on performance, accountability, and collaboration. In contrast, review processes used by other goal leaders did not consistently reflect the full range of leading practices. Effective review processes consistently engage leaders and agency officials in efforts to identify and address performance deficiencies, and to ensure accountability for commitments. Thus, not using them may result in missed opportunities to hold meaningful performance discussions, ensure accountability and oversight, and drive performance improvement. What GAO Recommends GAO is making seven recommendations to OMB to improve the reporting of performance information for CAP goals and ensure that CAP goal progress reviews meet GPRAMA requirements and reflect leading practices. OMB staff generally agreed to consider GAO's recommendations.
<1. Background> <1.1. Federal Civil Rights Laws, School Desegregation Litigation, and the Federal Role> On May 17, 1954, in its Brown v. Board of Education of Topeka decision, the United States Supreme Court unanimously held that state laws establishing separate but equal public schools for Blacks and Whites were unconstitutional. Ten years after this decision, a relatively small percentage of Black children in the Deep South attended integrated schools. The Civil Rights Act of 1964 prohibited discrimination in schools, employment, and places of public accommodation, and created a new role for federal agencies. Both the Department of Education s (Education) Office for Civil Rights and the Department of Justice s (Justice) Civil Rights Division s Educational Opportunities Section have some responsibility for enforcing Title VI of the Civil Rights Act of 1964, which prohibits discrimination on the basis of race, color, or national origin in programs or activities that receive federal funding, including educational institutions. In addition, Title IV of the Act authorizes Education to provide technical assistance to states or school districts in preparing, adopting, and implementing desegregation plans, to arrange for training for school personnel on dealing with educational problems caused by desegregation, and to provide grants to school boards for staff training or hiring specialists to address desegregation. Title IV of the Act also authorizes Justice to file suit in federal court to enforce the civil rights of students in public education, and Title IX of the Act authorizes Justice to intervene that is, become a party in federal discrimination lawsuits alleging constitutional violations. Further, Justice has responsibility for enforcing the Equal Educational Opportunities Act of 1974, which among other things, prohibits states from denying equal educational opportunity to individuals, including deliberate segregation of students on the basis of race, color, or national origin. To aid it in its enforcement and oversight of federal civil rights laws, Education also collects data from school districts about student characteristics and academic offerings, among other things, and compiles these data into a dataset referred to as the Civil Rights Data Collection (or Civil Rights Data). In school year 2011-12, for the first time in about a decade, Education collected these data from all K-12 public schools in the United States. It makes its Civil Rights Data available to the public so that researchers, states, and districts can conduct their own analyses. Beyond its enforcement of federal civil rights laws, Education funds several programs to support diversity in schools. Through its Magnet Schools Assistance Program, Education provides grants to local educational agencies to establish and operate magnet schools that are operated under an eligible desegregation plan. These grants are intended to assist in the desegregation of public schools by supporting the elimination, reduction, and prevention of minority group isolation in elementary and secondary schools with substantial proportions of minority group students. Additionally, through its Excellent Educators of All Initiative, Education launched a 50-state strategy to enforce a statutory provision that required states to take steps to ensure that poor and minority students are not taught by inexperienced, unqualified, or out-of- field teachers at higher rates than other students. 
Justice also monitors and enforces the implementation of any open school desegregation court order to which Justice is a party. In court cases where school districts were found to have engaged in segregation or discrimination, courts may issue orders requiring the districts to take specific steps to desegregate their schools or otherwise comply with the law. These desegregation orders may include various requirements, such as creating special schools and redrawing attendance zones in such a way as to foster more racial diversity. A federal desegregation order may be lifted when the court determines that the school district has complied in good faith with the order since it was entered and has eliminated all vestiges of past unlawful discrimination to the extent practicable, which is commonly referred to as achieving unitary status. According to Justice officials, the onus is on the school district, not Justice, to seek unitary status because Justice cannot compel a district to ask the court to lift its order. In general, if a district seeks to have a desegregation order lifted, it must file a motion for unitary status with the court. According to information we reviewed, some districts may choose to keep their order in place, even though they have successfully desegregated. Among other things, these orders, according to experts, can help to ensure that schools will not resegregate. Some of the cases that originally ordered districts to desegregate their schools back in the 1960s and 1970s are still open today. School districts that are not subject to a desegregation order may voluntarily take actions to increase the racial diversity of their schools. Court decisions have also shaped such efforts. For example, in 2007, in Parents Involved in Community Schools v. Seattle School District No. 1, the U.S. Supreme Court struck down several school districts' student assignment plans that relied on racial classification. The Court held that the districts failed to show that the use of race in their student assignment plans was necessary to achieve their goal of racial diversity, noting among other things that the racial classifications used had minimal effect on student assignments and that the districts had failed to consider race-neutral alternatives to increase diversity. <1.2. Racial and Socioeconomic Demographics of Schools> The composition of the student population in U.S. K-12 public schools has changed significantly over time. In 1975, approximately a decade after enactment of the Civil Rights Act of 1964, Black students were the largest minority group in schools, comprising 14 percent of students and with a poverty rate of about 40 percent. In school year 2013-14, Hispanic students were the largest minority group in schools (25 percent Hispanic students compared to 16 percent Black students), and both groups continue to have poverty rates two to three times higher than the rates of White students. The link between racial and ethnic minorities and poverty is long-standing, as reflected in these data. Several studies have raised concerns about students who fall at the intersection of poverty and minority status and about how this affects their access to quality education.
Of the approximately 93,400 K-12 public schools in the United States in school year 2013-14, 90 percent were traditional schools (which are often located within a neighborhood or community to serve students residing there), 7 percent were charter schools, and 3 percent were magnet schools. <1.3. Research on Student Outcomes> An extensive body of research over the past 10 years shows a clear link between schools' socioeconomic (or income) composition and student academic outcomes. That is, the nationally representative studies we reviewed (published from 2004 to 2014) showed that schools with higher concentrations of students from low-income families were generally associated with worse outcomes, and schools with higher concentrations of students from middle- and high-income families were generally associated with better outcomes. For example, one study we reviewed showed that as the average family income of a school increased, the academic achievement and attainment of students of all racial backgrounds increased. The converse was also true. For example, another study found that students attending schools with lower average family income learned at a slower pace than students attending schools where income was higher. The studies, however, paint a more nuanced picture of the effects of schools' racial composition on student academic outcomes. Specifically, while some of the studies found that having higher percentages of Black or Hispanic students resulted in weaker student outcomes, those effects were often confounded by other factors, including family income, and sometimes the racial composition of schools affected students differently. For example, one study concluded that the average family income of a school had a stronger and more negative effect on academic outcomes than the school's racial composition did, but it also found that, after controlling for other factors, as the percentage of minority students increased in a school, Hispanic students were more likely to graduate from high school, and Asian students were less likely to graduate compared to White students. In another example, a 2010 study found that, after controlling for characteristics such as average family income in the neighborhood, the percentage of Black students in a school had no effect on the likelihood of high school graduation for students of all racial groups and had a small positive effect on all students' chances of earning a bachelor's degree. See appendix III for the list of studies we reviewed. <2. The Percentage of High-Poverty Schools with Mostly Black or Hispanic Students Increased over Time, and Such Schools Tend to Have Fewer Resources> <2.1. High-Poverty Schools with Mostly Black or Hispanic Students Represent 16 Percent of All K-12 Public Schools> Over time, there has been a large increase in schools that are the most isolated by poverty and race. From school years 2000-01 to 2013-14 (most recent data available), both the percentage of K-12 public schools that were high poverty and comprised of mostly Black or Hispanic students (H/PBH) and the number of students attending these schools grew significantly. In these schools, 75 to 100 percent of the students were eligible for free or reduced-price lunch, and 75 to 100 percent of the students were Black or Hispanic. As shown in figure 1, the percentage of H/PBH schools out of all K-12 public schools increased steadily from 9 percent in 2000-01 (7,009 schools) to 16 percent in 2013-14 (15,089 schools).
See table 3 in appendix II for data separately breaking out these schools by the percent that are majority Black students and the percent that are majority Hispanic students. While H/PBH schools represented 16 percent of all K-12 public schools, they represented 61 percent of all high-poverty schools in 2013-14. See table 4 in appendix II for additional information on high-poverty schools. Further, at the other end of the spectrum, the percentage of schools that were low poverty and comprised of fewer Black or Hispanic students (L/PBH) decreased by almost half over this same time period. In L/PBH schools, 0 to 25 percent of the students were eligible for free or reduced-price lunch, and 0 to 25 percent were Black or Hispanic. In addition, more students are attending H/PBH schools than in the past. As shown in figure 2, the number of students attending H/PBH schools more than doubled, increasing by about 4.3 million students, from about 4.1 million to 8.4 million students (or from 10 percent to 17 percent of all K-12 public school students). Also, the percentage of Hispanic students is higher than that of Black students in these schools. Hispanic students tend to be triply segregated by race, income, and language, according to subject matter specialists we interviewed, and, according to Education data, they are the largest minority group in K-12 public schools. The U.S. Census Bureau projects that by 2044, minorities will be the majority in the United States. Further, among H/PBH schools, there is a subset of schools with even higher percentages of poverty and Black or Hispanic students, and growth in these schools has been dramatic. Specifically, according to our analysis of Education's data, the number of schools where 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic grew by 143 percent from school years 2000-01 to 2013-14. In school year 2013-14, these schools represented 6 percent of all K-12 public schools, and 6 percent of students attended them (see appendix II for additional information on this subset of schools). H/PBH schools are largely traditional schools; however, the percentage of H/PBH schools that are traditional schools decreased from 94 percent to 81 percent from school years 2000-01 to 2013-14. In contrast, the percentage of such schools that were charter schools and magnet schools increased over that time period, from 3 percent to 13 percent and from 3 percent to 5 percent, respectively (see fig. 3). In addition, with respect to the socioeconomic and racial composition of charter schools and magnet schools, both are disproportionately H/PBH schools. For example, in 2013-14, 13 percent of H/PBH schools were charter schools, while 5 percent of L/PBH schools were charter schools. To comply with federal law, some districts may have converted low-performing public schools to charter schools, which may have contributed, in part, to the growth among high-poverty and minority populations in charter schools. Further, 5 percent of H/PBH schools were magnet schools, while 2 percent of L/PBH schools were magnet schools. In terms of school type, the percentage of students who attended H/PBH schools decreased for traditional schools but increased among charter and magnet schools.
For traditional schools, the percentage of students dropped from 95 percent to 83 percent, even though there was an absolute increase in the number of students at H/PBH traditional schools (from 3.9 million to 6.9 million students, according to our analysis of Education's data). The percentage of students who attended H/PBH charter schools increased from 1 percent to 9 percent (55,477 to 795,679 students), and those who attended H/PBH magnet schools increased from 4 percent to 8 percent (152,592 to 667,834 students) (see fig. 4). <2.2. High-Poverty Schools with Mostly Black or Hispanic Students Generally Have Fewer Resources and More Disciplinary Actions Than Other Schools> Research shows that lower levels of income were generally associated with worse student educational outcomes (see app. III). Our analysis of Education data also showed that schools that were highly isolated by poverty and race generally had fewer resources and disproportionately more disciplinary actions than other schools. As shown in figures 5 through 9, when comparing H/PBH schools to L/PBH schools and all other schools (i.e., schools that fall outside of these two categories), disparities existed across a range of areas in school year 2011-12, the most recent year for which these data were available. Further, disparities were even greater for the subset of H/PBH schools in which 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic, across most areas analyzed. In addition, comparing just the H/PBH traditional, charter, and magnet schools, we also found differences. (See app. II for additional data, including data comparing schools in which 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic to other schools). As previously mentioned, although our analyses of Education's data showed disparities across a range of different areas, these analyses, taken alone, should not be used to make conclusions about the presence or absence of unlawful discrimination.
The Importance of Middle School Algebra, STEM courses, and AP and GATE Programs
Several academic courses and programs are especially beneficial in preparing students for college and successful careers. Among these are middle school algebra; courses in Science, Technology, Engineering, and Mathematics (STEM) fields; Advanced Placement (AP) courses; and Gifted and Talented Education (GATE) programs. According to the Department of Education, access to algebra in middle school, that is, in 7th or 8th grade, positions students to complete higher-level courses in math and science in high school, which is critical to preparing students for college and careers. Similarly, access to a full range of STEM courses in high school, such as calculus, chemistry, and physics, is important in preparing students for college and careers in high-demand fields. In addition, rigorous academic programs, such as AP and GATE, can improve student achievement and build skills that help students move toward college and career readiness. AP courses help prepare high school students for college-level courses and, upon passing the AP exam, may enable students to receive college credit. <2.2.1. Academic and College Preparatory Courses>
According to our analysis of Education's data, lower percentages of H/PBH schools offered a range of math courses compared to L/PBH schools and all other schools, with differences greatest for 7th or 8th grade algebra and calculus, and less evident for algebra II and geometry (see fig. 5). According to Education, access to algebra in 7th or 8th grade positions students to complete higher-level courses in math and science in high school, which is critical to preparing students for college and careers. Among just the H/PBH schools, a higher percentage of magnet schools offered these four math courses. Between just H/PBH traditional schools and charter schools, a higher percentage of traditional schools offered 7th or 8th grade algebra and calculus, while a higher percentage of charter schools offered algebra II and geometry (see app. II for additional data). Similarly, with respect to science courses (biology, chemistry, and physics), our analyses of Education data show disparities, with a lower percentage of H/PBH schools offering these courses compared to L/PBH schools and all other schools, with differences most evident for physics. Among just the H/PBH schools, a higher percentage of magnet schools offered all three science courses. Between just H/PBH traditional schools and charter schools, a higher percentage of charter schools offered biology and chemistry (see fig. 6). With respect to AP courses, there were also disparities, as a lower percentage of H/PBH schools offered these courses compared to L/PBH schools and all other schools. Differences were greatest between H/PBH schools (48 percent of these schools offered AP courses) and L/PBH schools (72 percent of these schools offered these courses). Among just the H/PBH schools, a higher percentage of magnet schools (83 percent) offered AP courses than did the traditional schools (50 percent) or charter schools (32 percent) (see fig. 7). In addition, among schools that offered AP courses, a lower percentage of students of all racial groups (Black, Hispanic, White, Asian, and Other) attending H/PBH schools took AP courses compared to students of all racial groups in L/PBH schools and all other schools. Specifically, among schools that offered AP courses, 12 percent of all students attending H/PBH schools took an AP course compared to 24 percent of all students in L/PBH schools and 17 percent of all students in all other schools. In addition, with respect to Gifted and Talented Education programs, or GATE, a lower percentage of H/PBH schools offered these programs compared to all other schools; however, a higher percentage of H/PBH schools offered GATE programs compared to L/PBH schools. Looking at just H/PBH schools, almost three-quarters of magnet schools and almost two-thirds of traditional schools offered this program, while less than one-fifth of charter schools offered it (see fig. 7). Students in H/PBH schools were held back in 9th grade, suspended (out-of-school), and expelled at disproportionately higher rates than students in L/PBH schools and all other schools. Specifically, although students in H/PBH schools were 7 percent of all 9th grade students, they were 17 percent of all students retained in 9th grade, according to our analysis of Education's data (see fig. 8). Further, with respect to suspensions and expulsions, there was a similar pattern.
Specifically, although students in H/PBH schools accounted for 12 percent of all students, they represented 22 percent of all students with one or more out-of-school suspensions and 16 percent of all students expelled (see fig. 9 and fig. 10). For additional information comparing students in schools with different levels of Black, Hispanic, and poor students, and by school type (traditional, charter, and magnet schools), see tables 20 and 21 in appendix II. H/PBH schools have large percentages of Hispanic students and, as expected, have a disproportionately greater percentage of students who were English Learners (EL). With respect to students with disabilities, our analysis of Education's data showed small differences across two of the school groupings we analyzed. Specifically, L/PBH schools had 19 percent of all students and 17 percent of the students with disabilities, and all other schools had 69 percent of all students and 71 percent of the students with disabilities, according to our analysis of Education's data. Further, while these comparisons show some slight differences by school in the percent of students with disabilities, Education's own analysis of these data by race showed there are differences among racial groups, with Black students overall being overrepresented among students with disabilities. <3. To Address Racial Imbalances and Demographic Shifts, Selected Districts Reported Taking Various Actions to Increase Diversity of Schools> Because their schools were largely isolated by race and poverty or had experienced large demographic shifts, the three school districts we reviewed, located in the Northeast, South, and West, reported implementing a variety of actions in an effort to increase racial and socioeconomic diversity in their schools. However, in implementing these efforts aimed at increasing diversity, school districts struggled with providing transportation to students and obtaining support from parents and the community, among other things. School District in the Northeast. The district in the Northeast, an urban, predominantly low-income, Black and Hispanic district surrounded by primarily White suburban districts, had tried for over two decades to diversify its schools, according to state officials. Despite these efforts, continued racial isolation and poverty among schools in the district prompted a group of families to file a lawsuit against the state in state court, alleging that the education students received in the urban district was inferior to that received in the more affluent, largely White suburban schools. The plaintiffs argued that the state's system of separate city and suburban school districts, which had been in place almost a century, led to racially segregated schools. The state supreme court ruled that the conditions in the district violated the state constitution, requiring the state to take action to diversify the urban district and its surrounding suburban schools. In response, the state and district took a variety of actions. In particular, the state provided funding to build several new or completely renovated state-of-the-art magnet schools within the region to attract suburban students. To attract students from the city and suburbs, the magnet schools used highly specialized curricula. For example, one newly renovated environmental sciences magnet school we visited offered theme-based instruction that allowed students to work side-by-side with resident scientists to conduct investigations and studies using a variety of technologies and tools.
Other magnet schools in this area offered different themes, such as aerospace and engineering or the performing arts. To further facilitate its efforts at diversity, the state provided funding for transportation to magnet schools, enabling suburban and urban students to more easily attend these schools. In addition, according to officials, consistent with the court order, the state required the district's magnet schools to maintain a student enrollment of no more than 75 percent minority students. However, the district faced several challenges with respect to its magnet schools. For example, officials said maintaining a certain ratio of non-minority students posed challenges. According to the district superintendent, even if there were openings, many minority students in the district were unable to attend certain magnet schools because doing so would interfere with the ratio of minorities to non-minorities the state was attempting to achieve. In addition, because assignment to magnet schools was done through a lottery, students were not guaranteed a slot in a magnet school. Officials told us that in those cases where there was not enough space in a magnet school or where admitting more minority students would disrupt the ratio of minorities to non-minorities, these students would attend their traditional neighborhood school. Because the lottery did not guarantee all students in the urban district a magnet school slot, a student also had to designate four other school options. However, without an infusion of funds similar to that available for the magnet schools, the neighborhood schools in the urban district declined, according to officials we spoke to. As a result, families that did not gain access to well-supported magnet schools resented resources spent on these schools, according to officials. Also, because the neighborhood schools were not required to maintain a specified percentage of minority students like the magnets, they, as well as the charter schools in the urban district, continued not to be very diverse, according to officials. The state also enabled students from the urban district to enroll in traditional schools (non-magnet) in the suburbs by drawing four attendance zones around the urban district. Creation of these zones reduced bus travel times for students and facilitated relationships between parents in the community whose children were attending the same suburban school, according to officials. Parents could apply for these traditional, suburban schools through the lottery, selecting up to five participating suburban school districts that were designated within their zone. If a student was not placed in one of these schools, they would attend a school in their urban district. In addition to providing transportation so that students could attend suburban schools, the state offered suburban schools grants of up to $8,000 per student, an academic and social support grant of up to $115,000 per school district, and a capital funds grant of up to $750,000 per school district. Despite these incentives, according to officials we interviewed, some families chose not to enroll their children in the suburban schools and instead opted to stay in close-by neighborhood schools, dampening the effects of the efforts to diversify. School District in the South.
The district in the South had previously been under a federal desegregation order and experienced major demographic changes, going from a district serving primarily Black and White students to one serving many other races and ethnicities as well as immigrant populations. Students in the district represented about 120 different nationalities and languages, and according to officials, this included students from Somalia, as well as Coptic Christians and Kurds from Egypt. To address the major demographic changes and help achieve diversity across more schools in the district, the district did away with its previous school attendance zones, which had generally assigned students to schools located in their geographic area or neighborhood. In its place, the district created new student assignment zones for its schools, and also hired an outside expert to help implement a new diversity plan. Specifically, under the new student assignment plan, the new zones were intended to provide greater socioeconomic and racial diversity nearer to where students lived, according to school district officials we interviewed. Under the new plan, parents were allowed to choose among schools within their attendance zones, which allowed greater choice of schools for children closer to their neighborhoods. The plan also supported students who chose to attend schools outside of these zones by providing public transit passes, while school bus transportation was provided to students who attended schools within their attendance zones. According to documents we reviewed, this district experienced challenges implementing its revised student assignment plan. Parents' choices of schools resulted in resegregation of students, prompting a complaint leading to a Department of Education investigation, as well as a federal lawsuit. According to Education officials, their investigation of the complaint found that after the school choice period was completed and students were enrolled for the school year, there was a significant increase in racial isolation in some of the schools in particular urban and suburban areas. In addition, several families and a nonprofit organization filed a federal lawsuit alleging that the implementation of the school district's revised student assignment plan was causing unconstitutional racial segregation in the district. The court upheld the plan, finding that although the plan had caused a segregative effect in the district, there was no discriminatory intent by the officials in adopting and implementing the plan. To address the concerns raised in the lawsuit, the district hired an expert to refine and develop a school diversity plan. Under this diversity plan, student diversity was defined broadly, to include language and disability, as well as race/ethnicity and income (see text box). However, even after implementing the new diversity plan, officials told us that some families in their district sent their children to private schools, rather than attend the district's public schools. These officials also said that, in their opinion, some White families in their district were less eager to have their children attend diverse schools.
Diversity Plan in a School District in the South
According to district documents, a school in the district is diverse if it meets at least one of the following racial/ethnic measures:
- enrolls multiple racial/ethnic groups, and no single group represents more than 50 percent of the school's total enrollment;
- enrolls at least three racial/ethnic groups, and each represents at least 15 percent of the school's total enrollment; or
- enrolls at least two racial/ethnic groups, and each represents at least 30 percent of the school's total enrollment;
and at least two of the following socioeconomic measures:
- the percentage of students eligible for free or reduced-price meals is at least two-thirds the average of other schools,
- the percentage of English Learners is at least two-thirds the average of other schools, or
- the percentage of students with a disability is at least two-thirds the average of other schools.
The district measures schools within their grade tier level. The typical grade tier levels are elementary school (Pre-K through 4th grade), middle school (5th through 8th grade), and high school (9th through 12th grade). (A sketch illustrating how these criteria fit together appears below.)
As part of the new diversity plan, the district is also hiring staff who reflect, to the extent possible, the diversity of the student body. Further, when making decisions about a range of matters, such as drawing school boundary lines, placing new schools, providing student transportation, and recruiting and training school staff, the plan calls for the district to consider the impact of those decisions on diversity. In addition, the district is in the process of allocating school resources with the goal of better reflecting the different needs of students in the schools (e.g., English Learners).
School District in the West. The district we visited in the West is located in a state with an open-enrollment law, which gives parents a significant degree of choice in determining the schools their children attend, including schools outside of their neighborhoods. District officials told us that, in their opinion, as a result of the state law, White students often choose not to attend certain schools in the district. District officials told us that this left a largely Hispanic and low-income student population in those schools, prompting the district to implement several actions in an attempt to diversify. Specifically, the district, led by the school board, converted some of its existing public schools into magnet schools. Further, to meet diverse student needs, the state provided additional funds for high-needs students, such as those eligible for free or reduced-price lunch, English Learners, or foster care youth. According to officials, this district struggled to diversify because parents have a significant degree of choice in where to enroll their children, magnet schools give priority to children in their neighborhood, and funding was limited for some schools. After the district implemented its diversity efforts, district officials told us that, in their opinion, some White families continued to choose schools outside the district and many other families chose to keep their children in neighborhood schools where diversity was low. In addition, the magnet schools gave priority to neighborhood children, which further hampered attempts at diversity. Further, although the district converted some of its schools to magnet schools to attract students, it provided no transportation for students, and some of the schools were converted without any upgrades to the facilities, as state funding for education declined due to an economic recession.
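To show how the pieces of the diversity definition in the text box above fit together, the following minimal Python sketch encodes the racial/ethnic and socioeconomic measures as simple checks. It is an illustration only, under one reading of the plan's wording: the function and parameter names are invented for this sketch, the "at least three groups, each at least 15 percent" clause is interpreted as counting enrolled groups at or above that share, and the averages of other schools in the same grade tier are assumed to be computed elsewhere.

```python
def meets_racial_ethnic_measure(shares: list[float]) -> bool:
    """shares: each enrolled racial/ethnic group's fraction of total enrollment.

    At least one of the plan's three racial/ethnic measures must hold.
    """
    present = [s for s in shares if s > 0]
    no_majority = len(present) >= 2 and max(present) <= 0.50   # multiple groups, none above 50 percent
    three_at_15 = sum(s >= 0.15 for s in present) >= 3         # at least three groups, each at least 15 percent
    two_at_30 = sum(s >= 0.30 for s in present) >= 2           # at least two groups, each at least 30 percent
    return no_majority or three_at_15 or two_at_30


def meets_socioeconomic_measures(pct_frl: float, pct_el: float, pct_disability: float,
                                 avg_frl: float, avg_el: float, avg_disability: float) -> bool:
    """At least two of the three rates must be at least two-thirds of the
    corresponding average of other schools in the same grade tier."""
    met = [pct_frl >= (2 / 3) * avg_frl,
           pct_el >= (2 / 3) * avg_el,
           pct_disability >= (2 / 3) * avg_disability]
    return sum(met) >= 2


def is_diverse(shares: list[float], pct_frl: float, pct_el: float, pct_disability: float,
               avg_frl: float, avg_el: float, avg_disability: float) -> bool:
    """A school counts as diverse under this reading if it meets at least one
    racial/ethnic measure and at least two socioeconomic measures."""
    return (meets_racial_ethnic_measure(shares)
            and meets_socioeconomic_measures(pct_frl, pct_el, pct_disability,
                                             avg_frl, avg_el, avg_disability))


# Example: four groups at 40/30/20/10 percent of enrollment, with free or
# reduced-price meal, English Learner, and disability rates at or above
# two-thirds of the grade-tier averages, counts as diverse under this sketch.
print(is_diverse([0.40, 0.30, 0.20, 0.10], pct_frl=70, pct_el=25, pct_disability=12,
                 avg_frl=60, avg_el=20, avg_disability=12))   # True
```

A real implementation would also depend on the district's own racial/ethnic categories and its exact method for computing the grade-tier averages, neither of which is specified in the documents described here.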
One principal we interviewed at a converted magnet school expressed frustration that his school did not have the proper signage or visual appeal to attract families. Further, principals and other school district officials we interviewed said that they struggled to reach capacity in some of their schools. In contrast, one of the magnet schools we visited was a state-of-the-art facility, with Wi-Fi, computers for every student, and 3D printers. Unlike the other magnet schools, this school had been operating as a magnet for nearly 20 years and, at the time of our review, had a waiting list. In further contrast, this school received most of its funding from private donations, at a level significant enough to fund its technology focus.
<4. Education and Justice Have Taken a Range of Actions to Address Racial Discrimination against Students, although Better Use of Available Data May Enhance These Efforts>
Education Addresses Discrimination by Conducting Investigations, Issuing Guidance, and Providing Technical Assistance
Education has taken a range of actions to address racial discrimination in schools. For example, Education has conducted investigations on its own initiative as well as investigations in response to complaints or reports of possible discrimination. Depending on the outcome of these investigations, Education may enter into agreements, called resolution agreements, which establish the actions a school or school district agrees to take to address issues found during an investigation. Education also may withhold federal funds if a recipient is in violation of the civil rights laws and Education is unable to reach agreement with the parties, although officials told us that this rarely happens. Education's agency-initiated investigations, which are called compliance reviews, target problems that appear particularly acute. Education's Office for Civil Rights launched 32 compliance reviews in fiscal years 2013 and 2014 across a range of issues related to racial discrimination. For example, in 2014 Education completed a compliance review of an entire district's disciplinary practices. As a result of that review, Education found that Black students were disproportionately represented among students subject to suspensions, other disciplinary actions, and referrals to law enforcement, and that Black students were disciplined differently from White students for similar offenses. In one instance, Education cited an example of an 8th-grade White student who was given detention for leaving class without permission, while an 8th-grade Black student was suspended for 3 days for skipping a class, even though this student had no such prior incidents. Education entered into a resolution agreement with the district to resolve the issues it identified, which, among other things, required the district to collect data to monitor its disciplinary practices for potential discrimination. The agreement also required the district to assign a staff person responsible for ensuring that disciplinary practices are equitable and to provide training for teachers and staff. In 2013, another compliance review initiated by Education of a district found that Black and Hispanic students were under-represented in high school honors and AP courses, as well as elementary and middle school advanced courses and gifted and talented programs.
To resolve these issues, Education entered into a resolution agreement with the district which, among other things, required the district to identify potential barriers to student participation in these courses, such as eligibility and selection criteria, hire a consultant to help address this issue, and provide training for district and school staff on how to encourage and retain student participation in these courses. The agreement also required the district to collect and evaluate data on an ongoing annual basis of its enrollment policies, practices, and procedures to determine whether they are being implemented in a non- discriminatory manner. Further, Education has also conducted more narrowly-focused investigations in response to complaints of discrimination, which can be filed by anyone who believes that an educational institution that receives federal funds has discriminated against someone on the basis of race, color, or national origin. According to Education, it received about 2,400 such complaints in fiscal year 2014. For example, in response to a 2011 complaint alleging that a high school s football coach subjected Black players to racial harassment and that the district failed to address it, Education launched an investigation of the district. Education found that the football coach directed racial slurs at Black players, and players who complained were harassed by their fellow students and staff, who supported the coach. Education also found that the coach did not assist Black players with obtaining athletic scholarships, even stating that athletic scholarships are for White players and financial aid is for Black players. To resolve these findings, Education negotiated a resolution agreement with the district that required the district to review and revise its harassment and discrimination policies and take appropriate steps to remedy the harassment by the coach, including appointing a new coach and offering counseling for the students. Education has also issued guidance to schools on their obligations under the federal civil rights laws, and its decision to issue such guidance may be prompted by factors such as its findings from investigations or developments in case law. For example, Education issued guidance jointly with Justice in 2014 on school discipline to assist states, districts, and schools in developing practices and strategies to enhance the atmosphere in the school and ensure those policies and practices comply with federal law. The guidance included a letter on applicable federal civil rights laws and discipline that describes how schools can meet their obligations under federal law to administer student discipline without discriminating against students on the basis of race, color, or national origin. Also in that year, Education issued guidance addressing the issue of equitable access to educational resources. Specifically, in its guidance, Education states that chronic and widespread racial disparities in access to rigorous courses, academic programs, and extracurricular activities and in other areas hinder the education of students of color today and strongly recommends that school districts proactively assess their policies and practices to ensure that students are receiving educational resources without regard to their race, color, or national origin. In addition, Education issued guidance jointly with Justice in 2011 following the 2007 U.S. 
Supreme Court decision in Parents Involved that addressed districts' voluntary use of race to diversify their schools. This guidance sets forth examples of the types of actions school districts could take to diversify their schools or avoid racial isolation, consistent with this decision and the federal civil rights laws. It states that districts should first consider approaches that do not rely on the race of individual students, for example, by using race-neutral criteria such as students' socioeconomic status, before adopting approaches that rely on individual racial classifications. For approaches that do consider a student's race as a factor, districts should ensure their approach closely fits their goals and considers race only as one factor among other non-racial considerations. Further, Education also offers technical assistance through various means, such as conducting webinars, sponsoring and presenting at conferences, and disseminating resource guides to schools and school districts. For example, at a 2015 magnet school workshop, Education officials discussed the benefits of improving diversity in the schools and the ramifications of relevant court decisions related to diversifying schools. They also offered examples of actions schools can take consistent with these court decisions to promote greater school diversity. <4.1. Analyses of Civil Rights Data by School Groupings Could Help Education Discern Further Disparities> Education uses its Civil Rights Data to identify patterns, trends, disparities, and potential discrimination by performing analysis of particular groups of students, such as by race and ethnicity, and could further enhance its current efforts by also more routinely analyzing data by school types and groupings. Analyzing data by schools may help discern patterns and trends occurring in different types of schools, such as the disparities our analysis revealed in high-poverty schools comprised of mostly Black or Hispanic students. For example, through its analysis of its Civil Rights Data, Education identified an issue nationwide with disproportionately high suspension and expulsion rates of certain groups of students by race, among other characteristics. Education uses these analyses to inform its investigations and guidance. For example, its analysis of its Civil Rights Data, which showed disparities across groups of students by race and other factors in students' access to academic courses (such as algebra and AP courses), helped inform an investigation and resulted in guidance. According to Education, it typically analyzes its data by student groups to help it identify disparities or potential discrimination against students on the basis of race, color, or national origin, consistent with the civil rights laws it enforces. While these analyses of specific groups of students are important to its enforcement responsibilities, more routinely analyzing data by different types and groupings of schools might reveal other patterns, as our own analyses show. In addition, although socioeconomic status is not a protected class under the U.S. Constitution or federal civil rights laws, research has shown that poverty (socioeconomic status) and race overlap (see app. III). By examining these two phenomena in tandem, Education has another lens for examining any possible issues at the school level.
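As a rough illustration of the school-level grouping analysis discussed above, the sketch below summarizes a table of school records by the H/PBH, L/PBH, and all-other groupings used in this report (see app. I for the definitions). It is a minimal example under stated assumptions: the column names (school_group, offers_ap, offers_calculus, offers_physics, students_suspended, enrollment) are hypothetical and do not reflect the actual layout of the Civil Rights Data files or any analysis code used by GAO or Education.

```python
import pandas as pd


def disparities_by_school_group(schools: pd.DataFrame) -> pd.DataFrame:
    """Summarize course access and out-of-school suspension rates by school grouping.

    Assumes one row per school with a "school_group" label ("H/PBH", "L/PBH",
    or "All other schools"), 0/1 indicators for whether each course is offered,
    and student counts for suspensions and total enrollment.
    """
    # Share of schools in each group that offer selected courses, in percent.
    offerings = (schools
                 .groupby("school_group")[["offers_ap", "offers_calculus", "offers_physics"]]
                 .mean()
                 .mul(100))

    # Students with one or more out-of-school suspensions as a share of enrollment.
    totals = schools.groupby("school_group")[["students_suspended", "enrollment"]].sum()
    suspension_rate = 100 * totals["students_suspended"] / totals["enrollment"]

    return offerings.assign(pct_students_suspended=suspension_rate).round(1)
```

Run against school-level records, a routine of this kind produces one summary row per grouping, mirroring the types of comparisons shown in figures 5 through 10.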
Education has used its Civil Rights Data to publish a 2014 data snapshot on school discipline that highlighted disparities by race, ethnicity, and English Learner status, among other characteristics. To illustrate where Education might enhance such an analysis, our analysis of the same data also found disparities and differences between groups of schools with disparities most evident for H/PBH schools. Further, Education s data snapshot on college and career readiness, also based on its analysis of Civil Rights Data, showed disparities in access to core subjects, such as algebra I and II, geometry, biology, chemistry, and AP courses by various student groups. Again, analyzing the same data, we also found these disparities, but we found them among schools grouped by level of poverty and among Black and Hispanic students, with disparities most acute among H/PBH schools. In addition, our analyses showed further disparities when we grouped schools by types traditional, charter, and magnet schools. For example, one of our analyses of Education s school year 2011-12 data showed that, among H/PBH schools, a higher percentage of magnet schools (83 percent) offered AP courses than did the traditional schools (50 percent) or charter schools (32 percent). While Education s analyses of its Civil Rights Data provide critical information to aid its enforcement of civil rights laws, also analyzing these data by different groupings and types of schools could provide Education with an additional layer of information that, as we found, further illuminates disparities and could enhance their efforts. Federal internal control standards state that agencies should use operational data to ensure effective and efficient use of agency resources. By analyzing its data by groupings and types of schools, Education has an opportunity to enhance its efforts and better inform guidance and technical assistance to the groups and types of schools that need it most. <4.2. Justice Addresses Discrimination by Conducting Investigations, Issuing Guidance, and Taking Legal Action> The Department of Justice s Educational Opportunities Section of the Civil Rights Division has taken several actions to address racial discrimination against students. Similar to Education, Justice conducts investigations in response to complaints or reports of possible violations. Depending on the outcome of its investigation and the circumstances of the case, Justice may take a number of actions, which could include entering into a settlement agreement with the district or initiating litigation to enforce the civil rights laws. For example, Justice investigated complaints in 2011 alleging that a student had been subject to racial harassment at a high school, which included receiving race-based death threats and retaliation for reporting the harassment. The investigation found that the district failed to adequately investigate, address, and prevent recurrence of the harassment, which resulted in the student leaving the district out of fear for her safety, and that other Black students had experienced racial harassment and retaliation. Justice entered into a settlement agreement with the district that included making revisions to the policies and procedures for handling racial harassment complaints. Justice has also intervened, that is joined in and became a party, in discrimination lawsuits. 
For example, in 2000 Justice intervened in a civil rights lawsuit against a district, alleging the district failed to appropriately address harassment of a pair of students by other students. The alleged harassment included racial slurs, including some within earshot of teachers, and racial graffiti on walls and desks. Further, one of the students was the victim of a racially motivated assault. The parties negotiated an agreement, which was adopted by the court as an order, that required the district to, among other things, maintain written records of each harassment allegation received, investigation conducted, and corrective action taken by the district to ensure a consistent and effective review of allegations. Further, as previously mentioned, Justice has issued guidance jointly with Education to ensure states and school districts understand their responsibilities to prevent and address racial discrimination in schools. Justice also monitors and enforces open federal school desegregation cases where Justice is a party to the litigation. According to Justice officials, as of November 2015 there were 178 of these cases. Justice officials told us they routinely work with districts (and other parties to the desegregation case) to close out those cases where the school district has met its statutory and Constitutional duty to desegregate. For example, in January 2015, Justice completed its compliance monitoring visits for a school district that had been operating under a series of consent orders since 1970, most recently one from 2012. Justice determined that the district had complied with the terms of the desegregation order. The parties agreed, and in May 2015 the court declared the district unitary, thus allowing the desegregation order to be lifted. Justice has also recently engaged in active litigation in several open desegregation cases. For example, in 2011, as a party to another long- standing desegregation case, Justice filed a motion asking the court to find that the district had violated its obligations under several prior desegregation orders. In 2012, the court determined, among other things, that although the district had made significant progress, two predominantly Black schools had never been desegregated, and the court ordered the district to draft a plan to improve integration at those schools. Justice officials said that they initiate action on an open desegregation case in response to various factors, including requirements from the court, complaints or inquiries they receive, or issues raised in media reports. According to Justice officials, the agency also conducts agency-initiated affirmative reviews of districts under open desegregation orders, which could include requests for additional supplemental data, site visits, and initiation of negotiations if compliance issues are identified, among other things. <4.3. Justice Does Not Systematically Track Key Data to Inform Actions on Open Desegregation Cases> As noted above, Justice is responsible for monitoring and enforcing the 178 open federal desegregation orders to which it is a party many of which originated 30 or 40 years ago. However, it does not systematically track important summary information on these orders. As a consequence, the potential exists that some cases could unintentionally languish for long periods of time. 
For example, in a 2014 opinion in a long-standing desegregation case, the court described a long period of dormancy in the case and stated that lack of activity had taken its toll, noting, among other things, that the district had not submitted the annual reports required under the consent order to the court for the past 20 years. Although the court found certain disparities in educational programs and student test results, based on the record at the time it was unable to determine when the disparities arose or whether they were a result of discrimination. The court noted that had Justice been keeping an eye on relevant information, such as disparities in test scores, it could have brought it to the court s attention more quickly, allowing the court and district to address the issue in a timely fashion. While Justice officials told us that they maintain a system to track certain identifying information about each case, which includes the case name, the court docket number, the identification number generated by Justice, and the jurisdiction where the case originated, officials were unable to provide more detailed summary information across all of the open cases, such as the date of the last action, or the nature of the last action taken. Justice officials said that to obtain such information they would have to review each individual case file, some of which are voluminous and many of which are not stored electronically. Thus, Justice officials were unable to respond with specificity as to when or the nature of the last action taken on the open orders within broad time frames of 5 years, 10 years, or 20 years ago. According to Justice s Strategic Plan, the agency has a goal to protect the rights of the American people and enforce federal law. This Plan includes an objective for implementing this goal to promote and protect American civil rights by preventing and prosecuting discriminatory practices. According to this Plan, Justice seeks to address and prevent discrimination and segregation in elementary and secondary schools. The Plan states that the extent to which societal attitudes and practices reflect a continuing commitment to tolerance, diversity, and equality affect the scope and nature of Justice s work. In addition, federal internal control standards state that routine monitoring should be a part of normal operations to allow an agency to assess how the entity being monitored is performing over time. These standards also state that agencies should use information to help identify specific actions that need to be taken and to allow for effective monitoring of activities. Specifically, the standards state that information should be available on a timely basis to allow effective monitoring of events and activities and to allow prompt reaction. Also, the standards state that information should be summarized and presented appropriately and provide pertinent information while permitting a closer inspection of details as needed. In addition, the standards state that agencies should obtain any relevant external information that may affect achievement of missions, goals, and objectives. Without a systematic way to track key information about all of the open desegregation cases, such as the date of the last action or receipt of required reports, Justice may lack the summary information needed to monitor the status of its orders. This may affect the agency s ability to effectively manage its caseload and to promote and protect civil rights. <5. 
Conclusions> More than 60 years after the Brown decision, our work shows that disparities in education persist and are particularly acute among schools with the highest concentrations of minority and poor students. Further, Black and Hispanic students are increasingly attending high-poverty schools where they face multiple disparities, including less access to academic offerings. Research has shown a clear link between a school s poverty level and student academic outcomes, with higher poverty associated with worse educational outcomes. While the districts we contacted in different areas across the nation have efforts under way to help improve the quality of education for students, the Departments of Education and Justice have roles that are critical because they are responsible for enforcing federal laws that protect students from racial discrimination and ensuring schools and districts provide all students with equitable access. In doing so, both agencies can better leverage data available to them to aid their guidance, enforcement, and oversight efforts. Education has ongoing efforts to collect data that it uses to identify potential discrimination and disparities across key groups of students, but it has not routinely analyzed its data in a way that may reveal larger patterns among different types and groups of schools. As a result, the agency may miss key patterns and trends among schools that could enhance its efforts. In addition, Justice is a party to 178 federal desegregation orders that remain open, but Justice does not track key summary information about the orders that would allow them to effectively monitor their status. Without systematically tracking such information, the agency may lack information that could help in its enforcement efforts. <6. Recommendations for Executive Action> We recommend that the Secretary of Education direct Education s Office for Civil Rights to more routinely analyze its Civil Rights Data Collection by school groupings and types of schools across key elements to further explore and understand issues and patterns of disparities. For example, Education could use this more detailed information to help identify issues and patterns among school types and groups in conjunction with its analyses of student groups. We recommend that the Attorney General of the United States direct the Department of Justice s Civil Rights Division to systematically track key summary information across its portfolio of open desegregation cases and use this data to inform its monitoring of these cases. Such information could include, for example, dates significant actions were taken or reports received. <7. Agency Comments and Our Evaluation> We provided a draft of this report to the Departments of Education and Justice for their review and comment. Education s written comments are reproduced in appendix IV, and Justice s written comments are reproduced in appendix V. Education also provided technical comments, which we incorporated into the report, as appropriate. In its written comments, Education stated that its Office for Civil Rights already analyzes its Civil Rights Data Collection (Civil Rights Data) in some of the ways we recommend, and in light of our recommendation, it will consider whether additional analysis could augment the Office for Civil Rights core civil rights enforcement mission. 
Specifically, Education said it is planning to conduct some of the analysis suggested in our recommendation for future published data analysis based on the 2013- 2014 Civil Rights Data and will consider whether additional analysis would be helpful. Education also stated it is committed to using every tool at its disposal to ensure all students have access to an excellent education. In addition, Education stated that when appropriate, the Office for Civil Rights often uses the types of analyses recommended by GAO in its investigations. It also noted that racial disparities are only one potential element for investigations of potential discrimination. Education also said that it publishes reports based on the Civil Rights Data, referring to the Office for Civil Rights published data snapshots on College and Career Readiness and Teacher Equity, which we reviewed as part of this study. We found they do provide some important information about schools with high and low levels of minority populations. Further, Education stated that the disaggregations of the data that we presented in our report were the type of specialized analysis that the Office for Civil Rights encourages users outside the agency to explore. While we recognize the important ways Education is currently using its data and the additional analyses it is considering and planning in the future, it was our intent in making the recommendation that Education more routinely examine the data for any disparities and patterns across a key set of data elements by the school groupings we recommended. Further, while we support the engagement of researchers and other interested stakeholders outside the agency, we also believe that Education should conduct these analyses as part of its mission to provide oversight. We believe that by doing so, Education will be better positioned to more fully understand and discern the nature of disparities and patterns among schools. In light of Education s response about its data analysis efforts, which we agree are consistent with good practices to use agency resources effectively and efficiently, we modified the recommendation and report accordingly. We now specify in the recommendation that Education should more routinely analyze its Civil Rights Data across key elements in the ways recommended by our report to help it identify disparities among schools. We believe that such analysis will enhance current efforts by identifying and addressing disparities among groups and types of schools helping, ultimately, to improve Education s ability to target oversight and technical assistance to the schools that need it most. In its written comments, Justice stated it believes its procedures for tracking case-related data are adequate. Nevertheless, consistent with our recommendation, Justice said it is currently developing an electronic document management system that may allow more case-related information to be stored in electronic format. Justice agreed that tracking information concerning its litigation docket is important and useful and that it shares our goal of ensuring it accurately and adequately tracks case-related information. However, Justice also stated that our report fails to appreciate the extensive amount of data the agency maintains on its desegregation cases, which it maintains primarily for the purpose of litigation. 
Justice stated that it tracks and preserves information received from school districts and all case-related correspondence and pleadings, and because the data it collects are used to litigate each individual case, it does not track such data across cases. We understand Justice's need to maintain voluminous case-specific evidentiary files, some of which are maintained in hard copy. It was out of recognition for the extensive nature of these files that we recommended Justice also have a way to track key summary information across its cases. Such summary information would allow for timely and effective monitoring and for prompt reaction, in accordance with federal standards for internal control. Further, Justice said various terms in our recommendation, such as "systematically" or "key," were not clear or well defined. In deference to the agency's expertise, in making the recommendation we intentionally used broad language that would allow Justice to make its own judgments about what would best serve its mission. Justice also said it is concerned that the report could be read to suggest that racial disparities within a public school district constitute per se evidence of racial discrimination. Although our report does not make this statement, we have added language to further clarify that data on disparities alone are not sufficient to establish unlawful discrimination. With respect to the report's description of a selected desegregation case, Justice stated it was concerned with the emphasis we placed on one comment in the lengthy court opinion ("if Justice had been keeping an eye on relevant information"), which it said was based solely on the absence of entries on the court's docket sheet. Justice said that in this case and in many others, it is engaged in a range of related activities, such as site visits and settlement agreements, which are not recorded on the courts' docket sheets. We appreciate that courts may not be aware of all of Justice's activities in any one case; however, we believe this case illustrates how important it is for Justice to have timely information about its cases and how better information tracking could help the agency better manage and oversee its caseload. Also, with respect to this case, Justice commented that the existence of disparities in test scores alone is not sufficient to trigger a remedy under Justice's legal authority, and that Justice must consider multiple factors before taking action in a case. We have clarified in the report that data on disparities taken alone are insufficient to establish unlawful discrimination. While we understand that tracking such information may not necessarily trigger action by Justice in any particular case, the case described was selected to serve as an example of the potential benefits of more proactive tracking of information in these cases. Further, Justice said it was concerned the report could be read to suggest that some cases have remained dormant or languished for long periods of time as a result of Justice's tracking system, without sufficient appreciation for the responsibilities of the school districts and courts in advancing and resolving the cases (such as by achieving unitary status). In the draft report on which Justice commented, we stated that the onus is on the district, not Justice, to seek unitary status. We have amended the final report to state this more prominently.
However, while we acknowledge the key roles of the districts and the courts in resolving and advancing a desegregation case, the focus of our report is on the federal role, and Justice, too, plays an important role in litigating these cases, a role we believe would be enhanced by improving its tracking of information about the cases. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Education and the Attorney General, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Scope and Methodology The objectives of this study were to examine: (1) how the percentage of schools with high percentages of poor and Black or Hispanic students has changed over time and the characteristics of these schools, (2) why and how selected school districts have implemented actions to increase student diversity, and (3) the extent to which the Departments of Education (Education) and Justice (Justice) have taken actions to identify and address issues related to racial discrimination in schools. <8. Analysis of Federal Datasets> <8.1. Population Focus and Definitions> To answer our objectives, we analyzed the (1) poverty level of schools and (2) Black and Hispanic student composition of schools as a basis for grouping and comparing schools. We measured poverty at the school level using the percentage of students eligible for free or reduced-price lunch. A student is generally eligible for free or reduced-price lunch based on federal income eligibility guidelines that are tied to the federal poverty level and the size of the family. We focused on Black and Hispanic students because they are the two largest minority groups in U.S. K-12 public schools, and existing research has suggested that these groups experience disparities in school. The thresholds and measure of poverty discussed here and below were commonly used in the literature and also align with how Education analyzes its data. We categorized schools for our analysis based on both the percentage of students in a school eligible for free or reduced-price lunch and the percentage of Black or Hispanic students collectively in a school (see table 1). We divided our data into three school groups, as follows (a simplified illustration of this grouping appears after the list): 1. Schools whose student populations were composed of 0 to 25 percent students eligible for free or reduced-price lunch (i.e., low-poverty) and 0 to 25 percent Black or Hispanic students (referred to as "L/PBH schools"), 2. Schools whose student populations were composed of 75 to 100 percent students eligible for free or reduced-price lunch (i.e., high-poverty) and 75 to 100 percent Black or Hispanic students (referred to as "H/PBH schools"), and 3. Schools that fall outside of these two categories (referred to as "all other schools").
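To illustrate the grouping described above, a minimal sketch in Python (using the pandas library) is shown below. This is not GAO's or Education's actual code; the input data and the column names (frl_pct, black_hisp_pct) are hypothetical placeholders for school-level measures derived from the CCD, and the thresholds simply restate the definitions in table 1.

import pandas as pd

def assign_school_group(frl_pct, black_hisp_pct):
    # frl_pct: percent of students eligible for free or reduced-price lunch.
    # black_hisp_pct: percent of students who are Black or Hispanic.
    # Both values are assumed to range from 0 to 100.
    if frl_pct <= 25 and black_hisp_pct <= 25:
        return "L/PBH"             # low-poverty, 0-25 percent Black or Hispanic
    if frl_pct >= 75 and black_hisp_pct >= 75:
        return "H/PBH"             # high-poverty, 75-100 percent Black or Hispanic
    return "All other schools"     # everything outside the two categories

# Hypothetical school-level extract, one row per school; schools missing either
# measure are dropped first, consistent with the exclusions described in this appendix.
schools = pd.DataFrame({
    "school_id":      [1, 2, 3, 4],
    "frl_pct":        [12.0, 81.5, 92.0, 55.0],
    "black_hisp_pct": [10.0, 78.0, 95.0, 40.0],
}).dropna(subset=["frl_pct", "black_hisp_pct"])

schools["group"] = schools.apply(
    lambda row: assign_school_group(row["frl_pct"], row["black_hisp_pct"]), axis=1
)
print(schools.groupby("group").size())

The supplemental 90 to 100 percent grouping discussed immediately below could be flagged in the same way by adding an analogous condition with a 90 percent threshold.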
Because the literature also suggests that schools with even higher concentrations of Black and Hispanic students and poverty face disparities that are even more acute, we also analyzed the group of schools in which 90 to 100 percent of the students were eligible for free or reduced-price lunch and 90 to 100 percent of the students were Black or Hispanic. These schools represent 6 percent of all K-12 public schools and are included in appendix II for further comparison. Our analyses of Education's data in this report are intended to describe selected characteristics of these schools; they should not be used to draw conclusions about the presence or absence of unlawful discrimination. <8.2. Analysis of the Common Core of Data> To describe how the percentage and characteristics of schools with different levels of poverty among students and Black or Hispanic students have changed over time, we analyzed schools with both the highest and lowest percentages of poverty and Black or Hispanic students, as well as schools with all other percentages of these groups (see table 1). We used Education's Common Core of Data (CCD) from school years 2000-01, 2005-06, 2010-11, and 2013-14, the most recent year of data available for these analyses. CCD is administered by Education's National Center for Education Statistics, which annually collects non-fiscal data about all public schools, as well as fiscal and non-fiscal data on public school districts and state education agencies in the United States. The data are supplied by state education agency officials describing their schools and school districts. Data elements include the name, address, and phone number of the school or school district; demographic information about students and staff; and fiscal data, such as revenues and current expenditures. To assess the reliability of these data, we reviewed technical documentation and interviewed relevant officials from Education. Based on these efforts, we determined that these data were sufficiently reliable for our purposes. The data in the CCD represent the full universe of all U.S. K-12 public schools. To further understand the trends underlying the growth or decline of these categories of schools, we examined whether any variation in growth existed by region (Northeastern, Midwestern, Southern, and Western areas of the United States) and school type (traditional neighborhood schools, charter schools, and magnet schools). For our analysis of the CCD, we excluded schools that did not report information on (1) free or reduced-price lunch, which we used as a proxy to categorize the poverty level of the school, or (2) the number of Black or Hispanic students, which we used to categorize the level of Black or Hispanic students in the school. For school year 2000-01, we included 78,194 schools and excluded 16,520 schools; for school year 2005-06, we included 91,910 schools and excluded 8,717 schools; for school year 2010-11, we included 94,612 schools and excluded 7,413 schools; and for school year 2013-14, we included 93,458 schools and excluded 7,633 schools. Because CCD collects information on the universe of schools, these exclusions would not affect our overall findings. There are several sources of non-sampling error associated with the CCD, which is self-reported and collected from the universe of schools and school districts. Non-sampling errors can be introduced in many ways.
For example, they can result from errors in data processing or data entry, or when respondents misinterpret survey questions, do not follow survey instructions, or do not apply the item definitions correctly. Further, while CCD's coverage of traditional public schools and school districts is very complete, its coverage of publicly funded education outside of traditional school districts varies across states and jurisdictions. Some states do not report schools that are administered by state organizations other than state educational agencies. Examples include charter schools authorized by an organization that is not a school district, schools sponsored by health and human services agencies within a state, and juvenile justice facilities. In recent years, Education has increased efforts to identify schools that may be underreported by state educational agencies. Further, because this information is self-reported, there is also the potential for misreporting of information. Education attempts to minimize these errors in several ways, including through training, extensive quality reviews, and data editing. <8.3. Analysis of the Civil Rights Data Collection> To examine additional characteristics of the schools that students attended, we analyzed data from the public use file of Education's Civil Rights Data Collection (referred to as the Civil Rights Data in this report) for school year 2011-12, which was the most recent year of data available. The Civil Rights Data, collected on a biennial basis, consists of data on the nation's public schools, including student characteristics and enrollment; educational and course offerings; disciplinary actions; and school environment, such as incidents of bullying. To assess the reliability of these data, we reviewed technical documentation and interviewed relevant officials from Education. Based on these efforts, we determined that these data were sufficiently reliable for our purposes. The Civil Rights Data is part of the overall strategy of Education's Office for Civil Rights for administering and enforcing the federal civil rights statutes for which it is responsible. While this information was collected from a sample of schools in previous years, it was collected from the full universe of all U.S. K-12 public schools in 2011-12. By analyzing these data across the school categories in table 1, we were able to present data on the differences in the availability of courses offered among schools with different levels of poverty among students and Black or Hispanic students. For example, we were able to analyze differences among schools with respect to school offerings, such as advanced math and science courses as well as advanced academic programs, Advanced Placement courses, and Gifted and Talented Education programs. We were also able to examine differences in the level of disciplinary incidents (such as students receiving more than one out-of-school suspension, arrests related to school activity, and bullying) and in the percentage of English Learners and students with disabilities. We also examined the numbers of full-time teachers with more than one year of experience, licensed and certified teachers, and teacher absences. The data also allowed us to analyze differences by type of school: traditional neighborhood schools, charter schools, and magnet schools (see app. II).
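As a rough illustration of the comparisons described above, and of the dataset matching described in the next paragraph, the following sketch merges a hypothetical Civil Rights Data extract with the CCD-based school groups and tabulates one offering by group. It is a simplified sketch, not the agencies' or GAO's actual procedure; the identifiers and column names (school_id, offers_ap) are assumed placeholders.

import pandas as pd

# Hypothetical extracts: `ccd` carries the school groupings assigned earlier,
# and `crdc` carries a Civil Rights Data element for the same schools.
ccd = pd.DataFrame({
    "school_id": [1, 2, 3, 4],
    "group":     ["L/PBH", "H/PBH", "H/PBH", "All other schools"],
})
crdc = pd.DataFrame({
    "school_id": [1, 2, 3, 5],
    "offers_ap": [1, 0, 0, 1],   # 1 = school reports offering Advanced Placement courses
})

# Keep only schools present in both datasets, mirroring the exclusion of
# non-matching schools described in the methodology.
merged = ccd.merge(crdc, on="school_id", how="inner")

# Percent of schools in each group that offer Advanced Placement courses.
ap_by_group = merged.groupby("group")["offers_ap"].mean().mul(100).round(1)
print(ap_by_group)

The same pattern extends to other elements, such as advanced math and science courses or disciplinary measures, by substituting the relevant indicator.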
For this analysis, we matched schools in the Civil Rights Data for school year 2011-12 (the most recent year for which Civil Rights Data are available) to schools in the CCD for school year 2011-12 and excluded schools for which there was not a match. Further, from the Civil Rights Data, we also excluded schools that did not report (1) free or reduced-price school lunch, which we used as a proxy to categorize the poverty level of the school, or (2) the number of Black or Hispanic students, which we used to categorize the level of Black or Hispanic students in the school. As a result, our analysis of the Civil Rights Data for school year 2011-12 included 95,635 schools and excluded 5,675 schools. In the report, we present different years for the Civil Rights Data and CCD and, as a result, the numbers and percentages of schools and students derived from these two sets of data will not match. As with the CCD, the school year 2011-12 Civil Rights Data covered the full universe of schools and districts, with response rates of 99.2 and 98.4 percent, respectively. These data are also subject to non-sampling error, and because these data are self-reported, there is also the potential for misreporting of information. For these data, Education put in place quality control and editing procedures to reduce errors. Further, for the school year 2011-12 Civil Rights Data, respondents were to answer each question on the Civil Rights Data survey prior to certification. Null or missing data prevented a school district from completing its Civil Rights Data submission to Education's Office for Civil Rights. Therefore, in cases where a school district may not have had complete data, some schools or districts may have reported a zero value in place of a null value. It is not possible to identify all situations where this may have occurred. As a result, the item response rates may be positively biased. Further, within this dataset there are outliers that likely represent misreported values. These outliers had the potential to heavily influence state or national totals. To ensure the integrity of the state and national totals, the Office for Civil Rights suppressed outliers identified by data quality rules. These rules flagged inconsistent and implausible values for suppression. To mitigate the potential for suppressions that distort aggregate totals, suppressed data were replaced with imputed data where possible. For example, where the number of students disciplined exceeded the number in membership, the number was set to the number of students in membership. <9. School District Site Visits> We selected a school district in each of three states (one each in the Northeast, South, and West) and interviewed officials to describe why and how selected school districts have taken actions to address the diversity of their schools. We selected states to include different regions of the country, and we selected school districts within these states that had taken action to increase diversity. Within these districts, the schools we visited were selected to include a mix of grade levels (elementary, middle, and high school), school types (traditional public and magnet), and locations (urban and suburban). To select districts, we relied on recommendations from subject matter specialists and a review of available information. For example, we reviewed the school districts that had participated in Education's Voluntary Public School Choice grant program.
Information from the districts we contacted is illustrative and not meant to reflect the situation in other districts with similar efforts. In the districts we selected, we interviewed different stakeholders, such as school district superintendents, school board members, state education officials, community leaders, and school officials. We conducted these interviews in person (in two locations) or by phone. During our interviews, we collected information about issues related to racial and socioeconomic diversity in public schools, including types of actions implemented to increase diversity, reasons for implementing the actions, challenges faced in implementing the actions, and comments about federal actions in this area. In addition to interviewing officials, in some locations we toured schools to learn more about how and why various actions were implemented at those schools. We provided the relevant sections of a draft of this report to the appropriate officials from each district for their review. We did not assess the extent to which the selected districts have achieved any diversity goals or complied with any applicable court orders. Because we selected the school districts judgmentally, we cannot generalize the findings about the actions officials took to address diversity to all school districts and schools nationwide. <10. Review of Federal Actions to Address School Diversity> To assess the actions taken by the Departments of Education and Justice to address issues related to racial discrimination in schools, we interviewed agency officials and reviewed relevant federal laws, regulations, and agency documents. With both agencies, we interviewed officials about each agency s responsibilities with respect to federal civil rights laws and regulations, as well as the actions the agencies took to enforce them. With Education officials, we discussed the agency s investigations, guidance, and data collection, and we reviewed agency procedures, selected documents from recently concluded investigations, and guidance documents. With Justice officials, we discussed the agency s litigation activities, investigations, and guidance and reviewed agency procedures and guidance documents, as well as certain documents from selected court cases, including selected desegregation orders. We assessed agencies actions using guidance on internal controls in the federal government related to oversight and monitoring as well as agency guidance and strategic plans. We also interviewed representatives of civil rights organizations and academic experts to discuss issues related to racial and socioeconomic diversity in public schools, including actions taken by school districts to increase diversity and federal actions to enforce federal civil rights laws with respect to race in public schools. <11. Research on Student Outcomes> We identified studies about the effect that the racial and socioeconomic composition of K-12 public schools has on various student outcomes, using specific terms to search several bibliographic databases. From these searches, we used studies published between 2004 and 2014 on U.S. students, as these studies are more reflective of current students and their outcomes. We looked at studies concerned primarily with the effect of socioeconomic composition of schools, or racial composition of schools, or both factors together. 
The studies selected were based on nationally representative samples of students that allowed us to examine the socioeconomic or racial composition of the schools, and the studies analyzed the effect these school-level characteristics had on student academic outcomes, such as test scores, grade point average, high school graduation or dropout rates, and/or college enrollment using research methodologies that controlled for potentially confounding factors. We excluded from consideration some studies based on factors including outdated data, limited scope, or research methods that failed to control for multiple factors when assessing outcomes. Although the findings of the studies we identified are not representative of the findings of all studies looking at whether a school s racial or socioeconomic composition affects student outcomes, they provide examples of published and peer-reviewed research that used strong research designs to assess these effects. See appendix III for the list of studies we reviewed. We conducted this performance audit from November 2014 through April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Additional Analyses of Schools with Different Levels of Poverty and Black or Hispanic Students and Their Students, Using Common Core of Data and the Civil Rights Data Collection This appendix contains the results of our additional analyses to examine trends and disparities among schools with different levels of poverty among students and Black or Hispanic students. For these analyses, we used school- and student-level data from both the Common Core of Data (CCD) for selected school years from 2000-01 to 2013-14 and the Civil Rights Data Collection (Civil Rights Data) for school year 2011-12. This information is presented as a supplement to the findings presented in this report; however, we noted in the report when the information in these tables helped inform our findings. <12. Additional Analyses of Schools and Students Using CCD> These tables present the results of our additional analyses that used school- and student-level data from the Common Core of Data for students attending K-12 public schools. The tables include data on schools by different poverty levels and different concentrations of Black or Hispanic students, and data on students who attend these schools. For both schools and students, we present additional data by school type (traditional, charter, and magnet schools) and by region of country. <13. Additional Analyses of Schools and Students Using the Civil Rights Data Collection> These tables present the results of our additional analyses that used school- and student-level data from the Civil Rights Data Collection. The tables provide data on academic courses and programs offered, including advanced math and science courses and Advanced Placement and Gifted and Talented Education Programs. We also present school- and student-level data on retention and disciplinary incidents, including out-of- school suspensions, expulsions, reports of bullying, and school-related arrests, as well as data on special populations, such as English Learners and students with disabilities. 
We also present data on teaching-related variables, including teacher experience, certification and licensing, and absences. We present these data by different levels of poverty, Black or Hispanic students, and school type (traditional, charter, and magnet schools). Appendix III: List of Studies on Student Outcomes We Reviewed The following studies examined the effects of poverty and/or racial composition of schools on student outcomes: Aikens, Nikki L. and Oscar Barbarin. Socioeconomic Differences in Reading Trajectories: The Contribution of Family, Neighborhood, and School Contexts. Journal of Educational Psychology, vol. 100, no. 2 (2008): 235-251. Berends, Mark and Roberto Peñaloza. Increasing Racial Isolation and Test Score Gaps in Mathematics: A 30-Year Perspective. Teachers College Record, vol. 112, no. 4 (2010): 978-1007. Borman, Geoffrey D. and Maritza Dowling. Schools and Inequality: A Multilevel Analysis of Coleman's Equality of Educational Opportunity Data. Teachers College Record, vol. 112, no. 5 (2010): 1201-1246. Condron, Dennis J. Social Class, School and Non-School Environments, and Black/White Inequalities in Children's Learning. American Sociological Review, vol. 74, no. 5 (2009): 683-708. Crosnoe, Robert. Low-Income Students and the Socioeconomic Composition of Public High Schools. American Sociological Review, vol. 74, no. 5 (2009): 709-730. Goldsmith, Pat Rubio. Schools or Neighborhoods or Both? Race and Ethnic Segregation and Educational Attainment. Social Forces, vol. 87, no. 4 (2009): 1913-1941. Harris, Douglas N. Lost Learning, Forgotten Promises: A National Analysis of School Racial Segregation, Student Achievement, and Controlled Choice Plans. Center for American Progress. Washington, D.C.; 2006. Logan, John R., Elisabeta Minca, and Sinem Adar. The Geography of Inequality: Why Separate Means Unequal in American Public Schools. Sociology of Education, vol. 85, no. 3 (2012): 287-301. McCall, Martha S., Carl Hauser, John Cronin, G. Gage Kingsbury, and Ronald Houser. Achievement Gaps: An Examination of Differences in Student Achievement and Growth. Northwest Evaluation Association. Portland, OR; 2006. Mickelson, Roslyn Arlin, Martha Cecilia Bottia, and Richard Lambert. Effects of School Racial Composition on K-12 Mathematics Outcomes: A Metaregression Analysis. Review of Educational Research, vol. 83, no. 1 (2013): 121-158. Owens, Ann. Neighborhoods and Schools as Competing and Reinforcing Contexts for Educational Attainment. Sociology of Education, vol. 83, no. 4 (2010): 287-311. Palardy, Gregory J. High School Socioeconomic Segregation and Student Attainment. American Educational Research Journal, vol. 50, no. 4 (2013): 714-754. Palardy, Gregory J. Differential School Effects Among Low, Middle, and High Social Class Composition Schools: A Multiple Group, Multilevel Latent Growth Curve Analysis. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, vol. 19, no. 1 (2008): 21-49. Riegle-Crumb, Catherine and Eric Grodsky. Racial-Ethnic Differences at the Intersection of Math Course-Taking and Achievement. Sociology of Education, vol. 83, no. 3 (2010): 248-270. Rumberger, Russell W. Parsing the Data on Student Achievement in High-Poverty Schools. North Carolina Law Review, vol. 85 (2007): 1293-1314. Rumberger, Russell W. and Gregory J. Palardy. Does Segregation Still Matter? The Impact of Student Composition on Academic Achievement in High School. Teachers College Record, vol. 107, no. 9 (2005): 1999-2045. Ryabov, Igor.
Adolescent Academic Outcomes in School Context: Network Effects Reexamined. Journal of Adolescence, vol. 34 (2011): 915-927. Ryabov, Igor and Jennifer Van Hook. School Segregation and Academic Achievement Among Hispanic Children. Social Science Research, vol. 36 (2007): 767-788. van Ewijk, Reyn and Peter Sleegers. Peer Ethnicity and Achievement: A Meta-Analysis Into the Compositional Effect. Tier Working Paper Series (2010). Appendix IV: Comments from the Department of Education Appendix V: Comments from the Department of Justice Appendix VI: GAO Contact and Staff Acknowledgments <14. GAO Contact> <15. Staff Acknowledgments> In addition to the contact named above, Sherri Doughty (Assistant Director), Linda Siegel (Analyst-in-Charge), Rachel Beers, Lisa Brown, Grace Cho, Sarah Cornetto, Camille Henley, John Mingus, Anna Maria Ortiz, and David Reed made key contributions to this report. Also contributing to this report were Deborah Bland, Holly Dye, Farrah Graham, Kirsten Lauber, Mimi Nguyen, and Cady Panetta.
Why GAO Did This Study Recent literature shows that poor and minority students may not have full access to educational opportunities. GAO was asked to examine poverty and race in schools and efforts by the Departments of Education and Justice, which are responsible for enforcing federal civil rights laws prohibiting racial discrimination against students. This report examined (1) how the percentage of schools with high percentages of poor and Black or Hispanic students has changed over time and the characteristics of these schools, (2) why and how selected school districts have implemented actions to increase student diversity, and (3) the extent to which the Departments of Education and Justice have taken actions to identify and address issues related to racial discrimination in schools. GAO analyzed Education data for school years 2000-01 to 2013-14 (most recent available); reviewed applicable federal laws, regulations, and agency documents; and interviewed federal officials, civil rights and academic subject matter specialists, and school district officials in three states, selected to provide geographic diversity and examples of actions to diversify. What GAO Found The percentage of K-12 public schools in the United States with students who are poor and are mostly Black or Hispanic is growing and these schools share a number of challenging characteristics. From school years 2000-01 to 2013-14 (the most recent data available), the percentage of all K-12 public schools that had high percentages of poor and Black or Hispanic students grew from 9 to 16 percent, according to GAO's analysis of data from the Department of Education (Education). These schools were the most racially and economically concentrated: 75 to 100 percent of the students were Black or Hispanic and eligible for free or reduced-price lunch—a commonly used indicator of poverty. GAO's analysis of Education data also found that compared with other schools, these schools offered disproportionately fewer math, science, and college preparatory courses and had disproportionately higher rates of students who were held back in 9th grade, suspended, or expelled. In the three districts GAO reviewed as case studies, officials reported implementing various actions to increase economic and racial diversity to address racial or other demographic shifts in school composition. For example, in one predominantly low-income, Black and Hispanic school district, the state and district created state-of-the-art magnet schools to attract students from more economically and racially diverse groups. However, these three districts faced challenges. For example, one state devoted funding to magnet schools while the district's traditional schools declined in quality, according to local officials. Further, according to officials, some magnets with openings could not accept minority students because doing so would interfere with the ratio of minority to non-minority students that the district was trying to achieve. The Departments of Education and Justice have taken a range of actions to identify and address racial discrimination against students. Education has investigated schools, analyzed its data by student groups protected under federal civil rights laws, and found discrimination and disparities in some cases. 
GAO analyzed Education's data among types of schools (charters, magnets, and traditional public schools) by percentage of racial minorities and a proxy for poverty level and found multiple disparities, including in access to academic courses. Education does not routinely analyze its data in this way. Conducting this type of analysis would enhance Education's ability to target technical assistance and identify other disparities by school types and groups. The Department of Justice (Justice) has also investigated discrimination claims, and it monitors and enforces 178 open federal desegregation court cases to which it is a party, many of which originated 30 or 40 years ago to remedy segregation. However, GAO found that Justice does not track key summary case information, such as the last action taken in a case. As a result, some cases may unintentionally remain dormant for long periods. For example, in one case the court noted there had been a lack of activity and that if Justice had “been keeping an eye” on relevant information, such as test score disparities, the issue could have been addressed in a more timely way. Federal internal control standards state that agencies should use information to help identify specific actions that need to be taken to allow for effective monitoring. Without tracking key information about open cases, Justice's ability to effectively monitor such cases is hampered. What GAO Recommends GAO recommends that Education more routinely analyze its civil rights data to identify disparities among types and groups of schools and that Justice systematically track key information on open federal school desegregation cases to which it is a party to better inform its monitoring. In response, both agencies are considering actions in line with GAO's recommendations.
<1. Background> Since the early 1990s, hundreds of residential treatment programs and facilities have been established in the United States by state agencies and private companies. Many of these programs are intended to provide a less- restrictive alternative to incarceration or hospitalization for youth who may require intervention to address emotional or behavioral challenges. As mentioned earlier, it is difficult to obtain an overall picture of the extent of this industry. According to a 2006 report by the Substance Abuse and Mental Health Services Administration, state officials identified 71 different types of residential treatment programs for youth with mental illness across the country. A wide range of government or private entities, including government agencies and faith-based organizations, can operate these programs. Each residential treatment program may focus on a specific client type, such as those with substance abuse disorders or suicidal tendencies. In addition, the programs provide a range of services, either on-site or through links with community programs, including educational, medical, psychiatric, and clinical/mental health services. Regarding oversight of residential treatment programs, states have taken a variety of approaches ranging from statutory regulations that require licensing to no oversight. States differ in how they license and monitor the various types of programs in terms of both the agencies involved and the types of requirements. For example, some states have centralized licensing and monitoring within a single agency, while other states have decentralized these functions among three or more different agencies. There are currently no federal laws that define and regulate residential treatment programs. However, three federal agencies the Departments of Health and Human Services, Justice, and Education administer programs that can provide funds to states to support eligible youth who have been placed in some residential treatment programs. For example, the Department of Health and Human Services, through its Administration for Children and Families, administers programs that provide funding to states for a wide range of child welfare services, including foster care, as well as improved handling, investigation, and prosecution of youth maltreatment cases. In addition to the lack of a standard, commonly recognized definition for residential treatment programs, there are no standard definitions for specific types of programs wilderness therapy programs, boot camps, and boarding schools, for instance. For our purposes, we define these programs based on the characteristics we identified during our review of the 10 case studies. For example, in the context of our report, we defined wilderness therapy program to mean a program that places youth in different natural environments, including forests, mountains, and deserts. Figure 1 shows images we took near the wilderness therapy programs we visited. According to wilderness therapy program material, these settings are intended to remove the distractions and temptations of modern life from teens, forcing them to focus on themselves and their relationships. Included as part of a wilderness training program, participants keep journals that often include entries related to why they are in the program and their experiences and goals while in the wilderness. These journals, which program staff read, are part of the individual and group therapy provided in the field. 
As part of the wilderness experience, these programs also teach basic survival skills, such as setting up a tent and camp, starting a fire, and cooking food. Figure 2 is photo montage of living arrangements for youth enrolled in the wilderness programs we visited. Some wilderness therapy programs may include a boot camp element. However, many boot camps (which can also be called behavioral modification facilities) exist independently of wilderness training. In the context of our report, a boot camp is a residential treatment program in which strict discipline and regime are dominant principles. Some military- style boot camp programs also emphasize uniformity and austere living conditions. Figure 3 is a photo montage illustrating a boot camp which minimizes creature comfort and emphasizes organization and discipline. A third type of residential treatment program is known as a boarding school. Although these programs may combine wilderness or boot camp elements, boarding schools (also called academies) are generally advertised as providing academic education beyond the survival skills a wilderness therapy program might teach. This academic education is sometimes approved by the state in which the program operates and may also be transferable as elective credits toward high school. These programs often enroll youth whose parents force them to attend against their will. The schools can include fences and other security measures to ensure that youth do not leave without permission. Figure 4 shows some of the features boarding schools may employ to keep youth in the facilities. A variety of ancillary services related to residential treatment programs are available for an additional fee in some programs. These services include: Referral services and educational consultants to assist parents in selecting a program. Transport services to pick up a youth and bring him or her to the program. Parents frequently use a transport service if their child is unwilling to attend the program. Additional individual, group, or family counseling or therapy sessions as part of treatment. These services may be located on the premises or nearby. Financial services, such as loans, to assist parents in covering the expense of residential treatment programs. These services are marketed toward parents and, with the exception of financial services, are not regulated by the federal government. <2. Widespread Allegations of Abuse and Death at Residential Treatment Programs> We found thousands of allegations of abuse, some of which involved death, at public and private residential treatment programs across the country between the years 1990 and 2007. We are unable to identify a more concrete number of allegations because we could not locate a single Web site, federal agency, or other entity that collects comprehensive nationwide data related to this issue. Although the NCANDS database, operated by the Department of Health and Human Services, collects some data from states, data submission is voluntary and not all states with residential treatment programs contribute information. According to the most recent NCANDS data, during 2005 alone 33 states reported 1,619 staff members involved in incidents of abuse in residential programs. Because of limited data collection and reporting, we could not determine the numbers of incidents of abuse and death associated with private programs. It is important to emphasize that allegations should not be confused with proof of actual abuse. 
However, in terms of meeting our objective, the thousands of allegations we found came from a number of sources besides NCANDS. For example: We identified claims of abuse and death in pending and closed civil or criminal proceedings with dozens of plaintiffs alleging abuse. For instance, according to one pending civil lawsuit filed as recently as July 2007, dozens of parents allege that their children were subjected to over 30 separate types of abuse. We found attorneys around the country who represent youth and groups of youth who allege that abuse took place while these youth were enrolled in residential treatment programs. For example, an attorney based in New Jersey with whom we spoke has counseled dozens of youth who alleged they were abused in residential treatment programs in past cases, as has another attorney, a retired prosecutor, who advocates for abuse victims. We found that allegations are posted on various Web sites advocating for the shutdown of certain programs. Past participants in wilderness programs and other youth residential treatment programs have individually or collectively set up sites claiming abuse and death. The Internet contains an unknown number of such Web sites. One site on the Internet, for example, identifies over 100 youth who it claims died in various programs. In other instances, parents of victims who have died or were abused in these programs have similarly set up an unknown number of Web sites. Conversely, there are also an unknown number of sites that promote and advocate the benefits of various programs. Because there are no specific reporting requirements or definitions for private programs in particular, we could not determine what percentage of the thousands of allegations we found are related to such programs. There is likely a small percentage of overlapping allegations given our inability to reconcile information from the sources we used. <3. Cases of Death at Selected Residential Treatment Programs> We selected 10 closed cases from private programs to examine in greater detail. Specifically, these cases were focused on the death of a teenager in a private residential treatment program that occurred between 1990 and 2004. We found significant evidence of ineffective management in most of these 10 cases, with many examples of how program leaders neglected the needs of program participants and staff. In some cases, program leaders gave their staff bad advice when they were alerted to the health problems of a teen. In other cases, program leaders appeared to be so concerned with boosting enrollment that they told parents their programs could provide services that they were not qualified to offer and could not provide. Several cases reveal program leaders who claimed to have credentials in therapy or medicine that they did not have, leading parents to trust them with teens who had serious mental or physical disabilities requiring proper treatment. These ineffective management techniques compounded the negative consequences of (and sometimes directly resulted in) the hiring of untrained staff; a lack of adequate nourishment; and reckless or negligent operating practices, including a lack of adequate equipment. These specific factors played a significant role in most of the deaths we examined. Untrained staff. A common theme of many of the cases we examined is that staff misinterpreted legitimate medical emergencies. 
Rather than recognizing the signs of dehydration, heat stroke, or illness, staff assumed that a dying teen was in fact attempting to use trickery to get out of the program. This resulted in the death of teenagers from common, treatable illnesses. In some cases, teens who fell ill from less- common ailments exhibited their symptoms for many days, dying slowly while untrained staff continued to believe the teen was faking it. Unfortunately, in almost all of our cases, staff only realized that a teen was in distress when it was already too late. Lack of adequate nourishment. In many cases, program philosophy (e.g., tough love ) was taken to such an extreme that teenagers were undernourished. One program fed teenagers an apple for breakfast, a carrot for lunch, and a bowl of beans for dinner while requiring extensive physical activity in harsh conditions. Another program forced teenagers to fast for 2 days. Teenagers were also given equal rations of food regardless of their height, weight, or other dietary needs. In this program, an ill teenager lost 20 percent of his body weight over the course of about a month. Unbeknownst to staff, the teenager was simultaneously suffering from a perforated ulcer. Reckless or negligent operating practices. In at least two cases, program staff set out to lead hikes in unfamiliar territory that they had not scouted in advance. Important items such as radios and first aid kits were left behind. In another case, program operators did not take into account the need for an adjustment period between a teenager s comfortable home life and the wilderness; this endangered the safety of one teenager, who suddenly found herself in an unfamiliar environment. State licensing initiatives attempt, in part, to minimize the risk that some programs may endanger teenagers through reckless and negligent practices; however, not all programs we examined were covered by operating licenses. Furthermore, some licensed programs deviated from the terms of their licenses, leading states, after the death of a teen, to take action against programs that had flouted health and safety guidelines. See table 1 for a summary of the cases we examined. <3.1. Case One> The victim was a 15-year-old female. Her parents told us that she was a date-rape victim who suffered from depression, and that in 1990 she enrolled in a 9-week wilderness program in Utah to build confidence and improve her self-esteem. The victim and her parents found out about the program through a friend who claimed to know the owner. The parents of the victim spoke with the owner of the program several times and reviewed brochures from the owner. The brochure stated that the program s counselors were highly trained survival experts and that the professional experience and expertise of its staff was unparalleled. The fees and tuition for the program cost a little over $20,600 (or about $327 per day). The victim and her parents ultimately decided that this program would meet their needs and pursued enrollment. The victim s parents said they trusted the brochures, the program owner, and the program staff. However, the parents were not informed that the program was completely new and that their daughter would be going on the program s first wilderness trek. Program staff were not familiar with the area, relied upon maps and a compass to navigate the difficult terrain, and became lost. As a result, they crossed into the state of Arizona and wandered onto Bureau of Land Management (BLM) land. 
According to a lawsuit filed by her parents, the victim complained of general nausea, was not eating, and began vomiting water on about the third day of the 5-day hike. Staff ignored her complaints and thought she was faking it to get out of the program. Police documents indicate that the two staff members leading the hike stated that they did not realize the victim was slowly dehydrating, despite the fact that she was vomiting water and had not eaten any food. On the fifth day of the hike, the victim fell several times and was described by the other hikers as being in distress. It does not appear that staff took any action to help her. At about 5:45 p.m. on the fifth day, the victim collapsed in the road and stopped breathing. According to police records, staff did not call for help because they were not equipped with radios instead, they performed CPR and attempted to signal for help using a signal fire. CPR did not revive the victim; she died by the side of the road and her body was covered with a tarp. The following afternoon, a BLM helicopter airlifted her body to a nearby city for autopsy. The death certificate for the victim states that she died of dehydration due to exposure. Although local police investigated the death, no charges were filed. Utah officials wanted to pursue the case, but they did not have grounds to do so because the victim died in Arizona. The parents of the victim filed a civil suit and settled out of court for an undisclosed sum. Soon after the victim s death and 6 months after opening, the founder closed the program and moved to Nevada, where she operated in that state until her program was ordered to close by authorities there. In a hearing granting a preliminary judgment that enjoined the operator of the program, the judge said that he would not shelter this program, which was in effect hiding from the controls of the adjoining state. He chastised the program owner for running a money-making operation while trying to escape the oversight of the state, writing, wishes to conduct a wilderness survival program for children for profit, without state regulation and she hide the children from the investigating state authorities and appear uncooperative towards them. He expressed further concerns, including a statement that participants in the program did not appear to be receiving adequate care and protection and that qualified and competent counselors were not in charge of the program. The judge also noted that one of the adult counselors was an ex-felon and a fugitive. After this program closed, the program founder returned to Utah and joined yet another program where another death occurred 5 years later (this death is detailed in case seven). We found that the founder of this residential treatment program had a history in the industry prior to opening the program discussed in this case, she worked as an administrator in the program covered in another case (case two). Today, the program founder is still working in the industry as a consultant, providing advice to parents who may not know of her history. <3.2. Case Two> The victim was a 16-year-old female who had just celebrated her birthday. According to her mother, in 1990 the victim was enrolled in a 9-week wilderness therapy program because she suffered from depression and struggled with drug abuse. The victim s mother obtained brochures from the program owner and discussed the program with him and other program staff. 
According to the mother, the program owner answered all her questions and really sold the program. She told us: I understood there would be highly trained and qualified people with who could handle any emergency they boasted of a 13-year flawless safety record, I thought to myself why should I worry? Why would anything happen to her? Believing that the program would help her daughter, the victim s mother and stepfather secured a personal loan to pay the $25,600 in tuition for the program (or about $400 per day). She also paid about $4,415 to have a transport service come to the family home and take her daughter to the program. The victim s mother and stepfather hired the service because they were afraid their daughter would run away when told that she was being enrolled in the program. According to the victim s mother, two people came to the family home at 4 a.m. to take her daughter to the program s location in the Utah desert, where a group hike was already under way. Three days into the program, the victim collapsed and died while hiking. According to the program brochure, the first 5 days of the program are days and nights of physical and mental stress with forced march, night hikes, and limited food and water. Youth are stripped mentally and physically of material facades and all manipulatory tools. After the victim collapsed, one of the counselors on the hike administered CPR until an emergency helicopter and nurse arrived to take the victim to a hospital, where she was pronounced dead. According to the victim s mother, her daughter died of exertional heatstroke. The program had not made any accommodation or allowed for any adjustment for the fact that her daughter had traveled from a coastal, sea-level residence in Florida to the high desert wilderness of Utah. The mother of the victim also said that program staff did not have salt tablets or other supplies that are commonly used to offset the affects of heat. Shortly after the victim died, the 9-week wilderness program closed. A state hearing brought to light complaints of child abuse in the program and the owner of the program was charged with negligent homicide. He was acquitted of criminal charges. However, the state child protective services agency concluded that child abuse had occurred and placed the owner on Utah s registry of child abusers, preventing him from working in the state at a licensed child treatment facility. Two other program staff agreed to cooperate with the prosecution to avoid standing trial; these staff were given probation and prohibited from being involved with similar programs for up to 5 years. In 1994, the divorced parents of the victim split a $260,000 settlement resulting from a civil suit against the owner. After this program closed, its owner opened and operated a number of domestic and foreign residential treatment programs over the next several years. Although he was listed on the Utah registry of suspected child abusers, the program owner opened and operated these programs elsewhere many of which were ultimately shut down by state officials and foreign governments because of alleged and proven child abuse. At least one of these programs is still operating abroad and is marketed on the Internet, along with 10 other programs considered to be part of the same network. As discussed above, the program owner in our first case originally worked in this program as an administrator before it closed. <3.3. Case Three> The victim was a 16-year-old male. 
According to his parents, in 1994 they enrolled him in a 9-week wilderness therapy program in Utah because of minor drug use, academic underachievement, and association with a new peer group that was having a negative impact on him. The parents learned of the program from an acquaintance and got a program brochure that looked great in their opinion. They thought the program was well-suited for their son because it was an outdoor program focusing on small groups of youth who were about the same age. They spoke with the program owner and his wife, who flew to Phoenix, Arizona, to talk with them. To be able to afford the program s cost of about $18,500 (or $263 per day), the victim s parents told us they took out a second mortgage on their house. They also paid nearly $2,000 to have their son transported to the campsite in the program owner s private plane. At the time they enrolled their son, the parents were unaware that this program was started by two former employees of a program where a teenager had died (this program is discussed in our second case). According to the victim s father, his son became sick around the 11th day of the program. According to court and other documents, the victim began exhibiting signs of physical distress and suffered from severe abdominal pain, weakness, weight loss, and loss of bodily functions. Although the victim collapsed several times during daily hikes, accounts we reviewed indicate that staff ignored the victim s pleas for help. He was forced to continue on for 20 days in this condition. After his final collapse 31 days into the program, staff could not detect any respiration or pulse. Only at this time did staff radio program headquarters and request help, although they were expected to report any illnesses or disciplinary incidents and had signed an agreement when employed stating that they were responsible for the safety and welfare of fellow staff members and students. The victim was airlifted to a nearby hospital and was pronounced dead upon arrival. The 5-foot 10-inch victim, already a thin boy, had dropped from 131 to 108 pounds a loss of nearly 20 percent of his body weight during his month-long enrollment. The victim s father told us that when he was notified of his son s death, he could only think that some terrible accident had occurred. But according to the autopsy report, the victim died of acute peritonitis an infection related to a perforated ulcer. This condition would have been treatable provided there had been early medical attention. The father told us that the mortician, against his usual policy, showed him the condition of his son s body because it was something that needed to be investigated. The victim s father told us he buckled at the knees when he saw the body of his son emaciated and covered with cuts, bruises, abrasions, blisters, and a full-body rash; what he saw was unrecognizable as his son except for a childhood scar above the eye. In the wake of the death, the state revoked the program s operating license. According to the state s licensing director, the program closed 3 months later because the attorney general s office had initiated an investigation into child abuse in the program, although no abuse was found after examining the 30 to 40 youth who were also enrolled in the program when the victim died. The state attorney general s office and a local county prosecutor filed criminal charges against the program owners and several staff members. 
After a change of venue, one defendant went to trial and was convicted of abuse or neglect of a disabled child in this case. Five other defendants pleaded guilty to a number of other charges five guilty pleas on negligent homicide and two on failure to comply with a license. The defendants in the case were sentenced to probation and community service. The parents of the victim subsequently filed a civil suit that was settled out of court for an undisclosed amount. <3.4. Case Four> The victim was a 15-year-old male. According to the victim s mother, in 2000 she enrolled her son in a wilderness program in Oregon to build his confidence and develop self-esteem in the wake of a childhood car accident. The accident had resulted in her son sustaining a severe head injury, among other injuries. After an extensive Internet search and discussions with representatives of various wilderness programs and camps for head-injury victims, the mother told us she selected a program that she believed would meet her son s needs. What sold me on the program, she said, was the program owner s repeated assurances over the telephone that the program was a perfect fit for her son. She told us that to pay for the $27,500 program, she withdrew money from her retirement account. The program was between 60 to 90 days (about $305 to $450 per day) depending on a youth s progression through the program. The victim s mother said that she became suspicious about the program when she dropped her son off. She said that the program director and another staff person disregarded her statements about her son s likes and dislikes, despite believing that the program would take into account the personal needs of her son. Later, she filed a lawsuit alleging that the staff had no experience dealing with brain-injured children and others with certain handicaps who were in the program. What she also did not know was that the founder of the program was himself a former employee of two other wilderness programs in another state where deaths had occurred (we discuss these programs in cases two and three). The program founder also employed staff who had been charged with child abuse while employed at other wilderness programs. According to her lawsuit, her son left the program headquarters on a group hike with three counselors and three other students. Several days into the multiday hike, while camping under permit on BLM land, the victim refused to return to the campsite after being escorted by a counselor about 200 yards to relieve himself. Two counselors then attempted to lead him back to the campsite. According to an account of the incident, when he continued to refuse, they tried to force him to return and they all fell to the ground together. The two counselors subsequently held the victim face down in the dirt until he stopped struggling; by one account a counselor sat on the victim for almost 45 minutes. When the counselors realized the victim was no longer breathing, they telephoned for help and requested a 9-1-1 operator s advice on administering CPR. The victim s mother told us that she found out about the situation when program staff called to tell her that her son was being airlifted to a medical center. Shortly afterwards, a nurse called and urged her to come to the hospital with her husband. They were not able to make it in time on the drive to the hospital, her son s doctor called, advised her to pull to the side of the road, and informed her that her son had died. 
The victim s mother told us that she was informed, after the autopsy, that the main artery in her son s neck had been torn. The cause of death was listed as a homicide. In September 2000, after the boy s death, one of the counselors was charged with criminally negligent homicide. A grand jury subsequently declined to indict him. The victim s mother told us that at the grand jury hearing, she found out from parents of other youth in the program that they had been charged different amounts of money for the same program, and that program officials had told them what they wanted to hear about the program s ability to meet each of their children s special needs. In early 2001, the mother of the victim filed a $1.5 million wrongful death lawsuit against the program, its parent company, and its president. The lawsuit was settled in 2002 for an undisclosed amount. Due in part to the victim s death, in early 2002, Oregon implemented its outdoor licensing requirements. The state s Department of Justice subsequently filed a complaint alleging numerous violations of the state s Unlawful Trade Practices Act and civil racketeering laws, including charges that the program misrepresented its safety procedures and criminally mistreated enrolled youth. In an incident unconnected to this case, the program was also charged with child abuse related to frostbite. As a result of these complaints, in February of 2002, the program entered into agreement with the state s attorney general to modify program operations and pay a $5,000 fee. The program continued to work with the State of Oregon throughout 2002 to comply with the agreement. In the summer of 2002, BLM revoked the camping permit for the program due, in part, to the victim s death. The program closed in December of 2002. <3.5. Case Five> The victim was a 14-year-old male. According to his father, in 2001 the victim was enrolled in a private West Virginia residential treatment center and boarding school. He told us that his son had been diagnosed with clinical depression, had attempted suicide twice, was on medication, and was being treated by a psychiatrist. Because their son was having difficulties in his school, the parents in consultation with their son s psychiatrist decided their son would benefit by attending a school that was more sensitive to their son s problems. To identify a suitable school, the family hired an education consultant who said he was a member of an educational consultants association and that he specialized in matching troubled teens with appropriate treatment programs. The parents discussed their son s personality, medical history (including his previous suicide attempts), and treatment needs with the consultant. According to the father, the consultant quickly recommended the West Virginia school. The program was licensed by the state and cost almost $23,000 (or about $255 per day). According to the parents and court documents, the victim committed suicide 6 days into the program. On the day before he killed himself, while participating in the first phase of the program ( survival training ), the victim deliberately cut his left arm four times from wrist to elbow using a pocket knife issued to him by the school. After cutting himself, the victim approached a counselor and showed him what he had done, pleading with the counselor to take the knife away before he hurt himself again. He also asked the counselor to call his mother and tell her that he wanted to go home. 
The counselor spoke with the victim, elicited a promise from him not to hurt himself again, and gave the knife back. The next evening, the victim hanged himself with a cord not far from his tent. Four hours passed before the program chose to notify the family about the suicide. When the owner of the program finally called the family to notify them, according to the father, the owner said, "There was nothing we could do." In the aftermath of the suicide, the family learned that the program did not have any procedures for addressing suicidal behavior, even though it had marketed itself as being able to provide appropriate therapy to its students. Moreover, one of the program owners, whom the father considered the head therapist, did not have any formal training to provide therapy. The family also learned that the owner and another counselor had visited their son's campsite, as previously scheduled, the day he died. During this visit, field staff told them about the self-inflicted injury and statements the victim had made the night before. According to the father, the owner then advised field staff that the victim was being manipulative in an attempt to be sent home, and that the staff should ignore him to discourage further manipulative behavior. The owners and the program were indicted by a grand jury on criminal charges of child neglect resulting in death. According to the transcript, the judge who was assigned to the case pushed the parties not to choose a bench trial to avoid a lengthy and complicated trial. The program owner pleaded no contest to the charge of child neglect resulting in death, with a fine of $5,000, in exchange for dismissal of charges. The state conducted an investigation into the circumstances and initially planned to close the program. However, the program owners negotiated an agreement with the state not to shut down the program in exchange for a change of ownership and management. According to the victim's father, the family of the victim subsequently filed a civil suit, and a settlement was reached for $1.2 million, which included the owners admitting and accepting personal responsibility for the suicide. This program remains open and operating. Within the last 18 months, a group of investors purchased the program and is planning to open and operate other programs around the country, according to the program administrators with whom we spoke. As part of our work, we also learned that the program has a U.S. Forest Service permit; however, because it has not filed all required usage reports or paid required permit fees in almost 8 years, it is in violation of the terms of the permit. We estimate that the program owes the U.S. Forest Service tens of thousands of dollars, although we could not calculate the actual debt. <3.6. Case Six> The victim was a 14-year-old male. According to police documents, the victim's mother enrolled him in a military-style Arizona boot camp in 2001 to address behavioral problems. The mother told us that she thought it would be a good idea. In addition, she told us that her son suffered from some hearing loss, a learning disability, Attention Deficit Hyperactivity Disorder (ADHD), and depression. To address these issues, her son was taking medication and attending therapy sessions. According to the mother, her son's therapist had recommended the program, which he described as a "tough love" program and what her son needed.
The mother said she trusted the recommendation of her son's therapist; in addition, she spoke with other parents who had children in the program, who also recommended the program to her. She initially enrolled her son in a daytime Saturday program in the spring of 2001 so he could continue attending regular school during the week. Because her son continued to have behavioral problems, she then enrolled him in the program's 5-week summer camp, which she said cost between $4,600 and $5,700 (between $131 and $162 per day). Her understanding was that strenuous program activities took place in the evening and that during the day youth would be in the shade. Police documents indicate about 50 youth between the ages of 6 and 17 were enrolled in the summer program. According to police, youth were forced to wear black clothing and to sleep in sleeping bags placed on concrete pads that had been standing in direct sunlight during the day. Both black clothing and concrete absorb heat. Moreover, according to documents subsequently filed by the prosecutor, youth were fed an insufficient diet of a single apple for breakfast, a single carrot for lunch, and a bowl of beans for dinner. On the day the victim died, the temperature was approximately 113 degrees Fahrenheit, according to the investigating detective. His report stated that on that day, the program owner asked whether any youth wanted to leave the program; he then segregated those who wanted to leave the program, which included the victim, and forced them to sit in the midday sun for several hours while the other participants were allowed to sit in the shade. Witnesses said that while sitting in the sun, the victim began eating dirt because he was hungry. Witnesses also stated that the victim had become delirious and dehydrated, saying that he "saw water everywhere" and "had to chase the Indians." Later on, the victim appeared to have a convulsive seizure, but the camp staff present felt he was faking, according to the detective's report. One staff member reported that the victim had a pulse rate of 180, more than double what is considered a reasonable resting heart rate for a teenager. The program owner then directed two staff and three youth enrolled in the program to take the victim to the owner's room at a nearby motel to cool him down and clean up. They placed the victim in the flatbed of a staff member's pickup truck and drove to the motel. Over the next several hours, the following series of events occurred. In the owner's hotel room, the limp victim was stripped and placed into the shower with the water running. The investigating detective told us that the victim was left alone for 15 to 20 minutes for his privacy. During this time, one of the two staff members telephoned the program owner about the victim's serious condition; the owner is said to have told the staff person that "everything will be okay." However, when staff members returned to the bathroom, they saw the victim facedown in the water. The victim had defecated and vomited on himself. After cleaning up the victim, a staff member removed him from the shower and placed him on the hotel room floor. Another staff member began pressing the victim's stomach with his hands, at which point, according to the staff member's personal account, mud began oozing out of the victim's mouth. The staff member then used one of his feet to press even harder on the victim's stomach, which resulted in the victim vomiting even more mud and a rock about the size of a quarter.
At this point, a staff member again called the owner to say the boy was not responding; the owner instructed them to take the victim back to the camp. They placed the victim in the flatbed of the pickup truck for the drive back. Staff placed the victim on his sleeping bag upon returning to camp. He was reportedly breathing at this time, but then stopped breathing and was again put in the back of the pickup truck to take him for help. However, one staff member expressed his concern that the boy would die unless they called 9-1-1 immediately. The county sheriff s office reported receiving a telephone call at approximately 9:43 p.m. that evening saying a camp participant had been eating dirt all day, had refused water, and was now in an unconscious state and not breathing. This is the first recorded instance in which the program owner or staff sought medical attention for the victim. Instructions on how to perform CPR were given and emergency help was dispatched. The victim was pronounced dead after being airlifted to a local medical center. The medical examiner who conducted the autopsy expressed concern that the victim had not been adequately hydrated and had not received enough food while at the camp. His preliminary ruling on the cause of death was that of near drowning brought on by dehydration. After a criminal investigation was conducted, the court ultimately concluded that there was clear and convincing evidence that program staff were not trained to handle medical emergencies related to dehydration and lack of nutrition. The founder (and chief executive officer) of the program was convicted in 2005 of felony reckless manslaughter and felony aggravated assault and sentenced to 6-year and 5- year terms, respectively. He was also ordered to pay over $7,000 in restitution to the family. In addition, program staff were convicted of various charges, including trespassing, child abuse, and negligent homicide but were put on probation. According to the detective, no staff member at the camp was trained to administer medication or basic medical treatment, including first aid. The mother filed a civil suit that was settled for an undisclosed amount of money. The program closed in 2001. <3.7. Case Seven> The victim was a 16-year-old female. Because of defiant, violent behavior, her parents enrolled her in a Utah wilderness and boarding school program in 2001, which was a state-licensed program for youth 13 to 18 years old. The 5 month program cost around $29,000 (or about $193 per day) and operated on both private and federal land. The parents also hired a transport service at a cost of over $3,000 to take their daughter to the program. We found that the director and another executive of this wilderness program had both worked at the same program discussed in our second case and the executive owned the program discussed in our first case. According to program documents and the statements of staff members, a group hiking in this program would normally require three staff one in front leading the hike, one in the middle of the group, and one at the end of the group. However, this standard structure had been relaxed on the day the victim fell. It was Christmas Day, and only one staff member accompanied four youth. 
While hiking in a steep and dangerous area that staff had not previously scouted out, the victim ran ahead of the group with two others, slipped on a steep rock face, and fell more than 50 feet into a crevasse according to statements of the other two youth one of whom ran back to inform the program staff of the accident. The staff radioed the base camp to report the accident, then called 9-1-1. One of the staff members at the accident scene was an emergency medical technician (EMT) and administered first aid. However, in violation of the program licensing agreement, the first aid kit they were required to have with them had been left at the base camp. An ambulance arrived about 1 hour after the victim fell. First responders decided to have the victim airlifted to a medical center, but the helicopter did not arrive until about 1-1/2 hours after they made the decision to call for an airlift. According to the coroner s report, the victim died about 3 weeks later in a hospital without ever regaining consciousness. She had suffered massive head trauma, a broken arm, broken teeth, and a collapsed lung. As a result of the death, the state planned to revoke the program s outdoor youth program license based on multiple violations. In addition to an inappropriate staff-to-child ratio (four youth for one staff member, rather than three to one), failure to prescreen the hiking area, and hiking without a first aid kit, the state identified the following additional license violations: Program management did not have an emergency or accident plan in place. Two of the four staff members who escorted the nine youth in the wilderness had little experience one had 1 month of program experience and the other had 9 days. Neither of them had completed the required staff training. The two most senior staff members on the trip had less than 6 months of wilderness experience but they remained at the camp while other two inexperienced staff members led the hike. A lawsuit filed by the family in November 2002 claims that the program did not take reasonable measures to keep the youth in the program safe, especially given the hiking inexperience of the youth and the insufficient number of staff. Specifically, the suit claims that the program s executive director waited for an hour before calling assistance after the victim fell. Additionally, the suit claims that staff only had one radio and no medical equipment or emergency plan. The parents filed an initial lawsuit for $6 million but eventually settled in 2003 for $200,000 before attorneys fees and health insurance reimbursement were taken out. The program closed in May 2002 due to fiscal insolvency. However, its parent program a boarding school licensed by the state is still in operation. We have not been able to determine whether the wilderness director at the time of the victim s death is still in the industry. However, the other program executive remains in the industry, working as a referral agent for parents seeking assistance in identifying programs for troubled youth. <3.8. Case Eight> The victim, who died in 2002, was a 15-year-old female. The parents of the victim told us that she suffered from depression, suicidal thoughts, and bipolar disorder. She also reportedly had a history of drug use, including methamphetamines, marijuana, and cocaine. Her parents explained that they selected a program after researching several programs and consulting with an educational advisor. 
Although the program was based in Oregon, it operated a 3-week wilderness program in Nevada, which was closer to the family home. The total cost of the program was over $9,200 (or about $438 per day), which included a nonrefundable deposit and over $300 for equipment. The parents of the victim drove their daughter several hundred miles to enroll her in the program. Because of the distance involved, they stayed overnight in a motel nearby. The next day, when the parents arrived home, they found a phone message waiting for them; it was from the program, saying that their daughter had been in an accident and that she was receiving CPR. According to documents we reviewed, three staff members led seven students on a hike on the first day of the program. The victim fell several times while hiking. The last time she fell, she lost muscle control and had difficulty breathing. The EMT on the expedition had recently completed classroom certification and had no practical field experience. While the staff called for help, the EMT and other staff began CPR and administered epinephrine doses to keep her heart beating during the 3 hours it took a rescue helicopter to arrive. The victim was airlifted to a nearby hospital, where she was pronounced dead. The victim's death was ruled an accident by the coroner: heat stroke complicated by drug-induced dehydration. According to other youth on the hike, they were aware the victim had taken methamphetamines prior to the hike. The victim had had a drug screening done 1 week before entering the program; she tested positive for methamphetamine, which the program director knew but the staff did not. However, the program did not make a determination as to whether detoxification was necessary, which was required by the state where the program was operating (Nevada), according to a court document. The victim was also taking prescribed psychotropic medications, which affected her body's ability to regulate heat and remain hydrated. At the time the victim died, this private wilderness treatment program had been in operation for about 15 years in Oregon. Although it claimed to be accredited by the Joint Commission on Health Care Organizations, this accreditation covered only the base program, not the wilderness program or its drug and alcohol component in which the victim participated. Moreover, even though the wilderness program attended by the victim had been running for 2 years, it was not licensed to operate in Nevada. The district attorney's office declined to file criminal child abuse and neglect charges against two program counselors, although those charges had been recommended by investigating officers. The parents of the victim were never told why criminal charges were not filed. They subsequently filed a civil lawsuit against the program and settled for an undisclosed sum. Two other deaths occurred in this program shortly after the first: one resulted from a previously unknown heart defect and the other from a fallen tree. Although the wilderness program had a federal permit to operate in Nevada, it was not licensed by that state. After the death, that state investigated and ordered the program closed. The parent company had (and continues to maintain) state licenses in Oregon to operate as a drug and alcohol youth treatment center, an outpatient mental health facility, and an outdoor youth facility, as well as federal land permits from BLM and the U.S. Forest Service.
According to program officials, the program has modified its procedures and policies it no longer enrolls youth taking the medication that affected the victim s ability to regulate her body temperature. <3.9. Case Nine> The victim was a 14-year-old male who died in July 2002. According to documents we reviewed, the mother of the victim placed her son in this Utah wilderness program to correct behavioral problems. The victim kept a journal with him during his stay at the program. It stated that he had ADHD and bipolar disorder. His enrollment form indicates that he also had impulse control disorder and that he was taking three prescription medications. His physical examination, performed about 1 month before he entered the program, confirms that he was taking these medications. We could not determine how much the program cost at the time. According to documents we reviewed, the victim had been in the program for about 8 days when, on a morning hike on BLM land, he began to show signs of hyperthermia (excessively high body temperature). He sat down, breathing heavily and moaning. Two staff members, including one who was an EMT, initially attended to him, but they could not determine if he was truly ill or simply faking a problem to get out of hiking. When the victim became unresponsive and appeared to be unconscious, the staff radioed the program director to consult with him. The director advised the staff to move the victim into the shade. The director also suggested checking to see whether the victim was feigning unconsciousness by raising his hand and letting go to see whether it dropped onto his face. They followed the director s instructions. Apparently, because the victim s hand fell to his side rather than his face, the staff member who was an EMT concluded that the victim was only pretending to be ill. While the EMT left to check on other youth in the program, a staff member reportedly hid behind a tree to see whether the victim would get up reasoning that if the victim were faking sickness, he would get up if he thought nobody was watching. As the victim lay dying, the staff member hid behind the tree for 10 minutes. He failed to see the victim move after this amount of time, so he returned to where the victim lay. He could not find a pulse on the victim. Finally realizing that he was dealing with a medical emergency, the staff member summoned the EMT and they began CPR. The program manager was contacted, and he called for emergency help. Due to difficult terrain and confusion about the exact location of the victim, it took over an hour for the first response team to reach the victim. An attempt to airlift the victim was canceled because a rescue team determined that the victim was already dead. According to the coroner s report, the victim died of hyperthermia. State Department of Human Services officials initially found no indication that the program had violated its licensing requirements, and the medical examiner could not find any signs of abuse. Subsequently, the Department of Human Services ruled that there were, in fact, licensing violations, and the state charged the program manager and the program owner with child abuse homicide (a second degree felony charge). The program manager was found not guilty of the charges; additionally, it was found that he did not violate the program s license regarding water, nutrition, health care, and other state licensing requirements. 
Moreover, the court concluded that the State did not prove that the program owner engaged in reckless behavior. Later that year, however, an administrative law judge affirmed the Department of Human Services decision to revoke the program s license after the judge found that there was evidence of violations. The owner complied with the judge and closed the program in late 2003. About 16 months later, the owner applied for and received a new license to start a new program. According to the Utah director of licensing, as of September 2007, there have been no problems with the new program. We could not find conclusive information as to whether the parents of the victim filed a civil case and, if so, what the outcome was. <3.10. Case Ten> The victim was a 15-year-old male. According to investigative reports compiled after his death, the victim s grades dropped during the 2003 2004 school year and he was withdrawing from his parents. His parents threatened to send him to a boarding or juvenile detention facility if he did not improve during summer school in 2004. The victim ran away from home several times that summer, leading his frustrated parents to enroll him in a boot camp program. When they told him about the enrollment, he ran away again the day before he was taken to the program in a remote area of Missouri. The 5-month program describes itself as a boot camp and boarding school. Because it is a private facility, the state in which it is located does not require a license. According to Internet documents, the program costs almost $23,000 (or about $164 per day). Investigative documents we reviewed indicate that at the time the parents enrolled the teenager, he did not have any issues in his medical history. Staff logs indicate that the victim was considered to be a continuous problem from the time he entered the program he did not adhere to program rules and was otherwise noncompliant. By the second day of the boot camp phase of the program, staff noticed that the victim exhibited an oozing bump on his arm. School records and state investigation reports showed that the victim subsequently began to complain of muscle soreness, stumbled frequently, and vomited. As days passed, students noticed the victim was not acting normally, and reported that he defecated involuntarily on more than one occasion, including in the shower. Staff notes confirmed that the victim defecated and urinated on himself numerous times. Although he was reported to have fallen frequently and told staff he was feeling weak or ill, the staff interpreted this as being rebellious. The victim was taken down forced to the floor and held there on more than one occasion for misbehaving, according to documents we reviewed. Staff also tied a 20-pound sandbag around the victim s neck when he was too sick to exercise, forcing him to carry it around with him and not permitting him to sit down. Staff finally placed him in the sick bay in the morning on the day that he died. By midafternoon of that day, a staff member checking on him intermittently found the victim without a pulse. He yelled for assistance from other staff members, calling the school medical officer and the program owners. A responding staff member began CPR. The program medical officer called 9-1-1 after she arrived in the sick bay. An ambulance arrived about 30 minutes after the 9-1-1 call and transported the victim to a nearby hospital, where he was pronounced dead. 
The victim died from complications of rhabdomyolysis due to a probable spider bite, according to the medical examiner s report. A multiagency investigation was launched by state and local parties in the aftermath of the death. The state social services abuse investigation determined that staff did not recognize the victim s medical distress or provide adequate treatment for the victim s bite. Although the investigation found evidence of staff neglect and concluded that earlier medical treatment may have prevented the death of the victim, no criminal charges were filed against the program, its owners, or any staff. The state also found indications that documents submitted by the program during the investigation may have been altered. The family of the victim filed a civil suit against the program and several of its staff in 2005 and settled out of court for $1 million, according to the judge. This program is open and operating. The tuition is currently $4,500 per month plus a $2,500 start-up fee. The program owner claims to have 25 years of experience working with children and teenagers. Members of her family also operate a referral program and a transport service out of program offices located separately from the actual program facility. During the course of our review, we found that current and former employees with this program filed abuse complaints with the local law enforcement agency but that no criminal investigation has been undertaken. Mr. Chairman and Members of the Committee, this concludes my statement. We would be pleased to answer any questions that you may have at this time. <4. Contacts and Acknowledgments> For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Residential treatment programs provide a range of services, including drug and alcohol treatment, confidence building, military-style discipline, and psychological counseling for troubled boys and girls with a variety of addiction, behavioral, and emotional problems. This testimony concerns programs across the country referring to themselves as wilderness therapy programs, boot camps, and academies, among other names. Many cite positive outcomes associated with specific types of residential treatment. There are also allegations regarding the abuse and death of youth enrolled in residential treatment programs. Given concerns about these allegations, particularly in reference to private programs, the Committee asked the Government Accountability Office (GAO) to (1) verify whether allegations of abuse and death at residential treatment programs are widespread and (2) examine the facts and circumstances surrounding selected closed cases where a teenager died while enrolled in a private program. To achieve these objectives, GAO conducted numerous interviews and examined documents from closed cases dating as far back as 1990, including police reports, autopsy reports, and state agency oversight reviews and investigations. GAO did not attempt to evaluate the benefits of residential treatment programs or verify the facts regarding the thousands of allegations it reviewed. What GAO Found GAO found thousands of allegations of abuse, some of which involved death, at residential treatment programs across the country and in American-owned and American-operated facilities abroad between the years 1990 and 2007. Allegations included reports of abuse and death recorded by state agencies and the Department of Health and Human Services, allegations detailed in pending civil and criminal trials with hundreds of plaintiffs, and claims of abuse and death that were posted on the Internet. For example, during 2005 alone, 33 states reported 1,619 staff members involved in incidents of abuse in residential programs. GAO could not identify a more concrete number of allegations because it could not locate a single Web site, federal agency, or other entity that collects comprehensive nationwide data. GAO also examined, in greater detail, 10 closed civil or criminal cases from 1990 through 2004 where a teenager died while enrolled in a private program. GAO found significant evidence of ineffective management in most of the 10 cases, with program leaders neglecting the needs of program participants and staff. This ineffective management compounded the negative consequences of (and sometimes directly resulted in) the hiring of untrained staff; a lack of adequate nourishment; and reckless or negligent operating practices, including a lack of adequate equipment. These factors played a significant role in the deaths GAO examined.
<1. Interior Continues to Face Challenges Hiring and Retaining Key Oil and Gas Staff, Primarily Because of Higher Industry Salaries and the Lengthy Federal Hiring Process> We found that Interior continues to experience problems hiring and retaining sufficient staff to provide oversight and management of oil and gas activities on federal lands and waters. BLM, BOEM, and BSEE office managers we surveyed reported that they continue to find it difficult to fill vacancies for key oil and gas oversight positions, such as petroleum engineers, inspectors, geologists, natural resource specialists, and geophysicists. These managers reported that it was difficult to retain staff to oversee oil and gas activities because staff leave for higher salaries in the private sector. They also reported that high rates of attrition are a concern because some Interior offices have just one or two employees per position, so a single retirement or resignation can significantly affect office operations and oversight. Nearly half of the petroleum engineers that left BLM in fiscal year 2012 resigned rather than retired, suggesting that they sought employment outside the bureau. According to Office of Personnel Management (OPM) data, the fiscal year 2012 attrition rate for petroleum engineers at BLM was over 20 percent, or more than double the average federal attrition rate of 9.1 percent. We found hiring and retention problems were most acute in areas where industry activity is greatest, such as in the Bakken shale play in western North Dakota, because the government is competing there with industry for the same group of geologists and petroleum engineers. Interior officials cited two major factors that affect the agency s ability to hire and retain sufficient staff to oversee oil and gas activities on federal leases: Higher industry salaries. BLM, BOEM, and BSEE office managers surveyed reported that they have lost potential applicants and staff to industry because it can pay higher salaries. Bureau of Labor Statistics data confirm that there is a wide and growing gap between industry and federal salaries for some positions, particularly petroleum engineers and geologists. For example, from 2002 through 2012, mean federal salaries for petroleum engineers have remained fairly constant at about $90,000 to $100,000 per year whereas private sector salaries have steadily increased from about $120,000 to over $160,000 during this same time period. The lengthy federal hiring process. BLM, BOEM, and BSEE officials surveyed reported that the federal hiring process has affected their ability to fill key oil and gas positions because it is lengthy, with multiple required steps, and that many applicants find other employment before the federal hiring process ends. We analyzed Interior s hiring data and found that the average hiring time for petroleum engineers was 197 days, or more than 6 months, at BOEM and BSEE. BLM fared a little better; its average hiring time for petroleum engineers was 126 days, or a little more than 4 months. However, all hiring times were much longer than 80 calendar days OPM s target. According to BLM, BOEM, and BSEE officials, other factors have contributed to difficulties hiring and retaining key oil and gas oversight personnel, such as few qualified applicants in remote areas, or areas with a high cost of living. <2. 
Interior Has Taken Some Actions to Address Hiring and Retention Challenges> Interior and its three bureaus BLM, BOEM, and BSEE have taken some steps to address hiring and retention challenges but could do more. Interior has used special salary rates and incentives to increase hiring and retention for key oil and gas positions, but use of these incentives has been limited. Interior has taken some steps to reduce the time it takes to hire oil and gas oversight staff but does not collect data to identify the causes of delays in the hiring process and opportunities for reducing them. Finally, Interior has taken some actions to improve recruiting, such as developing workforce plans to coordinate hiring and retention efforts, but this work is ongoing, and the extent to which these plans will help is uncertain. Special salary rates. For fiscal years 2012 and 2013, Congress approved a special 25 percent base pay increase for geologists, geophysicists, and petroleum engineers at BOEM and BSEE in the Gulf of Mexico. According to Interior officials in the Gulf of Mexico, this special pay authority helped retain some geologists, geophysicists, and petroleum engineers, at least in the near term. BOEM and BSEE requested an extension of this special pay authority though fiscal year 2014. In 2012, BLM met with OPM officials to discuss special salary rates for petroleum engineers and petroleum engineering technicians in western North Dakota and eastern Montana, where the disparity between federal and industry salaries is most acute, according to a BLM official. A BLM official told us that OPM requested that BLM provide more data to support its request. The official also told us that BLM submitted draft language to Congress requesting special salary rates through a congressional appropriation. According to Interior officials, all three bureaus are preparing a department-wide request for special salary rates to submit to OPM. Incentives. BLM, BOEM and BSEE have the authority to pay incentives in the form of recruitment, relocation, and retention awards of up to 25 percent of basic pay, in most circumstances, and for as long as the use of these incentives is justified, in accordance with OPM guidance, such as in the event an employee is likely to leave federal service. However, we found that the bureaus use of these incentives has been limited. For example, during fiscal years 2010 through 2012, the three bureaus hired 66 petroleum engineers but awarded just four recruitment incentives, five relocation incentives, and four retention incentives. BLM awarded two of the four retention incentives in 2012 to help retain petroleum engineers in its North Dakota Field Office. OPM data showed that, in 2011, Interior paid about one-third less in incentive awards than it did in 2010. BLM officials cited various factors that contributed to the limited use of incentives, such as limited funds available for incentives. A BLM official also told us that there was confusion about an OPM and Office of Management and Budget (OMB) requirement to limit incentive awards to 2010 levels and that some field office managers were uncertain about the extent to which office managers were allowed to use incentive awards. Without clear guidance outlining when these incentives should be used, and a means to measure their effectiveness, we concluded that Interior will not be able to determine whether it has fully used its authority to offer incentives to hire and retain key oil and gas oversight staff. Hiring times. 
To improve its hiring times, Interior participated in an OPM- led, government-wide initiative to streamline the federal hiring process. In 2009, a team of hiring managers and human resources specialists from Interior reviewed the department s hiring process and compared it with OPM s 80 calendar-day hiring target. The team identified 27 action items to reduce hiring times, such as standardizing position descriptions and reducing the number of managers involved in the process. Interior and its bureaus implemented many of the action items over the past few years and made significant progress to reduce hiring times, according to Interior officials and agency records. For example, BSEE reduced the time to select eligible applicants from 90 to 30 days by limiting the amount of time allowed for managers to review and select applicants. A BLM official told us that the bureau is working to automate vacancy announcements to improve the efficiency of its hiring process. However, neither the department nor the three bureaus have complete and accurate data on hiring times that could help them identify and address the causes of delays in the hiring process. Beginning in 2011, Interior provided quarterly data on hiring times to OPM, calculated based on Interior s personnel and payroll databases. However, we identified discrepancies in some of the data for example, in some cases, hiring times were erroneously recorded as 0 or 1 day. In addition, none of the bureaus systematically analyze the data collected. For instance, BSEE and BOEM collect hiring data on a biweekly basis, but officials told us they use the data primarily to track the progress of individual applicants as they move through the hiring process. Likewise, a BLM official stated that the bureau does not systematically analyze data on hiring times. Without reliable data on hiring times, Interior s bureaus cannot identify how long it takes to complete individual stages in the hiring process or effectively implement changes to expedite the hiring process. Recruiting. BLM, BOEM, and BSEE have taken some steps to improve recruiting. In 2012, BOEM and BSEE contracted with a consulting firm to draft a marketing strategy highlighting the advantages of employment at the bureaus, such as flexible work hours and job security. BOEM and BSEE used this marketing strategy to revise the recruiting information on their external websites and develop recruiting materials such as brochures and job fair displays. According to a BLM workforce strategy planning document, the bureau is considering contracting with a consulting firm to review its recruiting strategy. All three bureaus are also visiting colleges and universities to recruit potential applicants for oil and gas positions, and each has had some success offering student intern positions that may be converted to full-time employment. Workforce planning. Interior is participating in a government-wide initiative led by OPM to identify and address critical skills gaps across the federal government. The effort aims to develop strategies to hire and retain staff possessing targeted skills and address government-wide and department-specific mission-critical occupations and skill gaps. In March 2012, Interior issued a plan providing an overview of workforce planning strategies that it can use to meet emerging workforce needs and skills gaps within constrained budgets. 
As part of the next phase of this effort, Interior asked its bureaus to develop detailed workforce plans using a standardized model based on best practices used at Interior. Both planning efforts are ongoing, however, so it is too early to assess the effect on Interior's hiring and retention challenges for key oil and gas positions at this time. BLM, BOEM, and BSEE are developing or implementing workforce plans as well. As we reported in July 2012, BOEM and BSEE did not have strategic workforce plans, and we recommended that the bureaus develop plans to address their hiring and retention challenges. BSEE has since developed a workforce plan, and BOEM officials told us that they expect to complete one in 2014. BLM issued a workforce planning strategy in March 2012 that outlined strategic objectives to address some of its key human capital challenges; however, this strategy does not include implementation steps, address challenges with the hiring process, or outline mechanisms to monitor, evaluate, or improve the hiring process, so it is too soon to tell whether BLM's planning strategy will help the bureau address its human capital challenges. Moreover, we found that the bureaus' efforts do not appear to have been conducted as part of an overarching workforce plan, or in a coordinated and consistent manner; therefore, the bureaus do not have a basis to assess the success of these efforts or determine whether and how these efforts should be adjusted over time. <3. Hiring and Retention Challenges Have Made it More Difficult for Interior to Carry Out Some Oversight Activities> The BLM, BOEM, and BSEE officials that we interviewed and surveyed reported that hiring and retention challenges have made it more difficult to carry out their oversight activities. These officials stated that position vacancies have resulted in less time for oversight, and vacancies directly affect the number of oversight activities they can carry out, including the number of inspections conducted and the time for reviewing applications to drill. Officials at some BLM field offices told us that they have not been able to meet their annual inspection and enforcement goals because of vacancies. Of the 20 offices with inspector vacancies that we surveyed, 13 responded that they conducted fewer inspections in 2012 compared with what they would have done if fully staffed, and 9 responded that the thoroughness of inspections was reduced because of vacancies. Of the 21 BLM and BSEE offices with petroleum engineer vacancies, 8 reported that they reviewed fewer applications to drill in 2012 compared with what they would have done if fully staffed. BSEE officials told us that fewer or less-thorough inspections may mean that some offices are less able to ensure operator compliance with applicable laws and regulations and, as a result, there is an increased risk to human health and safety due to a spill or accident. According to a BSEE official, the longer federal inspectors are away from a site, the more likely operators are to deviate from operating in accordance with laws and regulations. Officials at each of the three bureaus cited steps they have taken to address vacancies in key oil and gas positions; specifically, reassigning staff from lower-priority to higher-priority tasks, borrowing staff from other offices, or increasing overtime. However, each of these steps comes at a cost to the agency and is not a sustainable solution.
Interior officials told us that moving staff from lower to higher priority work means that the lower priority tasks many of which are still critical to the bureaus missions are deferred or not conducted, such as processing permits. Likewise, offices that borrow staff from other offices gain the ability to carry out activities, but this comes at a cost to the office that loaned the staff. With regard to overtime, BOEM officials reported that a heavy reliance on overtime was exhausting their staff. BLM and BSEE are developing and implementing risk-based inspection strategies long recommended by GAO and others as they work to ensure oversight resources are efficiently and effectively allocated; however, staffing shortfalls and turnover may adversely affect the bureaus ability to carry out these new strategies. In 2010, we reported that BLM routinely did not meet its goals for conducting key oil and gas facility inspections, and we recommended that the bureau consider an alternative inspection strategy that allows for the inspection of all wells within a reasonable time frame, given available resources. In response to this recommendation, in fiscal year 2011, BLM implemented a risk- based inspection strategy whereby each field office inspects the highest risk wells first. Similarly, BSEE officials told us that they have contracted with Argonne National Laboratory to help develop a risk-based inspection strategy. In our January 2014 report, to address the hiring challenges we identified, we recommended that Interior explore its bureaus expanded use of recruitment, relocation, retention, and other incentives and systematically collect and analyze hiring data. Interior generally agreed with our recommendations. Chairman Lamborn, Ranking Member Holt, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. <4. GAO Contact and Staff Acknowledgments> If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Christine Kehr, Assistant Director; Mark Braza, Glenn Fischer, Michael Kendix, Michael Krafve, Alison O Neill, Kiki Theodoropoulos, Barbara Timmerman, and Arvin Wu. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Interior employs a wide range of highly trained specialists and scientists with key skills to oversee oil and gas operations on leased federal lands and waters. GAO and others have reported that Interior has faced challenges hiring and retaining sufficient staff to carry out these responsibilities. In February 2011, GAO added Interior's management of federal oil and gas resources to its list of programs at high risk of fraud, waste, abuse, and mismanagement in part because of Interior's long-standing human capital challenges. This testimony and the January 2014 report on which it is based address (1) the extent to which Interior continues to face challenges hiring and retaining key oil and gas staff and the causes of these challenges, (2) Interior's efforts to address its hiring and retention challenges, and (3) the effects of hiring and retention challenges on Interior's oversight of oil and gas activities. To do this work, GAO surveyed all 44 Interior offices that oversee oil and gas operations, of which 40 responded; analyzed offshore inspection records and other documents; and interviewed agency officials. What GAO Found The Department of the Interior continues to face challenges hiring and retaining staff with key skills needed to manage and oversee oil and gas operations on federal leases. Interior officials noted two major factors that contribute to challenges in hiring and retaining staff: lower salaries and a slow hiring process. In response to GAO's survey, officials from a majority of the offices in the three Interior bureaus that manage oil and gas activities—the Bureau of Land Management (BLM), the Bureau of Ocean Energy Management (BOEM), and the Bureau of Safety and Environmental Enforcement (BSEE)—reported ongoing difficulties filling vacancies, particularly for petroleum engineers and geologists. Many of these officials also reported that retention is an ongoing concern as staff leave for positions in industry. Bureau of Labor Statistics data confirm a wide gap between industry and federal salaries for petroleum engineers and geologists. According to Office of Personnel Management (OPM) data, the fiscal year 2012 attrition rate for petroleum engineers at BLM was over 20 percent, or more than double the average federal attrition rate of 9.1 percent. Field office officials stated that attrition is of concern because some field offices have only a few employees in any given position, and a single separation can significantly affect operations. Additionally, Interior records show that the average time required to hire petroleum engineers and inspectors in recent months generally exceeded 120 calendar days—much longer than OPM's target of 80 calendar days. Interior and the three bureaus—BLM, BOEM, and BSEE—have taken some actions to address their hiring and retention challenges, but they have not fully used their existing authorities to supplement salaries or collect and analyze hiring data to identify the causes of delays in the hiring process. For instance, BLM, BOEM, and BSEE officials said that recruitment, relocation, and retention incentives are key options to help hire and retain staff, but the bureaus' use of these incentives to attract and retain petroleum engineers and inspectors has been limited for various reasons. Moreover, Interior and its bureaus have taken some steps to reduce hiring times, but they do not have complete and accurate data on hiring times. 
For instance, while BSEE and BOEM collect hiring data on a biweekly basis, the data are used primarily to track the progress of individual applicants as they move through the hiring process. Likewise, a BLM official stated that the bureau does not systematically analyze data on hiring times. Without reliable data on hiring times, Interior's bureaus cannot identify how long it takes to complete individual stages in the hiring process or effectively implement changes to expedite the hiring process. According to BLM, BOEM, and BSEE officials, hiring and retention challenges have made it more difficult to carry out oversight activities in some field offices. For example, many BLM and BSEE officials GAO surveyed reported that vacancies have resulted in a reduction in the number of inspections conducted. As a result of these challenges, bureau officials cited steps they have taken to address vacancies in key positions, such as borrowing staff from other offices or using overtime, but these are not sustainable, long-term solutions. What GAO Recommends In its January 2014 report, GAO recommended that Interior explore its oil and gas management bureaus' expanded use of recruitment, relocation, retention, and other incentives and systematically collect and analyze hiring data. Interior generally agreed with GAO's recommendations. GAO is not making any new recommendations in this testimony.
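The summary above notes that Interior's bureaus collect hiring-time data but do not systematically analyze it, and that some records contain implausible values such as hiring times of 0 or 1 day. The following is a minimal Python sketch of the kind of validation and comparison against OPM's 80-calendar-day target that such an analysis might involve; the record layout, field names, thresholds, and sample values are assumptions for illustration only, not Interior's actual data or methodology.

```python
# Hypothetical sketch of basic hiring-time validation and summary analysis.
# All field names, thresholds, and sample records are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from statistics import mean

OPM_TARGET_DAYS = 80      # OPM's end-to-end hiring target cited in the testimony
MIN_PLAUSIBLE_DAYS = 5    # assumption: shorter durations are treated as data errors

@dataclass
class HiringRecord:
    bureau: str
    position: str
    announcement_opened: date
    entered_on_duty: date

    @property
    def elapsed_days(self) -> int:
        return (self.entered_on_duty - self.announcement_opened).days

def summarize(records: list[HiringRecord]) -> None:
    valid, suspect = [], []
    for record in records:
        # Flag records like the "0 or 1 day" hiring times GAO found in Interior's data.
        (suspect if record.elapsed_days < MIN_PLAUSIBLE_DAYS else valid).append(record)
    if valid:
        avg = mean(record.elapsed_days for record in valid)
        print(f"Average hiring time: {avg:.0f} days "
              f"({avg - OPM_TARGET_DAYS:+.0f} days relative to the {OPM_TARGET_DAYS}-day target)")
    print(f"Records flagged as implausible: {len(suspect)} of {len(records)}")

# Illustrative records only; durations mirror the 197- and 126-day averages cited in the testimony.
summarize([
    HiringRecord("BOEM", "Petroleum Engineer", date(2012, 1, 9), date(2012, 7, 24)),
    HiringRecord("BLM", "Petroleum Engineer", date(2012, 3, 1), date(2012, 7, 5)),
    HiringRecord("BSEE", "Inspector", date(2012, 5, 1), date(2012, 5, 1)),  # 0-day error
])
```

A routine of this kind could also report durations for individual stages of the hiring process, which is the visibility the testimony says the bureaus currently lack.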
<1. Use of Liens, Levies, and Seizures in Collecting Taxes> The magnitude of IRS collection workload is staggering. As of the beginning of fiscal year 1996, IRS reported that its inventory of unpaid tax assessments totaled about $200 billion. Of this amount, IRS estimated that about $46 billion had collection potential. In addition, during the fiscal year, an additional $59 billion in unpaid tax assessments were added to the inventory. IRS uses a three-stage process to collect from taxpayers who have not paid the amount due as determined by the tax assessment. In the first stage of the process, a series of notices is to be sent to the taxpayer from one of IRS service centers. Collectively, these notices are to provide the taxpayer with statutory notification of the tax liability, IRS intent to levy assets if necessary, and information on the taxpayer s rights. If the taxpayer fails to pay after being notified, the Internal Revenue Code authorizes a federal tax lien to be filed to protect the government s interest over other creditors and purchasers of taxpayer property. The second stage of IRS collection process involves attempts to collect the taxes by making telephone contact with the taxpayer. IRS carries out this stage through its Automated Collection System (ACS) program. During this stage, IRS may levy taxpayer assets and file notices of federal tax liens. In the final stage of the collection process, information about the tax delinquency is referred to IRS field offices for possible face-to-face contact with the taxpayer. During this stage, IRS may also levy taxpayer assets and file notices of federal tax liens. Additionally, as a final collection action, taxpayer property, such as cars or real estate, may be seized. Attachment I presents a flowchart that provides additional detail about the collection process. At any time in the collection process, IRS may find that a taxpayer cannot pay what is owed or does not owe the tax IRS assessed. In such situations, IRS may enter into an installment agreement with a taxpayer, compromise for an amount less than the original tax assessment, suspend or terminate the collection action, or abate an erroneous assessment. Also, if the taxpayer is having a problem resolving a collection action with the initiating IRS office, the taxpayer may go to IRS Taxpayer Advocate or to IRS appeals program for resolution. If an enforcement action is taken that involves a reckless or intentional disregard of taxpayer rights by an IRS employee, a taxpayer may sue for damages. In the case of an erroneous bank levy, a taxpayer may file a claim with IRS for reimbursement of bank charges incurred because of the levy in addition to a refund of the erroneously levied amount. If a taxpayer believes that enforced collection would be a hardship, the taxpayer may request assistance from the Taxpayer Advocate. <2. IRS Has Some Limited Data on the Use and Misuse of Lien, Levy, and Seizure Authority> IRS produces management information reports that provide some basic information on tax collections and the use of collection enforcement authorities, including the number of liens, levies, and seizures filed and, in the case of seizures, the tax delinquency that resulted in the seizure and the tax proceeds achieved. Also, some offices within IRS collect information on the misuse of these collection enforcement authorities, but the information is not complete. Overall, IRS management reports show that IRS collection program collected about $29.8 billion during fiscal year 1996, mostly without taking enforced collection action.
In attempting to collect on delinquent accounts, the reports show IRS filed about 750,000 liens against taxpayer property, issued about 3.2 million levies on taxpayer assets held by third parties, and completed about 10,000 seizures of taxpayer property. Attachment II presents this overall information on IRS use of lien, levy, and seizure authority during fiscal years 1993-96. Attachment III presents a summary of the distribution of seizure cases by type of asset seized in fiscal year 1996. For the seizure cases completed in fiscal year 1996, the average tax delinquency was about $233,700, and the average net proceeds from the seizures were about $16,700. Although complete data were not available on tax delinquencies and associated net proceeds for liens and levies, the best information available from IRS indicates that about $2.1 billion of the $29.8 billion was collected as a result of lien, levy, and seizure actions. The remainder was collected as a result of contacts with taxpayers about their tax delinquencies. The best data that IRS has on the potential misuse of collection authorities are from the Office of the Taxpayer Advocate. However, those data alone are not sufficient to determine the extent of misuse. The data show that about 9,600 complaints involving allegations of inappropriate, improper, or premature collection actions were closed by the Advocate in fiscal year 1996, as were 11,700 requests for relief from collection actions because of hardship. Although the Advocate does not routinely collect data on the resolution of taxpayer complaints, it does collect data on the resolution of requests for relief. According to the Advocate, during fiscal year 1996, the requests for relief resulted in the full or partial release of about 4,000 levy and seizure actions and 156 liens. These Taxpayer Advocate data are not sufficient to determine whether IRS initial collection actions were appropriate, for several reasons. First, the release of a lien could result from a taxpayer subsequently paying the tax liability or offering an alternative solution, or because IRS placed the lien in error. Although the Taxpayer Advocate maintains an information system that accommodates collecting the data to identify whether IRS was the cause of the taxpayer s problem, the Advocate does not require that such information be reported by the IRS employee working to resolve the case or be otherwise accumulated. Thus, about 82 percent of the taxpayer complaints closed in fiscal year 1996 did not specify this information. Of the remaining 18 percent, about 9 percent specified that IRS collection action was in error, either through taking an erroneous action, providing misleading information to the taxpayer, or taking premature enforcement action. In addition, the Advocate s data do not cover the potential universe of cases in which a collection action is alleged to have been made improperly. The Advocate requires each complaint that is covered by its information system to be categorized by only one major code to identify the issue or problem. If a complaint had more than one problem, it is possible that a collection-related code could be superseded by another code such as one covering lost or misapplied payments. Also, complaints that are handled routinely by the various IRS offices would not be included in the Advocate s data because that office was not involved in the matter.
For example, appeals related to lien, levy, and seizure actions are to be handled by the Collection Appeals Program (effective April 1, 1996). For fiscal year 1996, the Appeals Program reported that of the 705 completed appeals of IRS enforced collection actions, it fully sustained IRS actions on 483 cases, partially sustained IRS in 55 cases, did not sustain IRS actions in 68 cases, and returned 99 cases to the initiating office for further action because they were prematurely referred to the Collection Appeals Program. According to IRS Appeals officials, a determination that Appeals did not sustain an IRS enforcement action does not necessarily mean that the action was inappropriate. If a taxpayer offered an alternative payment method, the Appeals Officer may have approved that offer and thus not sustained the enforcement action even if the enforcement action was justified. In any event, the Collection Appeals Program keeps no additional automated or summary records on the resolution of appeals as they relate to the appropriateness of lien, levy, or seizure action. <3. Further Assessment of Extent or Causes of Misuse of Liens, Levies, and Seizures Is Limited by IRS Record-Keeping Practices> IRS record-keeping practices limit both our and IRS ability to generate data needed to determine the extent or causes of the misuse of lien, levy, and seizure authority. Neither IRS major data systems (masterfiles and supplementary systems) nor the summary records (manual or automated) maintained by the IRS offices responsible for the various stages of the collection process systematically record and track the issuance and complete resolution of all collection enforcement actions, i.e., liens, levies, and seizure actions. Moreover, the detailed records kept by these offices do not always include data that would permit a determination about whether an enforcement action was properly used. But, even if collection records contained information relevant to the use of collection enforcement actions, our experience has been that obstacles exist to retrieving records needed for a systematic review. <3.1. Major Information Systems Do Not Contain Data Necessary to Assess Enforcement Actions> IRS maintains selected information on all taxpayers, such as taxpayer identification number; amount of tax liability by tax year; amount of taxes paid by tax year; codes showing the event triggering the tax payment, including liens, levies, and seizures; and taxpayer characteristics, including earnings and employment status, on its Individual and Business Masterfiles. Also, if certain changes occur to a taxpayer s account, such as correction of a processing error in a service center, IRS requires information to be captured on the source of the error, that is, whether the error originated with IRS or the taxpayer. These systems, however, do not contain data that would permit a determination of the extent to which collection enforcement authorities were misused, the causes of any misuse, or the characteristics of affected taxpayers. The lack of such data also precludes us from identifying a sample of affected taxpayers to serve as a basis for evaluating the use or misuse of collection actions. <3.2. Offices With Authority to Initiate Liens, Levies, and Seizures Do Not Keep Summary Records Related to Appropriateness of Actions> As I noted earlier, the IRS tax collection process involves several steps, which are carried out by different IRS offices that are often organizationally dispersed. Since authorities exist to initiate some of the collection actions at different steps in the process, several different offices could initiate a lien, levy, or seizure to resolve a given tax assessment.
In addition, our examination of procedures and records at several of these offices demonstrated that records may be incomplete or inaccurate. For example, the starting point for a collection action is the identification of an unpaid tax assessment. The assessment may originate from a number of sources within IRS, such as the service center functions responsible for the routine processing of tax returns; the district office, ACS, or service center functions responsible for examining tax returns and identifying nonfilers; or the service center functions responsible for computer-matching of return information to identify underreporters. These assessments may not always be accurate, and as reported in our financial audits of IRS, cannot always be tracked back to supporting documentation. Since collection actions may stem from disputed assessments, determining the appropriateness of IRS actions would be problematic without an accurate tax assessment supported by documentation. Further, offices responsible for resolving taxpayer complaints do not always maintain records on the resolution of those complaints that would permit identification of instances of inappropriate use of collection authorities. We found several examples of this lack of data during our review. For instance, in cases involving ACS, where an automated system is used for recording data, specific information about complaints may not be maintained because the automated files have limited space for comments and transactions. If a taxpayer complaint is not resolved by the responsible office, the taxpayer may seek assistance from the Taxpayer Advocate. As noted earlier, the Advocate has some information on complaints about the use of collection enforcement authorities, but those data are incomplete. In addition, starting in the last quarter of 1996, the Advocate was to receive notification of the resolution of taxpayer complaints involving IRS employee behavior (that is, complaints about IRS employees behaving inappropriately in their treatment of taxpayers, such as rudeness, overzealousness, discriminatory treatment, and the like). These notifications, however, do not indicate if the problem involved the possible misuse of collection authority. If a taxpayer s complaint involves IRS employee integrity issues, the complaint should be referred to IRS Inspection Office. According to Inspection, that office is responsible for investigating allegations of criminal and serious administrative misconduct by specific IRS employees, but it would not normally investigate whether the misconduct involved inappropriate enforcement actions. In any event, Inspection does not keep automated or summary records on the results of its investigations as they relate to appropriateness of lien, levy, or seizure actions. Court cases are to be handled by the Chief Counsel s General Litigation Office. Internal Revenue Code sections 7432 and 7433 provide for taxpayers to file a claim for damages when IRS (1) knowingly or negligently fails to release a lien or (2) recklessly or intentionally disregards any provision of law or regulation related to the collection of federal tax, respectively. According to the Litigation Office, a total of 21 cases were filed under these provisions during 1995 and 1996. However, the Litigation Office does not maintain information on case outcomes. The Office has recently completed a study that covered court cases since 1995 involving damage claims in bankruptcy cases.
As a part of that study, the Office identified 16 cases in which IRS misapplied its levy authority during taxpayer bankruptcy proceedings. IRS officials told us that the results of this study led IRS to establish a Bankruptcy Working Group to make recommendations to prevent such misapplication of levy authority. <3.3. Existing Records Cannot Always Be Retrieved> Even when existing records contain information relevant to the use of collection enforcement authorities, those records cannot always be retrieved for review. As we have learned from our prior work, IRS cannot always locate files when needed. For example, locating district office closed collection files once they have been sent to a Federal Records Center is impractical because there is no list identifying file contents associated with the shipments to the Records Centers. On a number of past assignments, we used the strategy of requesting IRS district offices to hold closed cases for a period of time, and then we sampled files from those retained cases. However, the results of these reviews could not be statistically projected to the universe of all closed cases because we had no way to determine if the cases closed in the relatively short period of time were typical of the cases closed over a longer period of time. <4. IRS Officials Said That Collecting Data to Assess Enforcement Actions Is Impractical and Unnecessary Because Taxpayers Are Protected Through Checks and Balances> We discussed with IRS the feasibility of collecting additional information for monitoring the extent to which IRS may have inappropriately used its collection enforcement authorities, and the characteristics of taxpayers who might be affected by such inappropriate actions. IRS officials noted that, although IRS does not maintain specific case data on enforcement actions, they believed that sufficient checks and balances (e.g., supervisory review of collection enforcement actions, collection appeals, complaint handling, and taxpayer assistance) are in place to protect taxpayers from inappropriate collection action. The development and maintenance of additional case data are, according to IRS officials, not practical without major information system enhancements. The IRS officials further observed that, given the potential volume and complexity of the data involved and the resources needed for data gathering and analysis, they were unable to make a compelling case for compiling the information. We recognize that IRS faces resource constraints in developing its management information systems and that IRS has internal controls, such as supervisory review and appeals, that are intended to avoid or resolve inappropriate use of collection authorities. We also recognize that the lack of relevant information to assess IRS use of its collection enforcement authorities is not, in itself, evidence that IRS lacks commitment to resolve taxpayer collection problems after they occur. However, the limited data available and our prior work indicate that, at least in some cases, these controls may not work as effectively as intended. IRS is responsible for administering the nation s voluntary tax system in a fair and efficient manner. To do so, IRS oversees a staff of more than 100,000 employees who work at hundreds of locations in the United States and foreign countries and who are vested, by Congress, with a broad set of discretionary enforcement powers, including the ability to seize taxpayer property to resolve unpaid taxes.
Given the substantial authorities granted to IRS to enforce tax collections, IRS and the other stakeholders in the voluntary tax system such as Congress and the taxpayers should have information to permit them to determine whether those authorities are being used appropriately; whether IRS internal controls are working effectively; and whether, if inappropriate uses of the authorities are identified, the problems are isolated events or systemic problems. At this time, IRS does not have the data that would permit it or Congress to readily determine the extent to which IRS collections enforcement authorities are misused, the causes of those occurrences, the characteristics of the affected taxpayers, or whether the checks and balances that IRS established over the use of collection enforcement authorities are working as intended. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you may have.
Attachment I: Flowchart of the Collection Process
Attachment II: IRS Collection Enforcement Actions
Attachment III: Distribution of Seizure Cases by Type of Asset Seized, Fiscal Year 1996
Why GAO Did This Study GAO discussed: (1) the availability of information on the Internal Revenue Service's (IRS) use of its enforcement authorities to collect delinquent taxes; and (2) whether information existed that could be used to determine whether collection enforcement authorities were properly used. What GAO Found GAO found that: (1) while IRS has some limited data about its use, and misuse, of collection enforcement authorities, these data are not sufficient to show: (a) the extent of the improper use of lien, levy, or seizure authority; (b) the causes of improper actions; or (c) the characteristics of taxpayers affected by improper actions; (2) the lack of information exists because IRS' systems--both manual and automated--have not been designed to capture and report comprehensive information on the use and possible misuse of collection authorities; (3) also, much of the data that are recorded on automated systems cannot be aggregated without a significant investment of scarce programming resources; (4) some information is available in manual records, but--because collection enforcement actions can be taken by a number of different IRS offices and records resulting from these actions are not always linked to IRS' automated information systems--this information cannot be readily assembled to assess the use of enforcement actions; (5) also, data are not readily available from other potential sources, such as taxpayer complaints, because, in many circumstances, IRS does not require that information on the resolution of the complaints be recorded; (6) IRS officials told GAO that collecting complete data on the use of enforcement actions that would permit an assessment of the extent and possible causes of misuse of these authorities is unnecessary because they have adequate checks and balances in place to protect taxpayers; and (7) however, IRS does not have the data that would permit it or Congress to readily resolve reasonable questions about the extent to which IRS' collections enforcement authorities are misused, the causes of those occurrences, the characteristics of the affected taxpayers, or whether IRS' checks and balances over the use of collection enforcement authorities are working as intended.
<1. Background> Although the current focus of concern is largely on the potential for several years of declining physician fees, the historic challenge for Medicare has been to find ways to moderate the rapid growth in spending for physician services. Before 1992, the fees that Medicare paid for those services were largely based on physicians historical charges. Spending for physician services grew rapidly in the 1980s, at a rate that the Secretary of Health and Human Services (HHS) characterized as out of control. Although Congress froze fees or limited fee increases, spending continued to rise because of increases in the volume and intensity of physician services. From 1980 through 1991, for example, Medicare spending per beneficiary for physician services grew at an average annual rate of 11.6 percent. The ineffectiveness of fee controls alone led Congress to reform the way that Medicare set physician fees. The Omnibus Budget Reconciliation Act of 1989 (OBRA 1989) established both a national fee schedule and a system of spending targets, which first affected physician fees in 1992. From 1992 through 1997, annual spending growth for physician services was far lower than the previous decade. The decline in spending growth was the result in large part of slower volume and intensity growth. (See fig. 1.) Over time, Medicare s spending target system has been revised and renamed. The SGR system, Medicare s current system for updating physician fees, was established in the Balanced Budget Act of 1997 (BBA) and was first used to adjust fees in 1999. Following the implementation of the fee schedule and spending targets in 1992, through 1999, average annual growth in volume and intensity of service use per beneficiary fell to 1.1 percent. More recently volume and intensity growth has trended upward, rising at an average annual rate of about 5 percent from 2000 through 2003. Although this average annual rate of growth remains substantially below that experienced before spending targets were introduced, the recent increases in volume and intensity growth are a reminder that inflationary pressures continue to challenge efforts to moderate growth in physician expenditures. <2. SGR System Designed to Limit or Reduce Physician Fee Updates in Response to Excess Growth in Volume and Intensity> The SGR system establishes spending targets to moderate physician services spending increases caused by excess growth in volume and intensity. SGR s spending targets do not cap expenditures for physician services. Instead, spending in excess of the target triggers a reduced fee update or a fee cut. In this way, the SGR system applies financial brakes to physician services spending and thus serves as an automatic budgetary control device. In addition, reduced fee updates signal physicians collectively and Congress that spending due to volume and intensity has increased more than allowed. To apply the SGR system, every year the Centers for Medicare & Medicaid Services (CMS) follows a statutory formula to estimate the allowed rate of increase in spending for physician services and uses that rate to construct the spending target for the following calendar year. 
The sustainable growth rate is the product of the estimated percentage change in (1) input prices for physician services; (2) the average number of Medicare beneficiaries in the traditional fee-for-service (FFS) program; (3) national economic output, as measured by real (inflation-adjusted) GDP per capita; and (4) expected expenditures for physician services resulting from changes in laws or regulations. CMS calculates changes in physician input prices based on the growth in the costs of providing physician services as measured by the Medicare Economic Index, growth in the costs of providing laboratory tests as measured by the consumer price index for urban consumers, and growth in the cost of Medicare Part B prescription drugs included in SGR spending. Under the SGR and MVPS systems, the Secretary of Health and Human Services defined physician services to include services and supplies incident to physicians services, such as laboratory tests and most Part B prescription drugs. SGR spending targets are cumulative. That is, the sum of all physician services spending since 1996 is compared to the sum of all annual targets since the same year to determine whether spending has fallen short of, equaled, or exceeded the SGR targets. The use of cumulative targets means, for example, that if actual spending has exceeded the SGR system targets, fee updates in future years must be lowered sufficiently both to offset the accumulated excess spending and to slow expected spending for the coming year. Under SGR, spending per beneficiary, adjusted for the estimated underlying cost of providing physician services, is allowed to grow at the same rate that the national economy grows over time on a per-capita basis, currently projected to be slightly more than 2 percent annually. If volume and intensity grow faster, the annual increase in physician fees will be less than the estimated increase in the cost of providing services. Conversely, if volume and intensity grow more slowly than 2 percent annually, the SGR system permits physicians to benefit from fee increases that exceed the increased cost of providing services. To reduce the effect of business cycles on physician fees, MMA modified the SGR system to require that economic growth be measured as the 10-year moving average change in real per capita GDP. This measure is projected to range from 2.1 percent to 2.5 percent during the 2005 through 2014 period. When the SGR system was established, GDP growth was seen as a benchmark that would allow for affordable increases in volume and intensity. In its 1995 annual report to Congress, the Physician Payment Review Commission stated that limiting real expenditure growth to 1 or 2 percentage points above GDP would be a realistic and affordable goal. Ultimately, BBA specified the growth rate of GDP alone. This limit was an indicator of what the 105th Congress thought the nation could afford to spend on volume and intensity increases. If cumulative spending on physician services is in line with SGR s target, the physician fee schedule update for the next calendar year is set equal to the estimated increase in the average cost of providing physician services as measured by the Medicare Economic Index (MEI). If cumulative spending exceeds the target, the fee update will be less than the change in MEI or may even be negative. If cumulative spending falls short of the target, the update will exceed the change in MEI. The SGR system places bounds on the extent to which fee updates can deviate from MEI.
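The update mechanics just described can be summarized in a brief, purely illustrative sketch. The function names, the placeholder growth rates and dollar figures, and the simplified update rule below (the update equals MEI adjusted for the cumulative gap, bounded at roughly 7 percentage points below and 3 percentage points above MEI, with the bounds inferred from the magnitudes cited in this statement) are assumptions for illustration, not the statutory formula or CMS estimates.

```python
# Purely illustrative sketch of the SGR update logic described above.
# All numeric inputs are hypothetical placeholders, not CMS estimates, and the
# update calculation is a simplification of the statutory formula.

def sustainable_growth_rate(input_prices, enrollment, real_gdp_per_capita, law_changes):
    # The four statutory factors combine multiplicatively into a single rate.
    growth = (1 + input_prices) * (1 + enrollment) * (1 + real_gdp_per_capita) * (1 + law_changes)
    return growth - 1

def fee_update(mei, cumulative_target, cumulative_actual, annual_spending,
               max_below_mei=0.07, max_above_mei=0.03):
    # Spending above the cumulative target pushes the update below MEI;
    # spending below the target pushes it above MEI. The bounds are assumed here.
    adjustment = (cumulative_target - cumulative_actual) / annual_spending
    update = mei + adjustment
    return max(mei - max_below_mei, min(mei + max_above_mei, update))

# Hypothetical year: 2% input-price growth, 1% enrollment growth,
# 2.3% growth in real GDP per capita, and no law or regulation changes.
sgr = sustainable_growth_rate(0.02, 0.01, 0.023, 0.0)
print(f"sustainable growth rate: {sgr:.2%}")

# Hypothetical cumulative totals (in billions of dollars) since the base year:
# actual spending has outrun the targets, so the update is driven to its floor.
update = fee_update(mei=0.02, cumulative_target=580.0, cumulative_actual=600.0, annual_spending=85.0)
print(f"fee update: {update:.2%}")
```

In this sketch, the assumed excess of cumulative actual spending over the cumulative target is large enough to drive the computed update to its assumed floor, which is the situation the maximum fee reductions discussed below reflect.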
In general, with an MEI of about 2 percent, the largest allowable fee decrease would be about 5 percent and the largest fee increase would be about 5 percent. <3. Continued Volume and Intensity Growth and Legislated Fee Updates Contribute to Projected Decline in Physician Fees> The 2004 Medicare Trustees Report announced that the projected physician fee update would be about negative 5 percent for 7 consecutive years beginning in 2006; the result is a cumulative reduction in physician fees of more than 31 percent from 2005 to 2012, while physicians costs of providing services, as measured by MEI, are projected to rise by 19 percent. According to projections made by CMS Office of the Actuary (OACT) in July 2004, maximum fee reductions will be in effect from 2006 through 2012, while fee updates will be positive in 2014. (See fig. 2.) There are two principal reasons for the projected fee declines: increases in volume and intensity that exceed the SGR s allowance (partly as a result of spending for Part B prescription drugs) and the minimum fee updates for 2004 and 2005 specified by MMA. <3.1. Volume and Intensity Growing Rapidly, Partly as a Result of Included Spending for Outpatient Drugs> Recent growth in spending due to volume and intensity increases has been larger than SGR targets allow, resulting in excess spending that must be recouped through reduced fee updates. In general, the SGR system allows physician fee updates to equal or exceed the MEI as long as spending growth due to volume and intensity increases is no higher than the average growth in real GDP per capita, about 2.3 percent annually. However, in July 2004, CMS OACT projected that the volume and intensity of physician services paid for under the physician fee schedule would grow by 3 percent per year. To offset the resulting excess spending, the SGR system will have to reduce future physician fee updates. Additional downward pressure on physician fees arises from the growth in spending for other Medicare services that are included in the SGR system, but that are not paid for under the physician fee schedule. Such services include laboratory tests and many Part B outpatient prescription drugs that physicians provide to patients. Because physicians influence the volume of services they provide directly (that is, fee schedule services) as well as these other services, defined by the Secretary of HHS as incident to physician services, expenditures for both types of services were included when spending targets were introduced. In July 2004, CMS OACT projected that SGR-covered Part B drug expenditures would grow more rapidly than other physician service expenditures, thus increasing the likelihood that future spending would exceed SGR system targets. To the extent that spending for SGR Part B drugs and other incident to services grows larger as a share of overall SGR spending, additional pressure is put on fee adjustments to offset excess spending and bring overall SGR spending in line with the system s targets. This occurs because the SGR system attempts to moderate spending only through the fee schedule, even when the excess spending is caused by expenditures for incident to services, such as Part B drugs, which are not paid for under the fee schedule. <3.2. MMA s Minimum Updates for 2004 and 2005 Contribute to Future Physician Fee Cuts> The MMA averted fee reductions projected for 2004 and 2005 by specifying an update to physician fees of no less than 1.5 percent for those 2 years.
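The way a legislated minimum update interacts with the cumulative targets can also be shown with a brief, purely illustrative sketch. The prior-year spending level below is a hypothetical placeholder rather than a CMS figure; the 4.5 percent scheduled reduction, the 1.5 percent minimum update, and the roughly 3 percent volume and intensity growth are drawn from figures cited in this statement.

```python
# Purely illustrative sketch: replacing a scheduled SGR fee cut with a
# legislated minimum update raises actual spending, but the cumulative
# spending target is unchanged, so the added excess must be recouped
# through lower fee updates in later years.

prior_year_spending = 80.0        # billions of dollars (hypothetical placeholder)
volume_intensity_growth = 0.03    # roughly 3 percent per year, as cited above

def next_year_spending(update):
    # Spending grows with both the fee update and volume/intensity growth.
    return prior_year_spending * (1 + update) * (1 + volume_intensity_growth)

scheduled_cut = -0.045            # fee reduction the SGR formula would have produced
legislated_minimum = 0.015        # statutory minimum update (an MMA-style floor)

extra_excess = next_year_spending(legislated_minimum) - next_year_spending(scheduled_cut)
print(f"additional excess spending created in one year: about ${extra_excess:.1f} billion")
# Because the targets are cumulative, this amount is added to any previously
# accumulated excess and must be offset by reduced updates in later years.
```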
The MMA increases replaced SGR system fee reductions of 4.5 percent in 2004 and 3.3 percent in 2005 and thus will result in additional aggregate spending. Because MMA did not make corresponding revisions to the SGR system s spending targets, the SGR system must offset the additional spending by reducing fees beginning in 2006. An examination of the SGR fee update that would have gone into effect in 2005, absent the MMA minimum updates, illustrates the impact of the system s cumulative spending targets. To begin with, actual expenditures under the SGR system in 2004 are estimated to be $84.9 billion, whereas target expenditures for 2004 were $77.1 billion. As a result, SGR s 2005 fee updates would have needed to offset the $7.8 billion deficit from excess spending in 2004 plus the accumulated excess spending of $5.9 billion from previous years to realign expected spending with target spending. Because the SGR system is designed to offset accumulated excess spending over a period of years, the deficit for 2004 and preceding years reduces fee updates for multiple years. <4. Alternatives for Updating Physician Fees Would Eliminate Spending Targets or Revise Current SGR System> The projected sustained period of declining physician fees and the potential for beneficiaries access to physician services to be disrupted have heightened interest in alternatives to the current SGR system. In general, potential alternatives cluster around two approaches. One approach would end the use of spending targets as a method for updating physician fees and encouraging fiscal discipline. The other approach would retain spending targets but modify the current SGR system to address perceived shortcomings. These modifications include such options as removing the prescription drug expenditures that are currently counted in the SGR system; resetting the targets and not requiring the system to recoup previous excess spending; and raising the allowance for increased spending due to volume and intensity growth. The Part B premium amount is adjusted each year so that expected premium revenues equal 25 percent of expected Part B spending, and beneficiaries must pay coinsurance, usually 20 percent, for most Part B services; alternatives that increase Medicare spending for physician services therefore raise costs for beneficiaries as well as for the program. Thus, while Medicare must pay physicians appropriately, it is important to consider how modifications or alterations to the SGR system would affect the long-term sustainability and affordability of the Medicare program. <4.1. Eliminate Spending Targets, Base Fee Updates on Physician Cost Increases> One approach, recommended by the Medicare Payment Advisory Commission (MedPAC) in its annual reports to the Congress (see Medicare Payment Advisory Commission, Report to the Congress: Medicare Payment Policy, Washington, D.C.: March 2001, 2002, 2003, and 2004), would eliminate spending targets and base annual fee updates on the estimated change in the cost of providing physician services, as measured by MEI; see GAO-05-85 for more information about these alternatives. MedPAC suggested that other adjustments to the update might be necessary, for example, to ensure overall payment adequacy, correct for previous MEI forecast errors, and to address other factors. According to CMS OACT simulations, basing updates on MEI would likely produce fee updates that ranged from 2.1 percent to 2.4 percent over the period from 2006 through 2014. (See table 1.) However, Medicare spending for physician services would rise, resulting in cumulative expenditures that are 22 percent greater over a 10-year period than under current law, based on CMS OACT estimates. Although MedPAC s recommended update approach would limit annual increases in the price Medicare pays for each service, the approach does not contain an explicit mechanism for constraining aggregate spending resulting from increases in the volume and intensity of services physicians provide.
In 2004 testimony, MedPAC stated that fee updates for physician services should not be automatic, but should be informed by changes in beneficiaries access to services, the quality of services provided, the appropriateness of cost increases, and other factors, similar to those that MedPAC takes into account when considering updates for other providers. <4.2. Retain Spending Targets, Modify Current SGR System> Another approach for addressing the perceived shortcomings of the current SGR system would retain spending targets but modify one or more elements of the system. The key distinction of this approach, in contrast to basing updates on MEI, is that fiscal controls designed to moderate spending would continue to be integral to the system used to update fees. Although spending for physician services would likely also rise under this approach, the advantage of retaining spending targets is that the fee update system would automatically work to moderate spending if volume and intensity growth began to increase above allowable rates. The SGR system could be modified in a number of ways: for example, by raising the allowance for increased spending due to volume and intensity growth; resetting the base for the spending targets and not requiring the system to recoup previous excess spending; or removing the prescription drug expenditures that are currently counted in the SGR system. <4.2.1. Increase Allowance for Volume and Intensity Growth> The current SGR system s allowance for volume and intensity growth could be increased, through congressional action, by some factor above the percentage change in real GDP per capita. As stated earlier, the current SGR system s allowance for volume and intensity growth is approximately 2.3 percent per year (the 10-year moving average in real GDP per capita), while CMS OACT projected that volume and intensity growth would be more than 3 percent per year. To offset the increased spending associated with the higher volume and intensity growth, the SGR system will reduce updates below the increase in MEI. According to CMS OACT simulations, increasing the allowance for volume and intensity growth to GDP plus 1 percentage point would likely produce positive fee updates beginning in 2012, 2 years earlier than is projected under current law. Because fee updates would be on average greater than under current law during the 10-year period from 2005 through 2014, Medicare spending for physician services would rise. CMS OACT estimated that cumulative expenditures over the 10-year period would be 4 percent greater than under current law. (See table 1.) <4.2.2. Reset Spending Base for Future SGR System Targets> In 2002, we testified that physician spending targets and fees may need to be adjusted periodically as health needs change, technology improves, or health care markets evolve. Such adjustments could involve specifying a new base year from which to set future targets. Currently, the SGR system uses spending from 1996, trended forward by the sustainable growth rate computed for each year, to determine allowable spending. MMA avoided fee declines in 2004 and in 2005 by stipulating a minimum update of 1.5 percent in each of those 2 years, but the law did not similarly adjust the spending targets to account for the additional spending that would result from the minimum update. Consequently, under the SGR system the additional MMA spending and other accumulated excess spending will have to be recouped through fee reductions beginning in 2006.
If the resulting negative fee updates are considered inappropriately low, one solution would be, through congressional action, to use actual spending from a recent year as a basis for setting future SGR system targets and to forgive the accumulated excess spending attributable to MMA and other factors. The effect of this action would be to increase future updates and, as with other alternatives presented here, overall spending. According to CMS OACT simulations, forgiving the accumulated excess spending as of 2005 (that is, resetting the cumulative spending target so that it equals cumulative actual spending) would raise fees in 2006. However, because volume and intensity growth is projected to exceed the SGR system s allowance for such growth, negative updates would return beginning in 2008 and continue through 2013. Resulting cumulative spending over the 10-year period from 2005 through 2014 would be 13 percent higher than is projected under current law. (See table 1.) <4.2.3. Remove Prescription Drugs from the SGR System> The Secretary of HHS could, under current authority, consider excluding Part B drugs from the definition of services furnished incident to physician services for purposes of the SGR system. Expenditures for these drugs have been growing rapidly, which, in turn, has put downward pressure on the fees paid to Medicare physicians. However, according to CMS OACT simulations, removing Part B drugs from the SGR system beginning in 2005 would not prevent several years of fee declines and would not decrease the volatility in the updates. Fees would decline by about 5 percent per year from 2006 through 2010. There would be positive updates beginning in 2011, 3 years earlier than is projected under current law. (See table 1.) CMS OACT estimated that removing Part B drugs from the SGR system would result in cumulative spending over the 10-year period from 2005 through 2014 that is 5 percent higher than is projected under current law. <4.2.4. Combine Multiple Spending Target Modifications> Together, Congress and CMS could implement several modifications to the SGR system, for example, by increasing the allowance for volume and intensity growth to GDP plus 1 percentage point, resetting the spending base for future SGR targets, and removing prescription drugs. According to CMS OACT simulations, this combination of options would result in positive updates ranging from 2.2 percent to 2.8 percent for the 2006 through 2014 period. CMS OACT projected that the combined options would increase aggregate spending by 23 percent over the 10-year period. (See table 1.) <5. Concluding Observations> Medicare faces the challenge of moderating the growth in spending for physician services while ensuring that physicians are paid fairly so that beneficiaries have appropriate access to their services. Concerns have been raised that access to physician services could eventually be compromised if the SGR system is left unchanged and the projected fee cuts become a reality. These concerns have prompted policymakers to consider two broad approaches for updating physician fees. The first approach, eliminating targets, emphasizes fee stability, while the second approach, retaining and modifying targets, includes an automatic fiscal brake. Either of the two approaches could be implemented in a way that would likely generate positive fee updates, and each could be accompanied by separate, focused efforts to moderate volume and intensity growth.
Because multiple years of projected 5 percent fee cuts are incorporated in Medicare s budgeting baseline, almost any change to the SGR system is likely to increase program spending above the baseline. As policymakers consider options for updating physician fees, it is important to be mindful of the serious financial challenges facing Medicare and the need to design policies that help ensure the long-term sustainability and affordability of the program. We look forward to working with the Subcommittee and others in Congress as policymakers seek to moderate program spending growth while ensuring appropriate physician payments. Madam Chairman, this concludes my prepared statement. I will be happy to answer questions you or the other Subcommittee Members may have. <6. Contact and Acknowledgments> For further information regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101. James Cosgrove, Jessica Farb, Hannah Fein, and Jennifer Podulka contributed to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Concerns were raised about the system Medicare uses to determine annual changes to physician fees--the sustainable growth rate (SGR) system--when it reduced physician fees by almost 5 percent in 2002. Subsequent administrative and legislative actions modified or overrode the SGR system to avert fee declines in 2003, 2004, and 2005. However, projected fee reductions for 2006 to 2012 have raised new concerns about the SGR system. Policymakers question the appropriateness of the SGR system for updating physician fees and its effect on physicians' continued participation in the Medicare program if fees are permitted to decline. At the same time, there are concerns about the impact of increased spending on the long-term fiscal sustainability of Medicare. GAO was asked to discuss the SGR system. Specifically, this statement addresses the following: (1) how the SGR system is designed to moderate the growth in spending for physician services, (2) why physician fees are projected to decline under the SGR system, and (3) options for revising or replacing the SGR system and their implications for physician fee updates and Medicare spending. This statement is based on GAO's most recent report on the SGR system, Medicare Physician Payments: Concerns about Spending Target System Prompt Interest in Considering Reforms (GAO-05-85). What GAO Found To moderate Medicare spending for physician services, the SGR system sets spending targets and adjusts physician fees based on the extent to which actual spending aligns with specified targets. If growth in the number of services provided to each beneficiary--referred to as volume--and in the average complexity and costliness of services--referred to as intensity--is high enough, spending will exceed the SGR target. While the SGR system allows for some volume and intensity spending growth, this allowance is limited. If such growth exceeds the average growth in the national economy, as measured by the gross domestic product per capita, fee updates are set lower than inflation in the cost of operating a medical practice. A large gap between spending and the target may result in fee reductions. There are two principal reasons why physician fees are projected to decline under the SGR system beginning in 2006. One problem is that projected volume and intensity spending growth exceeds the SGR allowance for such growth. Second, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) increased the update for 2004 and 2005--thus increasing spending--but did not raise the spending targets for those years. The SGR system, which is designed to keep spending in line with its targets, must reduce fees beginning in 2006 to offset excess spending attributable to both volume and intensity growth and the MMA provision. In general, proposals to reform Medicare's method for updating physician fees would either (1) eliminate spending targets and establish new considerations for the annual fee updates or (2) retain spending targets, but modify certain aspects of the current system. The first approach emphasizes stable and positive fee updates, while the second approach automatically applies financial brakes whenever spending for physician services exceeds predefined spending targets. Either approach could be complemented by focused efforts to moderate volume and intensity growth directly. 
As policymakers consider options for updating physician fees, it is important to be mindful of the serious financial challenges facing Medicare and the need to design policies that help ensure the long-term sustainability and affordability of the program.
<1. Background> Under DERP, DOD is required to carry out a program of environmental restoration activities at sites located on former and active defense installations that were contaminated while under DOD s jurisdiction. The goals of the program include the identification, investigation, research and development, and cleanup of contamination from hazardous substances, pollutants, and contaminants; the correction of other environmental damage (such as detection and disposal of unexploded ordnance) which creates an imminent and substantial endangerment to public health or welfare or the environment; and demolition and removal of unsafe buildings and structures. To that end, DOD has established performance measures and goals and identified over 31,600 sites that are eligible for cleanup, including about 4,700 FUDS, 21,500 sites on active installations, and 5,400 sites on installations that have been closed or are designated to be closed or realigned under the Base Realignment and Closure (BRAC) process. The DERP was established by section 211 of the Superfund Amendments and Reauthorization Act of 1986 which amended the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980. In implementing the DERP, DOD is required to carry out its activities addressing hazardous substances, pollutants, or contaminants in a manner consistent with section 120 of CERCLA. Funding for DERP cleanup activities comes from the Environmental Restoration and BRAC accounts. The Environmental Restoration accounts fund cleanup activities at FUDS and active sites and the BRAC accounts fund cleanup activities at certain closing or realigning installations. To carry out the DERP at FUDS, DOD has established three major program categories: the Installation Restoration Program, the Military Munitions Response Program, and the Building Demolition/Debris Removal Program. Specifically: Installation Restoration Program (IRP). DOD established the IRP in 1985 to address the release of hazardous substances, pollutants, or contaminants resulting from past practices that pose environmental health and safety risks on both active sites and FUDS. For FUDS, the IRP includes (1) hazardous, toxic, and radioactive waste (HTRW) sites and (2) containerized hazardous, toxic, and radioactive waste (CON/HTRW) sites, such as sites with transformers and aboveground or underground storage tanks. In FY 2008, 2,621 FUDS were included in the IRP. DOD has developed performance measures to assess progress toward the agency s IRP goals. These goals are based on the achievement of certain CERCLA cleanup phases and include progress toward achieving DOD milestones of remedy in place and/or response complete at installations, and progress in reducing overall risks. Specific IRP targets are included in DOD s annual report to the Congress. Military Munitions Response Program (MMRP). DOD established the MMRP in September 2001 as a separate program to focus on addressing potential explosive and environmental hazards associated with munitions sites on both active installations and FUDS, due to the unique issues associated with munitions sites. The MMRP includes sites with munitions and explosives of concern, munitions constituents, and chemical warfare materiel. In FY 2008, 1,661 FUDS were included in the MMRP. 
The objectives of the program include compiling a comprehensive inventory of military munitions sites, establishing a prioritization protocol for cleanup work at these sites, and establishing program goals and performance measures to evaluate progress. In December 2001, shortly after DOD established the program, the Congress passed the National Defense Authorization Act for FY 2002, which, among other things, required DOD to develop, by May 31, 2003, an initial inventory of defense sites, other than military ranges still in operation, that are known or suspected to contain military munitions and to provide annual updates thereafter, among other requirements. DOD provides these updates as part of its Defense Environmental Programs Annual Report to Congress. Building Demolition/Debris Removal Program (BD/DR). To address the demolition and removal of unsafe buildings or structures, DOD established the BD/DR Program. In FY 2008, the Corps had 423 FUDS in the BD/DR program. Because of the small number of FUDS in the BD/DR Program, DOD measures and reports cleanup progress at BD/DR sites with the IRP program. Figure 1 shows these three program categories and the types of cleanup projects within each category at FUDS. A FUDS property may have multiple types of cleanup projects, which we refer to as sites. For example, a single FUDS property could have a munitions site; a building demolition/debris removal site; and a hazardous, toxic, and radioactive waste site. DOD is responsible for cleaning up its releases of hazardous substances under DERP, in accordance with CERCLA. The remedy chosen for such a release must meet certain standards for contaminants set under state or federal laws or regulations. If there is no standard for a given contaminant, DOD must still achieve a degree of cleanup, which at a minimum, assures protection of human health and the environment. Thus, the absence of a federal or state standard for the cleanup of a particular hazardous substance does not negate DOD s responsibility to clean up releases of that substance. Currently, all seven of the Corps geographic military divisions and 14 of its 45 districts within these divisions have responsibilities for identifying, investigating, and cleaning up hazards at FUDS. The Corps Environmental and Munitions Center of Expertise provides specialized technical assistance to help the Corps divisions and districts execute their responsibilities. The Corps FUDS program policy follows DERP management guidance, provides specific policy and guidance for managing and executing the FUDS program, and applies to all Corps elements engaged in FUDS program activities. Depending on the types of hazards involved and their severity, either a state environmental regulatory agency or EPA is the lead regulator at a FUDS. The lead regulator is responsible for providing regulatory oversight of the Corp s actions to clean up FUDS. In general, EPA is the lead regulator for all sites, including FUDS properties, on EPA s list of some of the most contaminated sites in the country the National Priorities List. Most FUDS are not on the National Priorities List, and states are typically the lead regulators for these FUDS properties. To be eligible for FUDS cleanup under the DERP and FUDS program policy, a property must have been under the jurisdiction of DOD and owned by, leased to, or otherwise possessed by the United States at the time of actions leading to contamination by hazardous substances or other hazards prior to October 17, 1986. 
In deciding which actions, if any, need to be taken at FUDS, the Corps uses the process outlined in the National Oil and Hazardous Substance Pollution Contingency Plan (NCP) for identifying, investigating, and cleaning up releases of hazardous substances under CERCLA. The Corps describes its usual process in the following, generally sequential, phases: Preliminary assessment. The Corps uses available information, including a search of historical records, to determine whether the property was ever under the jurisdiction of DOD and owned or controlled by the United States, and if hazards caused by DOD s use may be present. If the Corps determines that the property was under the jurisdiction of DOD and owned or controlled by the United States, but does not find evidence of any hazards caused by DOD, it designates the property as no DOD action indicated. If, however, the Corps determines that hazards caused by DOD prior to October 17, 1986, may be present, the Corps begins further study. Site inspection. The Corps inspects the site to confirm the presence and possible sources of hazards; to confirm that a release has occurred; or to eliminate from further consideration those sites that pose no significant threat to public health or the environment. The site inspection builds upon the preliminary assessment and involves sampling to determine the nature of contamination, potential pathways of exposure, and recommendations for further action. Remedial investigation. The Corps conducts more rigorous sampling and analysis to determine the nature and extent of the release, evaluates the baseline risk to human health and the environment posed by the release, and determines if further response action is required to respond to an unacceptable risk. Feasibility study. The Corps analyzes the feasibility of alternative remedies to respond to the release using the CERCLA remedy selection criteria and establishes the cleanup criteria for the remedial action. Proposed plan. The Corps proposes to the public and the lead regulator its recommendation for a remedial action to respond to the release and explains how it will satisfy the remedy selection criteria of CERCLA and the NCP. Remedy selection. The Corps issues a record of decision or decision document signed by an authorized agency official to formally select the remedial action to be taken to respond to the release and explains the elements of the remedy and the basis for its selection using the remedy selection criteria of CERCLA and the NCP. Remedial design. The Corps designs the remedy selected by the feasibility study. Remedial action construction. The Corps constructs the selected remedy. At the end of construction, DOD s remedy-in-place milestone is met when testing of the remedy shows that it will function as designed. Remedial action operation. The Corps operates the selected remedy until the cleanup objective is achieved. At the end of operation, DOD s response complete milestone is met. Long-term management. The Corps may conduct ongoing environmental management for a number of years to ensure that the remedy continues to provide the protection it was designed to achieve for human health, safety, and the environment. Examples of long-term management activities are monitoring of a groundwater treatment system, maintenance of a landfill cap, and enforcement of land use controls.
In addition, the NCP requires that the Corps, as the agency responsible for FUDS cleanup, conduct 5- year reviews of sites not less than every 5 years after the start of remedial action, when the chosen remedy does not allow for unlimited use and unrestricted exposure. The Corps continues long-term management activities until they are no longer required. DOD has also established a three-tiered process for identifying and evaluating changes in the information about emerging contaminants or how these contaminants are regulated that may affect DOD s actions or decisions in several areas, including cleanup of contaminated sites. DOD s Chemical and Material Risk Management Directorate manages this process, called scan-watch-action, and has developed watch and action lists of emerging contaminants (see table 1). The watch list identifies chemicals for which there is a potential for a regulatory change that may affect DOD and the action list includes chemicals for which there is significant potential for regulatory change that may affect DOD. <2. The Corps Uses the CERCLA Process to Address Emerging Contaminants at FUDS, but Has Gaps in its 5-Year Review Procedures> According to DOD and Corps officials, the Corps addresses emerging contaminants at FUDS the same way it does other contaminants by using the established CERCLA process. However, using this process has not often led the Corps to re-examine sites after response actions are completed to determine whether emerging contaminants are present or need to be addressed. Further, our analysis of information on the 5-year reviews completed in 4 divisions identified problems with the Corps 5- year review procedures. We found that (1) reviews were not completed on time; (2) DOD and the Corps do not have accurate, complete information on how many 5-year reviews are required, completed, or planned; (3) divisions are inconsistent in their approaches to conducting 5-year reviews for sites where they are recommended, but not required; and (4) review reports did not always receive the technical review by Corps experts required by Corps policy. <2.1. The Corps Uses the CERCLA Process for Investigating and Responding to Emerging Contaminants> The Corps identifies and addresses emerging contaminants at FUDS, as it does other contaminants using the CERCLA process for identifying, investigating, and cleaning up releases of hazardous substances that is outlined in the NCP. Corps officials told us that in their initial evaluations of FUDS under CERCLA, the agency tested for most known emerging contaminants at those sites where there was a reason to suspect these contaminants were present and caused by DOD. They also told us that the Corps would sample a site for a contaminant, if appropriate, regardless of whether there is a federal or state standard for it. Appendix III provides information on the occurrence of emerging contaminants in groundwater, surface water, soil, and sediment at HTRW FUDS, based on the samples taken as of September 30, 2008. To make informed decisions on which contaminants to sample for at a site, the Corps districts review records of DOD s past use of the site. In 2002, we reported that the Corps lacked comprehensive guidance on the typical hazards that may be present at DOD properties as a result of certain types of DOD activities. However, the Corps subsequently developed guidance, and between 2003 and 2008, issued a series of reports identifying potential chemicals that past military activities may have released. 
District officials told us that they use this guidance, called Common Operations Reports, in conjunction with historical information about a site to determine which contaminants may have been released there. After sampling for contaminants at a site, the Corps uses the sampling results and other site-specific information to assign relative risk levels to sites. These relative risk levels are not based on a comprehensive risk assessment, but are a tool used to prioritize the site for cleanup based on information collected early in the cleanup process. After further defining the nature and extent of contamination at a site, the Corps then uses scientific information on contaminants, sampling results, and other site-specific data, such as information on exposure pathways, potential receptors, and site use, to conduct more comprehensive site-specific risk assessments that are to be used in making cleanup decisions for the site. This requires the Corps to identify and select appropriate contaminant toxicity values to use in assessing risks to human health and the environment. The identification of toxicity values is a crucial step that presents special challenges for emerging contaminants, for which information on human health effects may be insufficient, limited, or evolving. DOD s and EPA s preferred source for the fundamental toxicity information needed to develop human health risk assessments is EPA s Integrated Risk Information System (IRIS), a database that contains EPA s scientific position on the potential human health effects of exposure to more than 540 chemicals. However, IRIS does not contain final assessments for some emerging contaminants. For example:
naphthalene, a component of jet fuel that has contaminated many military sites;
trichloroethylene (TCE), a solvent widely used in industrial and manufacturing settings;
cyclotrimethylenetrinitramine, which is also known as Royal Demolition Explosive, a highly powerful explosive used by the U.S. military in thousands of munitions;
dioxin, a chemical that is often the byproduct of combustion and other industrial processes; and
tetrachloroethylene, which is also known as perchloroethylene, a manufactured chemical widely used for dry cleaning fabrics, metal degreasing, and production of some consumer products and other chemicals.
DOD worked with EPA and the Environmental Council of States (ECOS) to develop a white paper in 2007 on the process to use for identifying and selecting toxicity values when there are no values available from IRIS. ECOS endorsed the white paper by a formal resolution of the member states. DOD formalized the process outlined in this paper with a June 2009 policy on emerging contaminants. Using the site-specific risk assessments, the Corps develops site-specific, risk-based cleanup levels for contaminants at FUDS. Under CERCLA, the Corps must choose a cleanup alternative that, at a minimum, assures protection of human health and the environment. In developing a protective remedy, the Corps considers generally acceptable risk ranges and must choose a remedy that will comply with the applicable or relevant and appropriate requirements (ARAR) that have been identified for the site. The Corps and most FUDS state regulators identify ARARs based on site-specific factors such as the contaminants present, site location and physical features, and response actions being considered. Federal or state standards are not automatically applied to a site; they must first be identified as ARARs for the site. 
ARARs, which are used as a starting point to assess the protectiveness of a remedy, consist of the following two sets of requirements:
Applicable requirements are cleanup standards; standards of control; and other substantive requirements, criteria, or limitations promulgated under federal environmental or state environmental or facility siting laws that specifically address a hazardous substance, pollutant, contaminant, remedial action, location, or other circumstance found at a CERCLA site. Only those state standards that are identified by a state in a timely manner and that are more stringent than federal requirements may be applicable.
Relevant and appropriate requirements are requirements that do not meet this definition of applicable, but address situations sufficiently similar to those encountered at a CERCLA site that their use is well suited to the particular site. State standards must be identified in a timely manner and be more stringent than federal requirements to be considered relevant and appropriate.
If there are no ARARs for contaminants at a site, cleanup levels for the contaminants are established based on the Corps site-specific risk assessments. In addition, as part of the ARAR identification process, the Corps also identifies other information that may be considered under CERCLA in establishing cleanup levels. This information is not legally binding and can include nonpromulgated guidelines, advisories, or guidance issued by states or the federal government, for example, drinking water health advisories issued by EPA. After the Corps selects a remedy under CERCLA, if the remedy results in any hazardous substances, pollutants, or contaminants remaining at the site, the Corps must review the remedy no less often than each 5 years after the remedial action was initiated to assure that the remedy is protecting human health and the environment. EPA, the primary regulatory agency for CERCLA, interprets the 5-year review requirement to apply when the remedy for a site will not clean up the site to a level that allows for unlimited use and unrestricted exposure. In addition, EPA guidance notes that 5-year reviews are appropriate, even if not required, for sites where the cleanup will eventually allow unlimited use and unrestricted exposure, but will require more than 5 years to complete. EPA and Corps guidance recommend that 5-year review reports include, among other things, a history of the site, a description of the response actions, a summary of the review process, and certain analysis. This analysis should identify whether (1) the response action (for example, a groundwater treatment system to reduce contaminant concentrations, or land use controls to prevent access to a site) is functioning as intended; (2) assumptions used at the time of selecting the response action, such as exposure assumptions, toxicity of contaminants, and cleanup levels, are still valid; and (3) any new information, such as changes in the use and accessibility of the site, indicates that the response action may no longer be protective of human health, safety, and the environment. In addition, although CERCLA does not require the collection of new samples to determine the presence of additional contaminants during the 5-year review, the review provides a mechanism to consider available evidence of new contamination that is brought to the attention of the districts. 
In this regard, EPA guidance instructs those conducting 5-year reviews to consider whether new contaminants have been identified when evaluating the continuing validity of the assumptions used at the time of remedy selection. Finally, EPA and Corps guidance recommend that the 5-year review reports include recommendations for follow-up actions, if necessary, to address identified deficiencies. The Corps may need to modify the cleanup actions at a site if the 5-year review identifies significant changes in contaminant or site information that call into question the protectiveness of the remedy, as determined by a comparison of site-specific risks with the generally acceptable risk ranges. In addition to using the CERCLA process, the Corps also uses EPA, DOD, and Army policies or guidance specific to certain contaminants or issues, including perchlorate and TCE, emerging contaminants that are of particular concern to DOD because they have significant potential to affect people or DOD s mission. Appendix IV provides more information on these contaminants at FUDS. <2.2. The Corps Considers New Information on Contaminants in its 5-Year Reviews, but Has Problems with its Review Procedures> Our analysis of information on the Corps 5-year reviews for FUDS in 4 divisions identified the following problems with the Corps review process: (1) the reviews were not completed on time; (2) DOD and the Corps lack accurate, complete information on these reviews; (3) Corps divisions are inconsistent in their approaches to conducting 5-year reviews for sites where they are recommended, but not required; and (4) the reports resulting from these reviews did not always receive the technical review by Corps experts as required by Corps policy. These 5-year reviews can be conducted in the remedial action construction, remedial action operation, and long-term management phases. These reviews provide a mechanism for identifying and responding to changes that may occur, such as new scientific knowledge, regulation of emerging contaminants, or the discovery of additional munitions at a site. Corps officials told us that, to date, few FUDS have required 5-year reviews, due to a variety of factors. For example:
The Corps strives to clean up FUDS to a level that allows unlimited use and unrestricted exposure, which does not require a 5-year review. Preventing exposure to contaminants left in place can be more difficult at FUDS properties than at active DOD installations because DOD no longer owns or controls FUDS properties and does not have the same ability to restrict land use.
The Corps has completed cleanup at a higher percentage of building demolition/debris removal sites and containerized waste sites than hazardous, toxic, and radioactive sites. This is because, while cleanup is under way at hazardous, toxic, and radioactive sites, these sites are typically much more complex than building demolition/debris removal sites and containerized waste sites, and significantly more time and investment is required to complete cleanup.
Building demolition/debris removal sites do not generally require 5-year reviews because these types of sites involve unsafe buildings or structures and generally not the hazardous substances, pollutants, or contaminants to which CERCLA applies.
Containerized waste sites can require 5-year reviews, but Corps officials in 2 divisions told us that most of these sites were cleaned up to a level that did not require 5-year reviews. 
In addition, one division told us that most of the containerized waste site cleanups completed by the Corps to date were for petroleum storage tanks. Petroleum is not a hazardous substance, pollutant, or contaminant under CERCLA, so 5-year reviews are not required for actions to address petroleum contamination at FUDS. In some divisions and for certain types of sites, the Corps has not yet reached the point at which 5-year reviews become required. For example, one division official told us that they have only recently begun completing cleanup of HTRW sites, and another division official said that most of the sites that might need a 5-year review have not yet been cleaned up. Further, many munitions sites have not yet reached the trigger date for 5- year reviews the initiation of remedial actions because they are still in the investigation phase. Corps guidance states that all FUDS where an ordnance and explosives response action is implemented require 5-year reviews. Corps officials told us that the districts will be conducting more 5-year reviews in the future. For example, officials in the 4 districts we contacted told us that they will be responsible for completing a total of 30 5-year reviews from FY 2009 through 2014. As of May 2009, these districts had completed a total of 15 5-year reviews 5 for IRP sites and 10 for MMRP sites. However, our examination of information on these 15 reviews indicated that these districts have not consistently implemented the 5-year review process in accordance with CERCLA or Corps and EPA guidance. We found that: 5-year reviews were not always completed on time. For example, all of the five 5-year reviews conducted for IRP sites and at least five of the reviews for MMRP sites were completed late, with the reports being late by 3 months to 9 years. In addition, for three of the five IRP sites for which 5-year reviews had been completed, we determined that the Corps incorrectly identified the trigger dates for initial 5-year reviews as the completion, rather than initiation, of the remedial action. We also found that one additional review for an IRP site is already overdue by more than 3 years, and at least 3 additional reviews for MMRP sites are overdue by 1 to 5 years. Corps officials cited a variety of reasons for these delays, including turnover of program and project managers; lack of internal staffing resources for conducting the reviews; and multiple report iterations resulting from lengthy internal and external reviews involving Corps staff, EPA headquarters and regional offices, and state regulators. Officials also cited program budget and resource constraints, with one district highlighting a higher programmatic emphasis on meeting DOD s goal of completing site inspections for MMRP sites by September 30, 2010. In addition, the delay for the 5-year review for an IRP site at the Former Weldon Spring Ordnance Works in Missouri resulted from the discovery of additional contamination during the Corps activities to close the site. DOD and the Corps do not have accurate, complete information on how many 5-year reviews are required, completed, or planned for the FUDS program. DOD and the Corps rely on data from FUDSMIS for program- wide information on the status of 5-year reviews at FUDS. To manage and implement the FUDS program, DOD, the Army, and the Corps use FUDSMIS to support planning, programming, budgeting, annual workplan development, execution, and reporting requirements for the FUDS program. 
This system includes data fields to indicate whether a 5-year review is required at a site, record the actual date the 5-year review was completed, and record the scheduled date of subsequent 5-year reviews, among other things. Moreover, DOD uses these data from FUDSMIS to provide information on the status of 5-year reviews at each FUDS property in its Defense Environmental Programs Annual Report to Congress. Corps policy requires the districts to enter this information in the available data fields in FUDSMIS, and the divisions also have responsibility for ensuring that the data are accurate and complete. In addition, the accuracy and completeness of FUDSMIS data on 5-year review planning is part of one of the FUDS Program Management Indicators established by the Corps to evaluate divisions and districts performance and to measure and demonstrate progress toward cleaning up contamination at FUDS. However, we found that the three divisions we spoke with did not consistently track 5-year reviews in FUDSMIS. Of these three divisions, officials at one division were not aware of the data fields in FUDSMIS related to 5-year reviews. Officials at the remaining two divisions told us that neither they nor their districts enter the required information on 5-year reviews because there is no way for them to later retrieve that information in a way that would be useful to them in managing their work, such as a report listing sites that require such reviews. In addition, they noted that improvements to the system are needed, particularly to (1) alert them to upcoming reviews they need to conduct, (2) better track the dates that trigger the reviews, and (3) enable the generation of reports on 5-year reviews. Divisions are inconsistent in their approaches to conducting 5-year reviews for sites where they are recommended but not required. Officials in only one division told us that their districts would conduct 5-year reviews for sites where the cleanup will eventually allow unlimited use and unrestricted exposure, but where the cleanup will require more than 5 years to complete, that is, sites for which 5-year reviews are recommended by EPA but not required. Conducting reviews under these circumstances may be important for sites where cleanup may require many years, during which information on emerging contaminants may evolve. For example, the Corps estimates that cleanup of TCE in groundwater at the former Nebraska Ordnance Plant will take more than 100 years. According to division officials, the district managing cleanup of this FUDS is in the process of finalizing the site s first IRP 5-year review, which will indicate that the remedy remains protective of human health and the environment. The 5-year review will recommend an evaluation of a newly identified exposure pathway, the intrusion of TCE vapors from the subsurface into buildings, at a limited portion of the site that is currently residential. In addition, the review discusses changes in toxicity data for carcinogenic and noncarcinogenic effects of TCE and evaluates the potential long-term impacts of this exposure pathway on the protectiveness of the remedy. Conducting 5-year reviews at this site in the future, although not required under CERCLA and the NCP, will allow the Corps the opportunity to periodically identify whether information on TCE has changed to a degree that may affect the protectiveness of the remedy. 
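The kind of tracking and reporting capability the division officials described as missing can be illustrated with a small sketch. The 5-year interval and the rule that the first review is keyed to the initiation, not the completion, of the remedial action come from the discussion above; the record fields, function names, and example sites and dates are hypothetical and do not reflect FUDSMIS s actual design.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import date
from typing import Optional

REVIEW_INTERVAL_YEARS = 5  # reviews no less often than every 5 years

@dataclass
class SiteReviewRecord:
    """Hypothetical per-site fields; FUDSMIS's actual schema is not described in detail here."""
    site_name: str
    review_required: bool
    remedial_action_initiated: date          # trigger date for the first 5-year review
    last_review_completed: Optional[date] = None

    def next_review_due(self) -> Optional[date]:
        if not self.review_required:
            return None
        # The clock starts at initiation (not completion) of the remedial action;
        # later reviews are keyed to the most recently completed review.
        anchor = self.last_review_completed or self.remedial_action_initiated
        try:
            return anchor.replace(year=anchor.year + REVIEW_INTERVAL_YEARS)
        except ValueError:  # anchor fell on February 29
            return anchor.replace(year=anchor.year + REVIEW_INTERVAL_YEARS, day=28)

def review_report(records: list[SiteReviewRecord], as_of: date) -> list[str]:
    """Produce the kind of overdue/upcoming listing the divisions said they could not generate."""
    lines = []
    for record in records:
        due = record.next_review_due()
        if due is None:
            continue
        status = "OVERDUE" if due < as_of else "upcoming"
        lines.append(f"{record.site_name}: next 5-year review due {due} ({status})")
    return sorted(lines)

sites = [
    SiteReviewRecord("Example FUDS A", True, date(2001, 6, 1)),                      # never reviewed
    SiteReviewRecord("Example FUDS B", True, date(2003, 3, 15), date(2008, 3, 15)),  # reviewed once
]
print("\n".join(review_report(sites, as_of=date(2009, 10, 1))))
```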
5-year review reports did not always receive the required technical review by Corps experts. Since 1999 and 2004, Corps policy has required districts to provide their 5-year review reports for MMRP sites and IRP sites, respectively, to the Corps Center of Expertise for comment. According to Corps officials, the Center of Expertise s staff has up-to-date technical expertise that enables them to help the districts identify and respond to potentially important changes that may occur with respect to emerging contaminants, for example, changes in contaminant standards, toxicity information, and exposure pathways, such as vapor intrusion. When conducting their technical review of the districts 5-year review reports, specialists at the Center of Expertise are to assess whether the reports are well-documented, follow relevant guidance, such as EPA s 2001 guidance on conducting 5-year reviews, and include the necessary statement on the protectiveness of the response actions. They also are to evaluate how the reports address whether the assumptions used at the time of selecting the response action, such as exposure assumptions, toxicity of contaminants, and cleanup levels, are still valid. Nine of the ten 5-year reviews completed for MMRP sites were completed after the Corps 1999 policy requiring submission of review reports was issued; however, we found that the districts in one division did not submit three of their five reports to the Center of Expertise for technical review and comment. In addition, four of the five 5-year reviews completed for IRP sites were completed after the Corps 2004 policy was issued, but we found that one of these four reports was not submitted for technical review and comment. <2.3. The Corps Infrequently Re-examines FUDS Outside of the 5-Year Review Process> The Corps has reevaluated some sites outside the 5-year review process, but these reevaluations have been infrequent, and officials told us that they have generally not been for the purpose of addressing emerging contaminants. Outside the 5-year review process, Corps districts generally only reevaluate sites at the request of state regulators or other stakeholders. For eligible FUDS properties that were previously determined to have no FUDS eligible projects or no further action required, Corps policy allows districts to re-examine up to five FUDS properties per state per year upon request by states, tribes, EPA, or other stakeholders. In responding to such requests, the Corps may review the records or other original information for the property, as well as any additional information provided by EPA, the state, or tribe concerning potential DOD contamination. However, Corps officials told us that they have not received many requests for re-examination, and the requests received by the divisions and districts we visited were generally not for the purpose of addressing emerging contaminants. For example, officials in one district told us that their re-examination of sites has mostly been in response to concerns about petroleum contamination and munitions issues. In addition, in some instances, the Corps reports that it has re-examined certain FUDS on its own initiative. For example, in 2004, the Corps re-examined certain munitions sites to assess the potential for contamination from munitions constituents, particularly lead, an emerging contaminant on DOD s watch list, and other heavy metals. 
Corps officials note that this effort was in response to an Army policy change requiring that munitions constituents be addressed as part of MMRP cleanup projects. The Corps re-examined 196 FUDS munitions sites, many of which were former small arms ranges with no issues relating to munitions or explosives of concern, that had previously been determined to have negligible risk and no need for DOD action. We also found that one of the districts we contacted is in the process of re-examining 513 sites, beginning with records research in 2002. While the Corps has not often re-examined FUDS outside the 5-year review process to date, DOD and Corps officials told us that they would reevaluate the need for additional response actions at sites if there were changes in information on a contaminant. If a hazardous substance release is discovered at a FUDS that was never previously addressed at the site but occurred when the site was under DOD s jurisdiction, DOD is responsible for addressing that release in accordance with the DERP and CERCLA, regardless of whether the Corps has already completed cleanup of other releases at the site. In addition, to fulfill its responsibility in accordance with the DERP and CERCLA, DOD may need to initiate further response actions at a site where it has already addressed some releases and no 5-year review is required, if new information becomes available for a contaminant. For example, new standards may be established for such contaminants, or previously existing standards or toxicity values may be revised. In addition, new exposure pathways may be identified. Until fairly recently, vapor intrusion, the migration of volatile chemicals such as TCE from subsurface media into the indoor air of overlying buildings, was rarely evaluated as part of human health risk assessments and was not well understood. However, given the current inventory of FUDS still requiring cleanup, there may be practical limitations to re-opening sites for further cleanup. In addition, the need for DOD to re-open FUDS to respond to changes in information or standards for contaminants may also depend on agreements reached with EPA or state regulatory agencies. <3. DOD Proposes Funding for Cleanup at FUDS, Active Sites, and BRAC Sites Based on DERP Goals, and Funding Is Proportional to Site Inventories> DOD uses the same method to propose funding for cleanup at FUDS, active sites, and BRAC sites; cleanup funding is based on DERP goals and is generally proportional to the number of sites in each of these categories. Officials in the Military Departments, Defense Agencies, and the FUDS program who are responsible for executing environmental restoration activities formulate cleanup budget proposals for their sites based on instructions provided in DOD s financial management regulation and DERP environmental restoration performance goals. DOD s DERP goals include: reducing risk to human health and the environment; preparing BRAC properties to be environmentally suitable for transfer; having final remedies in place and completing response actions; and fulfilling other established milestones to demonstrate progress toward meeting program performance goals. DERP goals are target dates representing when the current inventory of active and BRAC sites and FUDS is expected to complete the preliminary assessment phase, site inspection phase, or achieve the remedy in place or response complete (RIP/RC) milestone. 
In addition, Congress has required the Secretary of Defense to establish specific performance goals for MMRP sites. A summary of these goals for the IRP and MMRP is shown in table 2. DOD components plan cleanup actions that are required to meet these goals at the installation or site level. DOD requires components to rank their inventory of sites by relative risk to help make informed decisions about which sites to clean up first. Using these risk rankings, as well as other factors, components set more specific restoration targets each fiscal year to demonstrate progress and prepare a budget to achieve those goals and targets. The Department of the Army has established more specific performance goals for FUDS in its Environmental Cleanup Strategic Plan. For example, the Corps goals for the FUDS IRP are to achieve RIP/RC at 46 percent of all 357 high-risk sites containing HTRW by the end of FY 2008, 48 percent of all 147 medium-risk HTRW sites by the end of FY 2011, and all low-risk HTRW sites by the end of FY 2020. For the FUDS MMRP, the Corps goals are to complete 40 percent of the baseline site inspections by the end of FY 2008, 55 percent of the baseline site inspections by the end of FY 2009, 100 percent of the baseline site inspections by the end of FY 2012, and all site inspections by the end of FY 2014. Another factor that can influence the proposed budgets and obligations among site categories is the need to fund long-term management activities. While DOD uses the number of sites achieving RIP/RC status as a primary performance metric, sites that have reached this goal may still require long-term management and, therefore, additional funding for a number of years. Table 3 shows the completion status for active and BRAC sites and FUDS, as of the end of FY 2008. See appendix V for the completion status of these sites by component for FY 2004 through 2008. The data show that there are currently significantly fewer FUDS that require long-term management and, consequently, require less funding for this activity than do active and BRAC sites. Corps officials told us that since FUDS are located on properties that have been transferred outside DOD s control, they prefer to clean sites to allow unlimited use and unrestricted exposure, when possible. However, Corps officials also said that ongoing site inspections at FUDS MMRP sites indicate that more of these sites may require long-term management in the future. DOD data show that, in applying the broad restoration goals, performance goals, and targets, cleanup funding is generally proportional to the number of sites in the active, BRAC, and FUDS site categories. Table 4 shows the total DERP inventory of sites, obligations, and proportions for FY 2008. Since DERP was established, approximately $18.4 billion has been obligated for environmental cleanup at individual sites on active military bases, $7.7 billion for cleanup at sites located on installations designated for closure under BRAC, and about $3.7 billion to clean up FUDS sites. During FY 2004 through 2008, about $4.8 billion was spent on environmental cleanup of sites on active bases, $1.8 billion for cleanup at BRAC sites, and $1.1 billion for FUDS sites. Appendix VI provides DOD s funding obligations and estimated costs to complete environmental cleanup by military component and program category for FY 2004 through 2008. 
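As a simple illustration of the obligation figures cited above, the short calculation below derives each category s share of DERP cleanup obligations. The dollar amounts are those stated in the text; the corresponding site-inventory shares appear in table 4 of the report and are not reproduced here.

```python
# Obligation shares implied by the figures cited above, in billions of dollars.
cumulative_obligations = {"active sites": 18.4, "BRAC sites": 7.7, "FUDS": 3.7}   # since DERP began
recent_obligations = {"active sites": 4.8, "BRAC sites": 1.8, "FUDS": 1.1}        # FY 2004 through 2008

for label, obligations in [("since DERP began", cumulative_obligations),
                           ("FY 2004-2008", recent_obligations)]:
    total = sum(obligations.values())
    shares = ", ".join(f"{category} {100 * amount / total:.0f}%"
                       for category, amount in obligations.items())
    print(f"{label}: {shares}")

# since DERP began: active sites 62%, BRAC sites 26%, FUDS 12%
# FY 2004-2008: active sites 62%, BRAC sites 23%, FUDS 14%
```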
<4. The Corps Uses Risk-Based Criteria to Prioritize FUDS for Cleanup, but Also Considers Other Factors> The Corps uses risk-based criteria and other factors to prioritize FUDS for cleanup. Since cleanup projects or phases of work cannot all be completed in any given year with the funding the Corps receives for the FUDS program, the Corps must prioritize sites for cleanup. Based on site risks, as well as other factors, districts prepare annual work plans prioritizing their projects and submit these to their division, which combines the districts work plans into a single division work plan. Divisions send their annual work plans to the Corps headquarters, which sends the FUDS annual work plan to the Department of the Army for approval. While the risk levels of sites are a significant factor in determining cleanup priorities, high-risk sites are not always addressed before low-risk sites. FUDS program data indicate that, as of the end of FY 2008, 35 percent of high-risk MMRP and HTRW sites (218 of 622 sites) had achieved response complete status, compared to 28 percent of medium-risk sites (86 of 303 sites) and 21 percent of low-risk sites (110 of 530 sites). Based on site-specific information, the Corps uses several methods to assign risk-based priority levels to sites to categorize them for cleanup. The method used depends on whether the site contains HTRW, which falls under the IRP; munitions, under the MMRP; or building debris, under the BD/DR program. Consequently, the Corps may use multiple methods at a single FUDS property that contains multiple types of sites, for example, a munitions site and a hazardous waste site. According to DOD guidance, the components also use the same methods to prioritize HTRW and MMRP sites at active and BRAC installations for cleanup. Appendix VII provides information on the number of high-risk sites at FUDS and active and BRAC installations for FY 2004 through 2008. At HTRW sites, the Corps uses the Relative Risk Site Evaluation (RRSE) to assign a relative risk level of high, medium, or low, based on an evaluation of three factors for four environmental media: sediment, surface soil, surface water, and groundwater. These factors include:
the contaminant hazard factor, which compares the maximum concentrations of contaminants detected to benchmark comparison values;
the migration pathway factor, which summarizes the likelihood that contamination will migrate; and
the receptor factor, which summarizes human or ecological receptors that could be exposed to contamination.
At MMRP sites, the Corps uses the Munitions Response Site Prioritization Protocol (MRSPP), which DOD began implementing in FY 2007, to assign sites a relative priority level of 1 (highest hazard) to 8 (lowest hazard) using three modules:
an explosive hazard evaluation module and a chemical warfare materiel hazard evaluation module, which evaluate the presence and accessibility of these hazards and the receptors that may be affected; and
a health hazard evaluation module, which evaluates chronic health and environmental hazards associated with munitions constituents as well as incidental nonmunitions-related contaminants and builds on the framework established in the RRSE.
According to DOD and Corps officials, the Corps is in the process of applying the MRSPP to FUDS, but as of the end of FY 2008, no FUDS had been assigned a final MRSPP score. 
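A minimal sketch of how a relative risk level might be derived from the three RRSE factors and four environmental media described above follows. The factor names, the four media, and the high/medium/low outcome come from the text; the factor ratings, the combination rule, and the aggregation across media are assumptions made for illustration and are not the Corps actual protocol.

```python
from enum import Enum

class Rating(Enum):
    """Illustrative factor ratings; the RRSE's actual rating scheme is not detailed in the text."""
    SIGNIFICANT = 3
    MODERATE = 2
    MINIMAL = 1

def medium_relative_risk(contaminant_hazard: Rating,
                         migration_pathway: Rating,
                         receptor: Rating) -> str:
    """Combine the three factors for one environmental medium into high/medium/low.

    The combination rule (high only when all three factors are significant, low when
    all are minimal) is an assumed simplification, not the Corps' actual protocol.
    """
    factors = (contaminant_hazard, migration_pathway, receptor)
    if all(f is Rating.SIGNIFICANT for f in factors):
        return "high"
    if all(f is Rating.MINIMAL for f in factors):
        return "low"
    return "medium"

def site_relative_risk(evaluations: dict) -> str:
    """Aggregate across the four media by taking the highest level (assumed rule)."""
    order = {"low": 0, "medium": 1, "high": 2}
    return max((medium_relative_risk(*factors) for factors in evaluations.values()),
               key=order.get)

# Example: groundwater drives this hypothetical site to a high relative risk level.
site = {
    "groundwater":   (Rating.SIGNIFICANT, Rating.SIGNIFICANT, Rating.SIGNIFICANT),
    "surface water": (Rating.MODERATE, Rating.MINIMAL, Rating.MINIMAL),
    "surface soil":  (Rating.MINIMAL, Rating.MINIMAL, Rating.MINIMAL),
    "sediment":      (Rating.MINIMAL, Rating.MINIMAL, Rating.MINIMAL),
}
print(site_relative_risk(site))  # high
```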
FUDS program data available on the relative risk levels of MMRP sites are based on the method used prior to implementation of the MRSPP: the Risk Assessment Code, which the Corps used to assign munitions sites a risk-based score of 1 (highest priority) to 5 (no DOD action necessary). This method evaluated potential safety hazards associated with explosives based on the severity and probability of the hazard. According to DOD and Corps officials, Risk Assessment Code scores are no longer used and are being replaced by the relative priority assigned by the MRSPP. The Corps assigns a risk-based priority level to containerized waste sites, based on the condition and location of the storage tanks. For example, a site with known leaks or spills would have a priority of 1, while sites with tanks that are not leaking and located in urban and rural areas would have priority levels of 2 and 3, respectively. The Corps assigns a risk-based priority level to building demolition and debris removal sites, based on the location of the site and ease of access to the site. For example, a site in an urban or densely populated area and with unrestricted access would have a priority of 1, while a site in a rural area or remote island with a guarded entrance would have a priority of 9. In addition, DOD and Corps officials told us that a small number of high-priority FUDS receive separate funding from the Corps headquarters in addition to the funds they receive from the relevant district to guarantee that cleanup at these sites is funded. These are high-risk, high-visibility sites with cleanup costs high enough to consume a district s entire budget. According to DOD and Corps officials, the list of such sites may change from year to year. Table 5 identifies the FUDS that received such funding in FY 2008 and the Corps estimated costs to complete cleanup at these sites. DOD and Corps officials told us that in addition to considering a site s risk level, the Corps sets cleanup priorities based on program goals, which have evolved over time. In the past, according to these officials, the Corps initially focused on addressing building demolition and debris removal sites and containerized waste sites. FUDS program data indicate that 81 percent of building demolition and debris removal sites and 82 percent of containerized waste sites have reached response complete status, compared to 54 percent of HTRW sites and 33 percent of MMRP sites without chemical warfare materiel. DOD officials also said that the Corps has made significant progress in completing cleanup at building demolition and debris removal sites and containerized waste sites because they can be completed quickly and at less cost. In contrast, they told us that hazardous, toxic, and radioactive sites and MMRP sites are generally larger and more complex than building demolition/debris removal sites and containerized waste sites, and require more time and investment to clean up. Table 6 shows the percentage of sites that have achieved response complete status by project category, as of the end of FY 2008. According to DOD and Corps officials, containerized waste and building demolition/debris removal sites are now a low priority for the Corps because it is trying to focus on meeting DOD s goals for the FUDS program and these goals do not measure cleanup of containerized waste and building demolition sites. 
Specifically:
IRP: DOD s goal is to reduce risk, have a remedy in place, or achieve response complete status at high-, medium-, and low-risk sites by the end of FY 2007, 2011, and 2020, respectively. According to DOD s FY 2008 Defense Environmental Programs Annual Report to Congress, DOD did not meet its goal for high relative risk FUDS by the end of FY 2007, but is working aggressively to complete required cleanup actions at these sites, while mitigating potential threats to human health and the environment.
MMRP: The John Warner National Defense Authorization Act for FY 2007 specified the following two goals for adoption by DOD: (1) complete preliminary assessments by the end of FY 2007 and complete site inspections by the end of FY 2010, and (2) complete remedy in place or response complete by a date set by the Secretary of Defense. However, because the Corps centrally funds MMRP site inspections with a budget established by the Corps headquarters and Center of Expertise, other sites do not compete with MMRP site inspections in cleanup prioritization. By the end of FY 2008, DOD had completed preliminary assessments for 99 percent of the FUDS MMRP sites, according to DOD s FY 2008 Defense Environmental Programs Annual Report to Congress, and will reevaluate current goals at the end of FY 2010. DOD has not yet established a date for achieving remedy in place or response complete status for FUDS MMRP sites.
The Corps headquarters sets annual performance measures for its divisions and districts, such as achieving remedy in place or response complete status at a certain number of sites each year, that can also play a role in how districts prioritize sites. For example, officials at two districts we visited told us that in order to meet the remedy in place or response complete measure, they try to focus on sites where work is already in progress and cleanup can be completed. Officials at one of these districts told us that it is not appropriate to use risk rankings exclusively in deciding which sites to clean up and that they cannot stop work on sites where actions are already under way because they face pressure to meet the cleanup performance measures. Although containerized waste sites are currently a low priority for the FUDS program, officials at this district also told us they are working on many such sites that are not high-risk, but are low-cost and can be completed in a matter of months. Similarly, officials at another district told us that once a site enters a phase of the CERCLA cleanup process, they try to complete that phase before starting work on another site. However, cleanup costs for sites can also influence the order in which districts address sites. Officials at one district told us that a high-cost site could consume the district s entire annual budget. While certain high-risk sites with high costs may receive additional funding from the Corps headquarters, another district also told us that they may delay a cleanup action until they can allocate enough funds to complete that action. The Corps also recognizes in its FUDS program policy and FUDS program management plan for FY 2009 that the concerns of regulators, the Congress, and the public can influence the Corps decisions about which sites to address first and can potentially result in decisions to fund projects that are not high-risk. 
For example, the Corps FUDS program policy states that regulator involvement through Statewide Management Action Plans (SMAP) is essential to the successful implementation of the relative risk concept. The SMAP program at FUDS began in 2001, and the primary purpose of a SMAP is to involve regulators in the development of life-cycle plans for the investigation and cleanup of all FUDS properties within a state. EPA, states, and the Corps may participate in jointly developing the SMAP, which is a living document that had, among its goals, determining a statewide cleanup priority for each property and project. Of the 57 states and territories, over 30 had SMAPs or equivalent agreements, as of December 2008, according to the Association of State and Territorial Solid Waste Management Officials. In the spring of 2008, the association surveyed the states about the effectiveness of SMAPs. Based on responses from 41 states, it reported that the overall effectiveness of SMAPs varied with regard to prioritizing and funding sites for cleanup, among other things. In addition, the Defense and State Memorandum of Agreement (DSMOA), which provides a mechanism for state or territory involvement in environmental restoration activities at DOD installations including FUDS and state laws and regulations play a role in how the Corps prioritizes and funds sites for cleanup. The districts record certain factors that may drive prioritization and funding of sites in the legal drivers data fields in FUDSMIS. Our analysis of FUDS program data for one of these legal drivers indicated that 72 percent of sites with memorandum of agreement commitments such as DSMOAs between the Corps and state regulatory agencies had reached response complete status, compared to 56 percent of sites without such agreements. Similarly, we found that 74 percent of sites subject to state laws and regulations requiring a response within a specified period had reached response complete status, compared to the 56 percent of sites not subject to such requirements. Community or public concerns, as well as owner or congressional interest, also shape the Corps decisions on which sites to address first. In particular, officials at all four of the districts we contacted noted that congressional interest can influence decisions on the order in which sites are addressed. Officials at one district told us that this would not lead a medium or low-risk site to be prioritized and funded for cleanup above a high-risk site, but officials at another district and the FUDS program management plan for FY 2009 note that congressional interest could potentially result in a decision to fund a site not considered high risk. However, our analysis of FUDS program data for the congressional or owner interest legal driver showed that 59 percent of sites with this driver had reached response complete status, compared to 57 percent of sites without this legal driver. In addition, officials at one division told us that the Corps past focus on addressing containerized waste and building demolition and debris removal projects was because these were the types of hazards that politicians wanted addressed. Moreover, DOD and Corps officials told us that the Corps may still work on these types of sites, which are currently low risk priority, if Congress or other stakeholders, such as state regulators were to demand it. <5. 
The Corps Has Reduced Direct Management and Support Costs for the FUDS Program and Implements Accountability Measures for these Costs> DOD and the Corps report what they consider to be management and support costs for the FUDS program as part of their overall budget proposal to the Congress. DOD and the Corps track program management and support costs, such as salaries for FUDS personnel staffed at the Corps headquarters and the operating costs for FUDSMIS. Overall management and support costs and direct management and support costs, as a proportion of the total FUDS budget, have decreased slightly between FY 2004 and 2008, largely due to a restructuring of FUDS responsibilities among the Corps field districts and specific DOD direction to the Corps to lower certain costs. The Corps implements several measures to ensure control over how these funds are spent, both by restricting who records expenditures in the financial management information system and assigning these funds a specific tracking code. Federal agencies and programs are not required to use any specific definition of overhead for budgeting or reporting purposes. However, DOD s Financial Management Regulation for environmental restoration programs including FUDS directs that administrative and overhead expenses be identified under the Program Management and Support element in budget justification materials. DOD submits a three-part budget justification request to the Congress each year for each DERP program. The first part of the budget submission is the overall Program Management and Support budget, while the second part includes all site- specific cleanup costs, and the third part includes progress toward the DERP goals. DOD divides program management and support funding for the FUDS program into direct and indirect costs. The components of direct and indirect program management and support costs are shown in table 7. The following table shows the amounts obligated for all FUDS activities from FY 2004 through 2008. See appendix VIII for more detailed cost information for the FUDS program. From FY 2004 through 2008, direct program management and support costs for the FUDS program have generally decreased, both in dollar amount and as a percentage of the overall dollars obligated for the FUDS program, largely as a result of a restructuring effort by the Corps. Table 9 shows the total amount that the Corps obligated for both direct and indirect program management and support costs and Table 10 shows both as a percentage of the total program management and support budget and as a percentage of the overall amount obligated for the FUDS budget from FY 2004 through 2008: In FY 2006, the Deputy Assistant Secretary of the Army for Environmental, Safety and Occupational Health directed the FUDS program to reduce the program management and support costs of the program in order to make more funds available for cleanup projects. In addition, the Corps set goals which began taking effect in FY 2007 to reduce direct program management and support funding from FY 2006 levels. Under these goals, by the end of FY 2009, direct management and support costs need to be reduced by 25 percent of the FY 2006 funding level, and this level has to be maintained through FY 2010 and beyond. In order to achieve these goals, the Corps restructured the way the FUDS program was administered by reducing the number of districts with FUDS responsibilities from 22 districts to 14 districts. 
These 14 districts operate within 7 regional divisions, with 2 Corps districts with FUDS program or project management responsibilities in each Corps division. However, Corps officials told us they cannot determine the overall savings they have achieved to reduce program management and support costs. For example, they said the Corps cannot measure the reduction in full-time employees assigned to FUDS program as a reduction in program management and support costs because a variety of employees, such as those who provide legal and real estate expertise at the district level, must continue to charge their time to the FUDS program. In addition, Corps officials in the divisions and districts we visited told us that they now charge time to the FUDS program for activities that they would have charged to program management and support prior to the FUDS transformation. Several factors help to ensure that overall program management and support funds are spent only on items the Corps has approved. In this regard, only resource managers at the Corps headquarters are able to add money to the Corps of Engineers Financial Management System which is how money, including all program management and support funds, is distributed for the FUDS program. The program management and support funding is also assigned a specific code, which can be used to track its expenditures in the financial system. Additionally, each division receives only a relatively small amount of program management and support funding in relation to their overall budget for the FUDS program. Corps officials in the divisions and districts we visited told us that these funds are critical to their operations, because they pay for manager s salaries, travel for training, and respond to administrative requests from headquarters and state regulators. Due to the limited amount of these funds, division and district FUDS managers keep close watch on them, according to Corps officials, and vigorously question any expenditure of these funds, which helps to ensure that program management and support resources are spent only on the approved items. <6. Conclusions> Although the issues we identified regarding the Corps 5-year review process have implications for all FUDS where 5-year reviews are required or may be appropriate, they are particularly relevant to sites with emerging contaminants. As more FUDS begin to reach the cleanup phase and knowledge on emerging contaminants continues to evolve, 5-year reviews may play a more important role in identifying and responding to changes in information used in FUDS cleanup decisions, such as toxicity values and standards. The issues we identified raise concerns about the extent to which the districts and divisions (1) will record and review data on 5-year reviews in the Corps information management systems, (2) will conduct all required 5-year reviews on time, and (3) will consistently conduct reviews when appropriate. In addition, the lack of technical review of some of these reports by the Corps Center of Expertise raises concerns about the Corps ability to fully identify and appropriately respond to changes, such as evolving knowledge and standards for emerging contaminants. 
Without timely, accurate, and complete 5-year reviews for sites and reliable information on the status of such reviews, the Corps cannot be certain that remedies at FUDS remain protective of human health and the environment and cannot adequately inform stakeholders including the Congress, the public, and regulators regarding actual site conditions. <7. Recommendations> To help ensure that the remedies at FUDS continue to protect human health, safety, and the environment, we are making three recommendations. We recommend that the Secretary of Defense direct the Corps to conduct 5-year reviews for sites where emerging contaminants are present and the cleanup will eventually allow unlimited site use and unrestricted exposure, but will require more than 5 years to complete, consistent with EPA s guidance that such reviews are appropriate, even if not required; modify its FUDS program information management system to allow districts to more easily track information on 5-year reviews, and take steps to ensure that the districts utilize this system to plan for 5-year reviews and track progress on completing them; and determine why districts have not always completed timely 5-year reviews and provided all 5-year review reports to the Center of Expertise for comment consistent with Corps guidelines and develop procedures and controls to address these causes. <8. Agency Comments and Our Evaluation> We provided a draft of this report to DOD for official review and comment. DOD agreed with two of our recommendations and partially agreed with one. Specifically, DOD agreed with our recommendation that the Corps modify its FUDS program information management system to allow districts to more easily track information on 5-year reviews, and take steps to ensure that the districts utilize this system to plan for 5-year reviews and track progress on completing them. DOD stated that the Army has initiated actions to modify the FUDS information management system to address the recommendation. DOD also agreed with our recommendation that the Corps determine why districts have not always completed timely 5-year reviews and provided all 5-year review reports to the Center of Expertise for comment consistent with Corps guidelines and develop procedures and controls to address these causes. DOD said it will ensure that the Corps conducts a review of the FUDS 5-year review process, including management, tracking, and record keeping procedures. DOD partially agreed with our recommendation that the Corps conduct 5-year reviews at FUDS where emerging contaminants are present and the cleanup will eventually allow unlimited site use and unrestricted exposure, but will require more than 5 years to complete, consistent with EPA guidance that such reviews are appropriate, even if not required. DOD said that it will ensure that the Corps conducts 5-years reviews where required by CERCLA, but did not agree to conduct the additional precautionary reviews that are recommended by EPA. We continue to believe that, particularly for sites where emerging contaminants are present, it is important to conduct reviews when the cleanup will eventually allow unlimited use and unrestricted exposure, but will require more than 5 years to complete. Over an extended cleanup period, information on these contaminants may evolve, and these reviews may play an important role in identifying and appropriately responding to such changes as revised toxicity values or standards and new exposure pathways. 
If the Corps does not conduct reviews under these circumstances, it is missing an important opportunity to evaluate whether remedies remain protective of human health and the environment and to fully inform stakeholders including the Congress, the public and regulators regarding actual site conditions. DOD also provided technical and clarifying comments, which we incorporated as appropriate. DOD s letter is included in appendix IX. We are sending copies of this report to appropriate congressional committees and the Secretary of Defense. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix X. Appendix I: Status of Cleanup Actions at the Former Almaden Air Force Radar Station Between 1958 and 1980, the United States Air Force operated a radar station on approximately 100 acres atop Mt. Umunhum and Mt. Thayer near San Jose, California. In 1986, the Midpeninsula Regional Open Space District (MROSD), a California state government entity, acquired the former Almaden Air Force Station. The property contained various structures, including radar towers, operations buildings, housing facilities, a power plant, above- and below-ground fuel storage tanks, and a sewer treatment plant. MROSD staff occupied several buildings from 1986 to 1998. A 1989 earthquake damaged some buildings, transformers, and fuel tanks. In 1991, the U.S. Army Corps of Engineers (Corps) determined that the site was eligible for cleanup under the Formerly Used Defense Sites (FUDS) program and ranked it as high risk due to the presence of various contaminants in transformers, drums, and storage tanks. Between 1994 and 1996, the Corps removed transformers, above, and below-ground fuel storage tanks and associated piping, and drums filled with chemicals. After these removal actions, the Corps turned its attention to other FUDS in the Corps District for a number of years. In 2006, the Corps returned to remove more waste from buildings, pipes, generators, and sumps. In 2007, the Corps initiated a site inspection to determine if it had overlooked any contamination, particularly polychlorinated biphenyls (PCB) from electrical transformers and petroleum from underground storage tanks and to determine if any further remediation action is needed. Corps officials anticipated completing the investigation in 2009. From 1991 through 2008, the Corps spent $3.5 million investigating, removing materials, and taking remedial actions at the site. In addition to cleaning up any remaining contamination, MROSD also wanted the Corps to demolish and remove all remaining structures many of which contain deteriorating lead-based paint and asbestos so that it may open the site to the public for recreational use. Department of Defense (DOD) and Corps officials told us that no building demolition/debris removal can be conducted at this property because the buildings and structures were not unsafe at the time of transfer out of DOD jurisdiction. They said that MROSD is responsible for maintaining all buildings and structures on the property, beginning on the date they took title. 
DOD and Corps officials also said the Defense Environmental Restoration Program (DERP) authority does not extend to the removal of buildings and structures that become unsafe after they are transferred out of DOD jurisdiction and then not maintained by the subsequent owner. In fiscal year (FY) 2009, MROSD requested $4 million from Congress for economic adjustment programs, including feasibility studies, legal services, and other activities related to cleaning up the site and language in the National Defense Authorization or Appropriations Acts directing DOD to clean up the site under the Defense Base Closure and Realignment Act of 1990. No language or funding regarding Almaden was included in either law. Appendix II: Scope and Methodology To determine how the U.S. Army Corps of Engineers (Corps) addresses emerging contaminants at formerly used defense sites (FUDS), and the extent to which the Corps reevaluates sites to determine the need to address emerging contaminants, we reviewed key laws, regulations, policy, and guidance for the Department of Defense (DOD), the Department of the Army, the Corps, and the Environmental Protection Agency (EPA). We interviewed officials from DOD s Office of the Deputy Undersecretary of Defense for Installations and Environment and Chemical and Material Risk Management Directorate; the Department of the Army s Office of the Assistant Secretary of the Army for Installations and Environment and Office of the Assistant Chief of Staff for Installation Management, and the Corps Directorate of Military Programs; and EPA s Office of Solid Waste and Emergency Response and Federal Facilities Restoration and Reuse Office. We also interviewed officials at two state associations the National Governors Association and the Association of State and Territorial Solid Waste Management Officials to obtain their perspectives on the approaches DOD and the Corps use to address emerging contaminants. We reviewed program information obtained from FUDS program managers in four of seven Corps military divisions the North Atlantic, Northwestern, South Atlantic, and South Pacific divisions and 4 of the 14 districts responsible for executing the FUDS program the New England, Omaha, Sacramento, and Savannah districts and technical experts at the Corps Environmental and Munitions Center of Expertise. We selected the four divisions based on (1) geographic dispersion, (2) the number of FUDS sites within each division, and (3) planned obligations for fiscal year (FY) 2009, and, within these four divisions, we selected 4 of the 8 districts with FUDS program management responsibility. We reviewed additional information from the Corps on their 5-year review process from the South Atlantic division and districts in Kansas City and Los Angeles, and examined the completed 5- year review reports from the North Atlantic, Northwestern, and South Pacific divisions. To evaluate the Corps process for addressing emerging contaminants and prioritizing sites for cleanup, we reviewed and analyzed the nationwide property and project data in the Corps Formerly Used Defense Sites Management Information System (FUDSMIS) through September 30, 2008, the end of their most recent reporting cycle. We assessed the reliability of relevant fields in this database by electronically testing for obvious errors in accuracy and completeness, reviewing information about the data and the system that produced them, and interviewing agency officials knowledgeable about the data. 
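The electronic testing for obvious errors in accuracy and completeness can be illustrated with a short sketch. The example below is hypothetical: the field names, allowed values, and checks are assumptions chosen only to show the kinds of automated tests that can flag questionable records in a database such as FUDSMIS, not the actual tests we ran or the actual FUDSMIS schema.

```python
# Hypothetical illustration of electronic testing for obvious errors in
# accuracy and completeness; field names and allowed values are assumed.
import csv

REQUIRED_FIELDS = ["property_id", "project_id", "risk_level", "fiscal_year", "obligations"]
VALID_RISK_LEVELS = {"high", "medium", "low", "not evaluated"}

def check_records(path):
    """Return a list of (row_number, description) tuples for suspect records."""
    problems = []
    seen_keys = set()
    with open(path, newline="") as f:
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            # Completeness: every required field should be populated.
            for field in REQUIRED_FIELDS:
                if not (row.get(field) or "").strip():
                    problems.append((row_num, f"missing {field}"))
            # Accuracy: categorical and numeric values should be plausible.
            if row.get("risk_level", "").strip().lower() not in VALID_RISK_LEVELS:
                problems.append((row_num, f"unexpected risk_level {row.get('risk_level')!r}"))
            try:
                if float(row.get("obligations") or 0) < 0:
                    problems.append((row_num, "negative obligations"))
            except ValueError:
                problems.append((row_num, "non-numeric obligations"))
            # Duplicates: a property/project/year combination should appear once.
            key = (row.get("property_id"), row.get("project_id"), row.get("fiscal_year"))
            if key in seen_keys:
                problems.append((row_num, f"duplicate record for {key}"))
            seen_keys.add(key)
    return problems
```

Checks of this kind only surface candidate discrepancies; resolving them requires the kind of follow-up with knowledgeable officials described above.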
When we found inconsistencies, we worked with DOD and Corps officials to correct the discrepancies before conducting our analyses. We determined that the data needed for our analyses were sufficiently reliable for the purposes of our report. To assess DOD's process for determining funding levels for cleanup among FUDS and other sites with defense waste, we spoke with officials at the Office of the Deputy Undersecretary of Defense for Installations and Environment and officials at Corps headquarters who manage the FUDS program about how budget requirements are determined, and the targets or goals that exist for the overall Defense Environmental Restoration Program (DERP). We also reviewed DOD's budget justification documents for FY 2004 through 2009 and budget data from officials at the Office of the Deputy Undersecretary of Defense for Installations and Environment and DOD's Defense Environmental Programs Annual Report to Congress for FY 2004 through 2008. In order to determine the Corps' criteria for prioritizing FUDS for cleanup and how closely the Corps follows these criteria, we obtained and reviewed relevant policy, guidance, laws, and regulations directing DOD's cleanup activities, including relevant risk ranking protocols for contaminated sites. We interviewed and obtained information from officials from the Office of the Deputy Undersecretary of Defense for Installations and Environment, Corps headquarters personnel in charge of managing FUDS, three of seven Corps military divisions and 4 of 14 districts responsible for executing the FUDS program, and the Corps' Environmental and Munitions Center of Expertise. We also gathered and analyzed data from FUDSMIS, as well as the Defense Environmental Programs Annual Reports to Congress. In addition, we interviewed officials at two state associations, the National Governors Association and the Association of State and Territorial Solid Waste Management Officials, to obtain their perspectives on the approaches DOD and the Corps use to prioritize FUDS for cleanup. To review the components and total amounts of management and support costs for the FUDS program and how these costs have changed over time, we reviewed DOD's budget justification documents for FY 2004 through 2009 and interviewed and obtained budget data from officials at the Office of the Deputy Undersecretary of Defense for Installations and Environment, who are in charge of compiling the overall DERP budget, as well as Corps officials in charge of budgeting for the FUDS program. We also reviewed relevant federal accounting standards, financial management regulations, and guidance. To determine the Corps' accountability measures for these costs, we interviewed Corps headquarters personnel in charge of managing FUDS, and conducted interviews and gathered data from three of seven Corps divisions responsible for executing the FUDS program, and 4 of 14 Corps districts. We did not conduct a financial audit of the FUDS program. In addition, at the request of the committee, this report provides information on the status of the Corps' cleanup efforts at the former Almaden Air Force Station. We conducted interviews and obtained information from the Corps district and division officials in charge of cleanup at Almaden; in addition, we visited the site and interviewed and obtained detailed site information from the current owners, the Midpeninsula Regional Open Space District. 
We conducted this performance audit from September 2008 through October 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Occurrence of Emerging Contaminants at Formerly Used Defense Sites with Hazardous, Toxic, and Radioactive Waste The Department of Defense (DOD) defines an emerging contaminant as a contaminant that (1) has a reasonably possible pathway to enter the environment; (2) presents a potential unacceptable human health or environmental risk; and (3) either does not have regulatory standards based on peer-reviewed science or has regulatory standards that are evolving due to new science, detection capabilities, or exposure pathways. Tables 10 through 13 provide information on the occurrence of emerging contaminants in groundwater, surface water, soil, and sediment at formerly used defense sites (FUDS) with hazardous, toxic, and radioactive waste (HTRW). The tables are based on the sampling information the U.S. Army Corps of Engineers used in assigning risk levels to HTRW FUDS through its Relative Risk Site Evaluation (RRSE) process. More specifically, they include the numbers of HTRW sites where contaminants on DOD s action and watch lists were detected and the range of the maximum concentrations detected across these sites. The data shown in Tables 11 through 14 do not necessarily represent all FUDS where these contaminants may have been detected, for several reasons. For example: Some sites do not have a relative risk score. Certain sites are excluded, such as those with containerized hazardous, toxic, and radioactive waste and those that have achieved the remedy-in-place or response complete (RIP/RC) milestone. For other sites, the Corps has not completed the relative-risk site evaluation process. Naturally occurring contaminants are not included in the RRSE if they are detected within established background concentration ranges. The contaminant data used in the RRSE are collected in the early phases of the cleanup process. Based on our interviews with selected Corps divisions and districts, the extent to which districts update the RRSE later is unclear. The Corps is testing for some of these contaminants at munitions sites as part of the MMRP site inspections those data are not included in these tables and are not yet available. Appendix IV: Perchlorate and Trichloroethylene Contamination at Formerly Used Defense Sites <9. Perchlorate> Perchlorate is a chemical used in propellant for certain rockets and missiles and is also found in fireworks, road flares, automobile air bags, and other manufactured items. Perchlorate can also occur naturally and is found in certain fertilizers. Exposure to perchlorate can affect the thyroid gland by blocking the uptake of iodide and may cause developmental impairments in fetuses of pregnant women. Perchlorate has been found in drinking water sources nationwide, although the extent of perchlorate contamination was not revealed until 1997, when new analytical methods enabled measurement of perchlorate at low concentrations. 
According to the Environmental Protection Agency (EPA), in testing of 3,865 public water supplies between 2001 and 2005, approximately 160 systems (4.1 percent) located in 26 states and 2 territories had at least one detection of perchlorate at levels greater than or equal to 4 micrograms per liter (µg/L, or parts per billion (ppb)). In addition, the Food and Drug Administration and Centers for Disease Control and Prevention have identified perchlorate in a wide variety of foods, as well as commercially available powdered infant formulas. There are currently no federal standards for the presence of perchlorate in water. DOD has used perchlorate in propellant for certain rockets and military missiles since the 1940s. In 2003, DOD issued an Interim Policy on Perchlorate Sampling, which directed DOD components to sample for perchlorate at any previously unexamined sites, including formerly used defense sites (FUDS), where (1) there was a reasonable basis to suspect that a release has occurred as a result of DOD activities, and (2) a complete human exposure pathway was likely to exist; and consider, in determining the likelihood of perchlorate occurrence, the volume of perchlorate used or disposed of and/or the intensity of perchlorate-related activities at the site. Because of uncertainties as to the concentration at which perchlorate should be regulated, DOD, the Department of Energy, the National Aeronautics and Space Administration, and EPA asked the National Research Council to assess the potential adverse health effects of perchlorate. At the conclusion of its study in 2005, the Council recommended a reference dose of 0.7 µg per kilogram of body weight per day, which translates to a drinking water equivalent level of 24.5 ppb. EPA adopted this recommended level and, in January 2006, directed its regional offices to use this concentration as a preliminary remediation goal when cleaning up sites under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and the National Oil and Hazardous Substances Pollution Contingency Plan, the regulation that implements CERCLA. In response to the Council's study and EPA's new guidance, DOD updated its perchlorate policy in January 2006. With regard to FUDS, the policy directed DOD to (1) test for perchlorate, (2) conduct a site-specific risk assessment if perchlorate levels in water exceed 24 ppb, and (3) prioritize the site for risk management if the risk assessment indicates that the perchlorate contamination could potentially result in adverse health effects. According to DOD, the sampling requirement applied to all media, and the level of concern of 24 ppb was intended to apply to current and potential sources of drinking water. In December 2008, EPA issued an Interim Drinking Water Health Advisory for perchlorate, which established 15 ppb as the advisory level for perchlorate in water. Unlike the previous level of 24.5 ppb, this new level incorporates exposure to perchlorate from food sources. In January 2009, EPA directed its regional offices to use 15 ppb as a preliminary remediation goal when cleaning up sites under CERCLA where there is an actual or potential drinking water exposure pathway and no applicable or relevant and appropriate requirements (ARAR) for perchlorate. 
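The 24.5 ppb drinking water equivalent level cited above follows from the Council's reference dose by simple arithmetic. The conversion below is a sketch that assumes EPA's standard default exposure values of a 70-kilogram adult body weight and 2 liters of drinking water per day; these defaults are not stated in this report and are included here only to show the calculation:

```latex
\text{DWEL} = \frac{\text{RfD} \times \text{body weight}}{\text{daily water intake}}
            = \frac{0.7\ \mu\text{g/kg-day} \times 70\ \text{kg}}{2\ \text{L/day}}
            = 24.5\ \mu\text{g/L} \approx 24.5\ \text{ppb}
```

The later 15 ppb interim health advisory level is lower because, as noted above, it also accounts for perchlorate exposure from food, so that only a portion of the reference dose is attributed to drinking water.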
In April 2009, DOD responded by again updating its perchlorate policy, adopting a preliminary remediation goal of 15 ppb for perchlorate where (1) there is an actual or potential drinking water exposure pathway, and (2) no ARARs exist under federal or state laws. While EPA has taken some steps to consider regulation of perchlorate under the Safe Drinking Water Act, it issued a preliminary determination in October 2008 not to regulate the chemical in drinking water, citing the lack of a meaningful opportunity for health risk reduction through a national drinking water regulation. As of August 2009, EPA is considering its final regulatory determination for perchlorate and expects to issue a final health advisory concurrent with the final regulatory determination. In the absence of a federal perchlorate standard, some states have established standards for the chemical for example, Massachusetts and California have promulgated drinking water standards for perchlorate. In addition, some states have established nonregulatory action levels or advisories for perchlorate. The Corps has sampled, and is continuing to sample, for perchlorate at FUDS. As of April 2008, the Corps had sampled 95 FUDS properties for perchlorate. According to DOD data, sampling performed at FUDS before June 2006 detected perchlorate at 13 of 32 FUDS properties sampled, five of which had concentrations exceeding 4 ppb in water. The Corps took action at 5 of the 13 FUDS properties where perchlorate was detected and determined that the remaining 8 FUDS properties did not require any actions to address perchlorate. Table 15 presents data for the perchlorate sampling conducted at FUDS between June 2006 and April 2008, by environmental media. According to DOD, there are only two FUDS properties where DOD- caused perchlorate concentrations in groundwater have exceeded EPA s and DOD s current preliminary remediation goal of 15 ppb. Specifically: At the Spring Valley FUDS property in Washington, D.C., perchlorate was detected in 45 of 51 groundwater samples between June 2006 and April 2008 at concentrations ranging from 0.093 ppb to 146 ppb. Perchlorate concentrations in six samples exceeded EPA s and DOD s preliminary remediation goal of 15 ppb. During this period, perchlorate was also detected in 22 of 23 surface water samples at concentrations ranging from 0.361 ppb to 7.18 ppb. At the Boardman Air Force Range FUDS property in Oregon, perchlorate was detected prior to June 2006 in seven of nine groundwater samples at concentrations ranging from 0.2 ppb to 20.1 ppb, as well as at 0.34 ppb in the single surface water sample collected. The Corps is testing FUDS for perchlorate during the site inspections currently being conducted under the Military Munitions Response Program (MMRP), which DOD established in September 2001 to address potential explosive and environmental hazards associated with munitions at active installations and FUDS. The MMRP includes sites with munitions and explosives of concern, munitions constituents, and chemical warfare material. Many of the FUDS sampled prior to FY 2007 will be resampled as part of the MMRP. According to DOD officials, sampling conducted as part of the FUDS MMRP site inspections, as of July 2009, had identified perchlorate in: 116 of 247 water samples analyzed, at concentrations ranging from 0.0088 ppb to 1.91 ppb. These samples were collected from 85 FUDS MMRP sites. 9 of 38 soil samples analyzed, at concentrations ranging from 0.27 ppb to 3.0 ppb. 
These samples were collected from 6 FUDS MMRP sites. <9.1. Trichloroethylene (TCE)> TCE has been widely used as a degreasing agent in metal cleaning for industrial and maintenance processes since the 1950s. Low levels of exposure to TCE have been documented to cause headaches and difficulty concentrating. High-level exposure may cause dizziness, headaches, nausea, unconsciousness, cancer, and possibly death. TCE in groundwater can take decades to clean up; for example, cleaning up TCE at the Former Nebraska Ordnance Plant site is estimated to take 130 years. According to officials with the Corps' Center of Expertise, TCE is the most significant emerging contaminant at FUDS in terms of both its prevalence and its cost of cleanup. The Corps has detected TCE at a minimum of 166 sites, or 15 percent of the hazardous, toxic, and radioactive waste (HTRW) sites, on 143 FUDS properties. EPA has regulated TCE in drinking water since 1989 with a maximum contaminant level of 5 ppb. However, concerns about this contaminant have increased in recent years. For example, in 2006, the National Research Council reported that the evidence on carcinogenic risk and other health hazards from exposure to TCE has strengthened since 2001. New information may lead to changes in the toxicity values used to assess risks of TCE exposure. In making cleanup decisions for FUDS, the Corps uses toxicity values for contaminants in conducting assessments of a site's risks to human health and the environment. EPA's Integrated Risk Information System (IRIS), a database that contains EPA's scientific position on the potential human health effects of exposure to more than 540 chemicals, is DOD's and EPA's preferred source for the fundamental toxicity information needed to develop human health risk assessments. However, EPA has not finalized its IRIS assessment of the risks TCE may pose. Given EPA's ongoing assessment and different preferences among regulatory agencies, DOD has used a variety of different toxicity values in assessing risks of TCE exposure at FUDS. In January 2009, EPA issued interim guidance recommending toxicity values to use in assessing potential cancer and noncancer risks from inhalation of or oral exposure to TCE. However, EPA withdrew this guidance in April 2009, stating that the agency would further evaluate the recommendations regarding the noncancer TCE toxicity value to use in assessing the risk of inhalation exposures. DOD plans to use the interim values in the withdrawn EPA guidance, but DOD officials noted that an EPA regional office or state regulatory agency may press DOD to use a value preferred by an individual risk assessor at that agency. According to DOD, in these cases, DOD works with EPA and state officials to develop an agreed-upon value. In addition, intrusion of TCE vapors from soil or groundwater into buildings is a relatively newly identified exposure pathway. A federal standard exists for TCE in indoor air at places of work, but not in residences or other buildings. EPA, the Army, and DOD have issued guidance on vapor intrusion, and officials told us that the Corps evaluates the vapor intrusion pathway, when appropriate, through the site-specific risk assessment. 
In 2002, EPA issued its Office of Solid Waste and Emergency Response (OSWER) Draft Guidance for Evaluating the Vapor Intrusion to Indoor Air Pathway from Groundwater and Soils (Subsurface Vapor Intrusion Guidance), which has not been finalized and, according to DOD, is not followed by all state health agencies. In 2006, the Army released its Interim Vapor Intrusion Policy for Environmental Response Actions, which established environmental response actions related to vapor intrusion modeling and investigation for existing and future buildings. It also noted that potential vapor intrusion risks in existing or future buildings will be evaluated as part of the CERCLA Five-Year Review, consistent with the guidelines in the policy, if these risks were not evaluated in the Record of Decision or Decision Document for the site. In January 2009, DOD published its Tri-Services Handbook for the Assessment of the Vapor Intrusion Pathway, a technical guidance manual that discusses various approaches for evaluating the vapor intrusion pathway, including information on developing and interpreting vapor intrusion investigations. As of July 2009, DOD was revising its 2001 Defense Environmental Restoration Program Management Guidance, which will outline the conditions under which the DOD components are instructed to evaluate whether contamination in soil or groundwater poses a potential for unacceptable risk from vapor intrusion into overlying or nearby existing structures. The revisions call for appropriate response actions for a vapor intrusion pathway in existing structures when the potential for vapor intrusion exists and a site-specific risk assessment indicates an unacceptable risk to human health due to a release to the environment that is the responsibility of DOD and not the responsibility of any other party. In addition, the revisions note that the DOD components are to notify non-DOD property owners in writing of potential vapor intrusion risks and, as appropriate, include this information in decision documents and/or transfer documents. Further, the revisions state that a transferee will address the potential for vapor intrusion in future structures at its own expense by adding appropriate mitigating measures during construction, and that these obligations are to be included in decisions documents and/or transfer documents for the site. Appendix V: Completion Status of Department of Defense Sites by Program Category and Military Component Tables 16 through 18 show the completion status of Department of Defense (DOD) sites and those that require long-term management under the Installation Restoration Program (IRP), the Military Munitions Response Program (MMRP) and the Building Demolition/Debris Removal (BD/DR) Program by military component, for fiscal year (FY) 2004 through 2008. Appendix VI: Department of Defense Obligations and Estimated Costs to Complete Environmental Restoration by Military Component and Program Category Table 19 shows the Department of Defense s (DOD) obligations for cleanup at active sites for the Installation Restoration Program (IRP), the Military Munitions Response Program (MMRP), the Building Demolition/Debris Removal (BD/DR) Program, and program management and support for fiscal year (FY) 2004 through 2008. 
Table 20 shows DOD s obligations for cleanup at installations that have been closed or are designated to be closed or realigned under the Base Realignment and Closure (BRAC) process under the IRP, MMRP, and for program management and support for FY 2004 through 2008. Table 21 shows DOD s obligations to clean up formerly used defense sites (FUDS) under the IRP, MMRP, and BD/DR Program, and program management and support for FY 2004 through 2008. Table 22 shows the DOD s estimated cost to complete environmental clean up for sites located at active installations, BRAC installations, and FUDS under the IRP, MMRP, and BD/DR Program for FY 2004 through 2008. Appendix VII: Department of Defense s Inventory of Sites and Number of High Risk Sites by Military Component and Program Category Table 23 shows the total inventory of Department of Defense (DOD) sites and number ranked high risk in the Installation Restoration Program (IRP) and the Military Munitions Response Program (MMRP) by military component, for fiscal year (FY) 2004 through 2008. Table 24 shows the total inventory of Base Realignment and Closure (BRAC) sites and number ranked high risk in the IRP and the MMRP by military component, for FY 2004 through 2008. Table 25 shows the total inventory of formerly used defense sites (FUDS) and number ranked high risk in the IRP and MMRP for FY 2004 through 2008. Appendix VIII: Formerly Used Defense Sites Program Costs Details Table 26 shows all of the line item costs for the formerly used defense sites (FUDS) program for fiscal year (FY) 2004 through 2008. It includes expenses for the Installation Restoration Program (IRP), the Military Munitions Response Program (MMRP), Building Demolition/Debris Removal (BD/DR) Program, and all program management costs (including direct and indirect costs). Table 26 shows the percentage of the total FUDS budget that is accounted for by each line item for FY 2004 through 2008. The subcomponents of the overall Program Management and Support budget are separately calculated as a percentage of total FUDS budget and the overall Program Management and Support budget is also calculated as a percentage of the total FUDS budget. Appendix IX: Comments from the Department of Defense Appendix X: GAO Contact and Staff Acknowledgments <10. Staff Acknowledgments> In addition to the individual named above, Vincent P. Price, Assistant Director; Krista Anderson; Melissa Hermes; and John Smith made key contributions to this report. Mark Braza, Antoinette Capaccio, Pamela Davidson, Arthur James, Jr., and Allison O Neill also made important contributions.
Why GAO Did This Study The Department of Defense (DOD) estimates that cleaning up known hazards at the over 4,700 formerly used defense sites (FUDS)--sites transferred to other owners before October 1986--will require more than 50 years and cost about $18 billion. This estimate excludes any additional needed cleanup of emerging contaminants--generally, those not yet governed by a health standard. DOD delegated FUDS cleanup responsibility to the U.S. Army Corps of Engineers (Corps). In addition to FUDS, DOD is responsible for cleaning up about 21,500 sites on active bases and 5,400 sites on realigned or closed bases. The House Armed Services Committee directed GAO to examine (1) the extent to which the Corps reevaluates sites to identify emerging contaminants; (2) how DOD allocates cleanup funds; (3) how the Corps prioritizes FUDS for cleanup; and (4) FUDS program overhead costs. GAO analyzed nationwide FUDS property and project data; policies, guidance and budget documents; and interviewed DOD and Corps officials. What GAO Found The Corps has not often re-examined sites after they have been cleaned up to determine whether emerging contaminants are present or need to be addressed. Generally, the Corps reevaluates sites only when requested by states or others, or when reviewing the completed remedy to ensure its continuing protectiveness. Such reviews are required every 5 years for sites where the chosen remedy does not allow for unlimited use and unrestricted exposure. Corps officials said that they had not received many requests to re-examine sites and few FUDS had required 5-year reviews. Reports on the 15 5-year reviews completed as of May 2009 within four Corps divisions indicated that the Corps has not consistently (1) conducted required 5-year reviews on time, (2) conducted reviews when they are not required but may be appropriate, as EPA recommends, and (3) submitted reports on these reviews for technical evaluation, as required by Corps policy. Also, DOD and the Corps lack accurate, complete information on the status of these reviews. Without timely, accurate, and complete reviews, the Corps cannot ensure that remedies continue to protect human health and the environment. DOD proposes funding to clean up defense sites based on the department's environmental restoration goals and obligations are generally proportional to the number of sites in each site category. Funding is directed toward reducing risks to human health and the environment, among other goals. The Army, Navy, Marine Corps, Air Force, and Defense Logistics Agency each determine the funding requirements to clean up sites based on these goals. The Corps prioritizes individual FUDS for cleanup on the basis of risk and other factors. The Corps assigns each site a risk level, considering such factors as the presence of hazards, the potential for human contact, and the concentrations of contaminants and their potential for migrating, among others. According to DOD officials, sites' risk levels are the single most important criterion in determining cleanup priorities. However, the Corps also takes into account specific FUDS program goals, and other factors--such as regulators' and the public's concerns--that can influence the Corps' decisions about which sites to address first. Consequently, high risk sites are not always addressed before low risk sites. Direct program management and support costs for the FUDS program have decreased slightly in recent years, mostly due to structural changes in the program. 
The Corps' obligations for FUDS direct program management and support costs have declined from 11.0 percent of total program obligations in fiscal year 2004 to 9.0 percent in fiscal year 2008. In addition, to further reduce certain components of these costs to make more funds available for FUDS cleanup, the Corps reduced the number of employees managing the program and the number of districts responsible for FUDS from 22 to 14. Furthermore, Corps officials told GAO that they have implemented a number of controls--such as assigning tracking codes--to ensure that program management and support funds are spent only on approved items.
<1. Background> The U.S. Coast Guard is a multimission, maritime military service within the Department of Homeland Security. The Coast Guard has responsibilities that fall under two broad missions: homeland security and non-homeland security. (See table 1.) One of the Coast Guard's strategic goals is maritime mobility, that is, to facilitate maritime commerce, eliminate interruptions and impediments to the movement of goods and people, and maximize access to and enjoyment of the water. The two non-homeland security missions through which the Coast Guard achieves this goal are aids-to-navigation (ATON) and domestic icebreaking, which is part of ice operations. Aids-to-Navigation Mission Through its ATON mission, the Coast Guard promotes safe waterways and an efficient Marine Transportation System. The Coast Guard has statutory responsibility to operate and maintain a system of maritime aids to facilitate navigation and to prevent disasters, collisions, and wrecks. To fulfill this mission, the Coast Guard operates over 53,000 aids. These aids-to-navigation are like road signs of the waterways and are placed along coasts and navigable waters as guides to mark safe water and to assist mariners in determining their position in relation to land and hidden dangers. These aids consist of both floating aids, such as buoys, and fixed aids, such as lights or signs mounted on pilings. See figure 1 for an example of a buoy and a fixed aid-to-navigation. The Coast Guard uses several types of vessels to place and service its aids-to-navigation, such as buoy tenders, construction tenders, and boats, which together make up its ATON fleet. These vessels are used to perform both periodic routine maintenance of aids and discrepancy response, when, for example, a light is extinguished or a buoy is moved from its intended location. The assets are shown in table 2. <1.1. Domestic Icebreaking Mission> Domestic icebreaking is a key component of the Coast Guard's ice operations mission, which facilitates safe and efficient navigation on lakes, rivers, channels, and harbors during the winter season. The Coast Guard has statutory icebreaking responsibilities that are additionally addressed by an executive order that directs the Coast Guard to break ice in channels and harbors in order to keep them open to navigation. Much as road crews plow snow-covered roads, the Coast Guard keeps areas of water open as much as is reasonably possible for commercial traffic in winter. It also performs icebreaking for search and rescue and for prevention of flooding by ice. To conduct this mission, the Coast Guard uses assets that are specially designed with strengthened hulls. The key icebreaker types the Coast Guard uses are shown in table 3. The Coast Guard classifies its vessel assets, such as those used in the ATON and domestic icebreaking missions, as cutters (assets 65 feet long or longer with adequate accommodations for crew to live on board) or boats (assets less than 65 feet in length that usually operate near shore and on inland waterways). For purposes of this report, the three main asset groups are ATON cutters, ATON boats, and domestic icebreakers. <2. Icebreaking and ATON Assets Show Significant Increases in Time Spent on Homeland Security Missions since 2001> Since 2001, the Coast Guard's domestic icebreakers and ATON cutters have experienced significant increases in the time spent conducting missions related to homeland security. 
Most of this increase has come in the Ports, Waterways and Coastal Security (PWCS) mission, which involves such activities as conducting security patrols and escorting vessels. The increase was greatest for domestic icebreakers, which continue to be used more for homeland security missions than for icebreaking because of their availability during months when no icebreaking is needed. By contrast, ATON cutters and boats still spend most of their time on ATON-related activities, reflecting the year-round nature of the ATON mission. Some newer ATON cutters with greater multi-mission capabilities, however, continue to have a more diverse workload. Coast Guard officials said icebreakers and ATON vessels, while less than ideal for carrying out security missions, can perform these missions adequately. <2.1. Domestic Icebreakers Show the Largest Increase in Time Spent on Homeland Security Missions> During fiscal years 2001 through 2005, the domestic icebreakers divided their time between several of the Coast Guard s 11 missions, but PWCS activities accounted for roughly half of all resource hours during fiscal years 2002 through 2005. PWCS activities grew quickly from 15 percent of total resource hours in fiscal year 2001 to 53 percent in fiscal year 2002, and they have remained at 44 percent or more of total hours through fiscal year 2005. At the same time, icebreaking hours began at 41 percent in fiscal year 2001 and then dropped down to 13 percent in 2002, but ended at 26 percent in fiscal year 2005. The vast majority of this increase in PWCS has occurred in the New York City area with smaller increases in other East Coast ports. As figure 2 shows, the increase came about largely by adding to the total number of hours these assets were operated. The total number of resource hours for these assets grew from about 12,000 hours in fiscal year 2002 to a high of about 20,000 the following year. The increase in PWCS hours for domestic icebreakers mainly reflects their availability during those months when no icebreaking needs to be done. Icebreaking needs are typically greatest from December 15 to April 15. Coast Guard officials said that because icebreakers do not have a primary summertime mission, using them to conduct PWCS missions during slack periods has not limited the Coast Guard s ability to conduct routine icebreaking missions. Icebreaking hours, however, did see some marked shifts during this period most notably a decrease in fiscal year 2002 followed by a substantial increase in fiscal year 2003. The decrease in 2002 appears related to two main factors: a mild winter, during which the Great Lakes region was virtually free of ice throughout December and most of January, and a change in the way the Coast Guard accounted for its use of icebreakers. The Coast Guard does not record resource hours under two mission categories simultaneously, and prior to the attacks on September 11, 2001, resource hours used to break ice while escorting a vessel with hazardous cargo would only have been recorded as ice operations. After the attacks, these same hours could be logged either as PWCS or icebreaking at the discretion of the vessel s commanding officer. The increase in ice operations hours for fiscal year 2003 reflected an unusually severe winter in the Great Lakes. Increased workloads have placed some icebreakers above the maximum number of recommended operating hours for the assets. 
The maximum recommended operating level, called an underway hours limit, reflects the maximum use established from planning documents, missions, maintenance requirements, and historic use. In particular, the 65-foot small harbor tug fleet exceeded its underway hours limit from fiscal years 2001 to 2003 by an increasing margin, going from 10 hours over the limit in 2001 to nearly 2,000 hours over the limit in 2003. In contrast, the 140-foot icebreaking tugs were operated within their underway hours limit from 2001 to 2005. Coast Guard officials said domestic icebreakers, while not their vessel of choice for maritime security missions, can perform all PWCS missions adequately except for shoreside patrols. The Coast Guard's 87-foot coastal patrol boats are the preferred assets for PWCS missions. Commissioned since 1998, these boats can travel at up to 25 knots and have a system that allows the crew to launch and recover small boats. Relative to these vessels, domestic icebreakers show both advantages and disadvantages (see table 4). Icebreakers are more capable of operating in cold weather, and their substantial size provides a significant presence on the waterways, but they are slower, less able to launch small boats, and pose greater challenges for training crews in law enforcement. <2.2. Increase in Homeland Security Missions Is Less Extensive for ATON Assets> ATON assets also experienced an increase in use for homeland security missions after September 11, 2001, but to a much lesser degree than the domestic icebreakers. Overall, ATON assets were used for several of the Coast Guard's missions during fiscal years 2001 through 2005, but ATON remained the primary mission, accounting for more than 85 percent of the fleet's total resource hours for fiscal years 2001 to 2005. Time spent in PWCS activities increased from 4 percent of total resource hours in fiscal year 2001 to 10 percent in fiscal year 2002; since then, PWCS mission hours have steadily decreased (see fig. 3). Overall, PWCS activities accounted for 6 percent of resource hours during the period. When resource hours are analyzed more closely by type of ATON asset, there are significant differences in the number of hours used for the PWCS mission. The increase in PWCS resource hours came primarily from cutters (vessels ranging in length from 65 to 225 feet). Overall, ATON activities account for about 79 percent of total resource hours for the cutters, compared with about 90 percent for ATON boats (vessels less than 65 feet in length). ATON boats were the only vessels that did not have as much of an increase in PWCS resource hours immediately after the attacks on September 11, 2001, though their use in PWCS activities did rise in fiscal year 2003. Among the ATON cutters, the newer cutters have greater multi-mission capabilities and consequently tend to be used more often in other missions. For example, ATON cutters acquired between 1944 and 1976 performed an average of 4 of the Coast Guard's 11 missions during fiscal years 2001 through 2005, while the 225-foot seagoing buoy tender, which the Coast Guard finished acquiring in 2004, was used in all 11 of the Coast Guard's missions in fiscal years 2004 through 2005.
<3. Available Evidence Indicates Condition of Assets Varies Greatly and Impact on Mission Performance Is Mixed> The available evidence does not give a consistent picture of how usage trends may be affecting the condition of these assets and, ultimately, the Coast Guard's ability to meet performance goals for icebreaking and ATON missions. We analyzed three types of evidence related to condition: the Coast Guard's primary measure for reporting asset condition, overall trends in maintenance expenditures on each type of asset, and a body of anecdotal evidence gathered primarily through interviews with Coast Guard personnel and site visits to various installations. The Coast Guard's primary condition measure shows some assets meeting the operating standard and others falling below it. However, the current measure for asset condition is not clearly linked to mission performance; the Coast Guard is working on developing a measure that links the two. Trends in maintenance costs and the anecdotal evidence we gathered tend to indicate that asset conditions are declining, though not substantially beyond what Coast Guard officials said they would expect for vessels of this age. Performance indicators for the icebreaking and ATON missions likewise show mixed results, with the Coast Guard meeting some performance goals and not meeting others. In part, these mixed results can be explained by the many other factors besides asset condition, such as the severity of weather in any given year. <3.1. Current Condition Measure Is Limited and Does Not Show a Clear Pattern in Asset Condition> For icebreaking and ATON cutter assets, the Coast Guard's key summary measure of condition, percent of time free (POTF) of major casualties, shows mixed results. The Coast Guard's standard is 72 percent or better. Measured against this standard for fiscal years 2000 through 2004, the various types of icebreakers and ATON cutters vary considerably. As table 5 shows, some assets, such as the 65-foot small harbor icebreaking tugboat and the 65- and 75-foot river buoy tenders, met the standard nearly every year, while others, such as the 140-foot icebreaking tugboat and the 75-foot, 100-foot, and 160-foot inland construction tenders, met it either not at all or only once during the 5-year period. Fiscal year 2004 was the worst of the 5 years, with only two of the eight types of cutters meeting the standard. The Coast Guard's condition measure for these assets, while instructive, needs to be viewed with some caution. As we have reported in our analysis of the condition of the Coast Guard's legacy deepwater assets, the measure captures only major equipment casualties, which degrade mission capabilities, but does not capture minor equipment casualties that may also degrade mission capabilities. As such, this measure may underestimate the decline in asset condition. The Coast Guard has acknowledged the limitations of this measure and is working on a replacement for it, which will better determine specific mission impacts. The POTF condition measure applies to cutters; the Coast Guard only recently started tracking POTF data for assessing condition trends on ATON small boats. During the fiscal year 2000 through 2004 period we reviewed, the Coast Guard did not have a centralized system for tracking the condition of these boats. 
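As an illustration of how a measure like POTF can be computed from casualty records, the sketch below derives the percentage for a single cutter from a list of major-casualty intervals. The input format and the example dates are hypothetical; the Coast Guard's actual casualty-reporting data are structured differently and aggregated by asset class.

```python
# Illustrative POTF-style calculation: percent of time an asset was free of
# major casualties during a reporting period. Input format is assumed for
# the sketch and is not the Coast Guard's actual casualty-report schema.
from datetime import datetime

def percent_of_time_free(period_start, period_end, major_casualties):
    """major_casualties: list of (start, end) datetime pairs; overlaps are not merged."""
    total_hours = (period_end - period_start).total_seconds() / 3600.0
    degraded_hours = 0.0
    for start, end in major_casualties:
        # Clip each casualty to the reporting period before summing its duration.
        start, end = max(start, period_start), min(end, period_end)
        if end > start:
            degraded_hours += (end - start).total_seconds() / 3600.0
    return 100.0 * (total_hours - degraded_hours) / total_hours

# Example: a single 30-day major casualty in a 365-day fiscal year yields
# roughly 91.8 percent, above the 72 percent standard discussed above.
fy_start, fy_end = datetime(2003, 10, 1), datetime(2004, 9, 30)
casualties = [(datetime(2004, 1, 10), datetime(2004, 2, 9))]
print(round(percent_of_time_free(fy_start, fy_end, casualties), 1))
```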
Its internal assessment of the condition of the boats was based on two approaches, as follows: For standard boats, which are purchased by Coast Guard headquarters and have similar capabilities and equipment for all boats of a particular type, the Coast Guard assessed condition by determining the boats remaining service lives through a process referred to as ship structure and machinery evaluation boards (SSMEB). The SSMEB, which is conducted 10 years after a boat is commissioned and is repeated at 5-year intervals, applied to two of the six types of ATON boats we reviewed. For nonstandard boats, which are purchased by individual Coast Guard units for individual needs, the Coast Guard s assessment was based on anecdotal information from district boat managers, maintenance managers, annual boat inspection reports, and site visits. This approach was used on four of the six types of boats we reviewed. Using these approaches, the Coast Guard characterized most of these asset types as in fair to poor condition. By contrast, however, when the Coast Guard assembled POTF data for a portion of these boats in fiscal year 2005, the data did not support this assessment. The boats analyzed had average scores above the Coast Guard s goal of 72 percent. (App. III provides further details on condition measures for each of the ATON and domestic icebreaking assets.) <3.2. Increasing Maintenance Costs Indicate Possible Condition Issues> For our second measure trends in maintenance expenditures the picture with regard to condition is more consistent than for our first measure: maintenance expenditures for domestic icebreaking and ATON cutters are increasing, even after taking inflation into account. We analyzed three types of maintenance costs: Scheduled maintenance costs, which are planned for in advance and include such things as repainting the vessel; Unscheduled maintenance costs, which are for unforeseen emergencies; and Engineering Logistic Center (ELC) costs, which include fleetwide projects that require engineering assistance (such as checking for watertight integrity) and therefore cannot be handled at the unit level. These projects, such as replacing a generator, help sustain capability but do not enhance it, according to Coast Guard officials. While there are some asset-by-asset variations, total maintenance costs for domestic icebreakers and ATON cutters increased during the period we examined (fiscal years 2001 through 2005). As figure 4 shows, total annual maintenance costs for domestic icebreakers nearly doubled, from slightly more than $3 million to slightly more than $6 million. The increase came primarily in two of the three categories in most years scheduled maintenance costs and ELC costs. Although maintenance costs are also affected by the amount of funding available in any given year, according to Coast Guard officials, maintenance managers have discretion to move some funds to those assets most in need of maintenance money. Coast Guard officials have also said that these costs are indicative of condition issues. For example, an ELC official said that the increase in ELC costs was related to condition because the money was used for the overhaul of domestic icebreakers. In addition to the amount of funding available in any given year, these maintenance costs can also be impacted by a variety of other factors such as the pace of operations. For example, maintenance costs can be expected to increase as the pace of operations increases. 
Total annual cost increases for ATON cutters showed a similar trend, more than doubling from over $13 million to over $32 million during the 5-year period (see fig. 5). For ATON cutters, cost increases were greatest in ELC maintenance and unscheduled maintenance. As with icebreakers, Coast Guard officials considered these expenditures to be related to asset condition. <3.3. Evidence Gathered from Interviews, Site Visits, and Other Records Also Indicates Condition Issues> Evidence we gathered during our discussions with maintenance personnel, our visits to various Coast Guard installations, and our review of other Coast Guard records also pointed to declining condition of a number of these assets. However, according to a program manager who previously served as a commanding officer on an icebreaker, for some of the older assets the decline in condition has not been beyond what would be expected of assets 20 years or more in age. During our interviews and site visits, Coast Guard personnel reported to us that crew members have had to spend increasing amounts of time and resources to troubleshoot and resolve maintenance issues on older domestic icebreaking and ATON assets. They indicated that because the systems and parts are outdated compared with the technology and equipment available today, it can be challenging and time-consuming to diagnose a maintenance issue and find parts or determine what corrective action to take. For example, the propulsion control system on the 140-foot icebreaking tugs uses circuit cards that were state-of-the-art when the tugs were commissioned in the late 1970s to 1980s but are no longer manufactured today and have been superseded by computer control systems (see fig. 6). Coast Guard personnel said the lack of a readily available supply of these parts has forced maintenance personnel to order custom-made parts or refurbish the faulty ones, increasing the time and money it takes to address maintenance problems. Finding knowledgeable individuals to identify problems with outdated equipment is difficult, they said, which further complicates maintenance. Crews of other assets we visited also confirmed the difficulty of diagnosing problems and obtaining replacement parts for other critical subsystems such as the main diesel engines. Since at least 2002, the Coast Guard has been on record as saying these assets are in decline. In a mission needs analysis issued that year, the Coast Guard concluded that its domestic icebreaking and ATON assets were affected in varying degrees with respect to safety, supportability, environmental compliance, and habitability, and that addressing these issues would require replacing or rehabilitating the assets. The analysis noted that the need to replace or rehabilitate inland buoy tenders and 45-foot buoy boats had been identified as early as 1993 but had not yet been addressed. It also noted that the 21-foot trailerable aids-to-navigation boats and the 55-foot aids-to-navigation boats, most of which have yet to be replaced, had been extended beyond their projected service lives. When we asked Coast Guard officials if current usage patterns were precipitating the decline of these assets, they said that overages can have an impact on some assets, especially those with more complex systems and subsystems. The officials said that exceeding planned usage limits may leave less time to maintain these systems. 
They said that the deterioration of an asset and its systems from usage consistently above the limits would be reflected in periodic engineering assessments, known as SSMEBs, of the assets. In our site visits, we did learn of one example in which increased use of assets for security-related purposes may be affecting condition. The example involves the 140-foot icebreakers, which currently are being used extensively for security-related activities when they are not engaged in icebreaking activity. According to Coast Guard personnel, these icebreakers were designed to operate at maximum power for icebreaking; however, maritime security missions typically require several hours of idling, which is detrimental to the engine. Extended periods of idling, they said, cause oil discharge and sludge buildup in the engine and mufflers. Thus, running assets in ways for which they were not designed could result in faster degradation of their condition. <3.4. Performance Indicators for Icebreaking and ATON Missions Show Mixed Results> Against this backdrop of condition indicators, the Coast Guard's measures of performance for domestic icebreaking and ATON missions show mixed results, with several indicators showing that mission performance has been improving or largely unchanging, while at least one other indicates a decline. For domestic icebreaking, the Coast Guard's performance indicator is the number of days that ice leads to closures of waterways in the Great Lakes region, the region in which most domestic icebreaking activity occurs. The Coast Guard's performance goal is to have 2 or fewer closure days during average winters. During fiscal years 2001 through 2005 the Coast Guard met this goal every year but one. The exception was fiscal year 2004, when waterways were closed for 4 days. According to Coast Guard officials, however, vessel condition was not a factor in waterway closures; instead, they were related to an icebreaker's being diverted to free a stuck vessel and to a response to a commercial aircraft crash. For ATON, the Coast Guard's primary performance indicator is the number of collisions, allisions, and groundings. Since these events can cause deaths and injuries, environmental and property damage, and lead to waterway closures that limit commercial and recreational activity, a decline in this measure is an improvement. During fiscal years 2001 through 2005, this measure declined, a positive development (see fig. 7). While the Coast Guard's primary ATON performance indicator was showing improvement, however, an important secondary measure was showing an adverse effect. This indicator, which measures the probability that an aid to navigation or a system of aids-to-navigation is performing its specified function at any randomly chosen time and is expressed as a percentage of total time, is the leading performance measure used in managing the ATON program, according to Coast Guard officials. This measure has steadily declined since fiscal year 2002 (see table 6), and since a smaller percentage means fewer aids are available, a decline in this measure is an adverse development. Coast Guard officials said some of this decline was attributable to the condition of the ATON cutters and boats used to service the navigational aids, but they were not able to estimate how much of the decline could be attributed to this cause. 
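One way to express the aid availability measure just described, offered here as an illustrative formulation rather than the Coast Guard's official definition, is as the share of aid-hours during which aids were performing their specified function:

```latex
\text{Availability} = \frac{\sum_{i} \text{hours aid } i \text{ was performing its specified function}}
                           {\sum_{i} \text{total hours aid } i \text{ was in service}} \times 100\%
```

Under a formulation like this, an extinguished light or an off-station buoy lowers the measure until the discrepancy is corrected, which is one reason the condition of the cutters and boats that service the aids can influence it.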
In other work, we have noted that the Coast Guard s performance indicators can be affected by multiple factors and that there are challenges to using such measures in linking resources to results. The ability to meet icebreaking goals, for example, can be affected by the severity of the winter. In fact, when the Coast Guard met its goal for waterway closures in fiscal year 2005, Coast Guard officials attributed the result in part to less severe average ice conditions than in previous years. Similarly, the ability to keep navigational aids in working order can be adversely affected by such uncontrollable factors as the severity of a hurricane or winter storm season. The Coast Guard has launched a number of initiatives designed to address challenges in linking resources to results of these missions. These initiatives followed program assessments conducted by the Office of Management and Budget, which completed an assessment of the ATON program in 2002 and the domestic icebreaking program in 2004. For the ATON program, the assessment determined that the program did not demonstrate results and recommended that the program have specific long-term performance goals that focus on outcomes. The assessment of the domestic icebreaking program determined that the program was effective, but that more ambitious performance targets needed to be set. In response to these findings, the Coast Guard has worked to set long-term performance targets and develop new measurement frameworks to align with OMB s recommendations. <4. To Continue to Achieve the Missions of Its ATON and Domestic Icebreaking Assets, the Coast Guard Has Taken Actions, Made Proposals, and Studied Outsourcing Possibilities> The Coast Guard has considered or proposed a wide variety of actions to continue to achieve the missions that its domestic icebreaking and ATON assets perform and is moving ahead with several of them. Actions under way include developing new ways to measure asset condition, manage boat and cutter maintenance, and make choices about which maintenance projects to conduct. The Coast Guard has also acquired some new buoy tenders and a new icebreaker, though the bulk of its icebreaking and ATON fleet remains at or beyond projected service lives. Coast Guard officials stated that to determine whether and when to replace or rehabilitate aging assets, factors such as the assets condition and trends in maintenance costs, among other things, are taken into account. Proposals to systematically rehabilitate or replace these assets have been denied or deferred by DHS or the Office of Management and Budget (OMB), apparently due to competition from initiatives such as the $24-billion Deepwater project for replacing or renovating other Coast Guard vessels and aircraft. In response, the Coast Guard has separated the proposals into smaller parts and is trying to fund some projects from within the Coast Guard s budget. Finally, the Coast Guard studied what mission activities make the best business case for outsourcing of functions to the private sector, but states that potential disadvantages to outsourcing exist such as loss of capabilities and inability to retain personnel. <4.1. Actions Have Been Taken to Manage Assets and Acquire Some New Ones> Three main steps to manage assets are under way, and several acquisitions have been completed in both the icebreaking and ATON fleets. <4.1.1. 
Coast Guard Is Developing a More Robust Condition Measure> The Coast Guard is developing a new measure to track an asset's condition. As mentioned above, the Coast Guard's previous measure, percent of time free of major casualties, did not capture the extent to which equipment casualties degraded mission capabilities. Called percent of time fully mission capable, the new measure is intended to more directly link a cutter's condition to its mission capability. Developed after our examination of the condition of Deepwater assets, this measure will be used for ATON and domestic icebreaking assets as well. For the new measure, the Coast Guard is developing codes that rank the degree of importance of each piece of a cutter's equipment to each mission that the cutter could perform. The Coast Guard plans to use these codes in casualty reports, providing engineers and operators with information about the impact of equipment casualties on each possible mission. This information will then be used in calculating the condition measure for each cutter class and mission, allowing Coast Guard officials to determine, for example, the degree of icebreaking capability of the domestic icebreaking fleet at any given time. Coast Guard officials said they expect final approval of this measure this year. <4.1.2. Coast Guard Is Implementing New Approaches to Manage ATON Boat and Cutter Maintenance> The Coast Guard is implementing a centralized boat maintenance initiative to improve the management of its boat fleet, which includes many ATON boats. In contrast to the previous approach, in which local boat operators managed boat maintenance and oversaw the spending of maintenance funds, the new initiative places management of boat maintenance and expenditures with naval engineers. According to the Coast Guard, the key advantages of this initiative include standardized maintenance practices for the boats, better oversight of maintenance funding, and enhanced tracking and analysis of casualties. In addition, it should improve the tracking of the condition of the Coast Guard's small boat fleet, which has lacked a centralized tracking system. Known as Centralized Boat Maintenance Management, this initiative is expected to be rolled out Coast Guard-wide by fiscal year 2008 if adequate resources and personnel are available. Since 2002, the Coast Guard has also been gradually implementing a maintenance approach called condition-based maintenance for select subsystems and parts of its newer coastal and seagoing buoy tending cutters. Under this approach, the condition of a part or subsystem, such as the main diesel engine, is evaluated at regular intervals to determine whether it needs to be replaced or have maintenance performed. Parts or systems are replaced or receive maintenance only if their condition shows excessive wear or they do not perform at an acceptable level. Under the previous approach, maintenance occurred at time-based intervals even if the part showed no excessive wear and performed acceptably. The key advantage of the change, according to Coast Guard officials, is reduced costs. For example, the Coast Guard estimated that the approach reduced drydock costs related to maintaining the newer coastal and seagoing buoy tenders by $2 million in fiscal year 2005. The Coast Guard is considering expanding the use of this maintenance approach to other subsystems of the newer buoy tending assets and to the new Great Lakes icebreaker commissioned in 2006.
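The difference between the previous time-based policy and condition-based maintenance can be shown with a short sketch. It is illustrative only: the service interval, wear threshold, and readings below are hypothetical values, not Coast Guard figures.

```python
from dataclasses import dataclass

@dataclass
class PartReading:
    part: str
    hours_since_service: float  # operating hours since last maintenance
    wear_fraction: float        # 0.0 = like new, 1.0 = fully worn (from inspection)
    performance_ok: bool        # passed its acceptance test at the last evaluation

# Hypothetical policy parameters, chosen for illustration.
TIME_BASED_INTERVAL_HOURS = 2_000
WEAR_THRESHOLD = 0.7

def time_based_due(reading: PartReading) -> bool:
    """Traditional policy: service whenever the fixed interval has elapsed."""
    return reading.hours_since_service >= TIME_BASED_INTERVAL_HOURS

def condition_based_due(reading: PartReading) -> bool:
    """Service only when inspection shows excessive wear or unacceptable
    performance, regardless of how much time has elapsed."""
    return reading.wear_fraction >= WEAR_THRESHOLD or not reading.performance_ok

if __name__ == "__main__":
    readings = [
        PartReading("main diesel engine", 2_300, 0.35, True),
        PartReading("bow thruster", 900, 0.82, True),
    ]
    for r in readings:
        print(f"{r.part}: time-based due={time_based_due(r)}, "
              f"condition-based due={condition_based_due(r)}")
```

In this example the engine, though past the nominal interval, would not be serviced under the condition-based policy, while the heavily worn thruster would be; deferring work that the equipment's condition does not yet require is the source of the cost savings described above.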
<4.1.3. Coast Guard Is Developing a Tool to Better Prioritize Upgrades and Maximize Asset Capabilities> In 2002, we recommended that the Coast Guard develop a long-term strategy to set and assess levels of mission performance. We found this was an important step to take because it links investments to asset capabilities and mission priorities so that the Coast Guard can better decide how limited budget dollars should be spent. The Coast Guard has been working to apply the principles behind such a strategy to (1) better prioritize the projects needed to upgrade assets such as aging ATON assets and domestic icebreakers and (2) obtain the greatest overall mix of capabilities for its assets within its budget in order to maximize mission performance. The tool it is developing is called the Capital Asset Management Strategy (CAMS). CAMS is designed to analyze the capability trade-offs for upgrades and maintenance projects across asset classes, allowing the Coast Guard to determine which combination of projects will provide the most capability for the dollars invested. These analyses take into account such factors as asset condition, the asset's importance to specific missions, and the relative importance of missions. The Coast Guard continues to refine CAMS and expects to have it in full use beginning with the budget for fiscal year 2009. The recommendations stemming from CAMS are intended to augment the information currently provided to decision makers in the budget development process. <4.1.4. Coast Guard Acquired Some New Assets> Since the 1990s, the Coast Guard has been able to replace buoy tenders with new assets that represent about 15 percent of its current ATON fleet. From 1996 to 2004, the Coast Guard commissioned 14 new 175-foot coastal buoy tenders and 16 new 225-foot seagoing buoy tenders to replace an aging fleet of 11 coastal and 27 seagoing buoy tenders that were built between 1942 and 1971. The new buoy tenders have improved capabilities such as the following: A computerized positioning system that automates the task of holding the vessel in place while working on a navigational aid. Previously, this task had to be done manually, requiring the crew to constantly monitor and maintain the vessel's position, sometimes for up to 10 hours at a time. The system relieves the crew of this task and reduces safety concerns associated with crew fatigue. Bow and stern thrusters to enhance the vessels' maneuverability and improve the crew's ability to maintain position. Hydraulic chain stoppers and winches to reduce the number of crew members required to do the work and enhance safety (see fig. 8). Accommodations that allow for dual-gender crews, increasing the Coast Guard's ability to allow women to serve on the vessels. These and other features also allow the newer buoy tenders to carry out other missions, according to a Coast Guard official involved in their design and acquisition. Their size, stability, and maneuverability are useful for such missions as search and rescue, homeland security, and law enforcement, and they have specific capabilities for such duties as skimming oil or mounting machine guns for security patrols. With their sizeable fuel tanks and storage capacity, they can also serve as logistics support platforms to restock vessels involved in drug interdiction and other activities. For 7 weeks in 2005, for example, one tender served as a supply platform for a Coast Guard vessel conducting drug interdiction patrols.
According to Coast Guard officials, this enabled the patrol vessel to remain on patrol in the area for a longer period of time than it otherwise would have with its limited fuel and storage capacities. Besides the buoy tenders, the Coast Guard commissioned a new 240-foot multimission icebreaker in 2006 to replace a 62-year-old icebreaker and an aging buoy tender on the Great Lakes. The new icebreaker has enhanced icebreaking capabilities and the same ATON capabilities as the newer seagoing buoy tenders, enabling it to work on navigational aids in ice conditions as well as during other times of the year when no icebreaking is needed. <4.2. Additional Proposals to Rehabilitate or Replace Aging Assets Remain Largely Unfunded> Despite the new acquisitions, more than half of the assets in the domestic icebreaking and ATON fleets have reached or are nearing the end of their service lives. Coast Guard officials stated that they use a process that considers information such as how close the assets are to the end of their design life, the condition of the assets as determined by periodic assessments, and trends in maintenance costs, among other things, to determine whether to rehabilitate or replace these aging assets. This information is used to identify the asset types that are most in need of replacement or major maintenance and therefore should be given greater consideration in maintenance planning and budgeting. In 2002, the Coast Guard proposed options for systematically rehabilitating or replacing 164 cutters and boats in these fleets. According to Coast Guard officials, these options were proposed after determining that the age, condition, and cost of operating these assets would diminish the capability of the Coast Guard to carry out ATON and domestic icebreaking missions over time without rehabilitation or replacement of some or all of the assets. In 2004, it completed a preliminary analysis of four approaches, including the status quo, that is, maintaining the existing fleet. This analysis provided an estimate of the total life-cycle costs for each approach over a 33-year period from fiscal year 2005 to fiscal year 2037. (See table 7 for a description of each approach.) Estimated costs ranged as high as $8.5 billion; however, Coast Guard officials emphasized that these estimates were preliminary and not reliable. As a result, we are not reporting these numbers in detail. No funds have been allocated to pursue this project further, apparently because of competing funding requests for replacing or rehabilitating other Coast Guard assets. According to a Coast Guard program official, although resource proposals to carry out this project were made during the budget planning processes for fiscal years 2004 through 2007, the requests were either deferred or denied by DHS or the Office of Management and Budget. Coast Guard officials involved in the program said they were not aware of the exact reasons why the requests were denied or deferred. The officials said that funding demands from other major Coast Guard programs already under way (such as the Deepwater program for replacing or rehabilitating aircraft and cutters with greater at-sea capability), which likely had higher priority in the competition for limited resources, combined with the large scope and size of the proposed project, may have prevented the project from being funded.
Without specific funding to move the project forward, the Coast Guard has attempted to break the project into smaller components and pursue potential funding from within the Coast Guard s budget, focusing on the assets most in need of maintenance or replacement. In February 2006, the Coast Guard began a project to replace its fleet of 80 trailerable aids-to- navigation boats with new boats that have enhanced capabilities to do ATON work as well as other missions. The enhanced capabilities include equipment to lift navigational aids out of the water for service, more deck space for working on these aids, an elevated work platform for working on aids that are high in the water, and faster speeds to reduce transit times. The Coast Guard intends for the new boats to be more multimission capable. For example, their added speed and deck space will help with search and rescue missions, and they will have gun mounts for use in law enforcement or maritime security missions. According to a Coast Guard official, this acquisition would cost approximately $14.4 million if all 80 boats are purchased and would bring on new boats over a 5-year period as funds allow. The Coast Guard official responsible for the project said the Coast Guard intends to make the purchases using a funding stream appropriated for the maintenance of nonstandard boats that can be allocated to the boats with the most pressing maintenance or recapitalization needs. Availability of these funds, however, depends on the condition and maintenance needs of other nonstandard boats; if this funding has to be applied to meet other needs, such as unanticipated problems, it may not be available for purchasing these boats. Separate from this effort to acquire new trailerable boats, the Coast Guard has made a request as part of the budget process to begin rehabilitating aging river buoy and construction tenders. This project, which will focus on rehabilitating the systems within the engine rooms of the assets, is estimated to cost approximately $75 million. The Coast Guard plans to include this project in future budget requests. Coast Guard officials indicated that they were submitting this request because these assets were determined to be in the worst condition. <4.3. Study Identified Outsourcing Possibilities but May Face Disadvantages to Implement> In 2004, the Coast Guard examined possibilities for outsourcing missions in response to an OMB assessment of the ATON program. As a result of that assessment, Coast Guard and OMB officials agreed to study which ATON activities make the best business case for being performed by contractors outside of the Coast Guard with minimal impact on the Coast Guard s ability to carry out its other missions. The subsequent study, completed in April 2004, found that inland construction tenders spent most of their resource hours on the ATON mission with minimal impact or use in other missions and provided one of the best opportunities for further study of outsourcing. However, the study did not quantify the potential benefits that could be derived from outsourcing these activities. In August 2006 the Coast Guard completed an analysis as to whether ATON functions could feasibly be outsourced, and which parts, if any, were inherently governmental in nature. 
The objective of this analysis was to compare the Coast Guard's inland construction tender and river buoy tender operating costs with representative private sector marine industry costs and to make recommendations regarding the feasibility of commercially supporting and operating the inspection, servicing, and contingency response capability of the ATON mission and assets. According to Coast Guard officials, the results will be incorporated into future acquisition plans for replacing the current capabilities represented by inland construction tenders and river buoy tenders. This Coast Guard analysis was finalized after we had completed our audit work; therefore, we were unable to obtain and review the study in time for the final preparation of this report. Although possibilities for outsourcing were identified, outsourcing also carries potential disadvantages, according to Coast Guard program officials. Potential disadvantages they mentioned include the following: Outsourcing could lead to a loss of surge capability, that is, the capacity to respond to emergencies or unusual situations. In part, this capability may be needed within the ATON or icebreaking mission itself, such as when a hurricane or ice destroys or damages a large number of navigation aids. In the case of Hurricane Katrina, Coast Guard officials stated that because the Coast Guard had ATON assets such as construction tenders, crews were able to begin working immediately to repair damaged aids and get the waterways open to maritime traffic again. This surge capability may also be needed for other missions, such as when ATON assets are used to support search and rescue efforts. In the aftermath of Hurricane Katrina, for example, some ATON assets provided logistical support for first responders or transported stranded individuals. Outsourcing may disrupt the Coast Guard's personnel structure and weaken the agency's ability to attract and retain personnel. Specifically, officials are concerned that outsourcing would likely reduce opportunities that provide important experience for personnel to advance in their careers and eliminate positions that typically have more predictable work schedules than positions in some of the Coast Guard's other missions. <5. Concluding Observations> The Coast Guard has been using its domestic icebreaking vessels, and to a lesser extent its ATON assets, to accommodate the need for additional homeland security activities in the post-September 11 environment, and it is doing so thus far largely without curtailing ATON or domestic icebreaking activities or unduly straining these assets past their designed workloads. The available evidence also indicates that despite some decline in the condition of some asset types, the Coast Guard's ability to meet its aids-to-navigation and domestic icebreaking missions, as indicated by the mixed outcomes of its key mission performance measures, has not shown clear trends of decline. Efforts by Coast Guard personnel to troubleshoot operational problems and to take other steps to keep assets operating appear to be one reason mission performance has not been further affected, and many other factors, such as the harshness of a winter or the severity of storm damage to navigation buoys and beacons, can also affect performance results. For the present, however, the impact of these additional mission responsibilities does not appear to be a cause for alarm. That said, the future of these assets bears close watching.
The fact that many of the assets have reached or are approaching the end of their design service lives could mean the need for rehabilitated or new assets may become more pressing in the future. Another issue is whether current operations, in both the level and types of usage, are adding to maintenance costs and equipment casualties beyond what the Coast Guard would normally expect. For example, operating domestic icebreakers beyond their underway hours limit could accelerate the rate of decline in their condition. If this is the case, using these assets to meet security missions could be meeting the Coast Guard's immediate needs but accelerating the need for replacement or rehabilitation. According to Coast Guard officials, the Coast Guard's attempt to systematically rehabilitate or replace its ATON and domestic icebreaking fleets was proposed at a time when competing demands likely caused postponements of requests for the needed funds. These competing demands, reflected largely in the Coast Guard's expensive and lengthy Deepwater asset replacement program, will continue for some time, as will other pressures on the federal budget. The Coast Guard is moving to improve the process it uses to set budget priorities through actions such as its new tool to link asset condition and funding decisions, to better identify the projects that will provide the most capability with the limited funds that are available. Given that many of these actions are recent and need a chance to work, it is too early to evaluate their effectiveness. However, even as the Coast Guard takes steps to determine how best to replace or rehabilitate its assets, limited budgetary resources combined with other competing asset replacement programs already in process will likely continue to challenge the Coast Guard to find sufficient resources to carry out the options identified. <6. Agency Comments> We requested comments on a draft of this report from the Department of Homeland Security and the Coast Guard. The Coast Guard provided technical comments, which we have incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the appropriate congressional committees; the Secretary of Homeland Security; the Commandant of the Coast Guard; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines the time spent by the U.S. Coast Guard's domestic icebreaking and ATON assets on various missions, the condition of these assets, and the actions the Coast Guard has taken to continue to achieve the missions of these assets. Our work focused on three key questions: (1) What are the recent trends in the amount of time these assets have spent performing various missions? (2) What is the condition of the Coast Guard's ATON and domestic icebreaking assets, and how has their condition affected the performance of their primary missions?
(3) What actions has the Coast Guard taken to continue to achieve the missions of its ATON and domestic icebreaking assets? In identifying trends in the amount of time spent on missions and the impact of these trends, we analyzed data from the Coast Guard's Abstract of Operations (AOPS) database, which tracks resource hours for each asset. For each asset type within our scope, we examined trends in the number of resource hours spent between fiscal years 2001 and 2005 conducting each of the Coast Guard's missions. To determine the reliability of these data, we (1) reviewed the results of previous reliability assessments we have conducted of the data for other work and (2) confirmed with the AOPS program manager that neither the data nor the manner in which they are managed had changed since the previous assessment in ways that would affect their reliability. We determined that the data were sufficiently reliable for the purposes of this report. We supplemented our analysis of these resource hours with documentation from interviews with asset program managers and crews of ATON and domestic icebreaking assets. In assessing the condition of the assets during fiscal years 2001 to 2005, we analyzed what Coast Guard officials identified as the best available condition measures. We obtained concurrence from the Office of Naval Engineering and the Office of Cutter Forces that the appropriate measures to use for the condition of assets were percent of time free of major casualties; scheduled, unscheduled, and Engineering Logistics Center maintenance costs; and estimated deferred maintenance costs. To determine the reliability of these data, we (1) reviewed the results of previous reliability assessments we have conducted of the data for other work and (2) examined responses the Coast Guard provided to a questionnaire we sent requesting updated information on the administration and oversight of the databases. We determined that the data were sufficiently reliable for the purposes of this report. We supplemented our analysis of these measures with documentation from internal Coast Guard reports, as well as with interviews of asset program managers at Coast Guard headquarters and crewmembers of the assets located in the field. In addition to talking with crewmembers, we directly observed the condition of various assets during our site visits to Alameda, Calif.; Bayonne, N.J.; Buchanan, Tenn.; Baltimore, Md.; Mobile, Ala.; Seattle, Wash.; Sault Ste. Marie, Mich.; and Atlantic Beach, N.C. These assets were selected to provide diversity in terms of type and age of asset and geographic location. In addition, we interviewed Coast Guard officials with the Area Commands in Alameda, Calif., and Portsmouth, Va., as well as in Districts 5 and 13. To determine the actions that the Coast Guard has undertaken to continue to achieve the missions of its ATON and domestic icebreaking assets, we interviewed officials with the Coast Guard's Engineering and Logistics Center, Engineering and Logistics Directorate, Office of Naval Engineering, Office of Boat Forces, and Office of Cutter Forces. To obtain information on newer assets the Coast Guard has acquired, we also made site visits to interview personnel and observe the assets in San Francisco, Calif.; Atlantic Beach, N.C.; Baltimore, Md.; Cheboygan, Mich.; and Mobile, Ala. To determine what proposals the Coast Guard has made to rehabilitate or replace its ATON and domestic icebreaking assets, we reviewed Coast Guard project documents and interviewed officials at Coast Guard headquarters.
We did not, however, verify the accuracy of the cost estimates provided for those proposals. Finally, to determine the work the Coast Guard has done to study the outsourcing of ATON and domestic icebreaking mission activities and the potential impact of outsourcing those activities, we interviewed Coast Guard officials at headquarters as well as officials and crew members in the field. We also reviewed the completed business case analysis of outsourcing opportunities for ATON mission activities. We performed our review from July 2005 to August 2006 in accordance with generally accepted government auditing standards. Appendix II: Mission Resource Hours of ATON and Domestic Icebreaking Assets, Fiscal Years 2001 to 2005 Appendix II provides information on the number of resource hours Coast Guard ATON and domestic icebreaking assets have spent on various Coast Guard missions during fiscal years 2001 through 2005. The Coast Guard maintains information, on a program-by-program basis, about how resources, such as vessels, boats, and aircraft, are used. Each hour that these resources are used is called a resource hour. Resource hours are accumulated and reported by quarter and represent the time spent by the Coast Guard's major assets to conduct its programs. Table 8 shows, by asset type, the hours ATON and domestic icebreaking assets have spent on each of the Coast Guard's missions for each fiscal year from 2001 through 2005. The percentage of each asset type's total fiscal year resource hours that these hours represent is shown in parentheses. Appendix III: Condition Measure of ATON and Domestic Icebreaking Assets, Fiscal Years 2000 to 2005 <7. Condition Measure for ATON and Domestic Icebreaking Cutters> Appendix III provides information on the condition of the Coast Guard's ATON and domestic icebreaking assets. The Coast Guard's key summary measure of condition, percent of time free (POTF) of major casualties, shows a mixed picture of condition for ATON and domestic icebreaking cutters. However, the measure captures only major equipment casualties, which degrade mission capabilities; it does not capture minor equipment casualties that may also degrade mission capabilities. As such, this measure may underestimate the decline in asset condition. The Coast Guard has acknowledged the limitations of this measure and is working on a replacement for it. Because the Coast Guard is in the process of developing a new condition metric and did not have fiscal year 2005 data available, our analysis covers fiscal years 2000 through 2004. See table 9 for individual POTF figures for the ATON and domestic icebreaking cutter assets for fiscal years 2000 through 2004. <8. Condition Measure for ATON Boats> The Coast Guard has less data for ATON boats, with POTF figures available only for fiscal year 2005. Based on these figures, most boats appear to be in fair to poor condition. In addition to these figures, the Coast Guard has performed an internal assessment of condition using two approaches, one for standard and another for nonstandard boats. For standard boats, the Coast Guard assessed condition through a process referred to as ship structure and machinery evaluation boards (SSMEBs), while for nonstandard boats it assessed condition by obtaining anecdotal evidence from district managers, maintenance managers, annual boat inspection reports, and site visits. This internal assessment further supports our finding that the POTF figure may underestimate the decline in asset condition: as shown in table 10, while the 45-foot buoy boat and the 21-foot trailerable aids-to-navigation boat had POTF percentages above 90 percent, their internal condition assessments were rated poor.
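The sketch below shows how a POTF-style figure could be computed from casualty records and why excluding minor casualties tends to flatter an asset's condition. The record format, severities, and hours are hypothetical and are not drawn from Coast Guard data.

```python
# Hypothetical casualty records for one cutter in one fiscal year:
# (severity, hours the cutter carried the casualty). For simplicity the
# casualty periods are assumed not to overlap.
CASUALTIES = [
    ("major", 300.0),
    ("minor", 650.0),
    ("minor", 120.0),
]

HOURS_IN_FISCAL_YEAR = 8_760.0

def percent_of_time_free(casualties, include_minor=False):
    """Percent of the year free of major (or, optionally, all) casualties."""
    counted = [hours for severity, hours in casualties
               if severity == "major" or (include_minor and severity == "minor")]
    return 100.0 * (1.0 - sum(counted) / HOURS_IN_FISCAL_YEAR)

if __name__ == "__main__":
    print(f"POTF, major casualties only: {percent_of_time_free(CASUALTIES):.1f}%")
    print(f"POTF, all casualties: {percent_of_time_free(CASUALTIES, include_minor=True):.1f}%")
```

With this sample data, counting only major casualties yields a POTF above 96 percent, while counting all casualties yields roughly 88 percent, mirroring the gap between high POTF figures and poor internal assessments shown in table 10.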
Appendix IV: Maintenance Trends of ATON Cutter and Domestic Icebreaking Assets, Fiscal Years 2001 to 2005 <9. Maintenance Trends> Appendix IV provides information on the maintenance costs for the Coast Guard's ATON cutter and domestic icebreaking assets during fiscal years 2001 to 2005. Maintenance cost data for domestic icebreakers and ATON cutters show a consistently increasing trend. The cost figures are broken out by scheduled (planned maintenance), unscheduled (unforeseen maintenance), and Engineering Logistics Center (ELC) (fleetwide projects that require engineering assistance) amounts, which allows for a more specific analysis of the type of increase being incurred. Table 11 shows the individual maintenance cost data, adjusted for inflation using 2005 dollars, for ATON cutters and domestic icebreakers for fiscal years 2001 through 2005. Appendix V: GAO Contact and Staff Acknowledgments Stephen L. Caldwell, Acting Director, Homeland Security and Justice Issues, (202) 512-9610, or [email protected]. <10. Acknowledgments> In addition to the above, individuals making key contributions to this report include Chuck Bausell, Melanie Brown, Steve Calvo, Michele Fejfar, Geoffrey Hamilton, Christopher Hatscher, Stephanie Sand, Stan Stenersen, Gladys Toro, and Friendly Vang-Johnson. Related GAO Products Coast Guard: Non-Homeland Security Performance Measures Are Generally Sound, but Opportunities for Improvement Exist. GAO-06-816. Washington, D.C.: August 16, 2006. Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006. Coast Guard: Observations on Agency Performance, Operations, and Future Challenges. GAO-06-448T. Washington, D.C.: June 15, 2006. Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: March 17, 2005. Coast Guard: Station Readiness Improving, but Resource Challenges and Management Concerns Remain. GAO-05-161. Washington, D.C.: January 31, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 10, 2004. Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004. Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004. Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004.
Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond. GAO-04-636T. Washington, D.C.: April 7, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Posthearing Questions Related to Aviation and Port Security. GAO-04-315R. Washington, D.C.: December 12, 2003. Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003. Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing Its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments through Domestic Seaports. GAO-02-955TNI. Washington, D.C.: July 23, 2002.
Why GAO Did This Study The marine transportation system is a critical part of the nation's infrastructure. To facilitate the safety and efficiency of this system, the Coast Guard maintains aids-to-navigation (ATON), such as buoys and beacons, and conducts domestic icebreaking in the Great Lakes, St. Lawrence Seaway, and northeast coast. To conduct these missions, the Coast Guard has a fleet of more than 200 vessels, ranging from 225-foot seagoing buoy tenders and 140-foot domestic icebreakers to 21-foot boats. After the terrorist attacks of September 11, 2001, many of these assets took on additional responsibilities for security patrols and other homeland security duties. Although some assets have been recently acquired, many others are reaching or have exceeded their design service lives, raising concerns about how well and for how much longer these older assets may be able to carry out their missions. In response, GAO examined (1) recent trends in the amount of time these assets have spent performing missions; (2) asset condition and its effect on mission performance; and (3) the actions taken by the Coast Guard to continue to achieve the missions of these assets. To conduct this work, GAO reviewed Coast Guard documents, interviewed Coast Guard officials, and made site visits to various locations around the country. In commenting on a draft of this report, the Coast Guard provided technical comments, which were incorporated as appropriate. What GAO Found Many ATON vessels and domestic icebreakers have operated more hours in recent years than in previous years, with the increase coming mainly in homeland security missions. Domestic icebreakers are now used more for homeland security than for icebreaking, reflecting their availability at times of the year when no icebreaking is needed. While not designed for homeland security, the assets can perform such duties acceptably, according to the Coast Guard. Most ATON vessels are used primarily for ATON activities. Newer ATON assets receive the greatest use on other missions, reflecting their greater multi-mission capabilities. Trends are mixed with regard to asset condition and mission performance. Available evidence, such as the amount of maintenance conducted, suggests condition is declining for some assets, though not precipitously. Coast Guard officials said some assets, while being operated for more hours, are still largely being operated within planned limits. Against this backdrop, indicators for measuring performance show mixed results: some have declined, while others have not. The current measure for asset condition is not clearly linked to mission performance, but the Coast Guard is working on developing a measure that links the two. Actions the Coast Guard has taken to continue to achieve the missions of these assets include revising maintenance approaches and developing a new analytical tool for deciding which projects provide the most capability for the dollars invested. The Coast Guard continues to acquire some new vessels to replace aging ones, but proposals to rehabilitate or replace other aging vessels have not been implemented, largely because of other funding priorities. The Coast Guard also studied the feasibility of contracting out some activities. While some possibilities for outsourcing were identified in the study, the Coast Guard has identified potential disadvantages to outsourcing these activities.
<1. Compliance with Legislative Conditions> DHS satisfied or partially satisfied each of the applicable legislative conditions specified in the appropriations act. In particular, the plan, including related program documentation and program officials statements, satisfied or provided for satisfying all key aspects of federal acquisition rules, requirements, guidelines, and systems acquisition management practices. Additionally, the plan partially satisfied the conditions that specified (1) compliance with the capital planning and investment review requirements of the Office of Management and Budget (OMB), (2) compliance with DHS s enterprise architecture, and (3) the plan s review and approval by DHS s Investment Review Board, the Secretary of Homeland Security, and OMB. <2. Status of Open Recommendations> DHS has completely implemented, has partially implemented, is in the process of implementing, or plans to implement all the remaining recommendations contained in our reports on the fiscal years 2002, 2003, and 2004 expenditure plans. Each recommendation, along with its current status, is summarized below: Develop a system security plan and privacy impact assessment. The department has partially implemented this recommendation. First, the US-VISIT program has developed a security plan that provides an overview of system security requirements, describes the controls in place or planned for meeting those requirements, and refers to the applicable documents that prescribe the roles and responsibilities for managing the US-VISIT component systems. However, a security risk assessment of the program has not been completed, and the plan does not include a date for the assessment s completion. Second, the US-VISIT program has completed a privacy impact assessment for Increment 2. However, the assessment does not satisfy all aspects of OMB guidance for such an assessment, such as fully addressing privacy issues in relevant system documentation. Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with the Software Engineering Institute s (SEI) guidance. The department is in the process of implementing this recommendation. The US-VISIT Acquisition and Program Management Office has initiated a process improvement program and drafted a process improvement plan. The office has also developed processes or plans, some of which are approved and some of which are in draft, for all except one of SEI s Software Acquisition Capability Maturity Model (SA-CMM ) Level 2 key process areas. Ensure that future expenditure plans are provided to the department s House and Senate Appropriations Subcommittees in advance of US- VISIT funds being obligated. With respect to the fiscal year 2005 expenditure plan, DHS implemented this recommendation by providing the plan to the Senate and House Subcommittees on October 19, 2004. Ensure that future expenditure plans fully disclose US-VISIT system capabilities, schedule, cost, and benefits to be delivered. The department has partially implemented this recommendation. The expenditure plan identifies high-level capabilities and high-level schedule estimates. It also identifies the amounts budgeted for each increment for fiscal years 2003 through 2005, but it does not associate this funding with specific capabilities and benefits. 
Further, while the plan identifies several benefits and associates these benefits with increments, it does not include any information on related metrics or on progress against achieving any of the benefits. Ensure that future expenditure plans fully disclose how the US-VISIT acquisition is being managed. The department is in the process of implementing this recommendation. The fiscal year 2005 plan describes some activities being employed to manage the US-VISIT acquisition, such as the governance structure, program office organizational structure, and staffing levels. However, the department does not describe how other important aspects of the program are being managed, such as testing, system capacity, and system configuration. Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. The department has partially implemented this recommendation. As of October 2004, US-VISIT had filled 59 of its 115 government positions, with plans to fill about half the vacant positions once security clearances have been completed. As of November 2004, the program office had filled 88 of a planned 117 contractor positions. The expenditure plan indicates that DHS has budgeted $83 million to maintain the US-VISIT program management structure and baseline operations. Clarify the operational context in which US-VISIT is to operate. The department is in the process of implementing this recommendation. In September 2003, DHS released version 1.0 of its enterprise architecture. We reviewed version 1.0 and found that it is missing, either partially or completely, all the key elements expected in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows. Since we reviewed version 1.0 of the architecture, DHS has drafted version 2.0. We have not reviewed version 2.0. Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. The department is in the process of implementing this recommendation. US-VISIT developed a cost-benefit analysis for Increment 2B, but it is unclear whether this increment will produce mission value commensurate with cost and risk. For example, the analysis addresses only government costs and does not address potential nongovernmental costs. Further, the analysis identifies three alternatives and identifies the third alternative as the preferred choice. However, US-VISIT is pursuing an alternative more closely aligned with alternative 2, because alternative 3 was considered too ambitious to meet statutorily required time lines. Define US-VISIT program office positions, roles, and responsibilities. The department has partially implemented this recommendation. US- VISIT has developed descriptions for positions within each office, and working with the Office of Personnel Management (OPM), it has drafted a set of core competencies that define the knowledge, skills, abilities, and other competencies needed for successful employee performance. Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. The department has partially implemented this recommendation. The US- VISIT program office, in conjunction with OPM, has drafted a Human Capital Plan. 
The plan includes an action plan that identifies activities, proposed completion dates, and the organization responsible for completing these activities. The program office has completed some of the activities called for in the plan, including the designation of a liaison responsible for ensuring alignment between DHS and US-VISIT human capital policies. Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. The department has partially implemented this recommendation. The US- VISIT program office has developed a risk management plan and process and has established a governance structure involving three primary groups the Risk Review Board, Risk Review Council, and Risk Management Team. The Risk Review Board represents the highest level of risk management within the program and is composed of senior level staff, such as the program director and functional area directors. However, US- VISIT has not reported high risks beyond this board. Define performance standards for each US-VISIT program increment that are measurable and reflect the limitations imposed by relying on existing systems. The department is in the process of implementing this recommendation. The US-VISIT program office has defined some technical performance measures such as availability, timeliness, and output quantity for Increments 1 and 2B, but it has not defined others, such as reliability, resource utilization, and scalability. Additionally, US-VISIT systems documentation does not contain sufficient information to determine the limitations imposed by US-VISIT s reliance on existing systems that have less demanding performance requirements, such as the 98.0 percent availability of the Treasury Enforcement Communications Systems. Develop and approve test plans before testing begins. These test plans should (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. The department is in the process of implementing this recommendation. According to the US-VISIT Systems Assurance Director, the Increment 2B system acceptance test plan was approved on October 15, 2004. However, no documentation was provided that explicitly indicated the approval of the plan. Further, the test plan did not fully address the test environment, include descriptions of tests to be performed, or provide test procedures to be followed in conducting the tests. The plan also did not provide traceability between test cases and the requirements to be verified by the testing. For example, 15 of the 116 requirements did not have test cases, and 2 requirements were labeled not testable. Ensure the independence of the Independent Verification and Validation (IV&V) Contractor. The department is in the process of implementing this recommendation. The US-VISIT Information Technology (IT) Management Office is developing high-level requirements for IV&V, including a strategy and statement of work for acquiring an IV&V contractor. Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. The department plans to implement this recommendation. The US-VISIT program office has not yet developed or implemented US-VISIT-level configuration management practices or a change control board. 
The office has developed a draft configuration management plan that describes key configuration management activities that are to be defined and implemented, such as defining and identifying processes and products to be controlled and recording and monitoring changes to the controlled items. The draft plan also proposes a governance structure, including change control boards. Identify and disclose management reserve funding embedded in the fiscal year 2004 expenditure plan to the Appropriations Subcommittees. The department has implemented this recommendation. The US-VISIT program office reported management reserve funding of $33 million for fiscal year 2004 in a briefing to the Subcommittees on Homeland Security, Senate and House Committees on Appropriations. Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding. With respect to the fiscal year 2005 expenditure plan, DHS implemented this recommendation. The fiscal year 2005 plan specified management reserve funding of $23 million. Assess the full impact of Increment 2B on land ports of entry workforce levels and facilities, including performing appropriate modeling exercises. The department has partially implemented this recommendation. The US- VISIT program office conducted an analysis to help determine the impact of Increment 2B on workforce and travelers. According to program officials, additional staff will not be needed to implement this increment at the land borders. In addition, the US-VISIT program office has conducted space utilization surveys at all of the 166 land ports of entry and has completed survey reports at 16 of the 50 busiest land ports of entry, with the remaining 34 reports planned to have been completed in the fall of 2004. Although the survey reports indicated that most of the ports reviewed were at or near capacity and that facilities had no room for expansion, the program office maintains that Increment 2B will not require expansion of any facilities and will only require minor modifications. Develop a plan, including explicit tasks and milestones for implementing all our open recommendations and periodically report to the DHS Secretary and Under Secretary on progress in implementing this plan; also report this progress, including reasons for delays, in all future US-VISIT expenditure plans. The Department is in the process of implementing this recommendation. The US-VISIT program office has developed a report for tracking the status of our open recommendations. This report is shared with the program office director but is not shared with the Secretary and Under Secretary. <3. Observations on the Expenditure Plan> Our observations recognize accomplishments to date and address the need for rigorous and disciplined program management practices relating to describing progress against commitments, managing the exit alternatives pilot, managing system capacity, and estimating cost, as well as collaborating with DHS s Automated Commercial Environment (ACE) program. An overview of specific observations follows: The program office has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT. On May 28, 2004, and on schedule, DHS awarded a contract for integrating existing and new business processes and technologies to a prime contractor and its related partners. The fiscal year 2005 expenditure plan does not describe progress against commitments made in previous plans. 
Although this is the fourth US-VISIT expenditure plan, it does not describe progress against commitments made in the previous three plans. For example, the fiscal year 2004 plan committed to analyzing, field testing, and initiating deployment of alternative approaches for capturing biometrics during the exit process at air and sea ports of entry. However, while the fiscal year 2005 plan states that US-VISIT was to expand its exit pilot sites during the summer and fall of 2004 and deploy the exit solution during fiscal year 2005, it does not explain the reason for the change or its potential impact. Additionally, the fiscal year 2004 plan stated that $45 million in fiscal year 2004 was to be used for exit activities. However, the fiscal year 2005 plan states that $73 million in fiscal year 2004 funds were to be used for exit activities, but it does not highlight this difference or address the reason for the change in amounts. The exit capability alternatives are faced with a compressed time line, missed milestones, and potentially reduced scope. In January 2004, US- VISIT deployed an initial exit capability as a pilot to two ports of entry, while simultaneously developing other exit alternatives. The May 2004 Exit Pilot Evaluation Plan stated that all exit pilot evaluation tasks were to be completed by September 2004. The plan allotted about 3 months to conduct the evaluation and report the results. However, an October 2004 schedule indicated that all exit pilot evaluation tasks were to be completed between late October 2004 and December 2004, which is about a 2-month evaluation and reporting period. As of early November 2004, exit alternatives were deployed and operating in only 5 of the 15 ports of entry that were scheduled to be operational by November 1, 2004. According to program implementation officials, this was because of delays in DHS granting security clearances to the civilian employees who would operate the equipment at the ports of entry. Additionally, the Evaluation Execution Plan describes the sample size of outbound passengers required to be evaluated at each port. This sample size will produce a specified confidence level in the evaluation results. Because of the reduced evaluation time frame, the program still plans to collect the desired sample size at each port by adding more personnel to the evaluation teams if needed. These changing facts and circumstances surrounding the exit pilot introduce additional risk concerning US-VISIT s delivery of promised capabilities and benefits on time and within budget. US-VISIT and ACE collaboration is moving slowly. In February 2003, we recognized the relationship between US-VISIT and ACE and recommended steps to promote close collaboration between these two programs. Since then, US-VISIT and ACE managers have met to identify potential areas for collaboration between the two programs and to clarify how the programs can best support the DHS mission and provide officers with the information and tools they need. However, explicit plans have not been developed nor actions taken to understand US- VISIT/ACE dependencies and relationships. Because both programs are making decisions on how to further define, design, develop, and implement these systems, it is important that they exploit their relationships to reduce rework that might be needed to integrate the programs. US-VISIT system capacity is being managed in a compartmentalized manner. Currently, DHS does not have a capacity management program. 
Instead, the US-VISIT IT Management Office relies on the respective performance management activities of the pre-existing systems, such as those managed by U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement. Until US-VISIT has developed a comprehensive performance management and capacity planning program, the program will continue to be reactive in its efforts to ensure that US-VISIT system resources are sufficient to meet current workloads, increasing the risk that it may not be able to adequately support mission needs. The cost estimating process used for Increment 2B did not follow some key best practices. The US-VISIT cost estimate did not fully satisfy most of the criteria called for in SEI guidance. For example, costs related to development and integration tasks for US-VISIT component systems are specified, but information about estimated software lines of code is not. Additionally, no one outside the US-VISIT program office reviewed and concurred with the cost estimating categories and methodology. Without reliable cost estimates, the ability to make informed investment decisions and effectively manage progress and performance is reduced. <4. Conclusions> The fiscal year 2005 expenditure plan (with related program office documentation and representations) either partially satisfies or satisfies the legislative conditions imposed by Congress. Further, steps are planned, initiated, under way, or completed to address all of our open recommendations. However, overall progress in addressing the recommendations has been slow, leaving considerable work to be done. Given that most of these open recommendations are aimed at correcting fundamental limitations in DHS s ability to manage the program in a way that ensures the delivery of (1) mission value commensurate with costs and (2) promised capabilities on time and within budget, it is important that DHS implement the recommendations quickly and completely through effective planning and continuous monitoring and reporting. Until this occurs, the program will be at high risk of not meeting its stated goals on time and within budget. To its credit, the program office now has its prime contractor on board to support both near-term increments and to plan for and deliver the yet-to-be- defined US-VISIT strategic solution. However, it is important to recognize that this accomplishment is a beginning and not an end. The challenge for DHS is now to effectively and efficiently work with the prime contractor in achieving desired mission outcomes. To accomplish this, it is important that DHS move swiftly in building its program management capacity, which is not yet in place, as shown by the status of our open recommendations and our recent observations about (1) economic justification of US-VISIT Increment 2B, (2) completion of the exit pilot evaluation, (3) collaboration with a closely related import/export processing and border security program, (4) system capacity management activities, and (5) cost estimating practices. Moreover, it is important that DHS improve its measurement and disclosure to its Appropriations Subcommittees of its progress against commitments made in prior expenditure plans, so that the Subcommittees ability to effectively oversee US-VISIT s plans and progress is not unnecessarily constrained. 
Nevertheless, the fact remains that the program continues to invest hundreds of millions of dollars for a mission-critical capability under circumstances that introduce considerable risk that cost-effective mission outcomes will not be realized. At a minimum, it is incumbent upon DHS to fully disclose these risks, along with associated mitigation steps, to executive and congressional leaders so that timely and informed decisions about the program can be made. <5. Recommendations for Executive Action> To better ensure that the US-VISIT program is worthy of investment and is managed effectively, we reiterate our prior recommendations and further recommend that the Secretary of Homeland Security direct the Under Secretary for Border and Transportation Security to ensure that the US-VISIT program director takes the following five actions: Fully and explicitly disclose in all future expenditure plans how well DHS is progressing against the commitments that it made in prior expenditure plans. Reassess its plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program. Develop and implement processes for managing the capacity of the US-VISIT system. Follow effective practices for estimating the costs of future increments. Make understanding the relationships and dependencies between the US-VISIT and ACE programs a priority matter, and report periodically to the Under Secretary on progress in doing so. <6. Agency Comments> In written comments on a draft of this report, signed by the Acting Director, Departmental GAO/IG Liaison Office (reprinted in app. II), DHS concurred with our findings and recommendations. DHS also stated that it appreciated the guidance that the report provides for future efforts and described actions taken and progress made in implementing the US-VISIT program. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, the Secretary of State, and the Director of OMB. Copies of this report will also be available at no charge on our Web site at www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at [email protected]. Another contact and key contributors to this report are listed in appendix III. Briefing to the Staffs of the Subcommittees on Homeland Security, Senate and House Committees on Appropriations <7. Introduction> <8. US-VISIT is intended to enhance the security of U.S. citizens and visitors,> facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. <9. The US-VISIT program involves the interdependent application of people, processes, technology, and facilities.> <10.
The Department of Homeland Security Appropriations Act, 2005,1 states that DHS may not obligate $254 million of the $340 million appropriated for the US-VISIT program until the Senate and House Committees on Appropriations receive and approve a plan for expenditure that> meets the capital planning and investment control review requirements established by the Office of Management and Budget (OMB), including OMB Circular A-11, part 7;2 complies with DHS s enterprise architecture; complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government; is reviewed and approved by the DHS Investment Review Board, the Secretary of Homeland Security, and OMB; and is reviewed by GAO. Pub. L. 108-334 (Oct. 18, 2004). OMB Circular A-11 establishes policy for planning, budgeting, acquisition, and management of federal capital assets. <11. On October 19, 2004, DHS submitted its fiscal year 2005 expenditure plan for $340 million to the House and Senate Appropriations Subcommittees on Homeland Security.> <12. As agreed, our objectives were to> 1. determine whether the US-VISIT fiscal year 2005 expenditure plan satisfies the legislative conditions of the act, 2. determine the status of our US-VISIT open recommendations, and 3. provide any other observations about the expenditure plan and DHS s management of US-VISIT. We conducted our work at US-VISIT offices in Rosslyn, Virginia, from June 2004 through November 2004, in accordance with generally accepted government auditing standards. Details of our scope and methodology are described in attachment 1 of this briefing. Partially satisfies: satisfies or provides for satisfying many, but not all, key aspects of the condition that we reviewed. Satisfies: satisfies or provides for satisfying every aspect of the condition that we reviewed. Planned: actions are planned to implement the recommendation. Under way: actions are under way to implement the recommendation. Completed: actions have been taken that fully implement the recommendation. The Software Acquisition Capability Maturity Model (SA-CMM) developed by Carnegie Mellon University s Software Engineering Institute (SEI) defines acquisition process management controls for planning, managing, and controlling software-intensive system acquisitions. With respect to the fiscal year 2005 expenditure plan. The purpose of independent verification and validation is to provide an independent review of processes and products throughout the acquisition and deployment phase. Results in Brief: Objective 3 Observations The program office has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT. The fiscal year 2005 Expenditure Plan does not describe progress against commitments (e.g., capabilities, schedule, cost, and benefits) made in previous plans. The exit capability alternatives evaluation is faced with a compressed time line, missed milestones, and potentially reduced scope. US-VISIT and Automated Commercial Environment (ACE)3 collaboration is moving slowly. US-VISIT system capacity is being managed in a compartmentalized manner. The cost estimating process used for Increment 2B did not follow some key best practices. ACE is a new trade processing system planned to support the movement of legitimate imports and exports and strengthen border security. Results in Brief: Objective 3 Observations To assist DHS in managing US-VISIT, we are making five recommendations to the Secretary of DHS.
In their comments on a draft of this briefing, US-VISIT program officials stated that they generally agreed with our findings, conclusions, and recommendations. <13. The US-VISIT program is a governmentwide endeavor intended to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. US-VISIT is to accomplish these things by> collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination within the border management community. <13.1. US-VISIT Program Office Structure> <14. In May 2004, DHS awarded an indefinite-delivery/indefinite-quantity4 prime contract to Accenture and its partners.5 This collection of contractors is known as the Smart Border Alliance.> An indefinite-delivery/indefinite-quantity contract provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period of time. The government schedules deliveries or performance by placing orders with the contractor. Accenture s partners include, among others, Raytheon Company, the Titan Corporation, and SRA International, Inc. <15. According to the contract, the prime contractor will support the integration and consolidation of processes, functionality, and data, and will develop a strategy to build on the technology and capabilities already available to fully support the US-VISIT vision. Meanwhile, the US-VISIT program will continue to leverage existing contractors in deploying the interim solution, using the prime contractor to assist.> <16. On January 5, 2004, Increment 1 capability was deployed to 115 airports and 14 seaports for entry and as a pilot to 2 POEs for exit.7 US-VISIT is evaluating three additional exit alternatives and has recently deployed these alternatives to three additional POEs.8> 8 C.F.R. 235.1(d)(1)(iv) and 215.8(a)(2) state that classes of travelers that are not subject to US-VISIT are foreign nationals admitted on A-1, A-2, C-3 (except for attendants, servants, or personal employees of accredited officials), G-1, G-2, G-3, G-4, NATO-1, NATO-2, NATO-3, NATO-4, NATO-5, or NATO-6 visas; certain Taiwan officials who hold E-1 visas and members of their immediate families who hold E-1 visas, unless the Secretary of State and the Secretary of Homeland Security jointly determine that a class of such aliens should be subject to the rule; children under the age of 14; persons over the age of 79; classes of aliens to whom the Secretary of Homeland Security and the Secretary of State jointly determine it shall not apply; and an individual alien to whom the Secretary of Homeland Security, the Secretary of State, or the Director of Central Intelligence determines shall not be subject to the rule. At that time, the pilot employed a self-serve kiosk to capture biographic information and biometric data (two index fingerprints). The pilots are deployed to Miami Royal Caribbean seaport and the Baltimore/Washington International Airport.
Chicago O Hare International Airport, Denver International Airport, and Dallas/Ft. Worth International Airport. <17. The mobile device includes a handheld wireless unit at the gates to capture> electronic fingerprints and photographs. The hybrid combines the enhanced kiosk, which is used to generate a receipt, with the mobile device, which scans the receipt and the electronic fingerprint of the traveler at the gate to verify exit. As of November 18, 2004, US-VISIT had processed about 13 million foreign nationals, including about 2 million from visa waiver countries. According to US- VISIT, it had positively matched over 1,500 persons against watch list databases. <18. Increment 2 is divided into three Increments 2A, 2B, and 2C.> Pub. L. 108-299 (Aug. 9, 2004) extended the deadline from October 26, 2004, to October 26, 2005. Secondary inspection is used for more detailed inspections that may include checking more databases, conducting more intensive interviews, or both. As required by the Immigration and Naturalization Service Data Management Improvement Act of 2000, 8 U.S.C. 1365a(d)(2). The three sites are Laredo, Texas; Port Huron, Michigan; and Douglas, Arizona. <19. Increment 4 (long-term strategy) is the yet-to-be-defined future vision of US- VISIT program capability, which US-VISIT officials have stated will likely consist of a series of releases. The program is currently working with its prime contractor and partners to develop an overall vision for immigration and border management operations.> Radio frequency (RF) technology relies on proximity cards and card readers. RF devices read the information contained on the card when the card is passed near the device and can also be used to verify the identity of the cardholder. As required by the Immigration and Naturalization Service Data Management Improvement Act of 2000, 8 U.S.C. 1365a(d)(3). <20. For human capital, DHS does not anticipate the need for additional inspection staff for Increment 2B.> <21. Treasury Enforcement Communications Systems (TECS) is a system that> maintains lookout (i.e., watch list) data,15 interfaces with other agencies databases, and is currently used by inspectors at POEs to verify traveler information and update traveler data. Within TECS are several databases, including the following: Advance Passenger Information System (APIS) includes arrival and departure manifest information provided by air and sea carriers. Crossing History includes information about individuals crossing histories. Lookout data sources include DHS s Customs and Border Protection and Immigration and Customs Enforcement; the Federal Bureau of Investigation (FBI); legacy DHS systems; the U.S. Secret Service; the U.S. Coast Guard; the Internal Revenue Service; the Drug Enforcement Agency; the Bureau of Alcohol, Tobacco, & Firearms; the U.S. Marshals Service; the U.S. Office of Foreign Asset Control; the National Guard; the Treasury Inspector General; the U.S. Department of Agriculture; the Department of Defense Inspector General; the Royal Canadian Mounted Police; the U.S. State Department; Interpol; the Food and Drug Administration; the Financial Crimes Enforcement Network; the Bureau of Engraving and Printing; and the Department of Justice Office of Special Investigations. <22. Biographic Watchlist includes biographic information on individuals of> interest. Secondary includes the results of prior secondary inspections performed on an individual, including if the person was admitted or denied entry. 
US-VISIT Biometric Information File (BIF) includes keys or links to other databases in TECS, IDENT, and ADIS and includes such information as fingerprint identification numbers, name, and date of birth. Addresses includes addresses of individuals. I-94/Non-Immigrant Information System (NIIS) includes information from I- 94 forms. US-Visa (Datashare) includes Department of State records of visa applications, such as photographs, biographic information, and fingerprint identification number. <23. Arrival Departure Information System (ADIS) is a database that stores traveler> arrival and departure data and that provides query and reporting functions. Automated Biometric Identification System (IDENT) is a system that collects and stores biometric data about foreign visitors.16 Student Exchange Visitor Information System (SEVIS) is a system that contains information on foreign students. Computer Linked Application Information Management System (CLAIMS 3) is a system that contains information on foreign nationals who request benefits, such as change of status or extension of stay. Consular Consolidated Database (CCD) is a system that includes information on whether a visa applicant has previously applied for a visa or currently has a valid U.S. visa. Includes data such as FBI information on all known and suspected terrorists, selected wanted persons (foreign-born, unknown place of birth, previously arrested by DHS), and previous criminal histories for high-risk countries; DHS Immigration and Customs Enforcement information on deported felons and sexual registrants; and DHS information on previous criminal histories and previous IDENT enrollments. Information from the FBI includes fingerprints from the Integrated Automated Fingerprint Identification System. <24. According to DHS, Increment 1 includes the following five processes: pre-entry, entry, status management, exit, and analysis, which are depicted in the graphic below.> <25. In addition, POEs review the APIS list for a variety of factors that would target arriving crew and passengers for additional processing.> <26. The inspector switches to another screen and scans the foreign national s fingerprints (left and right index fingers) and takes a photograph. The system accepts the best fingerprints available within the 5-second scanning period. This information is forwarded to IDENT, where it is checked against stored fingerprints in the IDENT lookout database.> <27. If the foreign national is ultimately determined to be inadmissible, the person is detained, lookouts are posted in the databases, and appropriate actions are taken.> The I-94 form is used to track the arrival and departure of nonimmigrants. It is divided into two parts. The first part is an arrival portion, which includes, for example, the nonimmigrant s name, date of birth, and passport number. The second part is a departure portion, which includes the name, date of birth, and country of citizenship. <28. Commercial air and sea carriers are required by law to transmit departure manifests electronically for each passenger.19 These manifests are transmitted through APIS and shared with ADIS. ADIS matches entry and exit manifest data to ensure that each record showing a foreign national entering the United States is matched with a record showing the foreign national exiting the United States. ADIS also provides the ability to run queries on foreign nationals who have entry information but no corresponding exit information. 
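To picture the matching just described, the sketch below pairs entry manifest records with exit records and surfaces entries that have no corresponding exit. The record fields and the matching key are illustrative assumptions for the sketch only; they are not the actual ADIS data model or matching rules.

```python
# Minimal sketch of entry/exit manifest matching (illustrative fields, not the ADIS schema).

def unmatched_entries(entries: list, exits: list) -> list:
    """Return entry records that have no corresponding exit record.

    Records are matched here on a simple (document number, date of birth) key;
    the real matching rules are more involved and are not described in this briefing.
    """
    exit_keys = {(e["document_number"], e["date_of_birth"]) for e in exits}
    return [
        entry for entry in entries
        if (entry["document_number"], entry["date_of_birth"]) not in exit_keys
    ]

# Hypothetical sample records.
entries = [
    {"document_number": "X1234567", "date_of_birth": "1975-04-02", "arrival": "2004-06-01"},
    {"document_number": "Y7654321", "date_of_birth": "1980-11-19", "arrival": "2004-06-03"},
]
exits = [
    {"document_number": "X1234567", "date_of_birth": "1975-04-02", "departure": "2004-06-20"},
]

for record in unmatched_entries(entries, exits):
    print("Entry with no matching exit:", record["document_number"])
```

Queries over the unmatched set are, in essence, how candidate overstays are identified for the status management and analysis processes described elsewhere in this briefing.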
ADIS receives status information from CLAIMS 3 and SEVIS on foreign nationals.> <29. The exit process includes the carriers electronic submission of departure manifest data to APIS. This biographic information is passed to ADIS, where it is matched against entry information. As we have previously discussed, when the foreign national departs the country through a pilot location, the departure is processed by one of three alternative pilot methods. The alternative used is dependent on the departure port. Within each port, one or more alternatives will be deployed. Not all alternatives are deployed to every pilot port. All three alternatives are generally operated by a Work Station Attendant (WSA), although the mobile device can sometimes be operated by a law enforcement officer. Foreign nationals are informed of the requirement to process through exit upon departure.> <30. Enhanced kiosk: The traveler approaches the kiosk for departure processing.> At the kiosk, the traveler, guided by a WSA if needed, scans the machine- readable travel documents, provides electronic fingerprints, and has a digital photograph taken. A receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler s next attempted entry to the country. After the receipt prints, the traveler proceeds to his/her departure gate. At the conclusion of the transaction, the collected information is transmitted to IDENT. <31. Mobile device: At the departure gate, and just before the traveler boards the departure craft, either a WSA or law enforcement officer scans the machine- readable travel documents, scans the traveler s fingerprints (right and left index fingers), and takes a digital photograph. A receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler s next attempted entry to the country. The device wirelessly transmits the captured data in real time to IDENT via the Transportation Security Administration s Data Operations Center.> If the device is being operated by a WSA, the WSA provides a printed receipt to the traveler, and the traveler then boards the departure craft. If the mobile device is being operated by a law enforcement officer, the captured biographic and biometric information is checked in near real time against watch lists. Any potential match is returned to the device and displayed visually for the officer. If no match is found, the traveler boards the departure craft. <32. Hybrid: Using an enhanced kiosk, the traveler, guided by a WSA if needed,> scans the machine-readable travel documents, provides electronic fingerprints, and has a digital photograph taken. As with the enhanced kiosk alternative, a receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler s next attempted entry to the country. However, this receipt has biometrics (i.e., the traveler s fingerprints and photograph) embedded on the receipt. At the conclusion of the transaction, the collected information is transmitted to IDENT. The traveler presents his or her receipt to the WSA or law enforcement officer at the gate or departure area, who scans the receipt using a mobile device. The traveler s identity is verified against the biometric data embedded on the receipt. Once the traveler s identity is verified, he/she is allowed to board the departure craft. The captured information is not transmitted in real time back to IDENT. 
Data collected on the mobile device are periodically uploaded through the kiosk to IDENT. <33. An ongoing analysis capability is to provide for the continuous screening against watch lists of individuals enrolled in US-VISIT for appropriate reporting and action. As more entry and exit information becomes available, it can be used to analyze traffic volume and patterns as well as to perform risk assessments. The analysis is to be used to support resource and staffing projections across the POEs, strategic planning for integrated border management analysis performed by the intelligence community, and determination of travel use levels and expedited traveler programs.> <34. No advance passenger information is to be available to the inspector before> the traveler arrives for inspection. Travelers subject to US-VISIT are to be processed at secondary inspection, rather than at primary inspection. Inspectors workstations are to use a single screen, which eliminates the need to switch between the TECS and IDENT screens. <35. No electronic exit information is to be captured.> Datashare includes a data extract from State s CCD system and includes the visa photograph, biographical data, and the fingerprint identification number assigned when a nonimmigrant applies for a visa. Chronology of US-VISIT Expenditure Plans Since November 2002, four US-VISIT expenditure plans have been submitted. On November 15, 2002, the Immigration and Naturalization Service (INS)21 submitted to its appropriations subcommittees its first expenditure plan, which outlined $13.3 million in expenditures for contract activities; design, development, and deployment of the Visa Waiver Support System; facilities assessments; biometric standards development; prototyping; IBIS support activities; travel; program office operations; and fingerprint scanner procurements. On June 5, 2003, the second expenditure plan outlined $375 million in expenditures for system enhancements and infrastructure upgrades, POE information technology (IT) and communication upgrades, facilities planning analysis and design, program management support, proof of concept demonstrations, operations and system sustainment, and training. Effective March 1, 2003, INS became part of DHS. On January 27, 2004, the third expenditure plan outlined $330 million in expenditures for exit pilots; capability to read biometrically enabled travel documents; land infrastructure upgrades; system development and testing; radio frequency technology deployment to the 50 busiest land POEs; technical infrastructure planning and development; program management; and operations and maintenance. The current and fourth expenditure plan, submitted on October 19, 2004, outlines $340 million in expenditures (see table, next slide). Background Review of Current Expenditure Plan Fiscal Year 2005 Expenditure Plan Summary (see next slides for descriptions) <36. Increment 4 Long-Term Strategy: Includes developing the long-term strategy; integrating the strategy with the interim system, legacy systems, and the DHS enterprise architecture; and planning for facilities compliance.> <37. Operations and Maintenance: Includes operations and maintenance of existing information systems and support costs for ongoing software configuration and maintenance.> Objective 1: Legislative Conditions Condition 1 The US-VISIT expenditure plan satisfies or partially satisfies each of the legislative conditions. Condition 1. 
The plan, including related program documentation and program officials statements, partially satisfies the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7, which establishes policy for planning, budgeting, acquisition, and management of federal capital assets. The following examples pair A-11 conditions with the results of our analysis. Provide justification and describe acquisition strategy: US-VISIT has completed an Acquisition Plan, dated November 2003. The plan provides a high-level justification and description of the acquisition strategy for the system. Summarize life-cycle costs and cost/benefit analysis, including the return on investment: US-VISIT completed a cost/benefit analysis for Increment 2B on June 11, 2004. Provide performance goals and measures: The plan includes benefits, but does not identify corresponding metrics. The plan states that performance measures are under development. Address security and privacy: US-VISIT has developed a security plan that partially satisfies OMB and the National Institute of Standards and Technology security guidance. US-VISIT has not yet conducted a security risk assessment on the overall US-VISIT program. While the plan states the intention to do the assessment, it does not specify when it will be completed. The US-VISIT program published a privacy policy and privacy impact assessment for Increment 2. Provide risk inventory and assessment: US-VISIT has developed a risk management plan and process for developing, implementing, and institutionalizing a risk management program. Risks are currently tracked using a risk-tracking database. Objective 1: Legislative Conditions Condition 2 Condition 2. The plan, including related program documentation and program officials statements, partially satisfies the condition that it provide for compliance with DHS s enterprise architecture (EA). DHS released version 1.0 of the architecture in September 2003.22 We reviewed the initial version of the architecture and found that it was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as a description of business processes, information flows among these processes, and security rules associated with these information flows.23 Since we reviewed version 1.0, DHS has drafted version 2.0 of its EA. We have not reviewed this draft. Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. GAO, Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains, GAO-04-777 (Washington, D.C.: Aug. 6, 2004). Objective 1: Legislative Conditions Condition 2 According to officials from the Office of the Chief Strategist, concurrent with the development of the strategic vision, the US-VISIT program office has been working with the DHS EA program office in developing version 2.0 to ensure that US-VISIT is aligned with DHS s evolving EA. According to these officials, US-VISIT representatives participate in both the DHS EA Center of Excellence and the DHS Enterprise Architecture Board.24 In July 2004, the Center of Excellence reviewed US-VISIT s submission for architectural alignment with some EA components, but not all. Specifically, the submission included information intended to show compliance with business and data components, but not, for example, the application and technology components.
According to the head of DHS s EA Center of Excellence, the application and technical components were addressed by this center, which found that US-VISIT was in compliance. The Center of Excellence supports the Enterprise Architecture Board in reviewing component documentation. The purpose of the Board is to ensure that investments are aligned with the DHS EA. Objective 1: Legislative Conditions Condition 2 Based on its review, the DHS Enterprise Architecture Board recommended that the US-VISIT program be given conditional approval to proceed for investment, provided that the program resubmit its documentation upon completion of its strategic plan, which is anticipated in January 2005. DHS has not yet provided us with sufficient documentation to allow us to understand DHS architecture compliance methodology and criteria, or verifiable analysis justifying the conditional approval. Objective 1: Legislative Conditions Condition 3 Condition 3. The plan, including related program documentation and program officials statements, satisfies the condition that it comply with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. The plan provides for satisfying this condition, in part, by describing efforts to develop Software Engineering Institute (SEI) Software Acquisition Capability Maturity Model (SA-CMM) key process areas, such as requirements development and management and contract tracking and oversight. The plan also states that the program intends to achieve SA-CMM Level 225 by establishing a process improvement program based on SEI-identified industry best practices. As part of establishing this program, US-VISIT has developed a draft process improvement plan that specifies process improvement goals, objectives, assumptions, and risks, and which describes a process improvement time line and phase methodology. If these processes are implemented effectively, they will help US-VISIT meet federal acquisition rules, requirements, and guidelines and comply with systems acquisition management practices. The SA-CMM ranks organizational maturity according to five levels. Maturity levels 2 through 5 require verifiable existence and use of certain key process areas. Objective 1: Legislative Conditions Condition 4 Condition 4. The plan, including related program documentation and program officials statements, partially satisfies the requirement that it be reviewed and approved by the DHS Investment Review Board (IRB), the Secretary of Homeland Security, and OMB. The DHS Under Secretary for Management26 reviewed and approved the fiscal year 2005 expenditure plan on October 14, 2004, and OMB approved the plan on October 15, 2004. According to the US-VISIT Budget and Finance Director, the IRB reviewed the fiscal year 2005 expenditure plan but did not approve it because DHS management determined that review of the expenditure plan was not in the scope of the IRB review process. According to DHS Delegation Number 0201.1, the Secretary of Homeland Security delegated authority to the Under Secretary for Management for, among other things, the budget, appropriations, and expenditure of funds. Objective 1: Legislative Conditions Condition 5 Condition 5. The plan satisfies the requirement that it be reviewed by GAO. Our review was completed on November 23, 2004. Objective 2: Open Recommendations Recommendation 1 Open Recommendation 1: Develop a system security plan and privacy impact assessment. Security Plan. 
US-VISIT has developed a security plan.27 OMB and the National Institute of Standards and Technology (NIST) have issued security planning guidance28 requiring, in part, the completion of system security plans that (1) provide an overview of the system security requirements, (2) include a description of the controls in place or planned for meeting the security requirements, and (3) delineate roles and responsibilities of all individuals who access the system. US-VISIT Program, Security Plan for US-VISIT Program Version 1.1 (Sept. 13, 2004). OMB Circular A-130, Revised (Transmittal Memorandum No. 4), Appendix III, Security of Federal Automated Information Resources (Nov. 28, 2000) and NIST, Guide for Developing Security Plans for Information Technology Systems, NIST Special Publication 800-18 (December 1998). Objective 2: Open Recommendations Recommendation 1 According to the guidance, the plan should also describe the methodology used to identify system threats and vulnerabilities and to assess risks, and it should include the date the assessment was conducted. If no system risk assessment has been completed, the plan is to include a milestone date for completion. The US-VISIT security plan provides an overview of the system security requirements, describes the controls in place or planned for meeting those requirements, and references the applicable documents that contain roles and responsibilities for the US-VISIT component systems. However, the plan states that although a security risk assessment on the US-VISIT program will be completed in accordance with NIST guidelines, it has not yet been completed, and the plan does not indicate a date for doing so. Objective 2: Open Recommendations Recommendation 1 Privacy Impact Assessment. The US-VISIT program has conducted a privacy impact assessment for Increment 2, and according to the US-VISIT Privacy Officer, a privacy impact assessment will be completed for the exit portion of Increment 1 in early 2005. According to OMB guidance,29 the depth and content of such an assessment should be appropriate for the nature of the information to be collected and the size and complexity of the system involved. The assessment should also, among other things, (1) be updated when a system change creates new privacy risk, (2) ensure that privacy is addressed in the documentation related to system development, (3) address the impact the system will have on an individual s privacy, (4) analyze the consequences of collection and flow of information, and (5) analyze alternatives to collection and handling as designed. OMB, Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002, OMB M-03-22 (Sept. 26, 2003). Objective 2: Open Recommendations Recommendation 1 The Increment 2 assessment satisfies some, but not all, of the above OMB guidance areas. To DHS s credit, the assessment, which was completed in September 2004, states that the DHS Chief Privacy Officer directed that the assessment be updated as necessary to reflect future changes to Increment 2. The assessment also discusses the impact that Increment 2 will have on an individual s privacy and analyzes the consequences of collection and flow of information. However, privacy is only partially addressed in the Increment 2 system documentation. For example, privacy is used in the Increment 2B cost-benefit analysis to evaluate the weighted risk of Increment 2B alternative solutions. 
Additionally, the ADIS functional requirements specify that access to information contained in the system, which is protected by the Privacy Act,30 must be limited to authorized users. However, the IDENT Server 2.0 requirements do not consider privacy at all. Additionally, the assessment s only discussion of design is a statement that a major choice for US-VISIT was whether to develop an entirely new system, develop a largely new system, or build upon existing systems. The assessment does not analyze these options. The timing of the planned privacy impact assessment for the exit portion of Increment 1 is consistent with plans for completing the exit pilots. Objective 2: Open Recommendations Recommendation 2 Open Recommendation 2: Develop and implement a plan for satisfying key acquisition management controls including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support and implement the controls in accordance with SEI guidance. The US-VISIT program plans to achieve SEI SA-CMM Level 2 status in October 2006. According to SEI, a process improvement effort should involve building a process infrastructure, establishing current levels of process maturity, and completing an action plan. The plan should include, among other things, process improvement assumptions and risks, goals, objectives, and criteria for success. The US-VISIT Acquisition and Program Management Office (APMO) has initiated a process improvement program and drafted a process improvement plan. Objective 2: Open Recommendations Recommendation 2 The draft US-VISIT plan discusses assumptions, such as the improvement program being sponsored and supported by senior US-VISIT management, and risks, such as not meeting the process improvement time line if the process improvement effort is not fully staffed. The plan also lists both process improvement goals and short- and long-term objectives. However, the goals and objectives are generally not defined in measurable terms. For example, the plan identifies the following goal and objective: Goal: ensure that US-VISIT is in compliance with federal mandates, making future funding more likely. Objective: define a strategy for attaining SEI SA-CMM Level 2 as soon as possible within the existing constraints limited contractor and government staff resources and centralized facility. The plan also does not address criteria for success. Objective 2: Open Recommendations Recommendation 2 APMO has developed processes or plans, some of which are approved and some of which are in draft, for all key process areas except transition to support. 31 The Director of APMO could not say when APMO plans to develop the documentation for this key process area, but noted that US-VISIT is considering a transition from the SA-CMM to SEI s Capability Maturity Model Integration (CMMI) model.32 No time line was provided as to when this decision might be made. The Director of APMO acknowledges that a transition to the CMMI will likely change the previously mentioned time line for CMM certification. The purpose of transition to support is to provide for the effective and efficient handing off of the acquired software products to the support organization responsible for software maintenance. CMU/SEI-2004-TR-001 (February 2004). 
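To make the measurability point concrete, the sketch below restates the quoted process improvement objective with an explicit owner, target date, interim milestones, and success criteria. It is a hypothetical illustration only: the Level 2 target and the October 2006 time frame come from the program's stated plans, but the owner, milestone dates, and criteria are assumptions and are not drawn from the US-VISIT process improvement plan.

```python
# Hypothetical restatement of the quoted objective in measurable terms.
# Only the Level 2 target and the October 2006 time frame come from the briefing;
# every other value is an illustrative assumption.

from datetime import date

measurable_objective = {
    "statement": "Attain SEI SA-CMM Level 2 for the US-VISIT acquisition function",
    "owner": "APMO",                      # assumed owner
    "target_date": date(2006, 10, 31),    # program's stated Level 2 time frame
    "interim_milestones": {               # assumed milestones
        "process improvement plan approved": date(2005, 3, 31),
        "all Level 2 key process areas documented": date(2005, 12, 31),
        "readiness appraisal completed": date(2006, 6, 30),
    },
    "success_criteria": [                 # assumed criteria
        "Independent appraisal finds no unsatisfied Level 2 key process areas",
        "Each key process area has an approved, in-use process description",
    ],
}

def is_measurable(objective: dict) -> bool:
    """Treat an objective as measurable if it names an owner, a target date,
    and at least one verifiable success criterion."""
    return (bool(objective.get("owner"))
            and objective.get("target_date") is not None
            and len(objective.get("success_criteria", [])) > 0)

print("Objective stated in measurable terms:", is_measurable(measurable_objective))
```

The underlying point is the one the draft plan misses: without dates and criteria for success, progress against the objective cannot be tracked.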
Objective 2: Open Recommendations Recommendation 3 Open Recommendation 3: Ensure that future expenditure plans are provided to the DHS s House and Senate Appropriations Subcommittees on Homeland Security in advance of US-VISIT funds being obligated. On October 18, 2004, the President signed the Department of Homeland Security Appropriations Act, 2005, which included $340 million in fiscal year 2005 funds for the US-VISIT program.33 The act states that $254 million of the $340 million is subject to the expenditure plan requirement. On October 19, 2004, DHS provided its fiscal year 2005 expenditure plan to the Senate and House Appropriations Subcommittees on Homeland Security. Department of Homeland Security Appropriations Act, 2005, Pub. L. 108-334 (Oct. 18, 2004). Objective 2: Open Recommendations Recommendation 4 Open Recommendation 4: Ensure that future expenditure plans fully disclose US- VISIT system capabilities, schedule, cost, and benefits to be delivered. The expenditure plan identifies high-level capabilities by increments. However, the capabilities are not consistently presented. For example, in one section of the plan, Increment 2B capabilities are identified as collect biometric data and verify identity at the 50 busiest land POEs, develop global enrollment system capability, and support facilities delivery. However, later in the plan, Increment 2B capabilities are identified as Increment 1 functionality at the top 50 land POEs, biometric data collection, and infrastructure upgrades. Objective 2: Open Recommendations Recommendation 4 Further, some of the capabilities are described in vague and ambiguous terms. For example, the plan describes such Increment 2C capabilities as integration of Border Crossing Cards with US-VISIT, test, model, and deploy technology to preposition biographic and biometric data of enrolled travelers, and desktop upgrades. The plan identifies specific milestones for some increments, but not for others. For example, it states that Increment 2B is to be implemented by December 31, 2004, and Increment 3 by December 31, 2005. However, it states that Increment 1 exit and Increment 2C are to be implemented in fiscal year 2005. Objective 2: Open Recommendations Recommendation 4 The plan identifies the amounts budgeted for each increment for fiscal years 2003 through 2005. For example, the plan states that US-VISIT plans to obligate $55 million in fiscal year 2005 funds for Increment 2C. However, the plan does not associate the $55 million with specific Increment 2C capabilities and benefits. Rather, it states that this amount will be used to support Increment 2C by funding the installation of technology in entry and exit lanes at land borders and supporting facility delivery. Further, the plan does not identify any estimated nongovernmental costs, such as the social costs associated with any potential economic impact at the border. Objective 2: Open Recommendations Recommendation 4 The plan identifies several benefits and associates these benefits with increments. 
For example, for Increment 1, the plan identifies such benefits as prevention of entry of high-threat or inadmissible individuals through improved and/or advanced access to data before the foreign national s arrival, improved enforcement of immigration laws through improved data accuracy and completeness, reduction in foreign nationals remaining in the country under unauthorized circumstances, and reduced threat of terrorist attack and illegal immigration through improved identification of national security threats and inadmissible individuals. As we previously reported,34 these benefits were identified in the fiscal year 2004 expenditure plan, although they were not associated with Increment 1. GAO, Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed, GAO-04-586 (Washington, D.C.: May 11, 2004). Objective 2: Open Recommendations Recommendation 4 Further, the fiscal year 2004 plan included planned metrics for the first two benefits identified above and stated that US-VISIT was developing metrics for measuring the projected benefits, including baselines by which progress can be assessed. However, the fiscal year 2005 plan does not include any information on these metrics or on progress against any of the benefits. The fiscal year 2005 plan again states that performance measures are still under development. While the plan does not associate any measures with the defined benefits, it does identify several measures and links them to the US-VISIT processes: pre-entry, entry, status management, exit, and analysis. The plan also identifies examples of how US-VISIT is addressing its four stated goals. The examples, however, largely describe US-VISIT functions rather than measures of goal achievement. For example, in support of the stated goal of ensuring the integrity of our immigration system, the plan states that through US-VISIT, officers at primary inspection are able to instantly search databases of known criminals and known and suspected terrorists. It does not, however, identify how this ensures immigration system integrity. Objective 2: Open Recommendations Recommendation 5 Open Recommendation 5: Ensure that future expenditure plans fully disclose how the US-VISIT acquisition is being managed. The expenditure plan describes some activities being employed to manage the US-VISIT acquisition. For example, the plan describes the US-VISIT governance structure, as well as the program office organizational structure and staffing levels. The plan also describes certain management processes currently being used. For example, the plan states that US-VISIT program officials hold formal weekly meetings to discuss program risks/issues, schedule items, and critical path items. In addition, it states that formal points of contact for risk issues have been designated across the Increment Integrated Project teams and the US-VISIT program organization, and that US-VISIT is establishing a formal risk review board to review and manage risk. However, the plan does not describe how other important aspects of the program are being managed, several of which are discussed in this briefing. For example, it does not describe how testing, system capacity, and systems configuration are being managed. Objective 2: Open Recommendations Recommendation 6 Open Recommendation 6: Ensure that human capital and financial resources are provided to establish a fully functional and effective program office.
DHS established the US-VISIT program office in July 2003 and determined the office s staffing needs to be 115 government and 117 contractor personnel. As of October 2004, DHS had filled 59 of the 115 government positions. Of those positions that have not been filled, 5 have reassignments in progress and 51 have competitive announcements pending. According to US-VISIT, about half of these positions are to be filled when security clearances are completed. In addition, US-VISIT has changed its organizational structure, and some positions were moved to other offices within US-VISIT. For example, the number of positions in the Office of Mission Operations Management decreased from 23 to 18, and the number of positions in the Office of Chief Strategist increased from 10 to 14. Also, the number of positions in the Office of Administration and Management now called the Office of Administration and Training increased from 10 to 11. Objective 2: Open Recommendations Recommendation 6 The graphic on the next page shows the US-VISIT program office organization structure and functions, the number of positions needed by each office, and the number of positions filled. This graphic reflects the recent changes to the US-VISIT organizational structure. Objective 2: Open Recommendations Recommendation 6 In addition to the 115 government staff that were anticipated, the program anticipated 117 contractor support staff. As of November 2004, program officials told us they had filled 88 of these 117 positions. The expenditure plan also states that DHS has budgeted $83 million to maintain the program management structure and baseline operations, including, among other things, salaries and benefits for government full-time equivalents, personnel relocation costs, rent, and supplies. Objective 2: Open Recommendations Recommendation 7 Open Recommendation 7: Clarify the operational context in which US-VISIT is to operate. DHS is in the process of defining the operational context in which US-VISIT is to operate. In September 2003, DHS released version 1.0 of its enterprise architecture.35 We reviewed the initial version of the architecture and found that this architecture was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows.36 Since we reviewed version 1.0, DHS has drafted version 2.0 of its architecture. We have not reviewed the draft, but DHS EA program officials told us this version focuses on departmental operations, and that later versions will incrementally focus on the national homeland security picture. This is important to the US-VISIT operational context because US-VISIT is a governmentwide program, including entities outside DHS, such as the Departments of State and Justice. Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. GAO, Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains, GAO-04-777 (Washington, D.C.: Aug. 6, 2004). Objective 2: Open Recommendations Recommendation 8 Open Recommendation 8: Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. US-VISIT developed a cost-benefit analysis (CBA) for Increment 2B, dated June 11, 2004. 
However, the CBA s treatment of both benefits and costs raises several issues, making it unclear whether Increment 2B will produce mission value commensurate with cost and risks. First, the CBA primarily addresses government costs and is silent on some potential nongovernmental costs. For example, the CBA does not consider potential social costs like the economic impact on border communities. Objective 2: Open Recommendations Recommendation 8 Second, the CBA identifies, but does not quantify, some expected benefits, such as operational performance benefits, including improvement of traveler identification and validation of traveler documentation. Moreover, the CBA does not explain why these benefits cannot be quantified. Also, the CBA states that none of the proposed alternatives result in a positive net present value or return on investment, which it attributes to the limited scope of Increment 2B. Third, the CBA includes three alternatives and identifies alternative 3 as the preferred alternative. However, US-VISIT is not pursuing alternative 3, but rather is pursuing an alternative more aligned with alternative 2. According to the Program Director, this is because alternative 3 was considered too ambitious to meet the statutory requirement that US-VISIT be implemented at the 50 busiest land POEs by December 31, 2004. Objective 2: Open Recommendations Recommendation 9 Open Recommendation 9: Define US-VISIT program office positions, roles, and responsibilities. US-VISIT has developed descriptions for positions within each office. In addition, US-VISIT has worked with the Office of Personnel Management (OPM) to draft a set of core competencies that define the knowledge, skills, abilities, and other characteristics (competencies) needed for successful employee performance. According to US-VISIT s draft Human Capital Plan, these core competencies will form the foundation for recruitment and selection, training and development, and employee performance evaluations. Currently, US-VISIT is using some of these draft core competencies in its employee performance appraisal process. Objective 2: Open Recommendations Recommendation 10 Open Recommendation 10: Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. The US-VISIT program office awarded a contract to OPM to develop a draft Human Capital Plan. Our review of the draft plan showed that OPM developed a plan for US-VISIT that employed widely accepted human capital planning tools and principles. OPM s recommendations to US-VISIT include the following: Develop and adopt a competency-based system and a corresponding human capital planning model that illustrate the alignment of US-VISIT s mission with individual and organizational performance. Conduct a comprehensive workforce analysis to determine diversity trends, retirement and attrition rates, and mission-critical and leadership competency gaps. Objective 2: Open Recommendations Recommendation 10 Develop a leadership competency model and establish a formal leadership development program to ensure continuity of leadership. Link the competency-based human capital management system to all aspects of human resources, including recruitment, assessment, training and development, and performance. The draft human capital plan includes an action plan that identifies activities, proposed completion dates, and the office (OPM or US-VISIT) responsible for completing these activities. According to OPM, it has completed its work under the draft plan.
As of October 2004, US-VISIT had completed some of the activities called for in the draft plan. For example, US-VISIT s Office of Administration and Training has designated a liaison responsible for ensuring alignment between DHS and US-VISIT human capital policies. However, it remains to be seen how full implementation of the plan will impact the US-VISIT program office. For example, the workforce analysis called for in the draft plan could result in a change in the number and competencies of the staff needed to implement US-VISIT. Objective 2: Open Recommendations Recommendation 11 Open Recommendation 11: Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. The US-VISIT program office has developed a risk management plan (dated June 2, 2004) and process (dated June 9, 2004). The plan addresses, among other things, the process for identifying, analyzing, mitigating, tracking, and controlling risks. As part of its process, US-VISIT has developed a risk management database. The database includes, among other things, a description of the risk, its priority (e.g., high, medium, low), and mitigation strategy. US-VISIT has also established the governance structure for managing risks. The governance structure includes three primary groups the Risk Review Board, Risk Review Council, and Risk Management Team. Objective 2: Open Recommendations Recommendation 11 The Risk Review Board provides overall decision making, communication, and coordination in regard to risk activities. The board is composed of senior-level staff, such as the program director and functional area directors. The Risk Review Council reviews initially reported risks, validates their categorizations, and ensures that a mitigation approach has been developed. It also serves as a filter for the Board by deciding which risks can be mitigated without being elevated to the Board. The Risk Management Team provides risk management expertise and institutional knowledge. This group is staffed by APMO. According to the Director, APMO, US-VISIT has not reported high risks beyond the Review Board. Objective 2: Open Recommendations Recommendation 12 Open Recommendation 12: Define performance standards for each US-VISIT increment that are measurable and reflect the limitations imposed by relying on existing systems. Available documentation shows that some technical performance measures for Increments 1 and 2B have been defined. For example: Availability.37 The system will be available 99.5 percent of the time. Timeliness.38 Login, visa query, and TECS/NCIC default query will be less than 5 seconds; TECS optional queries will be less than 60 seconds; and IDENT watch list queries will be less than 10 seconds (matcher time only). Output quantity.39 70,000 primary inspection transactions per user, per day, with a maximum of 105,000 transactions during peak times. The time the system is operating satisfactorily, expressed as a percentage of time that the system is required to be operational. The time needed to perform a unit of work correctly and on time. The number of transactions processed. Objective 2: Open Recommendations Recommendation 12 However, other measures, such as reliability,40 resource utilization,41 and scalability,42 are not defined in the documentation. 
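To illustrate how defined targets like these could be monitored in practice, the short sketch below compares one day of hypothetical measurements against the thresholds quoted above. The threshold values are taken from this briefing; the measured values, names, and checking logic are illustrative assumptions, not part of any US-VISIT design.

```python
# Minimal sketch: compare hypothetical measurements against the performance
# targets quoted in the briefing (99.5 percent availability, query-time limits,
# 70,000 transactions per day with a 105,000 peak). Measured values are made up.

TARGETS = {
    "availability_pct":        99.5,    # minimum
    "login_visa_query_sec":     5.0,    # maximum (also TECS/NCIC default query)
    "tecs_optional_query_sec": 60.0,    # maximum
    "ident_watchlist_sec":     10.0,    # maximum (matcher time only)
    "daily_transactions":   70_000,     # expected volume
    "peak_transactions":   105_000,     # maximum volume the system must handle
}

measured = {  # hypothetical observations for one day
    "availability_pct":        99.2,
    "login_visa_query_sec":     3.8,
    "tecs_optional_query_sec": 41.0,
    "ident_watchlist_sec":     12.5,
    "daily_transactions":   68_400,
}

def check(measured: dict, targets: dict) -> list:
    """Return human-readable findings for any target that is missed."""
    findings = []
    if measured["availability_pct"] < targets["availability_pct"]:
        findings.append("availability below the 99.5 percent target")
    for key in ("login_visa_query_sec", "tecs_optional_query_sec", "ident_watchlist_sec"):
        if measured[key] > targets[key]:
            findings.append(f"{key} exceeded its {targets[key]}-second limit")
    if measured["daily_transactions"] > targets["peak_transactions"]:
        findings.append("daily volume exceeded the 105,000-transaction peak capacity")
    return findings

for finding in check(measured, TARGETS):
    print("FINDING:", finding)
```

Even a simple roll-up like this, however, presupposes measures (such as resource utilization) that the documentation has not yet defined, and it says nothing about the constraints imposed by the pre-existing systems discussed next.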
Further, the documentation does not contain sufficient information to determine the limitations imposed by US- VISIT s reliance on existing systems that have less demanding performance requirements, such as TECS availability of 98.0 percent. Such information would include, for example, the processing sequencing and dependencies among the existing systems. The probability that a system, including all hardware, firmware, and software, will satisfactorily perform the task for which it was designed. A ratio representing the amount of time a system or component is busy divided by the time it is available. Ability of a system to function well when it is changed in size or volume. Objective 2: Open Recommendations Recommendation 13 Open Recommendation 13: Develop and approve test plans before testing begins. These test plans should (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. According to the US-VISIT Systems Assurance Director, the Increment 2B system acceptance test (SAT) plan was approved during an October 15, 2004, test readiness review (TRR). However, no documentation was provided that explicitly indicated the approval of the plan, and the results of the TRR were not approved until October 28, 2004, which is 11 days after the date we were told that acceptance testing began. Objective 2: Open Recommendations Recommendation 13 The test plan does not fully address the test environment. For example, the plan does not describe the scope, complexity, and completeness of the test environment or identify necessary training. The plan does include generic descriptions of testing hardware, such as printers and card readers. The plan does not include descriptions of tests to be performed. However, officials from the IT Management Office provided us with other documentation describing the tests to be performed that included expected outputs, but it did not include inputs or controls. The plan does not provide test procedures to be followed in conducting the tests. Objective 2: Open Recommendations Recommendation 13 The plan does not provide traceability between test cases and the requirements to be verified by the testing. Our analysis of the 116 requirements identified in the consolidated requirements document showed that 39 requirements mapped to test cases that lacked sufficient detail to determine whether the test cases are testable, 15 requirements did not have test cases, 2 requirements were labeled not testable, and 1 requirement was identified as TBD, but was mapped to an actual test case. Open Recommendation 14: Ensure the independence of the Independent Verification and Validation (IV&V) contractor. According to the US-VISIT Program Director, the US-VISIT IT Management Office is developing high-level requirements for IV&V. In particular, it is developing a strategy and statement of work for acquiring an IV&V contractor. Objective 2: Open Recommendations Recommendation 15 Open Recommendation 15: Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. 
According to US-VISIT's draft configuration management (CM) plan, dated July 2004, and US-VISIT officials, US-VISIT has not yet developed or implemented US-VISIT-level configuration management practices or a change control board. In the interim, for Increments 1, 2A, and 2B, US-VISIT continues to follow relevant IDENT, ADIS, and TECS configuration management procedures, including applicable change control boards and system change databases. According to the US-VISIT System Assurance Director, for Increment 2B, US-VISIT is using the TECS change request database for US-VISIT change requests, including those for IDENT and ADIS. The draft configuration management plan describes key configuration management activities that are to be defined and implemented, including (1) defining and identifying processes and products to be controlled; (2) evaluating, coordinating, and approving or rejecting changes to controlled items; (3) recording and monitoring changes to the controlled items; and (4) verifying that the controlled items meet their requirements and are accurately documented. The draft plan also proposes a governance structure, including change control boards. Under the proposed structure, a US-VISIT CM team is to be responsible for implementing, controlling, operating, and maintaining all aspects of configuration management and administration for US-VISIT; the team is to be composed of a CM manager, CM team staff, DHS system CM liaisons, a prime integrator CM liaison, and testers and users. A change control board is to serve as the ultimate authority on changes to any US-VISIT system baseline, decide the content of system releases, and approve the schedule of releases.

Open Recommendation 16: Identify and disclose management reserve funding embedded in the fiscal year 2004 expenditure plan to the Appropriations Subcommittees.

The US-VISIT program office reported management reserve funding of $33 million for fiscal year 2004 to the Appropriations Subcommittees. According to the Deputy Program Manager, US-VISIT provided this information in a briefing to Subcommittee staff.

Open Recommendation 17: Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding.

The fiscal year 2005 expenditure plan specified management reserve funding of $23 million.

Open Recommendation 18: Assess the full impact of Increment 2B on land POE workforce levels and facilities, including performing appropriate modeling exercises.

US-VISIT conducted an Increment 2B baseline analysis to help determine the impact of Increment 2B on the workforce and travelers. The analyses included three sites and addressed the Form I-94 issuance process and the Form I-94W process (I-94W is used for foreign nationals from visa waiver countries) in secondary inspection. According to program officials, additional staff will not be needed to implement Increment 2B at the border. Instead, US-VISIT has developed a plan to train existing Customs and Border Protection officers on the collection of traveler entry data, has completed train-the-trainer classes at the training academy, and has begun training at three land POEs. In addition, US-VISIT has conducted space utilization surveys at all 166 land POEs and completed survey reports at 16 of the 50 busiest land POEs.
US-VISIT expects to complete survey reports for the remaining 34 of the 50 busiest land POEs during the fall of 2004. According to the 16 completed survey reports, existing traffic at most of these facilities was at or near capacity, and the facilities had no room for expansion. However, US-VISIT officials said that Increment 2B will not require expansion at any facilities; rather, it will require mostly minor modifications, such as the installation of new or updated countertops and electrical power outlets to accommodate new equipment.

Open Recommendation 19: Develop a plan, including explicit tasks and milestones, for implementing all our open recommendations and periodically report to the DHS Secretary and Under Secretary on progress in implementing this plan; also report this progress, including reasons for delays, in all future US-VISIT expenditure plans.

The US-VISIT program office has developed a report for tracking the status of our open recommendations. This report is shared with the program office director, but according to the Deputy Program Director, it is not shared with the Secretary and Under Secretary. In addition, he stated that the program office meets weekly with the Under Secretary, but the status of our recommendations is not discussed. The fiscal year 2005 expenditure plan summarizes our recommendations, but it does not identify tasks and milestones for implementing them or discuss progress in implementing them.

Observation 1: The program office has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT.

DHS reported in its fiscal year 2004 US-VISIT expenditure plan that it intended to award a contract by the end of May 2004 to a prime contractor for integrating existing and new business processes and technologies. US-VISIT awarded the contract on time: on May 28, 2004, DHS awarded its prime contract to Accenture LLP and its related partners.

Observation 2: The fiscal year 2005 expenditure plan does not describe progress against commitments (e.g., capabilities, schedule, cost, and benefits) made in previous plans.

Given the immense importance of the US-VISIT program to the security of our nation's borders and the need to acquire and implement it efficiently and effectively, the Congress has placed limitations on the use of appropriations for the US-VISIT program until DHS submits periodic expenditure plans. As we have previously reported (GAO, Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning, GAO-03-563, Washington, D.C.: June 9, 2003), to permit meaningful congressional oversight, it is important that expenditure plans describe how well DHS is progressing against the commitments made in prior expenditure plans. The fiscal year 2005 expenditure plan does not do so. For example, in its fiscal year 2004 expenditure plan, US-VISIT committed to, among other things, (1) analyzing, field testing, and initiating deployment of alternative approaches for capturing biometrics during the exit process at air and sea POEs and (2) implementing entry and exit capabilities at the 50 busiest land POEs by December 31, 2004, including delivering the capability to read radio frequency enabled documents at the 50 busiest land POEs for both entry and exit processes.
The fiscal year 2005 plan does not address progress against these commitments. For example, the plan does not describe the status of the exit pilot testing or deployment, such as whether it has met its target schedule or whether the schedule has slipped. While the plan does state that US-VISIT will expand its pilot sites during the summer and fall of 2004 and deploy the exit solution during fiscal year 2005, it does not explain the reason for the change or its potential impact. The following graphic provides our analysis of the commitments made in the fiscal year 2003 and 2004 plans, compared with currently reported and planned progress. Further, the fiscal year 2004 plan states that $45 million in fiscal year 2004 funds were to be used for exit activities. However, the fiscal year 2005 plan states that $73 million in fiscal year 2004 funds were to be used for exit activities; it does not highlight this difference or address the reason for the change in budget amounts. Also, the fiscal year 2005 expenditure plan includes benefits stated in the fiscal year 2004 plan, but it does not describe progress in achieving those benefits, despite the fact that, in the fiscal year 2004 plan, US-VISIT stated that it was developing metrics for measuring the projected benefits, including baselines against which progress could be assessed. The fiscal year 2005 plan again states that performance measures are under development. This information is needed to allow meaningful congressional oversight of plans and progress.

Observation 3: The exit capability alternatives are faced with a compressed time line, missed milestones, and potentially reduced scope.

On January 5, 2004, US-VISIT deployed an initial exit capability in pilot status to two POEs. At that time, the Program Director stated that US-VISIT was developing other exit alternatives, along with criteria for evaluating and selecting one or more of the alternatives by December 31, 2004.

Planned evaluation time line compressed. In May 2004, US-VISIT issued an Exit Pilot Evaluation Execution Plan. This plan states that three alternative exit solutions are to be evaluated while deployed to a total of 15 air and sea POEs. The plan allotted about 3 months to conduct the evaluation and report the results. Specifically, the deployment was to be completed by August 1, 2004, and all exit pilot evaluation tasks were to be completed by September 30, 2004, with an evaluation report finished by October 28, 2004. However, according to the exit master schedule provided to us on October 26, 2004, the three alternatives were scheduled to be fully deployed by October 29, 2004, and all evaluation tasks are to be completed on December 6, 2004, with delivery of the evaluation report on December 30, 2004, which is about a 2-month evaluation and reporting period. The following graphic illustrates how the exit pilot schedule has been shortened from the originally planned 3 months to the currently planned 2 months and compares the original plan with the current plan. As of November 8, 2004, the three alternatives were deployed and operational in only 5 of the 15 POEs that were to be operational by November 1. According to the Exit Implementation Manager, all ports had received and installed the exit equipment.
However, the requisite number of contract employees (WSAs) is not yet available to make all 15 POEs operational because of delays in DHS granting security clearances to the attendants. The manager stated that a recent meeting with DHS security officials has helped to improve the pace of finalizing security clearances, but the manager did not know when the remaining 10 ports would become operational. The Evaluation Execution Plan describes the evaluation methodology that is to be employed for the three alternatives. An important element of that methodology is the targeted sample size per port: for each port, a targeted number of outbound passengers is to be processed by the three alternatives, with data gathered on these encounters. The plan's specified sample sizes are described as sufficient to achieve a 95 percent confidence level with a margin of error of 5 percent. According to the Exit Implementation Manager, the desired sample size will be collected at each port, despite the compressed time frame for conducting the evaluations, by adding personnel to the evaluation teams if needed. These changing facts and circumstances surrounding the exit pilot introduce additional risk concerning US-VISIT's delivery of promised capabilities and benefits on time and within budget. On November 12, 2004, US-VISIT issued a revised draft Exit Pilot Evaluation Plan. However, the plan does not address any of the concerns cited, in part because it does not include a planned completion date. Instead, the plan states that the evaluation period is planned to run from October 31, 2004, until completion. Without a planned completion date, it is not possible to determine the length of the evaluation period or any impact that the length of the evaluation may have on the evaluation's scope.

Observation 4: US-VISIT and Automated Commercial Environment (ACE) collaboration is moving slowly.

The US-VISIT EA alignment analysis document describes a port of entry/exit management conceptual project that is to establish uniform processes at POEs and the capability to inspect and categorize people and goods and act upon the information collected. The document recognizes that both US-VISIT and ACE support this project because they have related missions and a planned presence at the borders, including the development and deployment of infrastructure and technology. (ACE is a new trade processing system planned to support the movement of legitimate imports and exports and strengthen border security.) We recognized the relationships between these two programs in February 2003 (GAO, Customs Service Modernization: Automated Commercial Environment Progressing, but Further Acquisition Management Improvements Needed, GAO-03-406, Washington, D.C.: Feb. 28, 2003), when we recommended that future ACE expenditure plans specifically address any proposals or plans, whether tentative or approved, for extending and using ACE infrastructure to support other homeland security applications. The two programs have identified three areas for collaboration: people, processes, and technology, which includes establishing a team to review deployment schedules and establishing a team and process to review and normalize business requirements. In August 2004, the US-VISIT and ACE programs tasked their respective contractors to form collaboration teams to address the three areas.
Nine teams have been formed, covering such areas as business; organizational change management; facilities; information and data; technology; privacy and security; deployment, operations, and maintenance; and program management. The teams met in September 2004 to develop team charters, identify specific collaboration opportunities, and develop time lines and next steps. In October 2004, US-VISIT and ACE contractors met with US-VISIT and ACE management to present their preliminary results. According to a US-VISIT official, the team charters have not yet been formally approved. About 20 months have passed since we recommended steps to promote close collaboration between these two programs, and explicit plans have not been developed, nor actions taken, to understand US-VISIT/ACE dependencies and relationships so that these can be exploited to optimize border operations. During this time and in the near future, the management of both programs has been and will be making and acting on decisions to further define, design, develop, and implement their respective programs. The longer it takes for the programs to exploit their relationships, the more rework will be needed at a later date to integrate the two programs. According to the US-VISIT Program Director, the pace of collaboration activities has been affected by scheduling and priority conflicts, as well as staff availability.

Observation 5: US-VISIT system capacity is being managed in a compartmentalized manner.

Capacity management is intended to ensure that systems are properly designed and configured for efficient performance and have sufficient processing and storage capacity for current, future, and unpredictable workload requirements. Capacity management includes (1) demand forecasting, (2) capacity planning, and (3) performance management. Demand forecasting ensures that future business workloads are considered and planned for. Capacity planning involves determining current and future resource requirements and ensuring that they are acquired and implemented in a timely and cost-effective manner. Performance management involves monitoring the performance of system resources to ensure that required service levels are met. The US-VISIT system, as noted earlier, is actually a system made up of various pre-existing (or legacy) systems that are operated by different DHS organizational components and that have been enhanced and interfaced. Currently, DHS does not have a capacity management program. Instead, the US-VISIT IT Management Office relies on the performance management activities of the respective pre-existing DHS systems. For example, a quarterly report provided by the Customs and Border Protection Systems Engineering Branch Performance Engineering Team tracks such system measures as transaction volume, central processing unit utilization, and workload growth, and Immigration and Customs Enforcement tracks such system measures as hourly and daily transaction rates and response times. According to the program office, the system-of-systems nature of US-VISIT does not lend itself to easily tracking systemwide performance. Nevertheless, program officials told us that the US-VISIT program has tasked two of its contractors with developing a comprehensive performance management and capacity planning effort.
Until this is developed, the program will continue to rely on component system performance management activities to ensure that US-VISIT system resources are sufficient to meet current US-VISIT workloads, which increases the risk that those resources may not adequately support US-VISIT mission needs.

Observation 6: The cost estimating process used for Increment 2B did not follow some key best practices.

SEI recognizes the need for reliable cost-estimating processes in managing software-intensive system acquisitions. To this end, SEI has issued a checklist to help determine the reliability of cost estimates (Carnegie Mellon University Software Engineering Institute, A Manager's Checklist for Validating Software Cost and Schedule Estimates, CMU/SEI-95-SR-004, January 1995). Our analysis found that US-VISIT did not fully satisfy most of the criteria on SEI's checklist. The US-VISIT Increment 2B estimate met two of the checklist items that we evaluated, partially met six, and did not meet five. For example, US-VISIT provided no evidence that Increment 2B was appropriately sized. Specifically, costs related to development and integration tasks for the TECS, IDENT, and ADIS systems are specified, but estimated software lines of code to be reused, modified, added, or deleted are not. As another example, no one outside the US-VISIT program office reviewed and concurred with the cost estimating categories and methodology. The table on the following slides summarizes our analysis of the extent to which US-VISIT's cost-estimating process for Increment 2B met SEI's criteria. The criteria are as follows:

1. The objectives of the estimate are stated in writing.
2. The life cycle to which the estimate applies is clearly defined.
3. The task has been appropriately sized (e.g., software lines of code).
4. The estimated cost and schedule are consistent with demonstrated accomplishments on other projects.
5. A written summary of parameter values and their rationales accompanies the estimate.
6. Assumptions have been identified and explained.
7. A structured process, such as a template or format, has been used to ensure that key factors have not been overlooked.
8. Uncertainties in parameter values have been identified and quantified.
9. If a dictated schedule has been imposed, an estimate of the normal schedule has been compared to the additional expenditures required to meet the dictated schedule.
10. If more than one cost model or estimating approach has been used, any differences in results have been analyzed and explained.
11. Estimators independent of the performing organization concurred with the reasonableness of the parameter values and estimating methodology.
12. Estimates are current.
13. The results of the estimate have been integrated with project planning and tracking.

Without reliable cost estimates, the ability to make informed investment decisions and effectively measure progress and performance is reduced.

<38. To its credit, the program office now has its prime contractor on board to support both near-term increments and to plan for and deliver the yet-to-be-defined US-VISIT strategic solution. However, it is important to recognize that this accomplishment is a beginning and not an end. The challenge for DHS is now to effectively and efficiently work with the prime contractor in achieving desired mission outcomes.>

<39.
Nevertheless, the fact remains that the program continues to invest hundreds of millions of dollars in a mission-critical capability under circumstances that introduce considerable risk that cost-effective mission outcomes will not be realized. At a minimum, it is incumbent upon DHS to fully disclose these risks, along with associated mitigation steps, to executive and congressional leaders so that timely and informed decisions about the program can be made.>

<40. Fully and explicitly disclose in all future expenditure plans how well DHS is progressing against the commitments that it made in prior expenditure plans.> Reassess its plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program. Develop and implement processes for managing the capacity of the US-VISIT system. Follow effective practices for estimating the costs of future increments.

<41. Make understanding the relationships and dependencies between the US-VISIT and ACE programs a priority matter, and report periodically to the Under Secretary on progress in doing so.>

<42. With respect to the program's accomplishments during fiscal year 2004, the Program Director also stated that US-VISIT has continued to operate as intended every day at air and sea POEs and has produced such accomplishments as making the country more secure while expanding its coverage to include visitors from visa waiver countries. The director further stated that while the program's management capability is not yet mature and has much to accomplish, progress to date has been limited by a shortage of staff.>

To accomplish our objectives, we performed the following tasks: We analyzed the expenditure plan against legislative conditions and other relevant federal requirements, guidance, and best practices to determine the extent to which the conditions were met. We analyzed key acquisition management controls documentation and interviewed program officials to determine the status of our open recommendations. We analyzed supporting documentation and interviewed DHS and US-VISIT program officials to determine capabilities in key program management areas, such as enterprise architecture and capacity management. We analyzed Increment 2B systems and software testing documentation and compared it with relevant guidance to determine completeness. We attended program working group meetings. We assessed the reliability of US-VISIT's Increment 2B cost estimate by selecting 13 criteria from the SEI checklist (Carnegie Mellon University Software Engineering Institute, A Manager's Checklist for Validating Software Cost and Schedule Estimates, CMU/SEI-95-SR-004, January 1995) that, in our professional judgment, represent the minimum set of criteria necessary to develop a reliable cost estimate. We analyzed the Increment 2B cost-benefit analysis and supporting documentation and interviewed program officials to determine how the estimate was derived. We then assessed each of the criteria as satisfied (US-VISIT provided substantiating evidence for the criterion), partially satisfied (US-VISIT provided partial evidence, including testimonial evidence, for the criterion), or not satisfied (no evidence was found for the criterion). We did not review the State Department's implementation of machine-readable, tamper-resistant visas that use biometrics. For DHS-provided data that our reporting commitments did not permit us to substantiate, we have made appropriate attribution indicating the data's source.
We conducted our work at US-VISIT program offices in Rosslyn, Virginia, from June 2004 through November 2004, in accordance with generally accepted government auditing standards.

Attachment 2: Recent US-VISIT Studies

Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. Washington, D.C.: September 9, 2004.
Border Security: Joint, Coordinated Actions by State and DHS Needed to Guide Biometric Visas and Related Programs. GAO-04-1080T. Washington, D.C.: September 9, 2004.
Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004.
DHS Office of Inspector General. An Evaluation of the Security Implications of the Visa Waiver Program. OIG-04-26. Washington, D.C.: April 2004.
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-04-569T. Washington, D.C.: March 18, 2004.
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003.
Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.

Comments from the Department of Homeland Security

GAO Contact and Staff Acknowledgments

<43. GAO Contact>

<44. Staff Acknowledgments>

In addition to the individual named above, Barbara Collier, Neil Doherty, David Hinchman, James Houtz, Carolyn Ikeda, Anh Le, John Mortin, David Noone, Karen Richey, Karl Seifert, and Randolph Tekeley made key contributions to this report.
Why GAO Did This Study The Department of Homeland Security (DHS) has established a program--the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT)--to collect, maintain, and share information, including biometric identifiers, on selected foreign nationals who travel to the United States. By congressional mandate, DHS is to develop and submit for approval an expenditure plan for US-VISIT that satisfies certain conditions, including being reviewed by GAO. Among other things, GAO was asked to determine whether the plan satisfied these conditions and to provide observations on the plan and DHS's program management. What GAO Found DHS's fiscal year 2005 expenditure plan and related documentation at least partially satisfied all conditions established by the Congress, including meeting the capital planning and investment control requirements of the Office of Management and Budget (OMB). For example, DHS has developed a plan and a process for developing, implementing, and institutionalizing a program to manage risk. In its observations about the expenditure plan and DHS's management of the program, GAO recognizes accomplishments to date and addresses the need for rigorous and disciplined program practices. For example, US-VISIT has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT. However, DHS has not employed rigorous, disciplined processes typically associated with successful programs, such as tracking progress against commitments. More specifically, the fiscal year 2005 plan does not describe progress against commitments made in previous plans (e.g., capabilities, schedule, cost, and benefits). According to GAO's analysis, delays have occurred in delivering capability to track the entry and exit of persons entering the United States at air, land, and sea ports of entry. Such information is essential for oversight. Additionally, the effort to pilot alternatives for delivering the capability to track the departure of persons exiting the United States is faced with a compressed time line, missed milestones, and potentially reduced scope. In particular, the pilot evaluation period has been reduced from 3 to 2 months, and as of early November 2004, the alternatives were deployed and operating in only 5 of the 15 ports of entry scheduled to be operational by November 1, 2004. According to US-VISIT officials, this is largely due to delays in DHS granting security clearances to the civilian employees who would operate the equipment at the ports of entry. These changing facts and circumstances surrounding the pilot introduce additional risk concerning US-VISIT's delivery of promised capabilities and benefits on time and within budget.
<1. Background> Land mines in the U.S. inventory are of two distinct types: The first consists of conventional land mines that are hand-emplaced and are termed nonself-destruct, or sometimes dumb, because they remain active for years unless disarmed or detonated. They can therefore cause unintended post-conflict and civilian casualties. The second type consists of land mines that are generally, but not always, surface-laid scatterable land mines that are dropped by aircraft, fired by artillery, or dispersed by another dispenser system. They are conversely called smart because they remain active for preset periods of time after which they are designed to self-destruct or deactivate, rendering themselves nonhazardous. According to DOD, smart land mines have a 99.99-percent self-destruct reliability rate. Most self-destruct land mine systems are set at one of three self-destruct periods: 4 hours, 48 hours, or 15 days. In addition, should the self-destruct mechanism fail, self-destruct land mines are designed to self- deactivate, meaning that they are to be rendered inoperable by means of the irreversible exhaustion of their batteries within 120 days after employment. This feature, according to DOD, operates with a reliability rate of 99.999(+) percent. At the time of the Gulf War, U.S. forces were armed with both nonself-destruct and self-destruct land mines, and U.S. policy allowed them to use both types. Today, however, U.S. presidential policy limits the U.S. forces use of nonself-destruct M-14 and M-16 antipersonnel land mines (see fig. 6 in app. II) to Korea. Antitank mines, as the name implies, are designed to immobilize or destroy tracked and wheeled vehicles and the vehicles crews and passengers. The fuzes that activate antitank mines are of various types. For example, they can be activated by pressure, which requires contact with the wheels or tracks of a vehicle, or by acoustics, magnetic influence, radio frequencies, infrared-sensor, command, disturbance, or vibration, which do not require contact. Antitank mines have three types of warheads. Blast mines derive their effectiveness from the force generated by high-explosive detonation. Shaped-charged mines use a directed-energy warhead. Explosive-formed penetrating mines have an explosive charge with a metal plate in front, which forms into an inverted disk, a slug, or a long rod. Antipersonnel land mines are designed to kill or wound soldiers. Their fuzes can be activated, for example, by pressure, trip wires, disturbance, antihandling mechanisms, or command detonation. Antipersonnel land mine warhead types include blast, directed fragmentation, and bounding fragmentation. The blast mine is designed to injure the lower extremities of the individual who steps on it. The directed fragmentation mine propels fragments in the general direction it is pointed, and the bounding fragmentation mine throws a canister into the air, which bursts and scatters shrapnel throughout the immediate area to kill or wound the enemy. Antitank and antipersonnel land mines are often employed together, as mixed systems. In a mixed system, the antipersonnel land mines are intermingled with antitank land mines to discourage enemy personnel from attempting to disarm them. Antitank land mines may also be equipped with explosive antidisturbance devices designed to protect them from being moved by enemy personnel, thus increasing the difficulty and challenge of breaching a minefield. <2. Effect of the Use of Self-Destruct U.S. 
Land Mines in the Gulf War Is Unknown> According to DOD, all the types of land mines in DOD s arsenal were available and included in U.S. war plans for use if needed in the Gulf War. DOD reported that during the war, U.S. forces used no nonself-destruct land mines. The services reported using a total of about 118,000 artillery- delivered or aircraft-delivered surface-laid scatterable self-destruct land mines. DOD provided few records showing why land mines were used and no evidence of specific military effects on the enemy such as enemy killed or equipment destroyed from the U.S. use of land mines during the Gulf War. We therefore could not determine the effect of U.S. land-mine use during the Gulf War. See appendix II for pictures, types, and numbers of land mines available for use and numbers used in the Gulf War. <2.1. U.S. Nonself-Destruct and Self-Destruct Land Mines Were Available in Theater> U.S. forces deployed to the Gulf War with over 2.2 million of the DOD- estimated 19 million land mines available in U.S. worldwide stockpiles in 1990. These consisted of both the conventional nonself-destruct land mines and scatterable surface-laid, self-destruct land mines. Nonself- destruct, hand-emplaced land mines available but not used included the M-14 ( Toe Popper ) and the M-16 ( Bouncing Betty ) antipersonnel land mines and the M-15, M-19, and M-21 antitank land mines. Self-destruct, scatterable land mines included air-delivered cluster bomb unit (CBU) 78/89 Gator, which dispensed mixed scatterable antipersonnel and antitank land mines, and artillery-fired M-692/731 Area Denial Artillery Munition (ADAM) antipersonnel land mines and M-718/741 Remote Anti-Armor Mine (RAAM) antitank land mines. (See app. II, figs. 5, 6, and 7 and table 10.) The services reported that all standard types of U.S. land mines in their inventories were available from unit and theater supplies or U.S. stockpiles. <2.2. Planned Use of U.S. Land Mines> During the Gulf War, U.S. forces were permitted by doctrine, war plans, and command authority to employ both nonself-destruct and self-destruct land mines whenever an appropriate commander determined that U.S. use of land mines would provide a tactical advantage. U.S. land mines of all types were available and planned for use by U.S. forces. U.S. land mine warfare doctrine for the services during the Gulf War indicated that land mines could be used both offensively, for example, to deny the enemy use of key terrain, and defensively, for instance, to protect U.S. forces from attack. U.S. doctrine states that the primary uses of land mines are to provide force protection, shape the battlefield, and reduce the number of forces needed. At the time of the Gulf War, U.S. land mine doctrine included the following four types of minefields: 1. protective minefields, whose purpose is to add temporary strength to weapons, positions, or other obstacles; 2. tactical minefields, which are emplaced as part of an overall obstacle plan to stop, delay, and disrupt enemy attacks; reduce enemy mobility; channelize enemy formations; block enemy penetrations; and protect friendly flanks; 3. point minefields, which are emplaced in friendly or uncontested areas and are intended to disorganize enemy forces or block an enemy counterattack; and 4. interdiction minefields, which are emplaced in enemy-held areas to disrupt lines of communication and separate enemy forces. U.S. 
plans for the execution of the Gulf War included the use of hand- emplaced antipersonnel and antitank land mines (e.g., M-14/16/21), artillery-delivered land mines (ADAM/RAAM), air-delivered land mines (Gator), and others for these purposes when U.S. commanders determined their use was needed. Military units on-hand ammunition supplies, as well as ammunition resupply stockpiles located within the combat theater, included millions of U.S. land mines. Ammunition resupply plans included planned rates for the daily resupply of land mines consumed in combat. <2.3. Services Reported that the United States Used about 118,000 Land Mines> The services reported that during the Gulf War, they used about 118,000 land mines from the approximately 2.2 million U.S. land mines that were taken to the Gulf War theater of operations and the millions of land mines available for use from U.S. worldwide stockpiles, which in total contained about 19 million land mines. All of the land mines used were the self- destructing, scatterable, surface-laid types. However, the services also indicated that, because Gulf War records related to land mines might be incomplete, information made available to us may be inexact. For example, the Army indicated that, while its record searches show that the Army used no land mines, it is unsure whether archived Gulf War records include evidence of Army land mine use that it has not uncovered. The services reported no confirmed use of any nonself-destruct land mines during the Gulf War. In other words, U.S. forces reported no use of antipersonnel land mines such as the over 6 million available (over 200,000 in theater) M-14 Toe Popper or M-16 Bouncing Betty and no M-15, M-19, or M-21 antitank land mines, which numbered over 2 million in U.S. stockpiles (over 40,000 in theater). (See fig. 6 and table 10 in app. II.) The Army reported no confirmed use of any land mines, with the qualification that it is unsure whether it had emplaced two minefields of an unknown type. The other military services reported that they used a total of 117,634 U.S. self-destruct land mines, whose destruction time-delay periods were set at 4 hours, 48 hours, or 15 days. The type of land mine used in the largest quantity was the aircraft-delivered surface-laid Gator land mines, which were dispersed from cluster bomb units containing both antitank and antipersonnel mines. Air Force, Navy, and Marine aircraft employed a total of 116,770 Gator land mines. Table 1 and appendix II provide additional details on the numbers and types of land mines available for use and used by the U.S. military services during the Gulf War. <2.4. DOD Records Contain Little Information on Why Land Mines Were Used> DOD records on the Gulf War provided us include little detail on why land mines were used. Available records indicate that U.S. forces employed land mines both offensively and defensively when fighting in Iraqi- controlled Kuwait. For example, U.S. aircraft offensively employed concentrations of surface-laid Gator land mines to deny Iraqi use of Al Jaber airbase in Kuwait and to hamper the movement of Iraqi forces. In addition, Gator land mines were used extensively with the intent to inhibit free movement in and around possible staging and launch areas for enemy Scud missiles. Possible Scud missile transporter hide sites included culverts, overpasses, and bridges in Iraq. In a defensive mode, Gator land mines were employed along the flanks of U.S. forces. In addition, U.S. 
Marines defensively employed concentrations of artillery-fired ADAM and RAAM land mines to supplement defenses against potential attacks by enemy forces north of Al Jaber airbase in southern Kuwait. Procedures for commanders to approve land mine use were established, disseminated, and included in all major unit war plans. A senior U.S. force commander who participated in the Gulf War told us that U.S. forces had no restrictive theaterwide or forcewide prohibitions on the employment of land mines, U.S. commanders understood their authority to use mines whenever their use would provide a tactical advantage, and U.S. commanders decided to use land mine or nonland-mine munitions based on their determinations as to which were best suited to accomplish assigned missions. <2.5. Effects of U.S. Land-Mine Use on the Enemy Are Unknown> The services reported no evidence of enemy casualties, either killed or injured; enemy equipment losses, either destroyed or damaged; or enemy maneuver limitations resulting, directly or indirectly, from its employment of surface-laid scatterable Gator, ADAM, and RAAM land mines during the Gulf War. (See app. II, fig. 5.) U.S. forces intended to adversely affect the enemy by using 116,770 Gator land mines, but no service has provided specific evidence that these land mines or the 864 ADAM and RAAM land mines reported as employed actually caused or contributed to enemy losses. Because neither DOD nor the services provided us evidence or estimates of actual effects and losses inflicted on the enemy by these U.S. land mines, we were unable to determine the actual effect of U.S. land mine use during the Gulf War. DOD and service documents detailing when land mines were used did not provide evidence of the effects of that use. For example, in one case, the Marine Corps reported that it had fired artillery-delivered ADAM and RAAM land mines to supplement a defensive position. However, the enemy was not reported to have been aware of or have actually encountered these land mines. Similarly, air Gator drops on possible Scud missile sites were not reported to have destroyed any Scud missiles or transporters. The services provided no evidence indicating whether the enemy had ever encountered the Gator land mines dropped on possible enemy maneuver routes or whether Gator employments had resulted in enemy destruction. <3. Extent of U.S. Casualties from Land Mines and Unexploded Ordnance> Service reports indicate that 81 of the 1,364 U.S. casualties attributed to the Gulf War were caused by land mines. None of these were attributed specifically to U.S. land mines, but rather to an Iraqi or an unknown type of land mine. Because of service data limitations, the possibility cannot be ruled out that some of the casualties now attributed to explosions of unknown or ambiguously reported unexploded ordnance were actually caused by land mines. Service casualty reporting indicates that at least 142 additional casualties resulted from such unexplained explosions. However, there is no way to determine whether some portion of these might have been caused by U.S. or other land mines or by unexploded ordnance. Of all casualties reported to have been caused by explosions, a relatively small percentage were reported to have been caused by the unauthorized handling of unexploded ordnance. <3.1. Numbers of Service Members Reported Killed and Injured during the Gulf War> The services reported that there were 1,364 U.S. casualties associated with the Gulf War. 
Of these, 385 were killed, and 979 were injured. Army personnel suffered 1,032 casualties, or 76 percent of all U.S. deaths and injuries. Table 2 shows the numbers of U.S. casualties by military service.

<3.2. Causes of U.S. Casualties>

To determine what number of these casualties could have been caused by U.S. or other land mines, we obtained information from the services on the causes of all Gulf War deaths and injuries. Service officials attributed casualties to causes and categories based on battlefield casualty, accident, after-action, and other reports. As shown in figure 1, enemy ground and Scud missile fire caused the largest number of identifiable casualties to Gulf War service members. The services assigned 287, or 21 percent, of all casualties during the Gulf War to the enemy ground/Scud fire category. In particular, the Army attributed 128 of the 287 in this category to an Iraqi Scud missile attack. In addition, enemy fire caused some aircraft incident casualties. The second and third largest categories of identifiable causes of casualties were vehicle accidents and aircraft incidents. Available data indicate that explosions from some type of ordnance caused 177 casualties: land mines caused 81; cluster munition unexploded ordnance (UXO) caused 80; and other UXO caused 16. The casualty categories depicted in figure 1 are defined in table 3. As would be expected, the various services experienced different types and numbers of casualties. For the Marine Corps, enemy ground fire caused the largest number of casualties (84); for the Air Force, aircraft incidents were the largest cause (39); and for the Navy, other accidents caused the largest number (33). For the Army, the largest category was other causes (267). Our comparison of casualty-related documentation, however, indicates that at least some of these casualties should have been categorized elsewhere. For example, documentation shows that one casualty placed in other causes might have been a land mine casualty. In a second case, documentation indicates that one of these casualties suffered a heart attack and should have been placed in the natural causes category. In other documentation, we found indications that five casualties placed in the other causes category suffered what were other accidents. For these reasons, it is unclear whether all 267 of these Army-reported casualties should have been placed in the other causes category. However, Army officials indicated that available data limited the Army's ability to identify more specifically the causes of these casualties. See appendix III for the reported numbers of casualties by service and cause.

<3.3. Explosion Casualties Caused by Land Mines, Cluster Munition UXO, and Other UXO>

Service data show that 34 persons were killed and 143 were injured during the Gulf War by the explosion of some type of ordnance other than enemy fire. These 177 casualties caused by land mines, cluster munition UXO, or other UXO represent 13 percent of all casualties suffered by service members. (See table 4.) Of the 177 Gulf War casualties that DOD reported were caused by an explosion from some type of land mine, cluster munition, or unidentified type of UXO, the services reported that none were caused by U.S. land mines. However, as shown in table 5, U.S. cluster munition UXO (CBU or dual-purpose improved conventional munitions) or other, unidentified UXO caused more U.S. casualties (96) than Iraqi and unidentified land mines (81).
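As a simple cross-check of the figures reported above (illustrative arithmetic only, using the numbers cited in this report):

    # Illustrative cross-check of the explosion-casualty figures cited above.
    land_mine_casualties = 81      # Iraqi or unidentified land mines
    cluster_munition_uxo = 80      # CBU or dual-purpose improved conventional munition duds
    other_uxo = 16
    explosion_casualties = land_mine_casualties + cluster_munition_uxo + other_uxo
    total_casualties = 1364        # all U.S. Gulf War casualties reported by the services
    print(explosion_casualties)                                  # 177
    print(round(100 * explosion_casualties / total_casualties))  # 13 (about 13 percent)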
Of all persons killed or injured by explosions from land mines (either Iraqi or unidentified), cluster munition UXO (either CBUs or dual-purpose improved conventional munitions), and other unidentified UXO, Army personnel represented 164, or 93 percent. In addition, 12 Marine Corps personnel were killed or injured, and 1 Air Force service member was injured by these explosions. <3.4. Additional Casualties Could Have Been Caused by Land Mines> Of the 177 explosion casualties attributed by the services to some type of ordnance explosion, service records specify that 35 were caused by Iraqi land mines (see fig. 2). Casualty records for some of the 142 other explosion casualties are inexact or ambiguous. Thus, the other explosion categories cluster munition UXO from CBU and dual-purpose improved conventional munitions, unidentified land mines, and other UXO could include some U.S. casualties by U.S. or other land mines because casualty records did not always permit DOD to identify definitively the type of UXO causing the casualty. While the UXO causing a casualty might have been reported as a cluster munition CBU, it could have been misidentified and actually have been a U.S. land mine cluster munition from Gator, ADAM, RAAM, or some other munition. Casualty records show numerous cases in which all these terms are used interchangeably. For example, in one reported case, a casualty is first attributed to a mine and next to a dual-purpose improved conventional munition. In a second case, the service member was said to have driven over a cluster munition, which was later called a mine. In a third case, the soldier is reported in one document to have hit a trip wire causing mine to explode but in another document to have stepped on an Iraqi cluster bomb. In other words, the terminologies used in these casualty reports are inconsistent and imprecise, thus preventing a definitive analysis by the services of the causes of some casualties. DOD indicated that it is possible also that some of the casualties attributed to land mines were actually caused by unexploded ordnance. <3.5. Percentage of Soldiers Injured or Killed by Unauthorized Handling of UXO Is Relatively Small> DOD data did not always allow it to identify how service members had triggered the UXO that caused each casualty. Because of the many ways that ordnance and UXO can be triggered and because some ordnance can be triggered from a distance, DOD was unable to always determine the circumstances causing an explosion and the type of ordnance that exploded. DOD-reported data, however, indicate that relatively few persons who became casualties of unexploded ordnance were handling it without authorization. In attempting to determine what percentage of service members were injured or killed while handling ordnance in an unauthorized manner, we consulted all available descriptions of these incidents. We grouped these casualties into three categories based on service-reported information concerning how the explosion was triggered: (1) in performance of duty, (2) unauthorized handling of UXO, and (3) unknown circumstance. As shown by figure 3, DOD data indicate that more than half of the explosion casualties resulted from unknown circumstances. Of the 177 explosion casualties, DOD records indicated that 64 casualties (36 percent) resulted from explosions that were triggered in the performance of assigned duties. 
For example, one Army ground unit reported that when it began its ground attack, its first casualty resulted from a soldier encountering an artillery submunition dud that exploded. In another incident, seven Army engineers were killed while clearing unexploded BLU-97 (nonland-mine) duds at an Iraqi airfield. DOD attributed these casualties to incorrect or incomplete training in mine neutralization techniques and the handling of UXOs. An expert in explosive ordnance demolition who was advising the engineers on how to clear safely Gator land mine duds and other submunitions reported, I feel worse because the guys who died probably died of ignorance. This is a EOD related problem which was ill handled by others who thought they could handle the job. This situation illustrates that UXO can be so dangerous that even engineers with some training in handling UXO were thought by an explosive ordnance disposal expert to be inadequately prepared to deal with UXO on the battlefield. Soldiers who represent the 16 casualties (9 percent) attributed by DOD to unauthorized handling of UXO were generally performing their military duties but for some unknown reason touched or otherwise triggered UXO. These soldiers were typically on duty in or traversing U.S. dudfields on the battlefield while performing such actions as pursuing the enemy. DOD reported that some soldiers were casualties as a result of disturbing battlefield objects that they thought were not hazardous, while others might have known they were handling a piece of some sort of ordnance. For example, a DOD document cited a case in which soldiers handled UXO that they thought was harmless. This report stated that two persons were killed and seven injured when soldiers collected what they thought were parachute flares. Furthermore, soldiers might not have recognized that a battlefield object was hazardous because UXO comes in many shapes, sizes, and designs, much of which inexperienced soldiers have never seen before. Some common U.S. submunitions appear to be harmless while actually being armed and dangerous. Moreover, many soldiers are not aware that some UXO can cause injuries at distances of 100 meters. A small number of DOD casualty reports describing unauthorized handling of UXO attribute soldier casualties to souvenir hunting. For example, one incident resulted when a soldier who was examining an object was told by fellow soldiers to get rid of it. When the soldier threw the object away from him, it exploded. In other cases, soldiers might have known that handling UXO was unauthorized and handled it anyway. Gulf War documents indicate that DOD and the services called for soldiers on a battlefield to be warned not to handle UXO unless directed to do so. The remaining 97 (55 percent) of the 177 explosion casualties fell into the unknown circumstances category. Because battlefield casualty reports did not identify the circumstance or activity of these soldiers, it is unknown whether or not these soldiers became casualties while performing assigned duties. The Army s Safety Center provided us data on 21 Gulf War U.S. explosion casualties that occurred in Kuwait, Iraq, and Saudi Arabia (5 deaths and 16 injured). The Center attributed 7 of these casualties to land mines of unknown type and 14 to U.S. dual-purpose improved conventional munitions and CBU submunitions. These casualties were associated with unintentional entry into minefields or dudfields or disturbance of UXO. 
These casualties are included in the Gulf War casualty totals presented in this report. <4. DOD Reports Express Fratricide and Mobility Concerns Relating to the Safety of, and Lack of Knowledge about, Land Mines and Dudfields> Numerous issues included in service and DOD Gulf War lessons-learned, after-action, and other reports concerned the safety and utility of conventional and submunition U.S. land mines. Fratricide and battlefield mobility were cited often as important overall concerns associated with both available and used U.S. land mines and nonland-mine submunitions. These concerns led to the reluctance of some U.S. commanders to use land mines in areas that U.S. and allied forces might have to traverse.Commanders fears arose because of two basic reasons: The first reason involved both the obsolescence of conventional U.S. mines and safety issues with both conventional and scatterable land mines. A higher-than- anticipated dud rate for land mines and other submunitions during the Gulf War was one safety issue. Reflective of the safety issues, DOD reports recognized that de facto minefields created by all unexploded submunitions land mine and nonland-mine alike threatened fratricide and affected maneuvers by U.S. forces. The second reason involved concern that reporting, recording, and, when appropriate, marking the hazard areas created by the placement of self-destruct land mines or dudfields were not always accomplished when needed. On the basis of its Gulf War experience, DOD recognized the importance of commanders taking into consideration the possible effects of unexploded munitions when making and executing their plans and identified a variety of corrective actions. (App. IV cites DOD-reported actions related to land- mine and UXO concerns. Because it was beyond the scope of this report, we did not evaluate DOD s progress in these areas.) <4.1. Conventional U.S. Land Mines Were Considered Obsolete and Unsafe> In Gulf War lessons-learned and other documents, DOD and the services reported that U.S. conventional nonself-destructing land mines were obsolete and dangerous to use and that the newer self-destructing land mines also posed safety concerns to users. For example, one Army after- action report recommended that U.S. conventional antitank and antipersonnel land mines be replaced because of safety concerns. Army officials stated that U.S. conventional mines needed better fuzing and the capability of being remotely turned on or off or destroyed. In a joint service lessons-learned report, officials stated, Commanders were afraid to use conventional and scatterable mines because of their potential for fratricide. The report said that this fear could also be attributed to the lack of training that service members had received in how to employ land mines. In particular, prior to the Gulf War, the Army restricted live-mine training with conventional antipersonnel land mines (M-14s and M-16s) because they were considered dangerous. The joint lessons-learned report argued, If the system is unreliable or unsafe during training, it will be unreliable and unsafe to use during war. Since before the Gulf War, the Army has known about safety issues with its conventional nonself-destruct M-14 and M-16 antipersonnel land mines. For example, because of malfunctions that can occur with the M605 fuze of the Bouncing Betty M-16 antipersonnel land mine, the Army has restricted the use of the pre-1957 fuzes that are thought to be dangerous. 
However, the concern extends beyond the fuze issue to include the land mines themselves. A DOD reliability testing document states that the M-16 mines are subject to duds; the mine ejects but fails to detonate. The mine is then unexploded ordnance and still presents a danger. A DOD 2001 report on dud rates for land mines and other munitions states that the dud rate identified by stockpile reliability testing for M-16 land mines is over 6 percent. In a specific case, a currently serving senior Army officer told us that he had trained his unit with these antipersonnel land mines in Germany in 1990 to get ready for the Gulf War. According to the officer, during the training, his unit suffered 10 casualties from the M-16 land mine. This officer said that U.S. Bouncing Betty M-16 and Toe Popper M-14 antipersonnel land mines should be eliminated from Army stockpiles because they are too dangerous to use. Due to safety concerns, the Army placed prohibitions on live-fire training with these land mines before and after the Gulf War, with restrictions being lifted during the Gulf War. But DOD reporting does not indicate that any U.S. unit chose to conduct live-mine training in the theater with any type of mines. According to an Army engineer after-action report, some troops even reported that they were prohibited from training on live mines after their arrival in Saudi Arabia. Moreover, DOD reporting states that U.S. forces employed no M-14 or M-16 mines in combat. Because of renewed restrictions following the Gulf War, service members are still prohibited from live-fire training on M-14 antipersonnel land mines, and training on live M-16 mines is restricted to soldiers in units assigned or attached to the Eighth U.S. Army in Korea.

<4.2. Land Mines and Other Scatterable Munitions Had Higher-Than-Expected Dud Rates During the Gulf War>

Another safety concern expressed in lessons-learned reports was that higher-than-expected dud, or malfunction, rates occurred for the approximately 118,000 U.S. self-destruct land mines and the millions of other U.S. scatterable submunitions employed in the Gulf War. These included duds found by a U.S. contractor while clearing a portion of the Kuwaiti battlefield. These duds created concerns about potentially hazardous areas for U.S. troops.

<4.2.1. Expected Dud Rates for U.S. Self-Destruct Land Mines>

According to briefing documents provided by DOD's Office of the Project Manager for Mines, Countermine and Demolitions, testing over the past 14 years of almost 67,000 self-destructing antitank and antipersonnel land mines at a proving ground has resulted in no live mines being left after the tests. The office also reports that all U.S. self-destruct mines self-deactivate, that is, their batteries die within 90 to 120 days. The office stated that the reliability rate for the self-destruct feature is 99.99 percent and that the reliability rate for the self-deactivation feature is 99.999(+) percent. According to the program office, these features mean that self-destruct land mines leave no hazardous mines on the battlefield. For safety reasons, SCATMINEs must receive two arming signals at launch. One signal is usually physical (spin, acceleration, or unstacking), and the other is electronic. This same electronic signal activates the mine's SD time. Mines start their safe-separation countdown (arming time) when they receive arming signals. This allows the mines to come to rest after dispensing and allows the mine dispenser to exit the area safely . . . .
Mines are armed after the arming time expires. The first step in arming is a self-test to ensure proper circuitry. Approximately 0.5 percent of mines fail the self-test and self- destruct immediately. After the self-test, mines remain active until their SD time expires or until they are encountered. Mines actually self-destruct at 80 to 100 percent of their SD time. . . . No mines should remain after the SD time has been reached. Two to five percent of US SCATMINES fail to self-destruct as intended. Any mines found after the SD time must be treated as unexploded ordnance. For example, mines with a 4-hour SD time will actually start self-destructing at 3 hours and 12 minutes. When the 4-hour SD time is reached, no unexploded mines should exist. <4.2.2. Conventional Munitions Systems, Inc., Found Thousands of Duds on the Kuwaiti Battlefield> Conventional Munitions Systems (CMS), Inc., a U.S. contractor that specialized in explosive ordnance disposal, was paid by the government of Kuwait to clear unexploded ordnance from one of seven sectors of the battlefield in Kuwait, which included Al Jaber Airbase (see fig. 4). CMS reported finding substantially more U.S. land mine duds than would be expected if dud rates were as low as DOD documents and briefings stated they are. DOD indicated that it cannot confirm the accuracy of the CMS-reported data. After the Gulf War, CMS employed more than 500 certified, experienced, and trained personnel to eliminate the unexploded ordnance in its sector of Kuwait. About 150 CMS employees were retired U.S. military explosive ordnance disposal experts. In a report for the U.S. Army, CMS recorded the types and numbers of U.S. submunition duds it found in its explosive ordnance disposal sector of the Kuwaiti battlefield. The report illustrates how the dangers of the battlefield during the Gulf War were compounded by the large numbers of unexploded U.S. submunitions, including land mines. According to the CMS report, it found 1,977 U.S. scatterable land mine duds and about 118,000 U.S. nonland-mine submunition duds in its disposal sector. CMS s report stated that many tons of modern bombs called Cluster Bomb Unit were dropped, each of which would deploy as many as 250 small submunitions. The report states, A significant number of the bombs and more importantly the submunitions, did not detonate upon striking the ground resulting in hundreds of thousands of dud explosive devices laying on the ground in Kuwait. While the vast majority of these duds were from nonland mine submunitions, they included the more modern self-destructing RAAM, ADAM, and Gator land mines. Table 6 lists the types and amounts of U.S. dud submunitions CMS reported finding in its disposal sector of the Kuwaiti battlefield. DOD reports that it employed in the Gulf War a total of about 118,000 self- destruct land mines (see table 1) and that their self-destruct failure, or dud, rate is 0.01 percent (1 in 10,000). However, if, as DOD reported, about 118,000 of these self-destruct land mines were employed and they produced duds at the DOD-claimed rate of 0.01 percent, there should have been about 12 duds produced, not 1,977 as CMS reported finding in one of seven Kuwaiti battlefield sectors. Thus, a substantial inconsistency exists between the DOD-reported reliability rate and the dud rate implied by the number of mines that CMS reported finding from actual battlefield use. 
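The scale of this inconsistency can be shown with simple arithmetic based on the figures above (a rough illustration of our own, not an official DOD or CMS calculation):

\[
118{,}000 \text{ mines employed} \times 0.0001 \ (\text{0.01-percent dud rate}) \approx 12 \text{ expected duds}
\]

\[
\frac{1{,}977 \text{ duds reported by CMS}}{118{,}000 \text{ mines employed}} \approx 0.017,\ \text{or about 1.7 percent}
\]

In other words, the duds CMS reported finding in a single one of the seven battlefield sectors imply a failure rate on the order of 100 times higher than the rate DOD cites, before any duds in the remaining six sectors are counted.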
At the time CMS was completing this UXO disposal work in Kuwait, the DOD program manager for Mines, Countermine and Demolitions visited the CMS cleanup operation. His report of that trip indicates that he thought CMS s techniques, training of personnel, and recording of ordnance recovered were thorough and accurate. The project manager said in his report that he had personally seen unexploded U.S. ordnance on the battlefield. The mine database developed by CMS to record the location of land mines, the project manager believed, was extremely useful to the U.S. soldiers working in that area. We interviewed several former employees of CMS to obtain their views on these issues. All of those we interviewed were retired senior U.S. officers and noncommissioned officers whose rank ranged from major general to sergeant first class. All but one were experienced in military ordnance and explosive ordnance disposal. They included the then-CMS president, the Kuwaiti on-site manager, and leaders of ground UXO disposal teams. They made two major points: (1) U.S. submunition UXO found in their sector was tactically employed, unexploded ordnance duds that had failed to explode as designed and could have been hazardous, meaning that if disturbed, the ordnance might have exploded, and (2) U.S. Gator, ADAM, and RAAM land-mine duds had not self-destructed as designed and were treated as hazardous. CMS explosives disposal personnel stated that they had personally experienced what they thought were Gator duds exploding on the battlefield in Kuwait, caused by no apparent triggering event, over a year after the Gulf War ended. CMS experts speculated that these detonations might have been caused by the extreme heat in a desert environment. DOD has been unable to explain the circumstances that caused the nearly 2,000 U.S. self-destruct land mine duds found in the CMS disposal sector of the Kuwaiti battlefield not to self-destruct. Several DOD land mine and explosive ordnance disposal experts speculated that these dud land mines could have resulted from (1) mines that had malfunctioned or had been misemployed; (2) greater-than-expected and reported dud rates; or (3) the use by U.S. forces of many thousands more scatterable land mines than DOD has reported having used. Some Army land mine-related officials discounted the accuracy of some data included in the CMS report. However, these officials did not provide us with any factual evidence supporting these views. Other DOD experts in explosive ordnance disposal confirmed in interviews that scatterable mine duds can exist after their self-destruct times have elapsed and that these duds may be hazardous. A DOD explosive ordnance disposal expert said that procedures for eliminating Gator duds specify that explosive ordnance disposal should be postponed for 22 days, and then the duds should normally be destroyed remotely by blowing them up in place. The 22-day period is calculated by adding a 50-percent safety factor to the maximum possible self-destruct period of 15 days. Explosive ordnance disposal personnel thus attempt to reduce the possibility of a munition detonating or self-destructing while they are near it. DOD did not provide us with records to show the results of reliability testing for ADAM, RAAM, or Gator land mines done prior to the Gulf War or any safety-of-use messages that might have been in effect for these or other U.S. land mines that were in U.S. stockpiles at that time. 
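The 22-day standoff period described above follows from the stated safety factor (our arithmetic, shown only to make the derivation explicit):

\[
15 \text{ days (maximum self-destruct period)} \times 1.5 = 22.5 \text{ days,}
\]

which is evidently rounded to the approximately 22-day wait before explosive ordnance disposal personnel approach and remotely destroy Gator duds in place.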
Although DOD did not provide those pre-Gulf War records, it did provide some post-Gulf War test records that document reliability problems with eight of its self-destruct land mine systems. Specifically, testing showed that some land mines did not self-destruct at the selected times. For example, a July 2000 Army study of dud rates for ammunition reports that the submunition dud rate for RAAM land mines with short duration fuzes is over 7 percent, and the dud rate for RAAM land mines with long duration fuzes is over 10 percent. In an Ammunition Stockpile Reliability Program test for the ADAM, the Army suspended one lot because it failed. In a test for the Volcano system, 66 out of 564 land mines failed the test. Among the failures were 1 hazardous dud (meaning that it could explode), 24 nonhazardous duds (meaning that they had not armed), 6 mines that detonated early, and 1 mine that detonated late. In another case, DOD testing of the Selectable Lightweight Attack Munition (SLAM) land mine showed that it also did not destruct at the selected time. While this problem was investigated, SLAM use was suspended and a safety-of-use message was put into effect advising personnel never to approach an M2 SLAM that has been armed and, in training, to assure that it can be detonated if it fails to go off as intended. According to DOD, the same self-destruct and self-deactivation design has been used in all U.S. mines since 1970. Because of this design similarity, it is possible that U.S. self-destruct land mines could be subject to similar failures. Failures of self-destruct land mines that are induced by extremes in temperature and other variations in environmental conditions are well-documented in service field manuals and after-action reports. Field manuals state that the reliability of self-destruct land mines degrades when they are employed on sand, vegetation, hillsides, snow, or hard surfaces. Also, self-destruct land mines have reportedly reduced effectiveness on hard surfaces such as concrete and asphalt. They break apart and can easily be seen. Also, the high detectability of scatterable mines on bare and lightly covered surfaces permits the enemy to seek out unmined passageways or pick a way through lightly seeded areas. An Army document states that FASCAM must be covered by either observation or fire, since FASCAM minefields are surface laid and an undisturbed enemy could breach those obstacles quickly. FASCAM is not suitable for use in road interdiction due to its tendency to malfunction on hard surfaces. In snow, self-destruct land mines may settle into the snow at unintended angles, causing their antihandling devices to prematurely detonate them. In deep snow, self-destruct land mines are considered ineffective, and at least 40 percent of their blast is smothered. Soft sand, mud, or surface water can have similar effects. During the Gulf War in particular, Marines found that in the constantly blowing and shifting sand, surface mines became buried, and buried mines came to the surface. Slope or unevenness of the terrain may also have an adverse impact on self-destruct land mines. Specifically, between 5 and 15 percent of scatterable mines come to rest on their edges when deployed. RAAM and ADAM land mines must come to rest and stabilize within 30 seconds of impact, or the submunitions will not arm. Very uneven terrain such as ground covered by vegetation or rocks also may prevent the ADAM or Gator trip wires from deploying properly. Gator testing indicates that various reliability problems can increase dud rates. 
For example, in 58 tests, seven submunition land mine dispenser failures were observed, reducing the reliability rate of the dispensers to 88 percent. Of the submunition mines delivered, 99 percent survived ground impact. Of those, 97 percent of the antitank mines armed, and 95 percent of the antipersonnel mines armed. Various other problems can affect a mine s explosion. For example, one antitank mine did not explode when triggered, but it did activate when it was picked up and shaken. <4.2.3. Nonland-Mine Submunitions Also Had Higher Dud Rates Than Expected> During the Gulf War, accumulations of thousands of U.S. nonland-mine submunition duds on the battlefield created unintended de facto minefields. This problem was exacerbated by dud rates for these submunitions that appear to have been higher than the 2- to 4-percent submunition dud rate that DOD had previously reported. In a study of UXO issues, the Army identified an estimated 8-percent overall dud rate for submunitions. Another Army document said that an explosive ordnance disposal (EOD) commander estimated that an area occupied by the 24th Infantry Division during the war experienced at least a 15- to 20-percent dud rate for some Army submunitions. The document stated that An unknown amount was covered by sand suggesting an even higher rate. EOD personnel estimated that the dud rate for Air Force submunitions was 40 percent for one area. They commented that these submunitions did not function well in soft sand. In addition, DOD reported that at the time of the Gulf War, over half of the 133 Multiple Launch Rocket System (MLRS) submunition lots in inventory exceeded the Army s 5-percent dud-rate goal. Each Multiple Launch Rocket System contains 644 M77 submunitions. One DOD document stated that the dud rate for the M77 for the Gulf War ranged from 10 to 20 percent. U.S. ammunition stockpile sample testing also indicated that DOD has experienced past problems with submunition reliability rates. For example, in 1990, testing of artillery-delivered nonland-mine submunitions identified two lots that had duds in excess of 40 percent. According to a testing document, one way to compensate for this high dud rate is to increase the quantity fired. Instructions contained in the testing document were to Notify the user of the increase in submissile defect rate so that he can make adjustments in the tactical employment plans. The July 2000 Army study of dud rates for ammunition reports that the dud rate for artillery-fired M42/46 submunitions is over 14 percent. Like land mines, nonland-mine submunitions experience higher failure rates in various environmental conditions. According to an Army field manual, about 50 percent of the submunitions that fail to detonate are armed and hazardous. Firing them into mountainous areas or uneven terrain further increases the dud rate. The effectiveness of these rounds also decreases in snow, water, marshy areas, mud, vegetation, and soft sand. According to one DOD document, the improved conventional munitions used, including dual-purpose improved conventional munitions, and CBUs, experienced a high dud rate and caused obstacles for maneuvering forces. Units perceived the dud rates as considerably greater than the 2-4 percent anticipated, creating a dud minefield. The document continued that because the dud rates were too high, some maneuver commanders hesitated to use submunition weapons, especially if they believed that their units would move through the area later. 
Hazardous dudfields caused delays in movement on the battlefield, and high winds and shifting sands often covered many duds. According to this report, This became especially dangerous for high hazard missions such as refueling operations. In one case, the 1st Cavalry Division moved into Kuwait along the Wadi al Batin. Twenty miles of this route was saturated with both USAF submunitions (BLU97 and Rockeye) and Army M77 submunitions. . . . Maneuvering through this area was no problem for the tracked vehicles of the division. However, the 1st Cav selected the same route for its main supply route (MSR). Because the division s CSS consisted of mainly wheeled vehicles, EOD support was required. It took the 64th EOD and a British unit about five days to clear a two lane path through the area. In this case, the unit s progress was clearly slowed by the duds. Because Gulf War records are not always specific, it is not clear how frequently U.S. forces experienced problems in maneuvering through areas previously attacked by U.S. ordnance. However, available records indicate that such problems did occur to some degree and were an operational concern. In fact, DOD reported that in some instances ground movement came to a halt because units were afraid of encountering unexploded ordnance. Moreover, Army officials reported that, in the case of the M77 submunitions, the Army believed that the weapon would most likely be used against the Soviet threat in Europe, where U.S. troops would probably be in a defensive position. Therefore, U.S. soldiers were not expected to occupy submunition-contaminated areas. <4.3. Land Mine and Dudfield Reporting, Recording, and Marking Problems Created Fratricide and Mobility Concerns> During the Gulf War, the placement of self-destruct land mines was not always reported, recorded, or marked when appropriate. This situation was exacerbated by the possibility that self-destruct land mines did not always self-destruct as designed after their preset periods of time. Consequently, safety issues involving Gulf War self-destruct land mines, as well as other submunitions, focused on the potential for fratricide resulting from U.S. forces unknowingly maneuvering into areas where scatterable land mines had been employed but had not yet self-destructed. Shortly after the Gulf War, one DOD fact sheet reported that DOD s joint procedures for coordinating the use of air-delivered mines had not been widely disseminated. Further, according to the fact sheet, the procedures were outdated with respect to the rapid mobility of the modern Army. Thus, the warning information such as the locations and self-destruct timing durations was next to impossible to obtain and pass to ground component commanders. According to the document, this situation dramatically increased the probability of friendly fire casualties. The Army s Field Manual on Mine/Countermine Operations states the importance of such coordination: Because SCATMINEs [scatterable mines] are a very dynamic weapon system, great care must be taken to ensure that proper coordination is made with higher, adjacent, and subordinate units. To prevent friendly casualties, all affected units must be notified of the location and duration of scatterable minefields. Gulf War records include numerous reports indicating that scatterable minefields were employed in locations that were not reported to maneuver commanders. 
For example, one DOD report stated that neither the Air Force nor the Navy could accurately track the location or duration of Gator minefields. An Army after-action report stated that the Air Force flew over 35 GATOR missions (the exact number is not known) without reporting or recording the missions. According to this report, the result was that "during the ground offensive, units found themselves maneuvering in GATOR minefields without any knowledge of their existence." Another Army after-action report stated that some friendly Gator scatterable Air Force-delivered minefields were encountered in Iraq. The report highlighted the lack of a scatterable minefield self-extraction capability for units to avoid fratricide. A DOD fratricide lessons-learned document noted that casualties from friendly minefields were a major problem due to the lack of coordination, failure to disseminate obstacle plans, and failure to report the location of mines throughout the chain of command. Another Army after-action report attributed fatalities to the failure to mark hazardous areas. According to this report, "In many cases GATOR minefields and large areas which contained DPICM [dual-purpose improved conventional munitions] and CBU duds were left unmarked due to the lack of a fast and simple method for marking hazardous areas." After-action reports also cited planners' ignorance of the capabilities, limitations, and reporting, recording, and marking requirements of U.S. scatterable mine systems, as well as a lack of training regarding unexploded ordnance, as the causes of fatalities. Tracking nonland-mine dudfields presented similar concerns. A case in which one U.S. unit had moved through an area where another U.S. unit had earlier dropped cluster munitions is presented in a historical account of the Gulf War written by a retired Army lieutenant general. According to this account, a U.S. Army 101st Airborne Division aviation battalion traversed an area that had previously been seized by the U.S. Army VIIth Corps, which had fired cluster munitions. The battalion's commander cited a case in which one of his soldiers was injured when he stepped on a cluster munition. Keeping track of DPICM-dudded areas, said the commander, was complicated by the fact that one Corps moved into another Corps' area. Senior U.S. Gulf War commanders were aware of the incidence of fratricide from unexploded CBU, dual-purpose improved conventional munitions, and other ordnance. For example, one U.S. Army artillery general sent a safety message that read, "In recent days I have received numerous reports of soldiers being injured and killed by duds. . . . I am firmly convinced that each case could have been averted. Every soldier must be warned. . . ." According to one DOD official, the main reason hazardous dudfields were not always reported or marked was that doctrine did not require commanders to always report or mark nonland-mine hazard areas, as is required for minefields. However, DOD has noted, "Although UXO is not a mine, UXO hazards pose problems similar to mines concerning both personnel safety and the movement and maneuver of forces on the battlefield." <4.4. DOD Has Recognized the Need for Action Related to Land Mine and UXO Concerns> According to after-action, lessons-learned, and other reports, DOD and the services recognize the nature and extent of the reported concerns, their implications for fratricide and battlefield maneuver, and the need to act on concerns about land mines and other submunition UXO. 
According to an Army after-action report, "The large amount of UXO found in Iraq and Kuwait caught Allied forces by surprise. Lessons learned from past conflicts were not learned, leading to unacceptable casualties among our soldiers, allies, and civilians." These reports suggested that changes to address these concerns would increase submunition battlefield utility and effectiveness while simultaneously reducing casualties and increasing freedom of maneuver. In after-action reports, a number of actions were identified to improve the safety of troops and their mobility through land mines and other employed submunitions. These included, among others, that DOD replace the current conventional land mines with modern, safer ones; add a feature to scatterable land mines that would allow them to be turned on and off, giving the land mines a long-term static capability and providing U.S. commanders with the ability to create cleared lanes for friendly passage when and where needed; develop submunitions with lower dud rates and develop self-destruct mechanisms for nonland-mine submunitions; consider the magnitude and location of UXO likely to be on the battlefield when deciding the number and mix of submunitions, precision-guided munitions, or other munitions to use and, when planning maneuver operations, avoid dudfield hazard areas or breach them with troops inside armored vehicles; develop training aids such as manuals and working models of U.S. scatterable mines to provide service members with the ability to recognize U.S. scatterable mines and other unexploded ordnance and the knowledge of the proper actions to take to safely avoid and/or deactivate/detonate explosive submunitions and to safely extract themselves from minefields or dudfields; and establish and standardize procedures for the reporting, recording, and, when appropriate, marking of concentrations of submunition bomblets as hazard areas. DOD has reported a number of actions that relate to these land mine and UXO concerns. These actions are summarized in appendix IV. Because it was beyond the scope of this report, we did not evaluate DOD's progress in these areas. <5. Agency Comments and Our Evaluation> In its comments on a draft of this report, DOD stated that it believes the report is flawed because it makes assertions and speculations that are not based on fact and because we used unreliable or unrelated data. In particular, DOD made the following main points: (1) our report implies that U.S. casualties caused by land mines were higher than DOD records show; (2) our report relied heavily on the report by CMS, Inc., even though there are weaknesses and mistakes in the CMS report; (3) our report confuses issues dealing with unexploded ordnance and land mines; and (4) by focusing on the Gulf War experience as one case study, our report is not a credible analysis of land-mine utility and employment. We have made some changes to the report to clarify and elaborate on the issues DOD has raised, but we do not agree that the report is flawed or makes unsubstantiated assertions. In regard to each of DOD's comments, we offer the following response: Our report states that DOD records show no U.S. casualties attributed to U.S. land mines and that 81 casualties were attributed to Iraqi or other land mines. In addition, we point out that it is possible that some portion of the casualties in the "other" or "unknown" categories reported by DOD could have been caused by land mines; there is simply no way of knowing. 
This is a statement of fact, not an assertion that casualties were greater than reported. As we gathered data on Gulf War casualties, our service points of contact worked with us to ensure that we had the most complete information on this issue that was available. Some records were ambiguous and/or incomplete. However, DOD officials who provided us with this data agreed that our interpretation of the records was accurate. Much of DOD's concern about unreliable data stems from our use of the report by CMS, Inc., on UXO cleanup of the battlefield. Most of our discussion of the CMS report is in the section addressing DOD's lessons learned from the Gulf War. Our use of CMS data in that section corroborates in most cases the lessons learned contained in DOD after-action reports. While DOD claims that the CMS report contained inaccuracies, DOD did not provide any data to challenge the main message of the CMS report, which was that a very large number of U.S. land mine and cluster munition duds were found on the Kuwaiti battlefield. In fact, a DOD study that discusses the magnitude of the unexploded ordnance problem and that calculates the relative cost of cleaning up the battlefield compared to retrofitting or reprocuring U.S. submunitions with self-destruct fuzes in order to lower dud rates uses the same CMS data we cite in our report. In its 2000 report to Congress, DOD uses the results of these calculations to discuss the cost and feasibility of retrofitting the Army's ammunition stockpile. UXO is discussed in our report from two standpoints. First, casualty data on the causes of casualties cannot always distinguish between a land mine and other types of UXO, so we believed it was important to discuss both to provide a proper context. Second, DOD's own after-action reports on lessons learned discuss the problems of unexploded ordnance in terms of both land mines and cluster munitions, so our discussion of land mines needs to be in this overall UXO context. We have tried throughout the report to make clear distinctions between land mines and other ordnance, and we have made further clarifications as a result of DOD's comments. Lastly, we recognize that this report focuses exclusively on the Gulf War; this was the agreed-upon scope of our work as discussed with our congressional requester, and this is stated in the objectives and scope and methodology sections of our report. As such, we agree that it is not a comprehensive analysis of the utility of land mines in modern warfare; it was never intended to be. As our report makes clear, we do not make any conclusions or recommendations in this report. Nevertheless, we believe the report provides important historical context: the Gulf War was the largest U.S. conflict since Vietnam, and both sides in the battle made use of land mines. Unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Chairmen of the House and Senate Committees on Armed Services; the Chairmen of the House and Senate Committees on Appropriations, Subcommittees on Defense; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Commandant of the Marine Corps. We will also make copies available to other congressional committees and interested parties on request. In addition, the report will be available at no cost on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please call me at (757) 552-8100 or e-mail me at [email protected]. Key staff who contributed to this report were Mike Avenick, William Cawood, Herbert Dunn, M. Jane Hunt, Jim McGaughey, and Bev Schladt. Appendix I: Current U.S. Land Mine Inventory According to DOD and service data, the current DOD land-mine stockpile contains about 18 million land mines: over 2.9 million nonself-destruct land mines and over 15 million self-destruct land mines. The Army owns the vast majority of the nonself-destruct land mines, including over 1.1 million M-14 and M-16 mines (see fig. 6 in app. II). The Marine Corps has a relatively small number of these mines and has no M-14 land mines. The Air Force and the Navy stock no nonself-destruct land mines. Of the over 15 million self-destruct land mines in the U.S. stockpile, over 8.8 million are antipersonnel, and about 6.2 million are antitank land mines. Artillery-fired ADAM antipersonnel land mines (over 8 million) and RAAM antitank land mines (over 4 million) are stocked mainly by the Army but also by the Marine Corps. (See table 7 and fig. 5 in app. II.) The DOD land mine stockpile includes over 150,000 mixed land-mine dispensers, which contain a mixture of both antipersonnel and antitank land mines. Altogether, these mixed land-mine dispensers contain over 2 million land mines, of which over 400,000 are antipersonnel land mines and over 1.6 million are antitank land mines. (See table 8.) The services report that land mine types are mixed in three dispenser systems: the Gator, the Volcano, and the Modular Pack Mine System. For example, the Air Force and the Navy stockpile the Gator air-delivered CBU, which is one type of mixed land mine dispenser. The two services together have almost 14,000 CBU dispensers, which contain nearly 1.2 million land mines. The Army stocks over 134,000 Volcano mixed dispensers, which contain over 800,000 antipersonnel and antitank land mines. Table 9 contains the total current U.S. inventory of land mines by mine type and common name; self-destruct capability; dispenser type, if any; service that maintains them; and quantity. Appendix II: U.S. Land Mines Available for Use in the Gulf War Figures 5 and 6 illustrate types of land mines that were in the U.S. inventory and available for use during the Gulf War. Figure 7 shows the M-18 Claymore antipersonnel land mine. DOD has stated that it is employed in only the command-detonation mode and therefore is defined not to be a land mine. Army Field Manual 20-32 alternately calls the M-18 Claymore a land mine and a munition. See appendix IV for DOD's statements. Table 10 cites the U.S. land mines by mine type and common name and by service that were available and used during the Gulf War. Appendix III: U.S. Gulf War Casualties by Service Appendix IV: DOD-Reported Actions That Relate to Land Mine and UXO Concerns DOD has reported a number of actions that are related to the land-mine and unexploded ordnance concerns raised in Gulf War after-action and lessons-learned reports. These actions fall into three areas: (1) developing antipersonnel land-mine alternatives and more capable and safer self-destruct land mines, (2) revising doctrine and procedures to better address hazardous submunition dudfields, and (3) increasing ammunition reliability and reducing dud rates. DOD-reported actions in these areas are described below. 
However, because it was beyond the scope of this report, we did not independently assess DOD's progress in these areas. <6. Developing Antipersonnel Land-Mine Alternatives and More Capable and Safer Self-Destruct Land Mines> Presidential directives establish and direct the implementation of U.S. policy on antipersonnel land mines. Presidential Decision Directive 48 states that the United States will unilaterally undertake not to use, and to place in inactive stockpile status with intent to demilitarize by the end of 1999, all nonself-destructing antipersonnel land mines not needed for (a) training personnel engaged in demining and countermining operations and (b) defending the United States and its allies from armed aggression across the Korean demilitarized zone. The Directive also directs the Secretary of Defense to, among other things, undertake a program of research, procurement, and other measures needed to eliminate the requirement for nonself-destructing antipersonnel land mines for training personnel engaged in demining and countermining operations and to defend the United States and its allies from armed aggression across the Korean demilitarized zone. It further directs that this program have as an objective permitting both the United States and its allies to end reliance on antipersonnel land mines as soon as possible. Presidential Decision Directive 64 directs the Department of Defense to, among other things, (1) develop antipersonnel land mine alternatives to end the use of all antipersonnel land mines outside Korea, including those that self-destruct, by the year 2003; (2) pursue aggressively the objective of having alternatives to antipersonnel land mines ready for Korea by 2006, including those that self-destruct; (3) search aggressively for alternatives to our mixed antitank land mine systems; (4) aggressively seek to develop and field alternatives to replace nonself-destructing antipersonnel land mines in Korea with the objective of doing so by 2006; and (5) actively investigate the use of alternatives to existing antipersonnel land mines, as they are developed, in place of the self-destructing/self-deactivating antipersonnel submunitions currently used in mixed antitank mine systems. In April 2001, DOD reported to the Congress on its progress in meeting the objectives of Presidential Decision Directives 48 and 64. Although DOD has pursued programs to develop and field systems to replace land mines and has plans to spend over $900 million to do so, it reported to us in May 2002 that it will not be able to meet the dates established in Presidential Decision Directives 48 and 64. Begun in 1997 and led by the Army, DOD's Antipersonnel Landmines Alternative program is aimed at producing what DOD calls a Non Self-Destruct Alternative (NSD-A). According to the program office, however, DOD does not now anticipate that it will be able to field this alternative system by the presidential goal of 2006. The alternative system, which DOD expects to cost over $507 million, is now on hold pending a decision on whether to include a mechanism that would allow a command-controlled man-in-the-loop feature to be turned off so that unattended mines could remain armed and detonate on contact. In response to the June 1998 Presidential Decision Directive 64, DOD has also been pursuing alternatives to pure antipersonnel land mine systems to end the use of all antipersonnel land mines outside of Korea by 2003 and in Korea by 2006. 
These efforts are being led by the Army, the Defense Advanced Research Projects Agency, and the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics). The program office indicated that the Army-led project to end the use of all pure antipersonnel systems outside Korea by 2003 by fielding artillery- fired mixed land mine ammunition, budgeted at about $145 million, might now be discontinued. A second effort, budgeted at $24 million and led by the Defense Advanced Research Projects Agency, is to seek long-term alternatives for mixed land mine systems. One concept under development is the self-healing minefield, which does not require antipersonnel land mines to protect antitank land mines because the antitank mines in the system are able to independently hop around the battlefield to intelligently redistribute themselves in response to breaching attempts. This system is not expected to be fielded before 2015. A third effort, budgeted at about $230 million and led by the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), is aimed at replacing all U.S. mixed land mine systems by removing the antipersonnel land mines in them. These mixed systems include the Modular Pack Mine System, the Volcano, and the Gator. At present, DOD does not expect any of these alternatives to be fielded by 2006. Although DOD has numerous land-mine- related program activities underway, it has not reported to us that it has identified the land mine alternative concepts or systems or the specific land-mine programs that it plans to develop or procure and field as its next generation of land mines or land mine alternatives, which would comply with presidential directives and meet DOD s military requirements. Because it was beyond the scope of this report, we did not assess DOD s progress in these areas. <7. Revising Doctrine and Procedures to Better Address Hazardous Submunition Dudfields> Since the Gulf War, DOD and the services have updated their manuals and procedures dealing with unexploded ordnance to increase the attention paid to reporting and tracking possibly hazardous areas. These revisions are intended to improve the integration of UXO-related planning into military operations and provide improved procedures for the services to use when operating in a UXO environment. However, DOD has provided to us no manuals that require combat commanders to always report and track all potential hazardous submunition dudfields. Instead, commanders are allowed to determine when reporting, tracking, and marking of potentially hazardous submunition dudfields are required. DOD s post-Gulf War UXO manuals increase attention to procedures for operations in a UXO environment. DOD s guidance is based on Gulf War lessons learned: Experience from Operation Desert Storm revealed that a battlefield strewn with unexploded ordnance (UXO) poses a twofold challenge for commanders at all levels: one, to reduce the potential for fratricide from UXO hazards and two, to minimize the impact that UXO may have on the conduct of combat operations. Commanders must consider risks to joint force personnel from all sources of UXO and integrate UXO into operational planning and execution. DOD s manuals conclude that Although UXO is not a mine, UXO hazards pose problems similar to mines concerning both personnel safety and the movement and maneuver of forces on the battlefield. 
DOD's manuals describe the UXO problem as having increased in recent years: Saturation of unexploded submunitions has become a characteristic of the modern battlefield, and the potential for fratricide from UXO is increasing. According to DOD, the probability of encounter is roughly equal for a minefield and a UXO hazard area of equal density; the lethality of the UXO hazard area, however, is lower. DOD lists three Army and Marine Corps systems as causes of UXO: the Multiple Launch Rocket System (MLRS), the Army Tactical Missile System (ATACMS), and the cannon artillery-fired dual-purpose improved conventional munition (DPICM). The manuals warn that, based on the types of ammunition available for these weapons in 1996, every MLRS and ATACMS fire mission and over half of the fire missions executed by cannon artillery produce UXO hazard areas. With a 95-percent submunition reliability rate, a typical fire mission of 36 MLRS rockets could produce an average of 1,368 unexploded submunitions. Air Force and Navy cluster bomb units (CBUs) contain submunitions that produce UXO hazard areas similar to MLRS, ATACMS, and cannon artillery-fired DPICM submunitions. In its post-Gulf War manuals, DOD's guidance includes recommended methodologies for use by the services for planning, reporting, and tracking to enhance operations in a UXO-contaminated environment. Of primary concern to DOD is the prevention of fratricide and the retention of freedom of maneuver. DOD's manuals state that U.S. or allied casualties produced by friendly unexploded submunitions may be classified as fratricide. In planning wartime operations, the guidance suggests that commanders be aware of hazardous areas and assess the risk to their operations if their troops must transit these areas. Such planning is necessary for any type of mission, regardless of the unit. Without careful planning, according to the manuals, commanders could find it difficult to maintain the required operational tempo. Planners should allocate additional time for the operation if a deliberate breach or a bypass of a UXO hazard area is required. Commanders should immediately report locations where unexploded submunitions have been or may be encountered. According to the manuals, "Immediate reporting is essential. UXO hazard areas are lethal and unable to distinguish between friend and foe." After reporting hazardous areas, commanders should carefully coordinate with other units to prevent the UXO from restricting or impeding maneuver space while at the same time decreasing fratricide. Such areas should be accurately tracked and marked. When describing the need for improved procedures, DOD's UXO manuals state, "Currently no system exists to accurately track unexploded submunitions to facilitate surface movement and maneuver." DOD now highlights staff responsibilities for joint force planning, reporting, tracking, and disseminating UXO hazard area information and tactics, techniques, and procedures for units transiting or operating within a UXO hazard area. For example, the joint force engineer is responsible for maintaining the consolidated minefield records and historical files of UXOs, minefields, and other obstacles. The manuals conclude that, "Properly integrated, these procedures will save lives and reduce the impact of UXO on operations." Some of the suggested procedures are as follows: Coordination between component commanders and the joint force commander may be required before the use of submunitions by any delivery means. 
Units should bypass UXO hazard areas if possible. When bypassing is not feasible, units must try to neutralize the submunitions and scatterable mines. Combat units that have the assets to conduct an in-stride breach can do so. Extraction procedures resemble in-stride breach or clearing procedures. Dismounted forces face the greatest danger of death or injury from UXO. Unexploded ordnance is a significant obstacle to dismounted forces. Dismounted forces require detailed knowledge of the types and locations of submunitions employed. The chance of significant damage to armored, light armored vehicles, and other wheeled armored vehicles is relatively low. Personnel being transported by unarmored wheeled vehicles face nearly the same risk to UXO as dismounted forces. The protection afforded by unarmored wheeled vehicles is negligible. Air assault and aviation forces are also at risk from UXO. Aircraft in defilade, flying nap-of-the-earth or in ground effect (hovering) are vulnerable to submunitions. Certain submunitions are sensitive enough to function as a result of rotor wash. DOD has issued manuals that alert U.S. forces to the threat of UXO and identify procedures to mitigate risks. For example, Field Manual 20-32 states that Mine awareness should actually be entitled mine/UXO awareness. If only mines are emphasized, ordnance (bomblets, submunitions) may be overlooked, and it has equal if not greater killing potential. Despite this recognition, DOD officials have not indicated to us that they plan to require commanders to report and track all potential hazardous nonland-mine submunition dudfields and to mark them when appropriate, as is now required for scatterable submunition minefields. Because it was beyond the scope of this report, we did not assess DOD s post-Gulf War implementation of doctrinal and procedural measures to minimize UXO-caused fratricide, maneuver limitations, and other effects. <8. Increasing Ammunition Reliability and Reducing Dud Rates> In 1994, the Army formed an Unexploded Ordnance Committee after the commanding general of the Army s Training and Doctrine Command expressed concern about the large number of submunition duds remaining on the battlefield after the Gulf War. The commanding general sent a message to the Army s leadership that stated, This is a force protection issue. Based on number of submunitions employed during ODS [Operation Desert Storm], dud rate of only two percent would leave about 170K-plus unexploded Army submunitions restricting ground forces maneuver. Add in other services submunitions and scope of problem mushrooms . Need to reduce hazards for soldiers on future battlefields from own ordnance. As one of the Army s efforts to reduce the dud rates of these submunitions, the commander stated that all future requirements documents for submunitions should state that the hazardous dud rate should be less than 1 percent. The committee s work also resulted in calculations of the cost of retrofitting or replacing the Army s submunition stockpile to lower hazardous dud rates and the relative costs of cleaning UXO from a battlefield. The Army estimated in 1994 that the cost would be about $29 billion to increase submunition reliability by retrofitting or replacing submunitions to add self-destruct fuzing for the nearly 1 billion submunitions in the Army stockpile. In a different estimate in 1996, the Army estimated the cost to retrofit the stockpile to be $11-12 billion. 
The Army also estimated lesser costs to retrofit or procure submunitions with self-destruct fuzing for only those munitions most likely to be used, including those in unit basic ammunition loads and pre-positioned ships. These Army cost estimates for equipping Army submunitions with self-destruct fuzing do not indicate that they include costs to similarly equip Air Force, Marine Corps, and Navy submunitions. Using actual CMS, Inc., costs to clean up UXO from the CMS sector of the Kuwaiti Gulf War battlefield, the Army also estimated that the cost to reduce the dud rate by adding self-destruct fuzes for the submunitions actually used on a battlefield was comparable to the cost to clean up duds left by unimproved submunitions. The Army further recognized that, while the costs of reducing and cleaning up duds may be similar, the detrimental battlefield fratricide and countermobility effects of duds also need to be considered, as well as humanitarian concerns. In 1995, DOD reported that its long-term solution to reduce UXO is the ongoing effort to incorporate self-destruct mechanisms in DOD's high-density munitions, which would limit further proliferation of unexploded ordnance on the battlefield. DOD called the UXO detection and clearance problem one of "enormous magnitude." DOD has reported that it is taking actions to increase land mine and submunition reliability rates and reduce dud rates. In a 2000 report to Congress, DOD summarized its overall approach to addressing UXO concerns. DOD stated in that report, "An analysis of the UXO problem concluded that UXO concerns are viable and, using existing weapons, the potential exists for millions of UXO." The report further stated that the majority of battlefield UXO will result from submunitions that are not equipped with self-destruct features and that these submunitions pose the greatest potential for UXO hazards. Importantly, DOD's approach to ammunition reliability improvement is to emphasize adding reliability to future procurements rather than fixing the existing stockpile. According to DOD's 2000 report to Congress, "The Department does not plan to retrofit or accelerate the demilitarization of its current inventory of weapons containing submunitions that pose UXO hazards. Notwithstanding, the Department will monitor the Service submunition development programs to make sure that every effort is taken to develop a mechanism within the submunition that will increase its overall reliability, thus reducing the potential for UXO." The report went on to state that DOD will also monitor future procurement programs to ensure that reprocured weapons that contain submunitions are improved to increase their overall reliability. In addition to DOD actions aimed at controlling the UXO problem, the services have a number of procurement-related efforts in place to reduce or eliminate potential UXO from new purchases of ammunition. For example, in its 2000 report to Congress, DOD states, "The Army is in the process of producing new weapons that contain self-destruct mechanisms." In addition, the Army is considering developing requirements for new weapons systems aimed at controlling unexploded submunitions. The report also states that Air Force and Navy munitions procurements likewise address reliability concerns. 
DOD has concluded in this report that "while it has been deemed infeasible to attempt to retrofit legacy weapons systems with self-destruct features, new and future submunition-based weapon systems for the Services have or will incorporate self-destruct features to contain the UXO problem." In January 2001, the Secretary of Defense issued a memorandum directing the services to adhere to DOD policy on submunition reliability. This memorandum states, "Submunition weapons employment in Southwest Asia and Kosovo, and major theater war modeling, have revealed a significant unexploded ordnance (UXO) concern . . . . It is the policy of the DoD to reduce overall UXO through a process of improvement in submunition system reliability; the desire is to field future submunitions with a 99% or higher functioning rate." The memorandum accepted lower functioning rates under operational conditions due to environmental factors such as terrain and weather. The memorandum allows the continued use of current lower reliability munitions until superseded by replacement systems. Because it was beyond the scope of this report, we did not assess DOD's actions to increase ammunition reliability and reduce dud rates. Appendix V: Scope and Methodology At least in part because the Gulf War took place over a decade ago, DOD reported that many records on the U.S. use of land mines and U.S. casualties had been destroyed, were lost, were incomplete, conflicted with each other, or were archived and not easily accessed. Resulting inconsistencies and gaps in data provided to us by the services and DOD on U.S. Gulf War land mine use, casualties, and lessons learned required that we perform extensive cross-checking and comparisons to check facts and identify associated themes. To create a picture of what happened during the Gulf War, DOD assisted us in obtaining available records and documents from various DOD sources in many different locations. We relied heavily on original service casualty reports as well as service and DOD after-action and lessons-learned reports written soon after the Gulf War. At our request, the Army conducted a reevaluation of original Gulf War casualty data and arrived at more exact data on causes and circumstances of Army-reported casualties. Our resulting compilation of service data used in calculating U.S. usage of land mines, U.S. casualties, and lessons learned during the Gulf War is the most complete assembled to date for the topics in this report. DOD officials believe that the service-provided information on land mine usage and casualties shown in this report is as accurate as service records permit. DOD, the Joint Chiefs of Staff, and the services confirmed the accuracy of the information they provided us on casualties and land-mine use and the information included in DOD lessons-learned and after-action reports. To obtain information on land mine issues, we reviewed numerous reports and analyses of land mines by such organizations as the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics); the Center for Army Analysis; the National Academy of Sciences; Lawrence Livermore National Laboratory; the Army Training and Doctrine Command; and the Congressional Research Service. No one DOD or service office maintained complete records on the Gulf War, and existing DOD and service records were stored in various locations around the country. For example, the Headquarters of the U.S. 
Central Command, which had directed the war, retained no records of the war, and the services had no central repositories for the Gulf War documentation we sought. We therefore visited the following locations to obtain all available detailed descriptions of land mine systems, the doctrine governing their use, documents and records on Gulf War land mine usage and effectiveness, and historical records on the Gulf War: Office of the Project Manager for Mines, Countermine and Demolitions, and Close Combat Systems, U.S. Army Program Executive Office for Ammunition, Picatinny Arsenal, New Jersey; U.S. Army Communications-Electronics Command, Night Vision and Electronic Sensors Directorate, Fort Belvoir, Virginia; Headquarters, U.S. Central Command, MacDill Air Force Base, Florida; U.S. Army Engineer Center, Fort Leonard Wood, Missouri; U.S. Army Field Artillery Center, Fort Sill, Oklahoma; Naval Explosive Ordnance Disposal Technology Division, Indian Head, Maryland; Marine Corps History and Museums, Headquarters, U.S. Marine Corps, and Marine Corps Combat Development Center, Capability Assessment Branch, Quantico, Virginia; Army Center of Military History, Fort McNair, Washington, D.C.; and Air Force Headquarters, Washington, D.C. To determine the extent to which land mines and unexploded ordnance caused U.S. casualties, we gathered data from the services and consulted original casualty reports. Because DOD data was not sufficiently detailed to allow identification of land mine or related casualties, we used the services' more detailed data. In collaboration with service officials, we reconciled inconsistencies in order to identify the most authoritative data available for casualties. We visited or received information on Gulf War casualties from the following locations: Army Records Management Declassification Agency, Springfield, Virginia; Army Safety Center, Ft. Rucker, Alabama; U.S. Marine Corps Casualty Section, Quantico, Virginia; Army Casualty Office, Washington, D.C.; U.S. Air Force Personnel Center, Casualty Branch, Randolph Air Force Base, San Antonio, Texas; U.S. Navy Casualty Division, Millington, Tennessee; and Office of the Secretary of Defense's Directorate for Information Operations and Reports, Arlington, Virginia. Lessons-learned and after-action reports and documents on the Gulf War were similarly not available in a central location but rather were located in various service organizations and libraries. Therefore, to identify concerns expressed in these reports about the use of land mines and related unexploded ordnance issues, we visited and examined documents at the following locations: Center for Army Lessons Learned, Ft. Leavenworth, Kansas; Army Training and Doctrine Command's Analysis Center, Ft. Leavenworth, Kansas; U.S. Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, Maryland; U.S. Naval Historical Center, Washington Navy Yard, Washington, D.C.; U.S. Air Force Historical Research Agency, Maxwell Air Force Base, Alabama; Combined Arms Research Library, Ft. Leavenworth, Kansas; U.S. Air Force Headquarters, Washington, D.C.; and Marine Corps Combat Development Center, Quantico, Virginia. To identify U.S. policy on the U.S. use of land mines during the Gulf War, we interviewed or obtained documentation from DOD and service officials in Washington, D.C. 
These included officials from the Office of the Joint Chiefs of Staff, the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); Office of the Deputy Assistant Secretary for Peacekeeping and Humanitarian Assistance, Assistant Secretary of Defense (Special Operations and Low-Intensity Conflict); the Army Office of the Deputy Chief of Staff for Operations and Plans, Strategy, Plans and Policy Directorate; Office of the Deputy Chief of Staff for Logistics, Army Headquarters; and service headquarters officials of the Air Force, Marine Corps, and Navy. To obtain detailed information on the U.S. policy concerning the use of land mines during the Gulf War, we interviewed the U.S. commander-in-chief of all forces participating in the Gulf War. To obtain details on what ordnance was found on the battlefield after the Gulf War, we interviewed in person or by telephone seven former employees or officials of Conventional Munitions Systems (CMS), Inc. These persons were all retired U.S. military service members, ranking from major general to sergeant first class, and all but one had extensive experience in ordnance and explosive ordnance disposal. We confirmed with each CMS interviewee that they believed that the CMS data reported to the Army were accurate. We did not examine the evidence CMS used to prepare its report contracted by the Army. To discuss U.S. policy and legal issues related to land mines, we interviewed officials from the Department of State s Office of the Legal Adviser, Office of International Security Negotiations, and Office of Humanitarian Demining Programs. In addition, we discussed the major topics and themes in this report with an official from the State Department s Bureau of Political-Military Affairs. We conducted our review between June 2001 and September 2002 in accordance with generally accepted government auditing standards. Appendix VI: Comments from the Department of Defense The following are GAO s comments on the Department of Defense s (DOD) letter dated September 12, 2002. <9. GAO Comments> 1. We have deleted from the report the example of Gator land mine use against an aircraft on an airfield. 2. We have changed the report to clarify the fact that Scud transporters were targeted rather than the Scud missiles they carried. 3. In conducting our review, we consulted these and other reports, as we state in our objectives and scope and methodology sections. We cite the National Research Council s report in appendix IV. However, because it was beyond the scope of our report to evaluate land mine policy and program alternatives, which is the general subject of these reports, we do not discuss them in detail.
What GAO Found The utility of land mines on the modern battlefield has come into question in recent years, largely because of their potential for causing unintended casualties and affecting U.S. forces' maneuverability. These concerns were raised during the Persian Gulf War. U.S. land mines of all types--nonself-destructing and self-destructing, antipersonnel and antitank--were available for use if needed in the Gulf War from U.S. land mine stockpiles, which contained 19 million land mines. U.S. forces sent to the Gulf War theater of operations took with them for potential use over 2.2 million land mines. U.S. war plans included plans for the use of land mines if required by the tactical situation. According to Department of Defense (DOD) documents, no nonself-destructing or "dumb," land mines were used; and the reported number of self-destructing, or "smart," land mines used by the services totaled approximately 118,000. DOD did not provide information on the effect of U.S. land mine use against the enemy. According to U.S. service records, of the 1,364 total U.S. casualties in the Gulf War, 81, or 6 percent, were killed or injured by land mines. Concerns about land mines raised in DOD lessons-learned and other reports included the fear of fratricide and loss of battlefield mobility. These concerns led to the reluctance of some U.S. commanders to use land mines in areas that U.S. and allied forces might have to traverse.
<1. Background> Many federal agencies fund research to serve their goals and objectives. For example, NIH, the largest source of federal support for nondefense research, is the federal focal point for medical and behavioral research to help extend healthy life and reduce illness and disability. Each of the 27 institutes and centers that constitute NIH has an explicit mission focused on a particular disease, organ system, stage of development, or a cross-cutting mission, such as developing research tools. Other agencies, such as EPA, FDA, and FAA, support research, in part, to further scientific understanding that may in the future better inform their regulatory decisions. Nineteen offices within EPA conduct and/or support research to help carry out the regulatory aspect of the agency's mission to protect human health and the environment and to implement environmental laws. Similarly, FDA relies on research to help identify and assess risks and to serve as the basis for regulatory decisions about such issues as human and veterinary drugs, medical devices, and the nation's food supply. Finally, FAA, which enforces regulations and standards for the manufacture, operation, and maintenance of aircraft, conducts research to help ensure a safe and efficient system of air navigation and air traffic control. Federal research can be conducted by scientists in government laboratories, known as intramural research, or by scientists at universities, in industry, or at nonprofit organizations, known as extramural research. In fiscal year 2002, NIH, EPA, FDA, and FAA devoted a total of about $23 billion to intramural and extramural research. (See fig. 1.) Together, these four agencies accounted for about 50 percent of the federal funds devoted to research. Federal laws have created an environment conducive to a full range of joint ventures between government and industry, or between industry and universities, as well as among companies. Specifically, through collaboration, federal and nonfederal partners attempt to share the costs, risks, facilities, and expertise needed for research and to promote the movement of ideas and technologies between the public and private sectors. This cooperation between federal and private sector researchers may take many forms. Through informal cooperation, for example, federal agencies and industry may coordinate and share research agendas to prevent duplication of effort, or agency and private sector scientists may consult one another. Through formal cooperation, federal and nonfederal partners use written agreements, such as contracts or memorandums of understanding, to define the roles and responsibilities of each party. However, each type of arrangement differs in the extent of federal involvement in the research conducted under the agreement. Generally, work conducted under contracts is directed and overseen by federal agencies that do not participate in the work. In contrast, memorandums of understanding allow great flexibility in terms of participation by federal agencies and may also allow for sharing of resources or the funding of research by nonfederal partners. Congress may provide federal agencies the authority to accept gifts from external sources. For example, under the Public Health Service Act, certain agencies, such as NIH, may accept funds or nonmonetary gifts to support their research efforts or other agency functions. 
Under the act, donors may stipulate how agencies may use their gifts, for example, to only support research on a specific disease or condition, or they may allow the agency to use the gift for the benefit of any effort without stipulations. An agency s statutory authority to accept donations is called its gift acceptance authority. In 2001 and 2003, NIEHS and ORD, respectively, entered into research arrangements with ACC to solicit and fund extramural research proposals. These arrangements specified how research proposals would be solicited, reviewed, funded, and overseen. Specifically, under the NIEHS-ACC arrangement, ACC and NIEHS agreed to support a 3-year research program to study the effects on reproduction and development of exposure to chemicals in the environment. ACC provided a gift of $1.05 million to NIEHS to fund this research, and NIEHS contributed $3.75 million to the project. Using the combined funds, NIEHS awarded a total of 17 research proposals from among the 52 it received. The program ended in 2004. Under the ORD-ACC arrangement, ACC and ORD agreed to support and fund research, with the first solicitation for research proposals focusing on novel approaches to analyzing existing human exposure data. In response to this first announcement of funding availability, issued in July 2003, 36 research proposals were submitted. ORD funded four research proposals, for a total of about $1.7 million, and ACC funded two proposals, for a total of about $1 million. ORD and ACC separately funded the research proposals that each had selected under this arrangement because EPA does not have the authority to accept contributions from outside sources. Researchers could specify whether they wanted their proposals considered for funding solely by ORD or by either ORD or ACC. ACC is a nonprofit trade organization representing most major U.S. chemical companies. It represents the chemical industry on public policy issues, coordinates the industry s research and testing programs, and leads the industry s initiative to improve participating companies environmental, health, and safety performance. In 1999, ACC launched a $100 million research initiative to study the potential impacts of chemicals on human health and the environment and to help improve screening and testing methods. A primary goal of the initiative is to focus on projects or programs that might take advantage of work planned or conducted by EPA, NIEHS, and other laboratories to stimulate collaboration and/or to prevent unnecessary duplication. Individuals or organizations can have conflicts of interest that arise from their business or financial relationships. Typically, federal conflict-of- interest laws and regulations govern the actions of individual federal employees, including their financial interests in, and business or other relationships with, nonfederal organizations. Conflict-of-interest concerns about individual federal employees typically arise when employees receive compensation from outside organizations; such arrangements often require prior approval from the federal employer. When a federal agency enters into a relationship with, or accepts a gift from, a regulated company or industry, concerns may arise about the agency s ability to fulfill its responsibilities impartially. <2. 
NIEHS and ORD Used Broad Legal Authority to Support Their Arrangements with ACC> The statutory provisions that NIEHS and ORD relied upon to enter into their arrangements with ACC grant the agencies broad authority to collaborate with external organizations in support of research. Nothing in these statutes appears to prohibit either agency from entering into research arrangements with nonprofit organizations such as ACC. NIEHS used the authorities granted to NIH s institutes and centers under sections of the Public Health Service Act, as amended, to enter into its arrangement with ACC (sections 301 and 405). The act authorizes NIH and its institutes and centers to cooperate, assist, and promote the coordination of research into the causes, diagnosis, treatment, control, and prevention of physical and mental diseases. In its research arrangement with ACC, NIEHS cited sections of the act as the authority it relied on to enter into the arrangement. These sections enumerate the general powers and duties of the Secretary of Health and Human Services and the directors of the institutes and centers in broad terms, including the authority to encourage and support studies through grants, contracts, and cooperative agreements. Similarly, ORD relied on broad authorities granted to EPA under sections of the Clean Air Act, as amended; the Clean Water Act, as amended; and the Solid Waste Disposal Act, as amended, to enter into its research arrangement with ACC (sections 103, 104, and 8001, respectively). These sections authorize EPA to promote the coordination and acceleration of research relating to the causes, effects, extent, prevention, reduction, and elimination of pollution in the air and water, and from solid waste. These sections authorize the EPA Administrator and other EPA officials to cooperate with appropriate public and private agencies, institutions, organizations, and industry to conduct research and studies. <3. NIEHS and ORD Did Not Formally Evaluate, but Took Steps to Manage, the Potential for Conflicts of Interest in Their Arrangements with ACC> NIEHS and ORD did not formally evaluate the possibility that organizational conflicts of interest could result from their research arrangements with ACC because neither agency had policies requiring such evaluations. However, officials at both agencies took steps to manage potential conflicts that might arise during implementation of the arrangements. <3.1. NIEHS and ORD Did Not Formally Evaluate Potential Conflicts of Interest that Could Result from Research Arrangements with ACC> In 2001 and 2003, when they entered into arrangements with ACC, neither NIH nor EPA had specific policies requiring officials to formally evaluate potential conflicts of interest that could result from entering into such collaborative arrangements. As a result, neither NIEHS nor ORD conducted such evaluations. During negotiations with ACC on their research arrangements, NIEHS and ORD officials recognized the potential for organizational conflicts of interest, or at least the appearance of such conflicts. However, in light of the lack of policies on this issue, neither agency formally evaluated the potential for conflicts before finalizing their arrangements with ACC. Instead, officials told us, they informally evaluated the potential for conflicts of interest and intended to manage potential conflicts that might arise during implementation. To date, neither agency has developed any such policy guidance. <3.2. 
NIEHS and ORD Relied on Existing Research Management Processes to Help Mitigate Potential Conflicts of Interest> In implementing their arrangements with ACC, NIEHS and ORD used their general research management processes to help manage potential conflicts of interest. These processes are designed to help ensure the integrity of scientific research undertaken by these agencies. According to agency officials, these processes helped guard against undue influence of ACC by limiting ACC s participation in the selection, review, and oversight of agency-funded research conducted under the arrangements. For example: Developing research topics. Research priorities at both NIEHS and ORD were identified through routine agency planning processes that involved significant input from a range of stakeholders before the arrangements with ACC were finalized. In addition, NIEHS included research topics suggested by the National Research Council, a congressionally chartered scientific advisory body. Both NIEHS and ORD then worked with ACC to select the specific scientific topics that would become the focus of the research conducted under the arrangements. According to NIEHS and ORD officials, their arrangements with ACC did not change or influence the agencies research priorities. Because the research conducted under these arrangements supported the agencies existing research agendas, officials believe that the ACC arrangements helped them effectively leverage federal research dollars. Advisory council consultation. Both agencies have advisory panels that they routinely consult on matters related to the conduct and support of research, among other things. These consultations include public sessions that allow interested individuals, in addition to the panel members, to provide comments on the topics discussed. NIEHS obtained approval from its National Advisory Environmental Health Sciences Council before entering into the arrangement with ACC. ORD did not specifically consult its Board of Scientific Counselors regarding the agency s arrangement with ACC, but did seek input from the Board regarding the research priorities covered by the arrangement. Both advisory bodies were established under the Federal Advisory Committee Act and must comply with the requirements of the act as well as related regulations. Publicly announcing the availability of funds. Both NIEHS and ORD, in 2001 and 2003, respectively, announced the opportunity to apply for grant funds available under the arrangements with ACC throughout the scientific community. Both agencies announced the availability of funding on their Web sites and included detailed information on the research programs and how to apply for funds. Both agencies also posted announcements in publications that are commonly used to advertise the availability of federal funding. Specifically, NIEHS published an announcement in the NIH Guide to Grants and Contracts, and ORD published its announcement in the Catalog of Federal Domestic Assistance. In addition, both agencies sent announcements to relevant scientific and professional organizations and to interested scientists who had signed up for electronic notice of funding opportunities. ORD also published a notice in the Federal Register. By widely announcing the availability of funds, the agencies hoped to ensure the participation of many qualified researchers and to avoid the appearance of preferential treatment for specific researchers. 
Moreover, widely publicizing the availability of funds would help ensure the openness of the agencies research processes. However, the agencies differed in the clarity of their instructions regarding how information would be shared with ACC. For example, in the portion of the announcement labeled special requirements, NIEHS s announcement stated that applicants should, among other things, submit a letter allowing NIEHS to share their proposals with ACC. According to NIEHS this wording was not intended to be interpreted as a requirement but instead was intended to be a request. We believe that the language could have confused potential applicants about whether sharing information with ACC was required and could have dissuaded some qualified applicants from submitting proposals. In contrast, under the ORD-ACC arrangement, researchers were clearly advised that they could elect to have their proposals considered for funding by either ORD or ACC or solely by ORD. Applicants who did not want to share their proposals with ACC could elect to have their applications reviewed and considered solely by ORD. Determining completeness and responsiveness. Initially, NIEHS and ORD reviewed all submitted research proposals for compliance with administrative requirements. ACC did not participate in these reviews. At both agencies, research proposals judged incomplete were to receive no further consideration. NIEHS and ORD also had similar approaches for determining the responsiveness of the applications to the goals of the research program. At ORD, responsiveness was determined as part of the agency s completeness review and did not involve ACC. Similarly, at NIEHS, responsiveness was determined solely by agency officials. Although NIEHS s announcement stated that ACC would participate in the responsiveness review, NIEHS and ACC officials told us that ACC did not take part in this review. Peer review of research proposals. At both NIEHS and ORD, complete and responsive research proposals were independently peer reviewed for technical and scientific merit. According to officials, each agency followed its standard procedures for selecting experts to serve as peer reviewers and excluded representatives of ACC from serving as reviewers. At both agencies, only meritorious research proposals qualified for funding decisions. Both agencies also subjected these proposals to additional independent review. NIEHS s National Advisory Environmental Health Sciences Council reviewed qualified proposals, and ORD required other EPA staff to review research proposals that were judged excellent or very good to help ensure a balanced research portfolio responsive to the agency s existing research agenda. ACC convened its own technical panels to review qualified research proposals to ensure the relevancy of the proposals to the industry s research needs and to ensure that the proposals balanced its research portfolio. Making results available to the public. NIEHS and ORD required without input from ACC the results of the research funded under the arrangements to be made public. For example, according to agency officials, NIEHS and ORD required researchers to discuss their preliminary findings in periodic public meetings, and, once their projects were completed, both agencies required researchers to submit their results for publication in peer-reviewed scientific journals. In addition, NIEHS strongly encouraged researchers to present their results at professional conferences and workshops. 
Officials from both agencies agreed that publicizing the results of research conducted under the arrangements helped ensure that agency-sponsored research adhered to accepted analytic standards and was unbiased. <3.3. ORD Took Additional Steps that Officials Believe Helped Manage Potential Conflicts of Interest> In addition to the routine research management processes, discussed in the previous section, officials at ORD took further steps that they believe helped them manage the potential for conflicts of interest in their collaboration with ACC. Specifically: Research arrangement developed with public input. ORD publicly announced that it might collaborate with ACC and invited public comment on the terms and conditions of the proposed partnership. In addition, ORD invited public comment on the draft announcement of the opportunity to apply for funding. ORD officials told us that they believed an open and public process to define the terms of ORD s collaboration with ACC could help guard against real or perceived conflicts of interest. Membership of review panels. In addition to prohibiting ACC representatives from serving as expert reviewers, ORD did not allow employees of ACC member companies to serve on the peer review panels that evaluated research proposals for technical and scientific merit. ORD officials said this step helped minimize the perception that ACC or its members could play a role in evaluating the scientific merit of research proposals. <4. NIEHS Generally Complied with NIH s Gift Acceptance Policy, but the Policy Cannot Provide Assurance that Conflicts of Interest Are Evaluated and Managed> When accepting funds from ACC under the research arrangement, NIEHS officials complied with those sections of NIH s policy that guide the acknowledgement and administration of gifts. However, the policy s guidance on evaluating and managing potential conflicts is extremely broad, lacking clarity and consistency. Consequently, officials have wide discretion in deciding how to fulfill their responsibilities under the gift acceptance policy. Further, the policy does not require officials to document the basis of their decisions. As a result, the gift policy does not provide the public sufficient assurance that potential conflicts of interest between NIH and donor organizations will be appropriately considered. Specifically, NIH s gift acceptance policy outlines several steps that officials must take to acknowledge and administer gifts. NIEHS officials generally complied with these policy sections when accepting the gift from ACC. For example, NIEHS officials acknowledged the acceptance of ACC s gift in a timely manner, deposited the funds in government accounts, and used the gift only for the purposes stipulated by ACC. As the policy also requires, NIEHS obtained ACC s written agreement that any remaining funds could be used to further NIH s goals without additional stipulation. However, other policy sections are inconsistent or unclear about what actions officials must take to evaluate conflicts of interest when accepting gifts thereby affording officials wide discretion in carrying out their responsibilities. For example, one part of the policy in effect at that time and in subsequent revisions requires the approving official to use two assessment tools to evaluate conflicts of interest before accepting a gift, but another part of the policy states that the use of these tools is recommended rather than required. 
The Director of NIEHS, who had authority to accept the gift, said he was acutely aware that accepting the ACC money could pose the potential for real or apparent conflicts of interest. In light of his concerns, he spoke informally with the Acting NIH Director, senior NIEHS officials, NIH legal advisers, and senior officials from two external groups. Through these discussions and using his professional judgment, the NIEHS Director determined that accepting the ACC funds would not present a conflict of interest for NIEHS. When he decided to accept the ACC gift, the Director said that he was unaware of the assessment tools recommended by NIH s policy. However, he believes the steps he and other NIEHS officials took in accepting ACC s gift satisfied the gift acceptance policy regarding conflicts of interest. Given the lack of consistency in the policy sections that relate to conflicts of interest and the use of the assessment tools, it is difficult for us to determine whether the actions the director took complied with the NIH policy. Moreover, without documentation of his actions, we could not determine whether the steps he took were adequate to evaluate the potential for conflicts of interest. Furthermore, the policy in effect at that time and in subsequent revisions does not provide clear guidance on what type of coordination should occur between NIH offices in evaluating the potential for conflicts of interest when accepting a gift. For example, several NIEHS staff were concerned that the proposed ACC gift could result in an apparent conflict of interest and, consistent with NIH s gift policy, forwarded the written agreement to the NIH Legal Advisor s Office for review. However, the gift policy does not require staff to identify their concerns when seeking legal advice. According to these officials, in referring the agreement to NIH attorneys for review, they did not specifically request a determination of whether the gift would constitute a conflict of interest. As a result, the NIH attorneys conducted a general legal review of the gift and the proposed research arrangement, focusing primarily on the agency s legal authority to enter into the arrangement. NIH legal staff told us that they could have provided assistance on conflict-of-interest issues had they been notified that the program staff had such concerns, or if in their view, the gift or written agreement had contained clauses that were obviously illegal or contrary to NIH policy. If the policy had been clearer about how conflict of interest concerns are to be communicated to NIH attorneys, we believe the legal staff would have conducted a conflict-of-interest review. Finally, NIH s policy does not require officials to document how they have addressed conflict-of-interest concerns. Neither the NIEHS Director nor other senior NIH officials documented their consideration of potential conflicts of interest when accepting the ACC gift. The lack of documentation, coupled with the broad discretion resulting from the inconsistency and lack of clarity in the policy, allows officials to satisfy requirements with a wide array of actions, ranging from a formal evaluation to a highly informal one. <5. Research Arrangements Such as Those with the American Chemistry Council Are Not Widely Used> At NIH, we identified nine arrangements that were somewhat comparable to the ACC research arrangements, but we did not identify any similar arrangements at ORD, other EPA program offices, FDA, or FAA. 
None of the nonprofit partners in the nine research arrangements we found at NIH represents industry in the same direct manner that ACC represents the chemical industry. However, some of the nonprofit partners have either general corporate sponsorship or corporate sponsorship for specific events. For example, sponsors of the Parkinson s Unity Walk in 2004 included pharmaceutical companies. The sponsors helped defray operating expenses to ensure that all proceeds from the walk supported Parkinson s research. Likewise, the Juvenile Diabetes Research Foundation received corporate sponsorship from an airline company, manufacturers of soft drinks and household products, and others, none of whom had any material connection to the outcome of the research. One nonprofit partner is a corporation s philanthropic foundation. At NIH, we found a total of 11 institutes and centers either singly or with other institutes and centers that had entered into research arrangements with one or more nonprofit partners. Under the terms of four of the arrangements, NIH accepted gift funds from nonprofit partners to support the research described in the arrangements. In four other arrangements, when NIH institutes or centers lacked sufficient money to fund all the research proposals rated highly by peer review panels, they forwarded the research proposals to their nonprofit partner(s) for possible funding. (See table 1 for details on the NIH arrangements.) At EPA, none of the 16 program and regional offices we contacted identified any arrangements similar to the research arrangement between ORD and ACC. In addition, we did not identify any partnerships similar to the ACC research arrangement at FDA or at FAA. FDA officials we contacted said the agency had no research arrangements similar to the ACC arrangement with organizations that represent industry. Finally, FAA officials said that the agency had not entered into any research arrangements like the arrangements with ACC and generally did not use this type of collaborative arrangement to conduct extramural research. <6. Conclusions> Federally funded research advances scientific understanding and helps improve regulatory approaches to protecting human health and the environment. For both regulatory and nonregulatory agencies collaboration with external organizations is one mechanism to maximize the financial and intellectual resources available to federal agencies. However, collaboration, particularly with organizations that directly represent regulated industries, can raise concerns about conflicts of interest that could call into question the quality and independence of federally funded research. As a result, it is imperative that federal agencies ensure, before they enter into collaborative research arrangements with nonfederal partners, that they fully consider the potential for conflicts of interest. NIEHS and ORD relied on their general research management processes to minimize any potential conflicts of interest that might arise during implementation of their respective ACC arrangements. While these processes were appropriate for managing the arrangements, they were not specifically designed to address conflict-of-interest concerns and therefore cannot be considered adequate substitutes for formal conflict-of-interest evaluations. 
Consequently, without policies requiring officials at NIH and EPA to formally evaluate and manage potential conflicts of interest when they enter into collaborative arrangements such as those with ACC, neither agency can ensure that similar arrangements in the future will be systematically evaluated and managed for potential conflicts of interest. When accepting the gift from ACC, NIEHS officials believed their actions satisfied the conditions of the NIH gift acceptance policy for conflict of interest. However, because of both the wide discretion NIH's policy allows officials in deciding whether and how to evaluate conflicts of interest and the policy's lack of required documentation, there is little assurance of systematic evaluation of gifts that may present potential conflicts of interest for the agency. To allay concerns about the potential for conflicts of interest that may result from accepting gifts, officials should clearly document both their evaluation of the potential for conflicts of interest and the basis for their decisions to accept or reject a gift. <7. Recommendations for Executive Action> The Director of NIH and the Administrator of EPA should develop formal policies for evaluating and managing potential conflicts of interest when entering into research arrangements with nongovernmental organizations, particularly those that represent regulated industry. The Director of NIH should further revise the NIH gift acceptance policy to require NIH officials to evaluate gifts, particularly from organizations that represent regulated industry, for potential conflicts of interest and to document the basis for their decisions, including what, if any, steps are needed to manage potential conflicts. <8. Agency Comments and Our Evaluation> We provided EPA and NIH with a draft of this report for their review and comment. EPA neither agreed nor disagreed with our recommendation but provided technical comments that we have incorporated as appropriate. (See app. II.) NIH concurred with our recommendations and stated it would take steps to implement them. In addition, NIH emphasized that it is not a regulatory agency and suggested changes to the report to clarify its role. We have added language to clarify NIH's relationship with the regulated industry. NIH also provided technical comments that we have incorporated as appropriate. NIH's comments and our response are included in appendix III. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days after the date of this letter. At that time, copies of this report will be sent to the congressional committees with jurisdiction over the Environmental Protection Agency and the National Institutes of Health; the Honorable Stephen L. Johnson, Acting Administrator of EPA; the Honorable Elias A. Zerhouni, Director of NIH; and the Honorable Joshua B. Bolten, Director of the Office of Management and Budget. This report will also be available at no charge on GAO's home page at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IV. 
Objectives, Scope, and Methodology As requested by the Ranking Member of the Subcommittee on Environment, Technology and Standards, House Committee on Science, and the Ranking Member of the Subcommittee on Research, House Committee on Science, we determined the (1) legal authority the National Institutes of Health s (NIH) National Institute of Environmental Health Sciences (NIEHS) and the Environmental Protection Agency s (EPA) Office of Research and Development (ORD) used to enter into arrangements with the American Chemistry Council (ACC); (2) extent to which NIEHS and ORD evaluated and managed the possibility that conflicts of interest could result from their arrangements; (3) extent to which NIEHS complied with NIH s gift acceptance policy when accepting ACC s funds; and (4) extent to which similar research arrangements exist within other offices and programs within NIH and EPA, as well as other regulatory agencies. To determine the legal authorities NIEHS and ORD relied on to enter the research arrangements with ACC to solicit and fund extramural research, we reviewed the statutes cited in agency documentation related to the arrangements. For NIH, these authorities included sections 301 and 405 of the Public Health Service (PHS) Act, as amended (42 U.S.C. 241 and 284); and gift acceptance statutes contained in sections 231 and 405(b)(1)(H) of the PHS Act as amended (42 U.S.C. 238, 284(b)(1)(H)). For ORD these authorities included section 103 of the Clean Air Act, as amended (42 U.S.C. 7403), section 104 of the Clean Water Act, as amended (33 U.S.C. 1254), and section 8001 of the Solid Waste Disposal Act, as amended (42 U.S.C. 6981). We also reviewed the following related documentation on delegations of authority: Memorandum from the Assistant Secretary for Health to Public Health Service Agency Heads for Delegation of Authority To Accept Gifts Under Title XXI of the PHS, Miscellaneous (July 10, 1995), and NIH Manual Chapter 1130, Delegations of Authority, Program: General #5 Accept Gifts Under Section 231 of the PHS Act, Program: General #10 National Library of Medicine. We also reviewed relevant legislative histories and Comptroller General decisions and interviewed attorneys at NIEHS and ORD about their reviews of the arrangements. Furthermore, we compared each agency s policies and both formal arrangements with the authorities cited above. To determine what measures NIEHS and ORD took to evaluate and manage the potential that conflicts of interest could result from their arrangements with ACC, we interviewed program officials on their perceptions of conflict of interest when the ACC arrangement was being considered, as well as on the actions they took to develop and implement the arrangements. We also interviewed budget and legal officials, as appropriate, at each agency on their involvement in reviewing and completing the arrangements. We reviewed the research arrangements with ACC, as well as other documentation related to the arrangements, including correspondence between agency officials and ACC, interagency memorandums, and documentation of agency legal and other reviews. We considered statutes on conflict of interest and ethics guidelines that might address the need for agencies to consider and manage real or apparent conflicts of interest (18 U.S.C. 209, and the Ethics in Government Act of 1978, 5 U.S.C. app. 4). 
Finally, we interviewed ACC officials to obtain their views on conflicts of interest and on the role of ACC representatives in developing the announcement of funding availability, reviewing and funding research proposals, and administering the grants. We did not test the NIEHS or ORD internal controls governing the administration of grants awarded under the arrangements. To determine whether NIEHS s acceptance of ACC funds as a gift complied with NIH policy for accepting gifts, we collected and analyzed NIH s policy for gift acceptance and we interviewed legal staff at NIEHS concerning their review of potential gifts and their assistance to program officials. We obtained and reviewed the research arrangement and related documentation on transferring and administering the gift funds. We interviewed program officials on their actions in accepting the funds and compared activities and documentation pertaining to NIEHS s acceptance of ACC s gift with the requirements and recommendations outlined in NIH s policy. To determine the extent of similar research arrangements at other federal agencies, we identified officials responsible for 96 percent or more of the extramural research budgets at NIH, EPA, and two additional agencies. We then used a structured guide to determine what, if any, research arrangements the agencies had with external partners. In addition to NIEHS and ORD, we selected a nonprobability sample of two additional agencies on the basis of the magnitude of the research component of their mission and congressional interest. The two agencies selected were the Food and Drug Administration (FDA) and the Federal Aviation Administration (FAA) because each agency had a research component to its mission, a corresponding research budget, and a regulatory role. We determined that the selection was appropriate for our design and objectives and that the selection would generate valid and reliable evidence to support our work. To determine the extent to which arrangements exist within these four agencies, we obtained the most current available data on extramural research budgets from institutes and centers in NIH, program and regional offices in EPA, and the programs and centers at FAA and FDA. To assess the reliability of these data, we used a structured guide to interview officials at each agency responsible for maintaining the databases containing the data provided. Specifically, we obtained descriptions of the databases, how data are entered into the databases, quality control checks on the data, testing conducted on the data, and officials views on the accuracy and completeness of the data. We asked follow-up questions whenever necessary. FDA officials noted one limitation on the data that were provided. Specifically, when compiling data on research budgets, officials must sometimes subjectively interpret the term research. The impact of such interpretation may cause the extramural research figures for FDA to be slightly overstated. After taking these steps, we determined that the data were sufficiently reliable for the purposes of this report. We used these data to rank order the programs and centers and identify officials in each agency responsible for administering 96 percent or more of each agency s extramural research budget. In our interviews with these officials, we focused on arrangements established since January 1999 specifically, arrangements with characteristics similar to the ACC arrangements. 
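The budget coverage step described above, rank-ordering programs and centers by extramural research budget and keeping those that together account for at least 96 percent of an agency's total, amounts to a cumulative-share calculation. The sketch below is a minimal, hypothetical illustration of that logic; the office names, dollar figures, and the helper function offices_covering_share are assumptions made for the example and are not drawn from the report.

```python
# Hypothetical illustration of the coverage step described above: rank offices
# by extramural research budget and keep those that together account for at
# least 96 percent of the agency total. Office names and dollar amounts are
# invented for the example and do not come from the report.

def offices_covering_share(budgets, threshold=0.96):
    """Return the largest-budget offices whose combined share meets the threshold."""
    total = sum(budgets.values())
    selected, running = [], 0.0
    for office, amount in sorted(budgets.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(office)
        running += amount
        if running / total >= threshold:
            break
    return selected, running / total

example_budgets = {  # dollars, hypothetical
    "Office A": 500_000_000,
    "Office B": 120_000_000,
    "Office C": 40_000_000,
    "Office D": 15_000_000,
    "Office E": 5_000_000,
}

offices, share = offices_covering_share(example_budgets)
print(offices, f"{share:.1%}")  # ['Office A', 'Office B', 'Office C'] 97.1%
```

With the hypothetical figures above, the three largest offices together cover about 97 percent of the total, so under this design only their officials would need to be interviewed to reach the 96-percent coverage target.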
We looked for and considered arrangements with nongovernmental, nonacademic partners to sponsor research extramural to both organizations. We did not collect information or report on the use of other types of agency research cooperation with external partners such as cooperative research and development agreements or informal consultations between agency and external scientists. At NIH, we used a structured guide to interview officials at the following institutes or centers, listed in order of greatest to least extramural research grant-dollar totals, in fiscal year 2002: National Cancer Institute; National Heart, Lung, and Blood Institute; National Institute of Allergy and Infectious Diseases; National Institute of General Medical Sciences; National Institute of Diabetes and Digestive and Kidney Diseases; National Institute of Neurological Disorders and Stroke; National Institute of Mental Health; National Center for Research Resources; National Institute of Child Health and Human Development; National Institute on Drug Abuse; National Institute on Aging; National Eye Institute; NIEHS; National Institute of Arthritis and Musculoskeletal and Skin Diseases; National Human Genome Research Institute; National Institute on Alcohol Abuse and Alcoholism; National Institute on Deafness and Other Communication Disorders; National Institute of Dental and Craniofacial Research; National Institute of Nursing Research; and National Institute of Biomedical Imaging and Bioengineering. Together, these institutes and centers accounted for 99 percent of NIH s total extramural research funds for fiscal year 2002. At EPA, we used a structured guide to interview program officials from the following offices and regions (shown in order of greatest to least funding available for extramural research fiscal year 2003): ORD; Office of Water; Region 6; Region 9; Office of International Affairs; Region 3; Office of Solid Waste and Emergency Response; Region 4; Region 5; Region 1; Region 2; Region 7; Region 10; Region 8; Office of Prevention, Pesticides and Toxic Substances; and Office of Air and Radiation. Together, these offices accounted for 99 percent of the EPA s extramural research funds for fiscal year 2003. At FDA, we interviewed the agency official responsible for getting approval for Memorandums of Agreement from the General Counsel s Office and Office of Grants Management and for ensuring that each agreement is published in the Federal Register. FDA does not accept funds from external partners under these agreements. Finally, at FAA, we interviewed officials from the research and development offices at headquarters as well as the division manager of the Acquisition, Materiel, and Grants Division of the William J. Hughes Technical Center. Together, these offices accounted for 96 percent of the agency s fiscal year 2003 funds for extramural research. To independently corroborate the information obtained from agency officials, to the extent possible, we collected documents on the agreements we identified at these agencies and reviewed agency Web sites maintained by the relevant centers and offices, as well as Web sites maintained by external sources, such as advocacy or trade groups. We conducted our review from February 2004 through February 2005 in accordance with generally accepted government auditing standards. Comments from the Environmental Protection Agency Comments from the National Institutes of Health GAO Contacts and Staff Acknowledgments <9. GAO Contacts> <10. 
Staff Acknowledgments> In addition to the individuals listed above, key contributions to this report were made by Amy Dingler, Karen Keegan, Judy Pagano, Carol Herrnstadt Shulman, Barbara Timmerman, Mindi Weisenbloom, and Eugene Wisnoski. Also contributing to this report were Anne Dievler and Jim Lager.
Why GAO Did This Study An institute at the National Institutes of Health (NIH) and an office in the Environmental Protection Agency (EPA) entered into collaborative arrangements with the American Chemistry Council (ACC) to support research on the health effects of chemical exposures. NIH accepted a gift from ACC to help fund the research. EPA and ACC funded their proposals separately. The arrangements raised concerns about the potential for ACC to influence research that could affect the chemical industry. GAO determined the agencies' legal authorities to enter into the arrangements; the extent to which the agencies evaluated and managed potential conflicts of interest resulting from these arrangements; the extent to which the NIH institute complied with NIH's gift acceptance policy; and the extent to which NIH, EPA, and other agencies have similar arrangements. What GAO Found NIH's National Institute of Environmental Health Sciences (NIEHS) used the authorities granted to NIH's institutes and centers under sections of the Public Health Service Act to enter into its arrangement with ACC. Similarly, EPA's Office of Research and Development (ORD) relied on authorities granted to EPA under sections of the Clean Air Act, the Clean Water Act, and the Solid Waste Disposal Act to enter into its research arrangement. Nothing in these statutes appears to prohibit either agency from entering into research arrangements with nonprofit organizations such as ACC. NIEHS and ORD did not formally evaluate the potential for conflicts of interest with ACC before they entered into the arrangements, but both agencies took steps to manage the potential as the arrangements were implemented. NIH and EPA had no specific policies requiring officials to evaluate or manage potential conflicts of interest when they entered into the ACC arrangements, nor do they currently have such policies. Although no formal evaluation occurred, agency officials managed the arrangements through their existing research management processes. Both agencies believe these actions helped mitigate the potential for undue influence by ACC and adequately protected the integrity of the scientific research conducted under the arrangements. Because the agencies' research management processes were not designed to address conflict of interest issues they are not a substitute for a formal evaluation of such conflicts. Without policies requiring a formal evaluation and management of conflicts, there is no assurance that similar arrangements will be appropriately evaluated and managed for such conflicts in the future. NIEHS officials complied with portions of NIH's gift acceptance policy that guide the acknowledgement and administration of gifts. However, the policy's guidance on evaluating and managing potential conflicts is extremely broad, and it lacks clarity and consistency. As a result, the policy gives officials wide discretion in this area. In addition, the policy does not require the agency to document the basis for its decisions. Consequently, the policy does not provide sufficient assurance that potential conflicts of interest between NIH and donor organizations will be appropriately considered. While some institutes and centers at NIH had arrangements somewhat similar to the ACC arrangements, GAO did not find any similar arrangements at other program offices at EPA or at the Food and Drug Administration and the Federal Aviation Administration--two other agencies with significant research budgets. 
None of the nine research arrangements GAO found at NIH institutes and centers involve organizations that represent industry in the same direct manner that ACC represents the chemical industry.
<1. Background> Distance education is a growing force in postsecondary education, and its rise has implications for the federal student aid programs. Studies by Education indicate that enrollments in distance education quadrupled between 1995 and 2001. By the 2000-2001 school year, nearly 90 percent of public 4-year institutions were offering distance education courses, according to Education s figures. Entire degree programs are now available through distance education, so that a student can complete a degree without ever setting foot on campus. Students who rely extensively on distance education, like their counterparts in traditional campus-based settings, often receive federal aid under Title IV of the Higher Education Act, as amended, to cover the costs of their education, though their reliance on federal aid is somewhat less than students who are not involved in any distance education. We previously reported that 31 percent of students who took their entire program through distance education received federal aid, compared with 39 percent of students who did not take any distance education courses. There is growing recognition among postsecondary officials that changes brought about by the growing use of distance education need to be reflected in the process for monitoring the quality of schools educational programs. Although newer forms of distance education such as videoconferencing or Internet courses may incorporate more elements of traditional classroom education than older approaches like correspondence courses, they can still differ from a traditional educational experience in many ways. Table 1 shows some of the potential differences. The Higher Education Act focuses on accreditation a task undertaken by outside agencies as the main tool for ensuring quality in postsecondary programs. Under the act, accreditation for purposes of meeting federal requirements can only be done by agencies that are specifically recognized by Education. In all, Education recognizes 62 accrediting agencies. Some, such as Middle States Association of Colleges and Schools Commission on Higher Education and the Western Association of Schools and Colleges Accrediting Commission for Community and Junior Colleges, accredit entire institutions that fall under their geographic or other purview. Others, such as the American Bar Association Council of the Section of Legal Education and Admissions to the Bar, accredit specific programs or departments. Collectively, accrediting agencies cover public and private 2-year and 4-year colleges and universities as well as for-profit vocational schools and nondegree training programs. Thirty-nine agencies are recognized for the purpose of accrediting schools or programs for participation in the federal student aid programs. Education is required to recognize or re-recognize these agencies every 5 years. In order to be recognized by Education as a reliable authority with regard to educational quality, accrediting agencies must, in addition to meeting certain basic criteria, establish standards that address 10 broad areas of institutional quality, including student support services, facilities and equipment, and success with respect to student achievement. 
While the statute provides that these standards must be consistently applied to an institution's courses and programs of study, including distance education courses and programs, it also gives accrediting agencies flexibility in deciding what to require under each of the 10 areas, including flexibility in whether and how to include distance education within the accreditation review. The current accreditation process is being carried out against a public backdrop of concern about holding schools accountable for student learning outcomes. For example, concerns have been expressed about such issues as the following: Program completion: the percentage of full-time students who graduate with a 4-year postsecondary degree within 6 years of initial enrollment was about 52 percent in 2000. Unprepared workforce: business leaders and educators have pointed to a skills gap between many students' problem solving, communications, and analytical thinking abilities and what the workplace requires. To address concerns such as these, there is increased interest in using outcomes more extensively as a means of ensuring quality in distance education and campus-based education. The Council for Higher Education Accreditation, a national association representing accreditors, has issued guidelines on distance education and campus-based programs that, among other things, call for greater attention to student learning outcomes. Additionally, in May 2003, we reported that 18 states are promoting accountability by publishing the performance measures of their colleges and universities, including retention and graduation rates, because some officials believe that this motivates colleges to improve their performance in that area. At the national level, Education stated in its 2004 annual plan that it will propose to hold institutions more accountable for results, such as ensuring that a higher percentage of students complete their programs on time. The congressionally appointed Web-based Education Commission has also called for greater attention to student outcomes. The Commission said that a primary concern related to program accreditation is that quality assurance has too often measured educational inputs (e.g., the number of books in the library) rather than student outcomes. Finally, the Business-Higher Education Forum, an organization representing business executives and leaders in postsecondary education, has said that improvements are needed in adapting objectives to specific outcomes and certifiable job skills that address a shortage of workers equipped with analytical thinking and communication skills. <2. Current Federal Restrictions on Distance Education Affect Few Schools' Ability to Offer Federal Student Aid, but Numbers Could Increase in the Future> Although current federal restrictions on the extent to which schools can offer programs by distance education and still qualify to participate in federal student aid programs affect a small number of schools, the growing popularity of distance education could cause the number to increase in the future. We found that 14 schools were either now adversely affected by the restrictions or would be affected in the future; collectively, these schools serve nearly 225,000 students. 
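The restrictions at issue, described below as the 50-percent rules, turn on two simple thresholds: the share of an institution's courses offered through distance or correspondence education and the share of its students enrolled in such courses. The following is a minimal sketch of that threshold test; the class, field names, and figures are hypothetical assumptions for illustration, and the statutory tests contain conditions not captured here.

```python
# Simplified, hypothetical illustration of the 50-percent thresholds discussed
# in this section. The statutory tests contain conditions not modeled here;
# this sketch only checks the two shares the report describes. All names and
# figures are invented.

from dataclasses import dataclass

@dataclass
class School:
    name: str
    total_courses: int
    distance_courses: int
    total_students: int
    distance_students: int

    def exceeds_50_percent_thresholds(self) -> bool:
        """True if half or more of courses or of enrolled students are in distance education."""
        course_share = self.distance_courses / self.total_courses
        student_share = self.distance_students / self.total_students
        return course_share >= 0.5 or student_share >= 0.5

# A hypothetical, heavily online institution like those described in this section.
example = School("Example Online University",
                 total_courses=400, distance_courses=380,
                 total_students=5_000, distance_students=4_600)
print(example.exceeds_50_percent_thresholds())  # True -> would need a waiver to retain aid eligibility
```

As discussed next, a school for which this test comes out true can still remain eligible for the federal student aid programs if it receives a waiver under Education's Demonstration Program.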
Eight of the 14 schools are exempt from the restrictions because they have received waivers as participants in Education's Demonstration Program, under which schools can remain eligible to participate in the student aid programs even if the percentage of distance education courses or the percentage of students involved in distance education rises above the maximums set forth in the law. Three of the remaining 5 schools in the Demonstration Program are negotiating with Education to obtain a waiver. The 14 schools that the current federal restrictions, known as the 50-percent rules, affect or nearly affect are shown in table 2. They vary in a number of respects. For example, 2 are large (the University of Phoenix has nearly 170,000 students and the University of Maryland University College has nearly 30,000), while 5 have fewer than 1,000 students. Six of the 14 are private for-profit schools, 5 are private nonprofit schools, and 3 are public. Thirteen of the schools are in Education's Demonstration Program, and without the waivers provided under this program, 8 of the 13 would be ineligible to participate in federal student aid programs because 50 percent or more of their students are involved in distance education. One school that is not part of the Demonstration Program faces a potential problem in the near future because of its growing distance education programs. Two examples from among the 14 schools will help illustrate the effect that the restrictions on the size of distance education programs have on schools and their students. The University of Maryland University College, a public institution located in Adelphi, Maryland, had nearly 30,000 students, and more than 70 percent of its students took at least one Internet course in the 2000-2001 school year. The college is participating in Education's Demonstration Program and has received waivers to the restrictions on federal student aid for schools with substantial distance education programs. According to university officials, without the waivers, the college and about 10,000 students (campus-based and distance education students) would no longer receive about $65 million in federal student aid. Jones International University, a private for-profit school founded in 1993 and located in Englewood, Colorado, served about 450 students in the 2000-2001 school year. The university offers all of its programs online and offers no campus-based courses. The university has received accreditation from the North Central Association of Colleges and Schools, a regional accrediting agency that reviews institutions in 19 states. In August 2003, school administrators told us that they would be interested in federal student aid program eligibility in the future. In December 2003, the school became a participant in Education's Demonstration Program and, therefore, its students will be eligible for federal student aid when Education approves the school's administrative and financial systems for managing the federal student aid programs. <3. According to Education, the Demonstration Program Has Not Revealed Negative Consequences of Waiving the Current Federal Restrictions on Distance Education> In the second of two congressionally mandated reports on federal laws and regulations that could impact access to distance education, Education concluded that "the Department has uncovered no evidence that waiving the 50-percent rules, or any of the other rules for which waivers were provided, has resulted in any problems or had negative consequences." 
In its report, Education also stated that there is a need to amend the laws and regulations governing federal student financial aid to expand distance education opportunities, and officials at Education recognize that several policy options are available for doing so. A significant consideration in evaluating such options is the cost to the federal student aid programs. Regarding these costs, Education has not provided data on the cost of granting waivers to the 50-percent rules in its first two reports on the Demonstration Program. Based in part on our discussions with Education officials and proposals made by members of Congress, there appear to be three main options for consideration in deciding whether to eliminate or modify the current federal restrictions on distance education: (1) continuing the use of case-by-case waivers, as in the current Demonstration Program, coupled with regular monitoring and technical assistance; (2) offering exceptions to those schools with effective controls already in place to prevent fraud and abuse, as evidenced by such characteristics as low default rates; or (3) eliminating the rules and imposing no additional management controls. Evaluating these options involves three main considerations: the extent to which the changes improve access to postsecondary schools, the impact the changes would have on Education's ability to prevent institutions from engaging in fraudulent or abusive practices, and the cost to the federal student aid programs and of monitoring schools with substantial distance education programs. Our analysis of the three options, as shown in table 3, suggests that while all three would improve students' access to varying degrees, the first two would likely carry a lower risk of fraud and abuse than the third, which would eliminate the rules and controls altogether. We also found support for some form of accountability at most of the 14 schools that current restrictions affect or nearly affect. For example, officials at 11 of these schools said they were generally supportive of some form of accountability to preserve the integrity of the federal student aid programs rather than total elimination of the restrictions. The first option would involve reauthorizing the Demonstration Program as a means of continuing to provide schools with waivers or other relief from current restrictions. Even though exempting schools from current restrictions on the size of distance education programs imposes costs on the federal student aid programs, Education has yet to describe the extent of these costs in its reports on the program. According to Education staff, developing the data on the amount of federal student aid could be done and there are no major barriers to doing so. The data would prove valuable in determining the potential costs of various policy options since the program is expanding in scope (five new schools joined in December 2003) and additional reports will need to be prepared for the Congress. Our review of the Demonstration Program and our discussions with Education officials surfaced two additional considerations that would be worth addressing if the Congress decided to reauthorize the program. They relate to streamlining Demonstration Program requirements and improving resource utilization. Reducing paperwork requirements. 
When the Congress authorized the Demonstration Program, it required that Education evaluate various aspects of distance education, including the numbers and types of students participating in the program and the effective use of different technologies for delivering distance education. These requirements may now be redundant since Education collects such information as part of its National Postsecondary Student Aid Study and other special studies on distance education. Eliminating such requirements could ease the paperwork burden on participating institutions and Education staff. Limiting participation to schools that are adversely affected by federal restrictions. Some schools participating in the Demonstration Program do not need waivers to the 50-percent rules because their programs are not extensive enough to exceed current restrictions. Limiting participation in the program to only those schools that need relief from restrictions on the size of distance education programs could ease the administrative burden on Education. However, in the future, more schools may be interested in receiving waivers if their distance education programs expand. <4. Accrediting Agency Reviews of Distance Education Vary> The seven accrediting agencies we reviewed varied in the extent to which their institutional reviews included distance education. While all seven agencies had adopted standards or policies calling for campus-based and distance education programs to be evaluated using the same standards, the agencies varied in (1) the extent to which they required schools to demonstrate that distance education and campus-based programs were comparable and (2) how large a distance education program had to be before it was formally included in the overall institutional review. While the Higher Education Act requires Education to ensure that accrediting agencies have standards and policies in place regarding the quality of education, including distance education, it gives the agencies latitude with regard to the details of setting their standards or policies. Differences in standards or policies do not necessarily lead to differences in educational quality, but if one accrediting agency s policies and procedures are more or less rigorous than another s, the potential for quality differences may increase. An Education official said the historical role of the federal government in exerting control over postsecondary education has been limited. Similarly, Education has limited authority to push for greater consistency in areas related to the evaluation of distance education. <4.1. Accrediting Agency Actions for Evaluating Distance Education Programs> The agencies we reviewed all had standards or policies in place for evaluating distance education programs. The Higher Education Act does not specify how accrediting agencies should review distance education programs, but instead directs them to cover key subject areas, such as student achievement, curricula, and faculty. The law also does not specify how accrediting agencies are to develop their standards or what an appropriate standard should be. All seven agencies had a policy stating that the standards they would apply in assessing a school s distance education programs would be the same as the standards used for assessing campus-based programs. The six regional accrediting agencies within this group had also adopted a set of supplemental guidelines to help schools assess their own distance education programs.
While all the agencies had standards or policies in place for evaluating distance education and campus-based learning, we found variation among the agencies in the degree to which they required institutions to compare their distance learning courses with their campus-based courses. Five of the seven agencies, including the one national accrediting agency reviewed, required schools to demonstrate comparability between distance education programs and campus-based programs. For example, one agency required each school to evaluate the educational effectiveness of its distance education programs (including assessments of student learning outcomes, student retention, and student satisfaction) to ensure comparability to campus-based programs. Another accrediting agency required that successful completion of distance education courses and programs be comparable to that of campus-based courses and programs. The remaining two accrediting agencies did not require schools to demonstrate comparability in any tangible way. A second area in which variation existed is the threshold for deciding when to conduct a review of a distance education program. While accrediting agencies complete their major review of a school on a multiyear cycle, federal regulations provide that they also must approve substantive changes to an accredited institution s educational mission or program. The regulations prescribe seven types of change, such as a change in the established mission or objectives of the institution, that an agency must include in its definition of a substantive change for a school. For example, starting a new field of study or beginning a distance education program might both be considered a substantive change for a school. However, the seven agencies vary in how they define a substantive change, so the amount of change needed for such a review to occur varies from agency to agency. Three of the seven agencies review distance education programs when at least half of all courses in a program are offered through distance learning. A fourth agency reviews at an earlier stage, when 25 percent or more of a degree or certificate program is offered through distance learning. The remaining three agencies have still other policies for when they initiate reviews of distance education programs. <4.2. Education s Role and Responsibility in Monitoring Accrediting Agencies Is Limited> The variations among accrediting agencies that we found probably result from the statutory latitude provided to the agencies in carrying out their roles. For example, in the use of their varying policies and practices, the agencies are operating within the flexible framework provided under the Higher Education Act. Such variations likewise do not necessarily lead to differences in how effectively agencies are able to evaluate educational quality. However, the lack of consistently applied procedures for matters such as comparing distance education and campus-based programs or deciding when to incorporate reviews of new distance education programs could potentially increase the chances that some schools are being held to higher standards than others. Additionally, the flexible framework of the Higher Education Act extends to the requirements that accrediting agencies set for schools in evaluating student learning outcomes.
In discussions on this matter, Education officials indicated that the law s flexibility largely precludes them from being more prescriptive about the standards, policies, or procedures that accrediting agencies should use. <5. Accrediting Agency Assessment of Student Learning Outcomes in Their Reviews Varies Considerably> The seven accrediting agencies we reviewed varied in the extent to which their standards and policies address student-learning outcomes for either campus-based or distance education courses or programs. Over the past decade, our work on outcomes-based assessments in a variety of different areas shows that when organizations successfully focus on outcomes, they do so through a systematic approach that includes three main components. The three are (1) setting measurable and quantifiable goals for program outcomes, (2) developing strategies for achieving these goals, and (3) disclosing the results of their efforts to the public. The accrediting agencies we reviewed generally recognized the importance of outcomes, but only one of the seven had an approach that required schools to cover all three of these components. <5.1. Successful Implementation of Outcomes-Based Approach Generally Involves Three Main Components> The three-part approach we found being used to successfully implement an outcomes-based management strategy was based on our assessments across a wide spectrum of agencies and activities, including, for example, the Federal Emergency Management Agency working with local governments and the building industry to strengthen building codes to limit deaths and property losses from disaster and the Coast Guard working with the towing industry to reduce marine casualties. Briefly, here are examples of how these three components would apply in an educational setting. Developing measurable and quantifiable goals. It is important that outcome goals be measurable and quantifiable, because without such specificity there is little opportunity to determine progress objectively. A goal of improving student learning outcomes would require measures that reflect the achievement of student learning. For example, a goal of improving student learning outcomes would need to be translated into more specific and measurable terms that pertain directly to a school s mission, such as an average state licensing examination score or a certain job placement rate. Other measures could include test scores measuring writing ability, the ability to defend a point orally, or analyze critically, and work habits, such as time management and organization skills. Developing strategies for achieving the goals. This component involves determining how human, financial, and other resources will be applied to achieve the goals. In education, this component could include such strategies as training for faculty, investments in information technology, or tutoring programs to help improve skills to desired levels. This component helps align an organization s efforts towards improving its efficiency and effectiveness. Our work has shown that providing a rationale for how the resources will contribute to accomplishing the expected level of performance is an important part of this component. Reporting performance data to the public. Making student learning outcome results public is a primary means of demonstrating performance and holding institutions accountable for results. 
Doing so could involve such steps as requiring schools to put distance learning goals and student outcomes (such as job placement rates or pass rates on state licensing examinations) in a form that can be distributed publicly, such as on the school s Web site. This would provide a basis for students to make more informed decisions on whether to enroll in distance education programs and courses. It would also provide feedback to schools on where to focus their efforts to improve performance. Education s 2002-2007 strategic plan calls for public disclosure of data by stating, "An effective strategy for ensuring that institutions are held accountable for results is to make information on student achievement and attainment available to the public, thus enabling prospective students to make informed choices about where to attend college and how to spend their tuition dollars." Similarly, in September 2003, the Council for Higher Education Accreditation stated that institutions and programs should routinely provide students and prospective students with information about student learning outcomes and institutional and program performance in terms of these outcomes, and that accrediting organizations should establish standards, policies, and review processes that visibly and clearly expect institutions and programs to discharge these responsibilities. <5.2. Most Accrediting Agencies Lacked One or More Components> The accrediting agencies we reviewed generally recognized the importance of student learning outcomes and had practices in place that embody some aspects of the outcomes-based approach. However, only one of the agencies required schools to have all three components in place. Developing measurable and quantifiable goals. Five of the seven agencies had standards or policies requiring that institutions develop measurable goals. For example, one accrediting agency required institutions to formulate goals for their distance learning programs and campus-based programs that cover student achievement, including course completion rates, state licensing examination scores, and job placement rates. Another accrediting agency required that schools set expectations for student learning in various ways. For example, the agency required institutions to begin with measures already in place, such as course and program completion rates, retention rates, graduation rates, and job placement rates. We recognize that each institution will need to develop its own measures in a way that is aligned with its mission, the students it serves, and its strategic plans. For example, a 2-year community college that serves a high percentage of low-income students may have a different mission than a major 4-year institution, such as preparing students for 4-year schools. Developing strategies for achieving the goals. All of the agencies we visited had standards or policies requiring institutions to develop strategies for achieving goals and allocating resources. For example, one agency had a standard that requires institutions to effectively organize the human, financial, and physical resources necessary to accomplish their purposes. Another agency had a standard that an institution s student development services must have adequate human, physical, financial, and equipment resources to support the goals of the institution. In addition, the standard requires that staff development be related to the goals of the student development program and be designed to enhance staff competencies and awareness of current theory and practice.
Our prior work on accountability systems, however, points out that when measurable goals are not set, developing strategies may be less effective because there is no way to measure the results of applying the strategies and no way of determining what strategies to develop. Our visits to the accrediting agencies produced specific examples of schools they reviewed that had tangible results in developing strategies for meeting distance education goals. One was Old Dominion University, which had collected data on the writing skills of distance education students. When scores by distance learners declined during an academic year, school administrators identified several strategies to improve students writing abilities. They had site directors provide information on tutoring to students and directed students to writing and testing centers at community colleges. In addition, they conducted writing workshops at sites where a demonstrated need existed. After putting these strategies in place, writing test scores improved. Reporting performance data to the public. Only one of the agencies had standards or policies requiring institutions to disclose student learning outcomes to the public. However, various organizations, including the Council for Higher Education Accreditation, are considering ways to make the results of such performance assessments transparent and available to the public. Among other things, the Council is working with institutions and programs to create individual performance profiles or to expand existing profiles. The Student Right to Know and Campus Security Act of 1990 offers some context for reporting performance data to the public. This act requires schools involved in the federal student loan programs to disclose, among other things, completion or graduation rates and, if applicable, transfer-out rates for certificate- or degree-seeking, full-time, first-time undergraduates. In this regard, Education is considering ways to make available on its Web site the graduation rates of these schools. However, according to two postsecondary experts, the extent that schools make such information available to prospective students may be uneven. <6. Conclusions> The federal government has a substantial interest in the quality of postsecondary education, including distance education programs. As distance education programs continue to grow in popularity, statutory restrictions on the size of distance education programs put in place to guard against fraud and abuse in correspondence schools might soon result in increasing numbers of distance education students losing eligibility for federal student aid. At the same time, some form of control is needed to prevent the potential for fraud and abuse. Over the past few years, the Department of Education has had the authority to grant waivers to schools in the Demonstration Program so that schools can bypass existing statutory requirements. The waivers offer schools the flexibility to increase the size of their distance education programs while remaining under the watchful eye of Education. Education is required to evaluate the efficacy of these waivers as a way of determining the ultimate need for changing the statutory restrictions against distance education. To do so, the Department would need to develop data on the cost to the federal student aid programs of granting waivers to schools. 
Developing such data and evaluating the efficacy of waivers would be a helpful step in providing information to the Congress about ways of balancing the need to protect the federal student aid programs against fraud and abuse while potentially providing students with increased access to postsecondary education. In addition to administering the federal student aid programs, Education is responsible for ensuring the quality of distance education through the postsecondary accreditation process. Among other things, measures of the quality of postsecondary education include student learning outcomes, such as the extent to which students complete programs or the extent to which students performance improves over time. As distance education programs proliferate, the challenges of evaluating these programs mount because accreditation procedures were developed around campus-based, classroom learning. There is growing awareness in the postsecondary education community that additional steps may be needed to evaluate and ensure the quality of distance education and campus-based programs, though there is far less unanimity about how to go about it. Several accrediting agencies have taken significant steps toward applying an outcomes-based, results-oriented approach to their accreditation process, including for distance education. These steps represent a potential set of best practices that could provide greater accountability for the quality of distance education. Due to the autonomous nature of accrediting agency operations, Education cannot require that all accrediting agencies adopt these practices. It could, however, play a pivotal role in encouraging and fostering the use of an outcomes-based model. In the long run, if the practices of accrediting agencies remain so varied that program quality is affected, Education may need additional authority to bring about a more consistent approach. Finally, if Education wishes to hold schools more accountable for the quality of distance education and campus-based programs (for example, by ensuring that a minimum percentage of students complete their programs), aligning the efforts of accrediting agencies to ensure that these factors are measured could increase the likelihood of success in this area. Indeed, a more systematic approach by accrediting agencies could help Education in its effort to focus greater attention on evaluating schools and educational policy through such outcomes. <7. Recommendations> To better inform federal policymakers, we recommend that the Secretary of Education include data in future Demonstration Program reports on the potential cost to the federal student aid programs of waiving the 50-percent rules. To enhance oversight of distance education quality, we recommend that the Secretary of Education (1) develop, with the help of accrediting agencies and schools, guidelines or a mutual understanding for more consistent and thorough assessment of distance education programs, including developing evaluative components for holding schools accountable for such outcomes, and (2) if necessary, request authority from the Congress to require that accrediting agencies use these guidelines in their accreditation efforts. <8. Agency Comments> In commenting on a draft of this report, Education generally agreed with our findings and the merits of our recommendations.
For instance, Education said that it will consider the potential cost to the federal student aid programs of eliminating the 50-percent rules; however, due to the timing of the process of reauthorizing the Higher Education Act, Education believes it is unlikely these estimates will become part of a future report to Congress on the Demonstration Program. While we can appreciate the difficulties surrounding the timing of the reauthorization, we believe that policymakers would be better informed if this information were provided to them as part of the Demonstration Program reports. Given the uncertainty about whether and when Congress will amend the 50-percent rules as part of reauthorization, providing information on the costs of the waivers would appear to have value, especially since such information would, in part, carry out the spirit of the Demonstration Program requirements. With respect to our recommendation on accreditation, Education said that it would study the recommendation carefully. Education agrees that it could engage in a series of discussions with accrediting agencies and schools leading to guidance on assessment and public disclosure of information. Education, however, said that the results would be largely informational because the agencies would not be required to adopt the guidance, and Education is not convinced of the necessity or appropriateness of requiring the guidance via the Higher Education Act. Again, we can appreciate Education's position on this issue, but we continue to believe that greater accountability for student learning outcomes is necessary for enhanced oversight of distance education programs. Given Education's stated desire to hold institutions more accountable for results, such as ensuring that a higher percentage of students complete their programs on time, working with accrediting agencies to develop guidelines or a mutual understanding of what this involves would be one management tool for doing so. We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO s Web site at http://www.gao.gov. Please call me at (202) 512-8403 if you or your staffs have any questions about this report. Other contacts and acknowledgments are listed in appendix III. Appendix I: Scope and Methodology To address the two questions about the extent to which current federal restrictions on distance education affect schools ability to offer federal student aid to their students and what the Department of Education s Distance Education Demonstration Program has revealed with respect to the continued appropriateness of these restrictions, we obtained information from Education staff and other experts on which postsecondary institutions might be affected by these provisions or were close to being affected. We limited our work primarily to schools that were involved in the Demonstration Program or had electronically transmitted distance education programs and that were accredited or pre-accredited by accrediting agencies recognized by Education for eligibility in the federal student aid programs. We initially interviewed officials at 21 institutions with a standard set of questions regarding the effect, if any, that current federal restrictions have on the schools ability to offer federal student aid, and we obtained information on the distance education programs at the schools.
Based on our interviews, we determined that only 14 of the 21 schools had been affected or could be affected by the restrictions. We also obtained data on default rates at the 14 schools, if applicable, from Education s student loan cohort default rate database. With respect to the Demonstration Program, we interviewed officials at Education who were responsible for assessing distance education issues. Additionally, we reviewed monitoring and progress reports on participating institutions involved in the Demonstration Program. We reviewed various reports on federal restrictions related to distance education as well as pertinent statutes and regulations. To address the two questions related to the work of accrediting agencies (to what extent accrediting agencies include distance education in their reviews of schools or programs, and to what extent they assess educational outcomes as they evaluate distance education and campus-based programs), we focused on the standards and policies of seven accrediting agencies that collectively are responsible for more than two-thirds of all distance education programs. We interviewed agency administrators and evaluated the extent of their outcomes-based assessment standards and policies using criteria that we had developed in a variety of past work addressing performance and accountability issues. We compared accrediting agency standards and policies with the key components for accountability identified in our prior work. We provided our preliminary findings to the seven accrediting agencies and asked them to verify those findings. In addition, we interviewed staff at Education involved in accreditation issues. We reviewed Education s monitoring reports on accrediting agencies. Additionally, we interviewed officials at the Council for Higher Education Accreditation and reviewed various reports that the Council has produced. We conducted our work in accordance with generally accepted government auditing standards from October 2002 to February 2004. Appendix II: Comments from the Department of Education Appendix III: GAO Contacts and Staff Acknowledgments <9. Contacts> <10. Staff Acknowledgments> In addition to those named above, Jerry Aiken, Jessica Botsford, Elizabeth Curda, Luann Moy, Corinna Nicolaou, Jill Peterson, Stan Stenersen, and Susan Zimmerman made important contributions to this report.
Why GAO Did This Study Distance education--that is, offering courses by Internet, video, or other forms outside the classroom--has changed considerably in recent years and is a growing force in postsecondary education. More than a decade ago, concerns about fraud and abuse by some correspondence schools led to federal restrictions on, among other things, the percentage of courses a school could provide by distance education and still qualify for federal student aid. Given the recent changes in distance education, GAO was asked to review the extent to which the restrictions affect schools' ability to offer federal student aid and the Department of Education's assessment of the continued appropriateness of the restrictions. Additionally, GAO was asked to look at the extent to which accrediting agencies evaluate distance education programs, including their approach for assessing student outcomes. What GAO Found While federal restrictions on the size of distance education programs affect only a small number of schools' ability to offer federal student aid, the growing popularity of distance education could cause the number to increase in the future. GAO found that 14 schools were either now adversely affected by the restrictions or would be affected in the future; collectively, these schools serve nearly 225,000 students. Eight of these schools, however, will remain eligible to offer federal student aid because they have been granted waivers from the restrictions by Education. Education granted the waivers as part of a program aimed at assessing the continued appropriateness of the restrictions given the changing face of distance education. In considering the appropriateness of the restrictions, there are several policy options for amending the restrictions; however, amending the restrictions to improve access would likely increase the cost of the federal student aid programs. One way to further understand the effect of amending the restrictions would be to study data on the cost of granting the waivers to schools, but Education has yet to develop this information. The seven accrediting agencies GAO reviewed varied in the extent to which they included distance education programs in their reviews of postsecondary institutions. All seven agencies had developed policies for reviewing these programs; however, there were differences in how and when they reviewed the programs. Agencies also differed in the extent to which they included an assessment of student outcomes in their reviews. GAO's work in examining how organizations successfully focus on outcomes shows that they do so by (1) setting measurable goals for program outcomes, (2) developing strategies for meeting these goals, and (3) disclosing the results of their efforts to the public. Measured against this approach, only one of the seven accrediting agencies we reviewed had policies that require schools to satisfy all three components. As the key federal link to the accreditation community, Education could play a pivotal role in encouraging an outcomes-based model.
<1. Background> Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission and is especially important for a government corporation such as FDIC, which has responsibilities to oversee the financial institutions that are entrusted with safeguarding the public s money. While the use of interconnected electronic information systems allows the corporation to accomplish its mission more quickly and effectively, their use also exposes FDIC s information to various internal and external threats. Cyber-based threats to information systems and cyber-related critical infrastructure can come from sources internal and external to the organization. Internal threats include errors as well as fraudulent or malevolent acts by employees or contractors working within an organization. External threats include the ever-growing number of cyber- based attacks that can come from a variety of sources such as hackers, criminals, and foreign nations. Potential attackers have a variety of techniques at their disposal, which can vastly enhance the reach and impact of their actions. For example, cyber attackers do not need to be physically close to their targets, their attacks can easily cross state and national borders, and cyber attackers can preserve their anonymity. Further, the interconnectivity among information systems presents increasing opportunities for such attacks. Indeed, reports of security incidents from federal agencies are on the rise, increasing by more than 650 percent from fiscal year 2006 to fiscal year 2010. Specifically, the number of incidents reported by federal agencies to the United States Computer Emergency Readiness Team (US-CERT) has increased dramatically over the past 4 years: from 5,503 incidents reported in fiscal year 2006 to about 41,776 incidents in fiscal year 2010. Compounding the growing number and kinds of threats are the deficiencies in security controls on the information systems at federal agencies, which have resulted in vulnerabilities in both financial and nonfinancial systems and information. These deficiencies continue to place assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, and critical operations at risk of disruption. Accordingly, we have designated information security as a governmentwide high risk area since 1997, a designation that remains in force today. The Federal Information Security Management Act (FISMA) requires each agency to develop, document, and implement an agencywide information security program to provide information security for the information and systems that support the operations and assets of the entities, using a risk-based approach to information security management. <1.1. FDIC Is a Key Protector of Bank and Thrift Deposits> FDIC was created by Congress to maintain the stability of and public confidence in the nation s financial system by insuring deposits, examining and supervising financial institutions, and resolving troubled institutions. Congress created FDIC in 1933 in response to the thousands of bank failures that had occurred throughout the late 1920s and early 1930s. FDIC identifies, monitors, and addresses risks to the Deposit Insurance Fund when a bank or thrift institution fails. 
The Bank Insurance Fund and the Savings Association Insurance Fund were established as FDIC responsibilities under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, which sought to reform, recapitalize, and consolidate the federal deposit insurance system. The act also designated FDIC as the administrator of the Federal Savings & Loan Insurance Corporation Resolution Fund, which was created to complete the affairs of the former Federal Savings & Loan Insurance Corporation and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. The Bank Insurance Fund and the Savings Association Insurance Fund merged into the Deposit Insurance Fund on February 8, 2006, as a result of the passage of the Federal Deposit Insurance Reform Act of 2005. <1.2. FDIC Relies on Computer Systems to Support Its Mission and Financial Reporting> FDIC relies extensively on computerized systems to support its mission, including financial operations, and to store the sensitive information that it collects. The corporation uses local and wide area networks to interconnect its systems and a layered approach to security defense. To support its financial management functions, FDIC relies on many systems, including a corporatewide system that functions as a unified set of financial and payroll systems that are managed and operated in an integrated fashion, a system to calculate and collect FDIC deposit insurance premiums and Financing Corporation bond principal and interest amounts from insured financial institutions; a Web-based application that provides full functionality to support franchise marketing, asset marketing, and asset management; a system to request access to and receive permission for the computer applications and resources available to its employees, contractors, and other authorized personnel; and a primary receivership and subsidiary financial processing and reporting system. FDIC also relies on other computerized systems in deriving its estimates of losses from loss-sharing agreements. This complex estimation process was developed and implemented in order to manage the significant number of loss-sharing agreements that have been created as a result of the current financial crisis. The process uses databases containing information on loss-sharing agreements and asset valuations, software programs that use information from the databases and other sources to calculate the estimated losses, data and programs stored in FDIC s document sharing system, a Web service used to exchange valuation information with outside contractors, and several manual processing steps. In addition, in order to reduce the risk that a material misstatement will not be detected, FDIC relies heavily on supervisory review and oversight controls in the process. We have previously reported that this process is complex, is not fully documented, and involves multiple manual data entries. In a separate report, we have made an additional recommendation to FDIC to improve the documentation around this process. 
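To make the arithmetic behind such an estimate concrete, the following is a minimal, purely hypothetical sketch: the data fields, the 80 percent coverage share, and the aggregation logic are illustrative assumptions, not FDIC's actual agreements, databases, or programs, which also depend on valuation inputs from contractors and manual review steps.

```python
from dataclasses import dataclass

@dataclass
class CoveredAsset:
    # Hypothetical fields; the report does not describe FDIC's actual data model.
    book_value: float         # carrying value of the covered asset
    expected_recovery: float  # projected recovery from the valuation process

def estimated_shared_loss(assets, coverage_share=0.80):
    """Sum the insurer's assumed share of projected losses across a portfolio.

    coverage_share is an illustrative parameter; real loss-sharing agreements
    specify their own coverage terms and thresholds.
    """
    total = 0.0
    for asset in assets:
        projected_loss = max(asset.book_value - asset.expected_recovery, 0.0)
        total += projected_loss * coverage_share
    return total

if __name__ == "__main__":
    portfolio = [CoveredAsset(1_000_000, 650_000), CoveredAsset(250_000, 260_000)]
    print(f"Estimated shared loss: ${estimated_shared_loss(portfolio):,.2f}")
```

Even a simplified calculation like this suggests why supervisory review figures so prominently in the actual process: the estimate depends on data assembled from several sources and on parameters that must match the terms of each individual agreement.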
Under FISMA, the Chairman of FDIC is responsible for, among other things, (1) providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the entity s information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the corporation s Chief Information Officer the authority to ensure compliance with the requirements imposed on the agency under FISMA. The Chief Information Officer is responsible for developing and maintaining a corporatewide information security program and for developing and maintaining information security policies, procedures, and control techniques that address all applicable requirements. The Chief Information Officer also serves as the authorizing official with the authority to approve the operation of the information systems at an acceptable level of risk to the corporation. The Chief Information Security Officer reports to the Chief Information Officer and serves as the Chief Information Officer s designated representative. The Chief Information Security Officer is responsible for the overall support of assessment and authorization activities; for the development, coordination, and implementation of FDIC s security policy; and for the coordination of information security and privacy efforts across the corporation. <2. Opportunities Exist for FDIC to Improve Information Security Controls> Although FDIC had implemented numerous controls over its systems, it had not always implemented access and other controls to protect the confidentiality, integrity, and availability of its financial systems and information. A key reason for these weaknesses is that the corporation did not always fully implement key information security program activities, such as effectively developing and implementing security policies. Although these weaknesses did not individually or collectively constitute a material weakness or significant deficiency in 2010, they still increase the risk that financial and other sensitive information could be disclosed or modified without authorization. <2.1. FDIC Had Not Always Restricted Access to Information Resources> A basic management objective for any organization is to protect the resources that support its critical operations and assets from unauthorized access. Organizations accomplish this by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss. Specific access controls include system boundary protections, identification and authentication of users, authorization restrictions, cryptography, protection of sensitive system resources, and audit and monitoring procedures. Without adequate access controls, unauthorized individuals, including intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or for personal gain. In addition, authorized users could intentionally or unintentionally modify or delete data or execute changes that are outside of their authority. 
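As a generic illustration of the kinds of access controls just described, the sketch below compares the access actually granted to users against the access authorized for their roles and flags anything in excess, one simple form of a periodic least-privilege review. The roles, resources, and data structures are assumptions made for the example and do not represent FDIC systems or policy.

```python
# Minimal sketch of a least-privilege access review: flag granted permissions
# that exceed what a user's assigned role authorizes. The roles and resources
# below are invented for illustration.

AUTHORIZED_BY_ROLE = {
    "financial_analyst": {"read:general_ledger", "read:receivership_reports"},
    "database_admin": {"read:general_ledger", "admin:financial_db"},
}

GRANTED_ACCESS = {
    "user01": ("financial_analyst", {"read:general_ledger", "admin:financial_db"}),
    "user02": ("database_admin", {"read:general_ledger", "admin:financial_db"}),
}

def review_access(granted, authorized_by_role):
    """Return, per user, any permissions that exceed the user's role authorization."""
    findings = {}
    for user, (role, permissions) in granted.items():
        excess = permissions - authorized_by_role.get(role, set())
        if excess:
            findings[user] = excess
    return findings

if __name__ == "__main__":
    for user, excess in review_access(GRANTED_ACCESS, AUTHORIZED_BY_ROLE).items():
        print(f"{user}: access exceeds role authorization -> {sorted(excess)}")
```

In practice such a review would draw on the organization's identity and access management records, be documented, and be repeated at whatever interval policy requires.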
Boundary protection controls logical connectivity into and out of networks and controls connectivity to and from network-connected devices. Any connections to the Internet or to other external and internal networks or information systems should occur through controlled interfaces (for example, proxies, gateways, routers and switches, firewalls, and concentrators). Many networked systems allow remote access to the information systems from virtually any remote location; thus, it is imperative that remote access paths be appropriately controlled and protected using a method such as a virtual private network (VPN). In addition, networks should also be appropriately configured to adequately protect access paths between systems; this can be accomplished through the use of access control lists and firewalls. National Institute of Standards and Technology (NIST) guidance states that agencies should establish trusted communication paths between users and the agency s information systems, that firewalls should be configured to provide adequate protection for the organization s networks, and that the information transmitted between interconnected systems should be controlled and regulated. FDIC had not always controlled the logical and physical boundaries protecting its information and systems. Examples are as follows: Certain network devices, servers, and workstations on FDIC s internal network were not always configured to sufficiently restrict access or to fully secure connections. Firewalls controlling traffic between segments of FDIC s internal network did not sufficiently control certain types of network traffic. Boundary protection controls were configured in a manner that limited the effectiveness of monitoring controls. As a result of these deficiencies, FDIC faces an increased risk that individuals could gain unauthorized access to its financial systems and information. A computer system must be able to identify and authenticate the identity of a user so that activities on the system can be linked to that specific individual and to protect the system from inadvertent or malicious access. When an organization assigns a unique user account to a specific user, the system is able to distinguish that user from others a process called identification. The system must also establish the validity of the user s claimed identity by requesting some kind of information, such as a password, which is known only by the user a process called authentication. NIST guidance states that an organization should manage information system authenticators by changing the default content of authenticators (e.g., passwords) when installing an information system. Also, FDIC policy states that passwords should be changed periodically. FDIC had effectively implemented controls for identifying and authenticating users on certain systems. For example, it had implemented controls to effectively detect and change default vendor-supplied user accounts and passwords in installed software and had ensured that passwords for privileged accounts on certain servers were changed in accordance with its policy. However, FDIC had not consistently enforced other identification and authentication user controls. Examples are as follows: Passwords for certain privileged accounts on a system supporting financial processing were not configured in accordance with FDIC policy. Additionally, two of the accounts were using the same password. 
Password settings for certain accounts on a system supporting the loss-share loss estimation process were not configured in accordance with FDIC policy. Systems supporting financial processing were not always configured with sufficiently strong identification and authentication controls. As a result of these deficiencies, FDIC is at an increased risk that an individual with malicious intentions could gain inappropriate access to its financial systems and information. Authorization is the process of granting or denying access rights and privileges to a protected resource, such as a network, system, application, function, or file. A key component of granting or denying access rights is the concept of least privilege, which refers to granting a user only the access rights and permissions needed to perform official duties. To restrict a legitimate user s access to only those programs and files needed, organizations establish user access rights: allowable actions that can be assigned to a user or to groups of users. File and directory permissions are rules that are associated with a particular file or directory, regulating which users can access it and the extent of their access rights. To avoid unintentionally giving a user unnecessary access to sensitive files and directories, an organization should give careful consideration to its assignment of rights and permissions. NIST guidance states that access to information systems should be allowed only for authorized users and only for the tasks necessary to accomplish the work, in accordance with the organization s missions and business functions. In addition, NIST guidance states that agency information systems should separate user functionality from functions necessary to administer databases, network components, workstations, or servers. FDIC policy requires that the access to information technology (IT) resources be periodically reviewed to ensure that access controls remain consistent with existing authorizations and current business needs. Also, the Division of Resolutions and Receiverships requires user access to the document sharing system supporting the loss-share estimation process to be reviewed every 3 months. FDIC had implemented controls to restrict user access to certain resources. For example, it had configured access control lists on servers dedicated to network management to restrict access to only those users who required it, controlled access to sensitive files of critical network devices, and limited user access rights to a business application supporting resolution and receivership activities to only those roles necessary for personnel to perform their duties. However, other deficiencies in authorization controls placed FDIC s financial information and systems at risk. Examples are as follows: The Division of Resolutions and Receiverships had not documented a procedure describing how access to the Web service used in the loss- share loss estimation process was to be reviewed, including requirements for conducting reviews at regular intervals or retaining documentation of reviews. The Division of Resolutions and Receiverships had not reviewed access to the document sharing system every 3 months in accordance with its policy; instead, it had conducted a review only once during 2010. FDIC had given users access to sensitive resources on certain systems supporting financial processing that they did not need to accomplish their work. 
As a result, FDIC faces an increased risk that a user could gain inappropriate access to computer resources, circumvent security controls, and deliberately or inadvertently read, modify, or delete financial information and other sensitive information. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. If encryption is not used, user identification (ID) and password combinations will be susceptible to electronic eavesdropping by devices on the network when they are transmitted. The National Security Agency and NIST recommend encrypting network services, and NIST guidance states that passwords should be encrypted while being stored and transmitted. NIST guidance also states that the use of encryption by organizations can reduce the probability of unauthorized disclosure of information and that government systems should use sufficiently strong encryption in order to establish and maintain secure communication links between information systems and applications. FDIC had implemented controls to encrypt certain sensitive information on its systems. For example, it had restricted the use of unencrypted protocols on the mainframe and had required that sensitive information stored on user workstations or mobile devices be encrypted. However, FDIC had not always ensured that sensitive financial information transmitted over and stored on its network was adequately encrypted. Specifically, FDIC had not always used sufficiently strong encryption on two systems supporting the loss-share loss estimation process and had not always strongly encrypted stored passwords on certain financial systems. As a result of these deficiencies, FDIC is at an increased risk that an individual could capture information such as user IDs and passwords and use them to gain unauthorized access to data and system resources. To establish individual accountability, monitor compliance with security policies, and investigate security violations, the capability to determine what, when, and by whom specific actions have been taken on a system is needed. Organizations accomplish this by implementing system or security software that provides an audit trail for determining the source of a transaction or attempted transaction and by monitoring user activity. To be effective, organizations should (1) configure the software to collect and maintain a sufficient audit trail for security-relevant events; (2) generate reports that selectively identify unauthorized, unusual, and sensitive access activity; and (3) regularly monitor and take action on these reports. NIST guidance states that organizations should track and monitor access by individuals who use elevated access privileges, review and analyze information system audit records for indications of inappropriate or unusual activity, and report the findings to designated organization officials. FDIC had ensured that default installation user accounts were no longer used on certain servers and had configured its mainframe logging controls efficiently. However, FDIC s audit and monitoring of security- relevant events on key financial systems was not always sufficient. 
For example, FDIC had not always sufficiently configured logging controls on a system that supported the loss-share loss estimation process or on several network devices. As a result of these deficiencies, FDIC faces an increased risk that unauthorized activity or a policy violation on its systems and networks would not be detected. <2.2. Other Information System Controls Can Be Improved> In addition to access controls, organizations should use policies, procedures, and techniques for securely segregating incompatible duties, configuring information systems, and ensuring continuity of computer processing operations in the event of a disaster or unexpected interruption to ensure the confidentiality, integrity, and availability of its information. However, FDIC s systems were not always in full compliance with these policies, procedures, and techniques, leaving them vulnerable to intrusions. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often, segregation of incompatible duties is achieved by dividing responsibilities among two or more organizational groups, which diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. According to NIST, in order to maintain separation of duties, personnel who administer access control functions should not also be responsible for administering audit functions. FDIC s Division of Resolutions and Receiverships had not always separated audit responsibilities from administration of access to loss- share and asset valuation data and programs. Specifically, the FDIC access administrators for both the external Web service and the document sharing system used in the loss-share loss estimation process were also responsible for approving and reviewing user access to the systems. As a result, the access administrators had the ability to grant inappropriate levels of access to loss-share and asset valuation data and programs without being detected, placing the data and programs at risk of unauthorized access, misuse, modification, or destruction. Configuration management is another important control that involves the identification and management of security features for all hardware and software components of an information system at a given point and systematically controls changes to that configuration during the system s life cycle. An effective configuration management process includes procedures for (1) identifying, documenting, and assigning unique identifiers (for example, serial number and name) to a system s hardware and software parts and subparts, generally referred to as configuration items; (2) evaluating and deciding whether to approve changes to a system s baseline configuration; (3) documenting and reporting on the status of configuration items as a system evolves; (4) determining alignment between the actual system and the documentation describing it; and (5) developing and implementing a configuration management plan for each system. 
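One common way to support steps (1) and (4) above, identifying configuration items and determining whether the actual system still matches its documented baseline, is to record a cryptographic hash of each configuration item and compare current hashes against that baseline. The sketch below is a generic illustration of the idea, not a description of FDIC's tooling; the file names are placeholders.

```python
import hashlib
import json
from pathlib import Path

# Generic configuration-baseline check: hash each configuration item, store the
# hashes, and later report items whose current hash no longer matches.
# File names are placeholders, not actual FDIC configuration items.

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline(items, baseline_file="baseline.json"):
    baseline = {str(p): sha256_of(Path(p)) for p in items}
    Path(baseline_file).write_text(json.dumps(baseline, indent=2))

def check_against_baseline(baseline_file="baseline.json"):
    """Return configuration items that are missing or differ from the baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    drift = []
    for item, recorded_hash in baseline.items():
        path = Path(item)
        if not path.exists() or sha256_of(path) != recorded_hash:
            drift.append(item)
    return drift

if __name__ == "__main__":
    # Create two placeholder configuration items so the example runs end to end.
    Path("loss_calc_program.py").write_text("# placeholder program\n")
    Path("loss_share_query.sql").write_text("-- placeholder query\n")
    record_baseline(["loss_calc_program.py", "loss_share_query.sql"])
    Path("loss_calc_program.py").write_text("# placeholder program, modified\n")
    print("Items differing from baseline:", check_against_baseline())
```

A record like this is only one piece of configuration management; it has value when paired with the other elements the criteria describe, such as documented approval of changes, retention of change records, and oversight by a change control board.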
In addition, establishing controls over the modification of information system components and related documentation helps to prevent unauthorized changes and ensure that only authorized systems and related program modifications are implemented. This is accomplished by instituting policies, procedures, and techniques that help make sure all hardware, software, and firmware programs and program modifications are properly authorized, tested, and approved. According to NIST, organizations should document approved configuration-controlled changes to information systems, retain and review records of the changes, audit activities associated with the changes, and coordinate and provide oversight for configuration change control activities through a mechanism such as a change control board. NIST also recommends that agencies configure their systems to reflect the most restrictive mode possible consistent with operational requirements and employ malicious code protection mechanisms to detect and eradicate malicious code transported by electronic mail, electronic mail attachments, or other common means. FDIC had not applied appropriate configuration management controls to many of the special purpose programs and data in the loss-share estimating process. Although FDIC had documented activities for development, testing, and production for three of the programs used to calculate the estimates of losses due to loss-sharing agreements and had assigned responsibility for the different activities, it had neither documented approved changes to the programs prior to implementation nor retained records of the changes made. While the corporation had documented plans for tracking changes to these three programs, the plans had not been implemented. Additionally, the corporation had not documented plans for controlling changes to a program that generated a key dataset or to two other programs used to validate the data contained in a key database used in the loss-share loss estimation process. Furthermore, FDIC had not applied version control or change control to the database for the loss-share cost estimates. Moreover, a workstation used to execute one of the key calculation programs had configuration weaknesses that could allow it to be compromised. Until FDIC fully implements configuration management and configuration change controls to these data and programs, increased risk exists that changes to the programs could be unnecessary, may not work as intended, or may result in the unintentional loss of data or program integrity, or that individuals, both internal and external to the corporation, could exploit configuration weaknesses and gain unauthorized access to financial or other sensitive data and systems. Patch management is a critical process that can help alleviate many of the challenges in securing computing systems. Malicious acts can range from defacing a Web site to taking control of an entire system, thereby being able to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other organizations systems. After a vulnerability has been validated, the software vendor may develop and test a patch or workaround to mitigate the vulnerability. Incident response groups and software vendors issue regular information updates on the vulnerability and the availability of patches. 
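A minimal sketch of tracking patch status follows. The inventory, version numbers, and criticality flags are invented for illustration; they are not FDIC data, and a real process would typically pull this information from vulnerability scanners or asset inventories and weigh vulnerability severity as well as system criticality.

```python
# Hypothetical patch-status check: flag hosts running software below the latest
# supported version, listing critical systems first. All data here is invented.

INVENTORY = [
    {"host": "fin-app-01", "software": "app-server", "installed": (5, 1, 2), "critical": True},
    {"host": "loss-wks-07", "software": "spreadsheet", "installed": (9, 0, 0), "critical": True},
    {"host": "dev-box-03", "software": "app-server", "installed": (6, 0, 1), "critical": False},
]

LATEST_SUPPORTED = {"app-server": (6, 0, 1), "spreadsheet": (11, 2, 0)}

def patch_findings(inventory, latest):
    """Return out-of-date entries, with critical hosts listed first."""
    findings = [
        item for item in inventory
        if item["installed"] < latest.get(item["software"], (0, 0, 0))
    ]
    return sorted(findings, key=lambda item: not item["critical"])

if __name__ == "__main__":
    for finding in patch_findings(INVENTORY, LATEST_SUPPORTED):
        latest = LATEST_SUPPORTED[finding["software"]]
        print(f'{finding["host"]}: {finding["software"]} {finding["installed"]} is below {latest}')
```

Ordering findings this way mirrors the prioritization idea discussed next: systems essential to mission-critical operations are addressed before lower-priority ones.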
NIST guidance states that a comprehensive patch management process should include prioritization of the order in which vulnerabilities are addressed, with a focus on high- priority systems such as those essential for mission-critical operations. FDIC had patched many of its systems and had ensured that much of its software was up-to-date. For example, it had retired critical network devices that were not supported by their manufacturers, updated patch levels for third-party software running on two UNIX servers, and removed an obsolete version of third-party software running on a Windows server. However, FDIC had not consistently updated its financial systems and servers with critical patches or kept its software up-to-date, including systems supporting the loss-share loss estimation process. For example, certain servers supporting financial processing were running a version of software that was unsupported for patch updates, and several workstations used in the loss-share loss estimation process were missing patches and were running software that was no longer supported by the manufacturer. Additionally, certain workstations were missing operating system patches. As a result of these deficiencies, FDIC is at an increased risk that unpatched vulnerabilities could allow its information and information systems to be compromised. Contingency planning, which includes developing contingency, business continuity, and disaster recovery plans, should be performed to ensure that when unexpected events occur, essential operations can continue without interruption or can be promptly resumed, and that sensitive data are protected. NIST guidance states that organizations should develop and implement contingency plans that describe activities associated with backing up and restoring the system after a disruption or failure. The plans should be updated and include information such as contact, resources, and description of files in order to restore the application in the event of a disaster. In addition, the plans should be tested to determine their effectiveness and the organization s readiness to execute the plans. Officials should review the test results and initiate corrective actions. FDIC s Information Technology Security Risk Management Program requires contingency plans and disaster recovery plans to be developed and tested for all sensitive applications (both major and nonmajor) and general support systems; the plans should address measures to be taken in response to a disruption in availability due to an unplanned outage. Although FDIC had developed contingency plans for its major systems and had also conducted testing on these plans, it had not documented plans for recovering the automated and semiautomated processes supporting the loss-share loss estimation process. Although the security plan for one of FDIC s general support systems included the document sharing system and one of the key databases supporting the process, the corporation had not documented or tested contingency plans that addressed restoring the computer programs, workstations, and datasets supporting the preparations of the estimates of losses and costs due to loss-sharing agreements or of the workspaces within the document sharing system where loss-share and asset valuation information and programs are stored. As a result, FDIC may not be able to effectively recover the data and programs in the loss-share loss estimation process and resume normal operations after a disruption. <2.3. 
FDIC Had Not Always Implemented Key Activities of its Information Security Program> An underlying reason for the information security weaknesses noted in the previous section is that, while FDIC has developed and documented a comprehensive corporate information security program, including documenting an information security risk management policy, developing security policies and procedures, documenting system security plans, and periodically testing information security controls, the corporation had not fully implemented its information security program. Specifically, it had not fully implemented its security policies and had not completed actions to remediate certain control weaknesses. In addition, FDIC had not applied security management controls to the programs and data in the loss-share loss estimation process. An entitywide information security management program is the foundation of a security control structure and a reflection of senior management s commitment to addressing security risks. The security management program should establish a framework and continuous cycle of activity for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, or improperly implemented; and controls may be inconsistently applied. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency s required inventory of major information systems; and a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in its information security policies, procedures, or practices. FDIC had developed and documented a comprehensive corporate information security program that was consistent with FISMA requirements and had implemented some elements of its program, but had not fully implemented other elements. Specifically: FDIC had developed and documented an IT security risk management policy that required all sensitive applications to periodically be assessed for the risk and magnitude of harm that could result from vulnerabilities and potential threats. FDIC had not fully implemented its policies requiring that users be provided with only the minimum level of access required to allow them to perform their duties and that its computer security information response team monitor the progress of security patching activities by reviewing reports on the status of implementation. 
In addition, it had not fully implemented its policies for frequency of password changes and for storage of passwords. FDIC had developed and documented security plans for all of the major systems we reviewed that addressed policies and procedures for providing management, operational, and technical controls, and had documented requirements for physically securing FDIC facilities. FDIC had conducted annual periodic testing and evaluation of the effectiveness of the management, operational, and technical controls for the major systems we reviewed. Although FDIC had established a process for planning, implementing, evaluating, and documenting remedial actions to address information security weaknesses, and had completed actions to remediate 26 of the 33 control weaknesses we identified in our calendar year 2009 audit, the corporation had not yet completed actions to correct or mitigate 7 of the previously reported weaknesses. For example, FDIC had not separated or partitioned the data network from the voice network, developed and documented policies and procedures for assigning access to systems and databases where application controls could be compromised, or fully implemented its monitoring program. In addition, FDIC had not received an independent audit report from the provider of its Web service in a timely manner. FISMA information security requirements apply not only to an agency s own systems but also to information systems used or operated on its behalf by a contractor or other agency, such as an external service provider. According to OMB, service providers are required to provide client organizations with an audit report that describes whether internal controls were designed to achieve specified objectives, have been placed into operation, and are operating effectively. Previously known as Statement on Auditing Standards (SAS) 70 reports, since June 15, 2011, they have been known as Statement on Standards for Attestation Engagements (SSAE) 16 reports. OMB also states that such reports should be provided within a reasonable time frame so that auditors of client organizations may use them during their financial statement audits. However, the provider of the Web service used to exchange information with valuation contractors did not provide FDIC with a SAS 70 report until March 2011, more than 8 weeks after the end of the financial reporting period and more than 5 months after the end of the period that the SAS 70 audit covered. Until all key elements of its information security program are fully implemented, FDIC may not have assurance that controls over its financial systems and information are appropriately designed and operating effectively. FDIC had not applied key controls in its information security program to the loss-share loss estimation process. OMB Circular A-130, Appendix III, requires federal agencies to implement and maintain an automated information security program, including planning for adequate security of each system, assessing risks, and reviewing security controls. OMB Circular A-127 requires that federal financial management systems, which include core financial systems as well as any automated and manual processes, procedures, data, hardware, and software that support financial management, be subject to the requirements of Circular A-130. However, FDIC had not applied key controls in its information security program to the automated and semiautomated processes used to support the preparation of the estimates of losses and costs due to loss- sharing agreements. 
Specifically, FDIC had not (1) assessed the risks associated with the information and programs involved to identify potential threats and vulnerabilities as well as possible countermeasures and mitigating controls, and had not included the programs in the risk assessment of any of its general support systems; (2) documented the management, technical, or operational security controls intended to protect the programs in system security plans, and had not included the programs in the system security plans of any general support system; or (3) tested any security controls for the programs, and had not included the programs when testing the security controls of other general support systems. FDIC had not applied these controls because the Division of Resolutions and Receiverships developed the process independently, in order to be able to manage the large increase in bank failures and the extensive use of loss-sharing agreements resulting from the current financial crisis. In doing so, the Division of Resolutions and Receiverships had not used FDIC's existing IT management framework, which requires these controls to be put into place, to develop and manage the process. During 2010, FDIC had mitigated the effect of these weaknesses on financial reporting by implementing compensating management and reconciliation controls in this process. However, because of ongoing financial institution failures and the lack of information security management controls around the process, the financial information processed by the programs involved, representing a nearly $39 billion impact on the corporation's financial statements, continues to be at risk of unauthorized disclosure, modification, or destruction. <3. Conclusions> FDIC has made significant progress in correcting or mitigating previously reported information security weaknesses, but other control weaknesses continue to unnecessarily put FDIC's systems at an increased risk from internal and external threats. A key reason for these weaknesses is that the corporation had not fully implemented key elements of its information security program, such as effectively implementing security policies, conducting risk assessments, documenting security management plans, documenting contingency plans, testing security controls, or implementing an effective continuous monitoring program. FDIC had made improvements in its information security controls and had mitigated the potential effect of its remaining weaknesses on financial reporting by implementing compensating management and reconciliation controls during 2010, enabling us to conclude that FDIC had resolved the significant deficiency over information systems that we had reported in our 2009 audit. However, the weaknesses, both old and new, continue to challenge the corporation in its efforts to ensure the confidentiality, integrity, and availability of financial and sensitive information. Until FDIC further mitigates known information security weaknesses in access controls and other information system controls and fully implements its information security program, the corporation will continue to face an increased risk that sensitive financial information and resources will not be sufficiently protected from inadvertent or deliberate misuse, improper disclosure, or destruction. <4. 
Recommendations for Executive Action> We recommend that the Acting Chairman take the following two actions to enhance FDIC s information security program: Direct the Director of the Division of Resolutions and Receiverships and the Chief Information Officer to develop, document, and implement appropriate information security activities in the loss-share loss estimation process, such as assessing and mitigating risks, managing and controlling the configurations of programs and databases, evaluating the effectiveness of security controls, and ensuring that data and programs can be recovered after a disruption. Direct the Chief Information Officer to work with the external Web service provider to obtain a more timely delivery of the provider s SSAE 16 report (previously known as a SAS 70 report), or to obtain other means of assurance of internal controls. We are also making 38 new recommendations to address 37 new findings in a separate report with limited distribution. These recommendations consist of actions to implement and correct specific information security weaknesses related to access controls, segregation of duties, configuration management, and contingency planning identified during this audit. <5. Agency Comments and Our Evaluation> In providing written comments (reprinted in app. II) on a draft of this report, the Deputy to the Chairman and Chief Financial Officer of FDIC stated that FDIC was pleased to accept our acknowledgment of the significant progress made toward correcting and mitigating our previously reported weaknesses. In addition, he indicated that the corporation plans to implement improvements to address our recommendations, and discussed the actions that FDIC has taken or plans to take to review and improve controls over the loss-share loss estimation process, to obtain timely delivery of appropriate audit reports from current and future service providers, and to conduct additional due diligence activities to obtain assurance of the service provider s internal controls. In responding to our draft recommendation that FDIC develop, document, and implement appropriate information security controls over the automated and semiautomated processes within the loss-share loss estimation process, the Deputy to the Chairman stated that although FDIC agrees that the loss-share business processes and the data associated with these processes deserve proper controls assessment and protection, the corporation will not necessarily treat the processes and data as a separate FDIC system. The Deputy to the Chairman further stated that FDIC is currently taking steps to improve the information security controls around the process. The intent of our draft recommendation was not to suggest that FDIC treat the data and programs supporting the loss-share loss estimation process as a separate information system. We agree that it may not be appropriate for FDIC to treat these data and programs as a separate information system, as they are stored, processed, and executed across multiple systems. Rather, our intent was to recommend that appropriate information security control activities be incorporated into the process. 
Accordingly, we have clarified our recommendation to state that the Acting Chairman direct the Director of the Division of Resolutions and Receiverships and the Chief Information Officer to develop, document, and implement appropriate information security activities in the loss-share loss estimation process, such as assessing and mitigating risks, managing and controlling the configurations of programs and databases, evaluating the effectiveness of security controls, and ensuring that data and programs can be recovered after a disruption. We are sending copies of this report to the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; Chairman and Ranking Member of the House Financial Services Committee; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512- 4499. We can also be reached by e-mail at [email protected] and [email protected]. Key contributors to this report are listed in appendix III. Appendix I: Objective, Scope, and Methodology The objective of our audit was to determine the effectiveness of the Federal Deposit Insurance Corporation s (FDIC) controls protecting the confidentiality, integrity, and availability of its financial systems and information. To do this, we examined FDIC information security policies, plans, and procedures; tested controls over key financial applications; and interviewed key agency officials in order to (1) assess the effectiveness of corrective actions taken by FDIC to address weaknesses we previously reported and (2) determine whether any additional weaknesses existed. This work was performed in support of our opinion on internal control over the preparation of the calendar year 2010 and 2009 financial statements of two funds administered by FDIC. To determine whether controls over key financial systems were effective, we considered the results of our evaluation of FDIC s actions to mitigate previously reported weaknesses and performed new audit work at FDIC facilities in Arlington, Virginia, and Washington, D.C. We concentrated our evaluation primarily on the controls for financial applications and enterprise database applications associated with the New Financial Environment; the Assessment Information Management System; the Communication, Capability, Challenge, and Control System (4C) application; the programs, data, and systems supporting the preparation of the estimates of losses and costs due to loss-sharing agreements, and the general support systems. Our selection of the systems to evaluate was based on consideration of systems that directly or indirectly support the processing of material transactions that are reflected in the funds financial statements. Our evaluation was based on GAO s Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. 
Using National Institute of Standards and Technology (NIST) standards and guidance and FDIC's policies, procedures, practices, and standards, we evaluated controls by observing methods for providing secure data transmissions across the network to determine whether sensitive data were being encrypted; testing and observing physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; evaluating the control configurations of selected servers and databases; inspecting key servers and workstations to determine whether critical patches had been installed or were up-to-date; and examining access responsibilities to determine whether incompatible functions were segregated among different individuals. Using the requirements of the Federal Information Security Management Act (FISMA), which establishes key elements for an effective agencywide information security program, we evaluated FDIC's implementation of its security program by reviewing FDIC's risk assessment process and risk assessments for key FDIC systems that support the preparation of financial statements to determine whether risks and threats were documented consistent with federal guidance; analyzing FDIC's policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; analyzing security plans to determine whether management, operational, and technical controls were in place or planned and whether security plans were updated; analyzing security testing and evaluation results for six key FDIC systems to determine whether management, operational, and technical controls were tested at least annually and based on risk; and examining remedial action plans to determine whether they addressed vulnerabilities identified in FDIC's security testing and evaluations. We also discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively. To determine the status of FDIC's actions to correct or mitigate previously reported information security weaknesses, we identified and reviewed its information security policies, procedures, and guidance. We reviewed prior GAO reports to identify previously reported weaknesses and examined FDIC's corrective action plans to determine which weaknesses FDIC reported as being corrected. For those instances where FDIC reported it had completed corrective actions, we assessed the effectiveness of those actions. We conducted this audit from November 2010 to August 2011, in accordance with generally accepted government auditing standards. We conducted our data collection, analysis, and assessment procedures in support of the financial audit between November 2010 and March 2011. We conducted supplemental audit procedures to prepare this report from March 2011 to August 2011. Generally accepted government auditing standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Appendix II: Comments from the Federal Deposit Insurance Corporation Appendix III: GAO Contacts and Staff Acknowledgments <6. GAO Contacts> <7. 
Staff Acknowledgments> In addition to the individuals named above, Lon Chin, David Hayes, Charles Vrabel, and Christopher Warweg, Assistant Directors; Gary Austin; Angela Bell; William Cook; Saar Dagani; Nancy Glover; Rosanna Guerrero; Jason Porter; Michael Stevens; and Shaunyce Wallace made key contributions to this report.
Why GAO Did This Study The Federal Deposit Insurance Corporation (FDIC) has a demanding responsibility enforcing banking laws, regulating financial institutions, and protecting depositors. Because of the importance of FDIC's work, effective information security controls are essential to ensure that the corporation's systems and information are adequately protected from inadvertent misuse, fraudulent use, or improper disclosure. As part of its audits of the 2010 financial statements of the Deposit Insurance Fund and the Federal Savings & Loan Insurance Corporation Resolution Fund administered by FDIC, GAO assessed the effectiveness of the corporation's controls in protecting the confidentiality, integrity, and availability of its financial systems and information. To perform the audit, GAO examined security policies, procedures, reports, and other documents; tested controls over key financial applications; and interviewed key FDIC personnel. What GAO Found Although FDIC had implemented numerous controls in its systems, it had not always implemented access and other controls to protect the confidentiality, integrity, and availability of its financial systems and information. FDIC has implemented controls to detect and change default user accounts and passwords in vendor-supplied software, restricted access to network management servers, developed and tested contingency plans for major systems, and improved mainframe logging controls. However, the corporation had not always (1) required strong passwords on financial systems and databases; (2) reviewed user access to financial information in its document sharing system in accordance with policy; (3) encrypted financial information transmitted over and stored on its network; and (4) protected powerful database accounts and privileges from unauthorized use. In addition, other weaknesses existed in FDIC's controls that were intended to appropriately segregate incompatible duties, manage system configurations, and implement patches. An underlying reason for the information security weaknesses is that FDIC had not always implemented key information security program activities. To its credit, FDIC had developed and documented a security program and had completed actions to correct or mitigate 26 of the 33 information security weaknesses that were previously identified by GAO. However, the corporation had not assessed risks, documented security controls, or performed periodic testing on the programs and data used to support the estimates of losses and costs associated with the servicing and disposal of the assets of failed institutions. Additionally, FDIC had not always implemented its policies for restricting user access or for monitoring the progress of security patch installation. Because FDIC had made progress in correcting or mitigating previously reported weaknesses and had implemented compensating management and reconciliation controls during 2010, GAO concluded that FDIC had resolved the significant deficiency in internal control over financial reporting related to information security reported in GAO's 2009 audit, and that the remaining unresolved issues and the new issues identified did not individually or collectively constitute a material weakness or significant deficiency in 2010. However, if left unaddressed, these issues will continue to increase FDIC's risk that its sensitive and financial information will be subject to unauthorized disclosure, modification, or destruction. 
What GAO Recommends GAO recommends that FDIC take two actions to enhance its comprehensive information security program. In commenting on a draft of this report, FDIC discussed actions that it has taken or plans to take to address these recommendations.
<1. Background> The IPPS provides incentives for hospitals to operate efficiently by paying a predetermined, standardized amount for an entire inpatient episode of a given type rather than the actual costs incurred in providing the care. CMS calculates IPPS payments through a series of adjustments applied to separate national base payment rates covering operating and capital expenses. Specifically, the agency adjusts the base payment rates for patients in different diagnosis-related groups, assuming that cases falling into a particular grouping address similar clinical problems that are expected to require similar amounts of hospital services. Next, CMS applies an area wage index to account for geographic differences in labor costs. Finally, CMS determines whether supplemental Medicare payments or other types of special treatment, such as those provided to certain rural hospitals, are applicable. <1.1. Area Wage Index> CMS adjusts hospital payments under IPPS using the area wage index to account for variation in labor costs across the country, as these costs are largely beyond any individual hospital's ability to control. The wage index reflects how average hospital wages in each geographic area compare to average hospital wages nationally, set as 1.0. Thus, Medicare payment to a hospital in an area with lower wages is generally below the national average payment, and the payment to a hospital in a higher wage area is generally above the national average. CMS considers each distinct urban area as a single labor market, but it considers all rural areas within a state as a single labor market and therefore assigns them the same wage index. If its wage index does not fully account for its relative labor costs, a hospital may qualify to be reclassified to a higher wage index area in order to receive higher Medicare payments. To request a reclassification to another geographic area, hospitals may apply to the Medicare Geographic Classification Review Board (the Board), an entity established by Congress. Among various criteria for reclassification, a hospital must demonstrate close proximity to the area for which it is seeking redesignation. <1.2. Medical Education Payments> Medicare reimburses teaching hospitals and academic medical centers for both the direct and indirect costs of their residency training programs. Direct graduate medical education payments cover the direct costs of resident training, such as salaries and benefits. The indirect medical education (IME) adjustment, a percentage add-on to IPPS rates, reflects the higher patient care costs associated with resident education. The size of the IME adjustment depends on the hospital's teaching intensity, which is generally measured by a hospital's number of residents per bed. <1.3. Medicare Disproportionate Share Hospitals> The Medicare disproportionate share hospital (DSH) adjustment generally provides supplemental payments to hospitals that treat a disproportionate number of low-income patients. To qualify for this payment adjustment, a hospital's disproportionate patient percentage (DPP), the share of low-income patients treated by the hospital, must generally equal or exceed a specific threshold level determined by a statutory formula. The amount of the Medicare DSH payment adjustment varies by hospital location and size. <1.4. 
Types of Rural Providers under IPPS> Rural hospitals may qualify for special treatment in determining payment rates under IPPS, although some urban hospitals may also qualify, through three programs: sole community hospitals (SCH), rural referral centers (RRC), and Medicare-dependent hospitals (MDH). In some cases, hospitals may qualify for more than one of these rural provider types, allowing them to receive multiple adjustments to their IPPS payment rates. <2. Numerous Statutory Provisions Have Resulted in Increased Medicare Payments to Certain Hospitals> We identified 16 statutory provisions enacted between 1997 and 2012 that modified Medicare payment for inpatient services in a way that benefitted a subset of hospitals. These provisions allow hospitals to receive adjustments to their wage index, alter classification criteria for supplemental payments or other special treatment, or exclude hospitals from the IPPS. Most of the provisions we identified targeted rural hospitals for increased payment. <2.1. Some Provisions Have Enabled Hospitals to Qualify for a Different Wage Index> We identified seven statutory provisions that have enabled hospitals to receive Medicare payment under a higher or nearby wage index. Some provisions moved hospitals in specific, named counties into different wage index areas, or set a minimum wage index for hospitals meeting certain criteria. Other provisions allow hospitals to reclassify from an urban to a rural area. Still others enable rural hospitals near urban areas to qualify for the higher wage index of the nearby area. Section 4410 of BBA established a rural floor by requiring that the area wage index for a hospital in an urban area of a state could not be less than the area wage indexes for hospitals in that state's rural area. The provision applied to patient discharges beginning in fiscal year 1998, and specified that the implementation of the rural floor must be budget neutral; that is, any changes in the wage index for hospitals subject to the floor may not increase or decrease aggregate Medicare payments for the operating costs of inpatient services. Initially, this upwards adjustment only applied in states with at least one rural IPPS hospital. The rural floor provision increased the wage index for hospitals in urban areas that had been paid under a lower wage index than the rural areas of that state. In order to compensate for the increased wage indexes of urban hospitals receiving the rural floor, CMS initially applied a nationwide budget-neutrality adjustment to account for the additional payment to these hospitals. In fiscal year 2009, CMS changed this policy and began phasing in a revised rural floor budget-neutrality adjustment that would be calculated and applied on a state-by-state basis instead of on a nationwide basis. To do so, the agency blended the nationwide and state-by-state budget neutrality formulas for fiscal year 2009 and fiscal year 2010. As a result, within each state, some hospitals' wage index increased, while other hospitals' wage index decreased, in order to ensure that total Medicare payments to hospitals in that state remained the same. Section 3141 of PPACA reversed this policy by requiring that any adjustments to the wage index must be applied on a budget-neutral basis through a uniform national adjustment beginning in fiscal year 2011. 
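The arithmetic behind the rural floor and the uniform national budget-neutrality adjustment can be illustrated with a simplified sketch. The wage index values, hospital names, and payment amounts below are hypothetical, and the actual CMS calculation involves additional steps not shown; the sketch captures only the two basic mechanics described above: flooring each urban hospital's wage index at its state's rural index, and then scaling all indexes by a single national factor so that aggregate payments are unchanged.

    # Hypothetical illustration only; values are invented, not CMS data.
    state_rural_index = {"State A": 0.95, "State B": 0.88}

    urban_hospitals = [  # (hospital, state, published wage index, base payments in $M)
        ("Hospital 1", "State A", 0.90, 100.0),
        ("Hospital 2", "State A", 1.10, 150.0),
        ("Hospital 3", "State B", 0.92, 120.0),
    ]

    # Step 1: rural floor -- an urban hospital's wage index cannot be lower
    # than the rural wage index of its state.
    floored = [(h, s, max(idx, state_rural_index[s]), pay)
               for h, s, idx, pay in urban_hospitals]

    # Step 2: uniform national adjustment so aggregate wage-adjusted
    # payments are unchanged (budget neutrality).
    before = sum(idx * pay for _, _, idx, pay in urban_hospitals)
    after = sum(idx * pay for _, _, idx, pay in floored)
    scale = before / after

    for h, s, idx, pay in floored:
        print(f"{h}: floored index {idx:.3f}, after national adjustment {idx * scale:.3f}")

In this hypothetical, only Hospital 1 benefits from the floor, but the offsetting downward scaling applies to every hospital nationally, which is why the cost of the floor is spread to states with no qualifying hospitals.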
The application of the national budget-neutrality requirement has resulted in a transfer of Medicare payments from hospitals in states where no hospitals qualified for the rural floor to hospitals in states where at least one hospital qualified for this adjustment. (For information on the impact of this provision by state, see app. I.) In fiscal year 2012, the effect of this PPACA provision was that hospitals in seven states (Alaska, California, Colorado, Connecticut, Massachusetts, New Hampshire, and New Jersey) and Puerto Rico received increased hospital payments; hospitals in Massachusetts received the largest increase in payments (nearly $275 million), five times greater than New Jersey, the next largest recipient; to pay for the rural floor in these states, hospital payments in other states were adjusted downward by as much as 0.5 percent, with a median state reduction of $7.3 million; and hospitals in five states saw declines of over $20 million: New York ($47.5 million), Texas ($34 million), Florida ($29 million), Illinois ($26 million), and Michigan ($21 million). Specific County and Area Reclassifications (Expired) Section 152 of BBRA reclassified hospitals in seven named counties or areas, deeming them to be located in specifically named large metropolitan areas, thus enabling them to qualify for the wage index of that area. This provision benefitted hospitals in these counties that competed for labor with nearby hospitals in higher wage areas. The BBRA limited this reclassification to discharges during fiscal year 2000 and fiscal year 2001. Hospitals in the following specified counties were reclassified: Iredell County, North Carolina, was deemed to be located in the Charlotte-Gastonia-Rock Hill, North Carolina-South Carolina metropolitan statistical area (MSA); Orange County, New York, was deemed to be part of the large urban area of New York, New York; Lake County, Indiana, was deemed to be located in the Chicago, Illinois MSA; Lee County, Illinois, was deemed to be located in the Chicago, Illinois MSA; Hamilton-Middletown, Ohio, was deemed to be located in the Cincinnati, Ohio-Kentucky-Indiana MSA; Brazoria County, Texas, was deemed to be located in the Houston, Texas MSA; and Chittenden County, Vermont, was deemed to be located in the Boston-Worcester-Lawrence-Lowell-Brockton, Massachusetts-New Hampshire MSA. Section 401 of BBRA allowed certain urban hospitals, beginning January 1, 2000, to request to be reclassified as rural hospitals for payment purposes under the IPPS. Generally, these hospitals may seek a lower wage index in a rural area in order to receive higher payments as a rural provider type, such as an SCH. According to CMS, the provision benefits hospitals that are within an urban area, but are isolated from the metropolitan core by distance or physical features. Under the BBRA provision, to qualify for this reclassification, a hospital must submit an application and meet one of the following criteria: be located in a rural portion of an MSA or an area defined as rural by the state; be designated as a rural hospital by the state; would qualify as a rural, regional, or national referral center, or as an SCH, if the hospital were located in a rural area; or meet other criteria established by CMS. By fiscal year 2013, 46 urban hospitals, comprising 1.3 percent of IPPS hospitals, had been reclassified by CMS as rural under this provision. Seven states (California, Florida, Missouri, New York, Pennsylvania, Texas, and Virginia) had more than two qualifying hospitals. 
Section 505 of MMA required that HHS establish a process, beginning in fiscal year 2005, by which the agency may increase the wage index for hospitals located in counties where potential employees commute to higher wage index areas. The provision benefits hospitals located in counties where a higher than average percentage of hospital employees reside in that county but work in another county that has a higher wage index. Hospitals in qualifying counties receive an average of the differences between the higher and lower wage indexes, weighted by the percentage of hospital workers in the qualifying county who work in the higher-wage areas. In the first year of implementation, fiscal year 2005, the wage index increased for 555 eligible hospitals, representing nearly 14 percent of IPPS hospitals; eligible hospitals had an average of 140 beds; California, Texas, and Michigan had the most eligible hospitals, with 89, 44, and 40 hospitals, respectively; Massachusetts, Michigan, and Connecticut had the most qualifying hospitals as a percentage of all hospitals in the state, with 45, 31, and 29 percent, respectively; and Utah, Minnesota, and Georgia benefited from the largest Medicare payment adjustment as a result of their qualifying status. Onetime Appeal of Wage Index Reclassification by the Board (Expired) Section 508 of MMA required HHS to establish a process by January 1, 2004, so that a hospital denied a request to be reclassified to the wage index of another area in its state could submit a onetime appeal to the Board. The provision required the Board to grant appeals of and reclassify qualifying hospitals, defined as those hospitals that did not originally qualify for reclassification on the basis of distance or commuting requirements but met other criteria, such as quality factors, as specified by HHS. The provision limited reclassifications to a three-year period for appeals filed by February 15, 2004. The provision also capped additional Medicare expenditures resulting from these reclassifications at $900 million over the initial 3-year period. We found that, in its first year of implementation, 130 hospitals, or approximately 3 percent of all IPPS hospitals, qualified for this adjustment; and four states (Connecticut, Michigan, North Dakota, and Pennsylvania) had at least 10 qualifying hospitals each. Although originally enacted as a onetime and time-limited provision, the reclassifications made under this provision were extended by Congress numerous times until they expired on March 31, 2012. Section 10324 of PPACA established a hospital wage index floor adjustment, beginning with discharges as of fiscal year 2011, for hospitals located in frontier states. The provision defined a frontier state as one in which at least 50 percent of counties have a population of fewer than 6 people per square mile, and set the wage index in these areas at no lower than 1.0. In other words, while the wage index for all other states is a ratio of the average hourly hospital wage in the area to the national average, if the wage index of a frontier state is lower than the national average, or less than 1.0, this provision adjusts the wage index to 1.0. Prior to implementation, CMS projected that five states would meet the criteria to be designated as a frontier state: Montana, Nevada, North Dakota, South Dakota, and Wyoming; 48 out of 82 IPPS hospitals in those states would be eligible for a modified wage index that is at least 1.0; and IPPS payments would increase by approximately $50 million in the first year. <2.2. 
Provisions Modifying Classification Criteria for IPPS Supplemental Payments or Other Types of Special Treatment> We identified five statutory provisions that have affected the number of hospitals that qualify for IPPS supplemental payments or other types of special treatment. Most of these provisions modified the classification criteria for payment adjustments, thereby expanding the number of hospitals that qualify for higher payments. Section 211 of BIPA revised the Medicare threshold criteria for DSH, resulting in an increased number of hospitals qualifying for a payment adjustment. Effective for discharges as of April 1, 2001, this provision allowed hospitals, regardless of location and size, to receive a DSH adjustment with a DPP of 15 percent or greater. Originally, different types of hospitals qualified for a DSH adjustment on the basis of varying DPP thresholds. For instance, urban hospitals with 100 or more beds qualified for a DSH payment with a minimum DPP of 15 percent, whereas urban hospitals with fewer than 100 beds qualified for a DSH payment adjustment with a minimum DPP of 40 percent. CMS reported that this adjustment of the DPP qualifying threshold added 1,191 primarily rural and small urban hospitals to those already receiving a DSH adjustment; for example, 351 additional rural hospitals with fewer than 100 beds and 244 additional urban hospitals with fewer than 100 beds started receiving a DSH adjustment after implementation; would increase Medicare spending by $60 million from fiscal year 2001 through fiscal year 2002; and would not negatively affect any hospitals. Section 402 of MMA modified the formulas used to calculate the DSH payment adjustment for certain hospitals, thereby increasing payments to these hospitals. Specifically, effective with discharges as of April 1, 2004, the DSH adjustment formula used for large urban hospitals was applied to other types of hospitals, including SCHs, RRCs, other rural hospitals with fewer than 500 beds, and urban hospitals with fewer than 100 beds. In addition, this provision capped this DSH payment adjustment at 12 percent of the hospital s IPPS rate, while exempting RRCs from this cap. For instance, an urban hospital with fewer than 100 beds that qualifies for a DSH payment adjustment of 18.1 percent is capped at the maximum 12 percent payment adjustment. Section 406 of MMA established a new payment adjustment for low- volume hospitals beginning in fiscal year 2005 that accounts for the higher costs per discharge at hospitals that admit a relatively small number of patients. To qualify, hospitals had to be located more than 25 miles from another hospital and have fewer than 800 total discharges annually. The provision required CMS to determine, on the basis of empirical data, applicable percentage increases, not to exceed 25 percent, in payments for qualifying low-volume hospitals. CMS explained in issuing the final rule implementing this MMA provision that the agency analyzed data and determined that hospitals with fewer than 200 discharges a year have sufficiently higher costs relative to payments to justify receiving a payment adjustment, but that hospitals with 200 to 800 discharges a year did not. CMS provided the maximum 25 percent payment adjustment only to those qualifying hospitals that were located more than 25 miles from another hospital and had fewer than 200 discharges in a given year. 
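As a purely illustrative sketch of the classification rules described in this section, the logic below encodes the post-BIPA DSH threshold (a DPP of at least 15 percent), the MMA cap of 12 percent for certain hospital types with RRCs exempt from the cap, and the original MMA low-volume criteria (more than 25 miles from another hospital and fewer than 200 discharges for the full 25 percent add-on). The hospital characteristics are hypothetical, and the actual adjustment formulas contain details not shown here.

    # Hypothetical illustration only; not the actual CMS payment formulas.
    def dsh_adjustment(dpp, computed_adjustment, capped_type, is_rrc):
        """Return the DSH add-on (as a fraction of the IPPS rate), if any."""
        if dpp < 0.15:                  # below the qualifying DPP threshold
            return 0.0
        if capped_type and not is_rrc:  # e.g., an urban hospital with fewer than 100 beds
            return min(computed_adjustment, 0.12)
        return computed_adjustment

    def low_volume_adjustment(miles_to_nearest_hospital, annual_discharges):
        """Return the low-volume add-on under the original MMA criteria."""
        if miles_to_nearest_hospital > 25 and annual_discharges < 200:
            return 0.25
        return 0.0

    # Example from the text: an urban hospital with fewer than 100 beds whose
    # computed DSH adjustment is 18.1 percent is capped at 12 percent.
    print(dsh_adjustment(dpp=0.30, computed_adjustment=0.181,
                         capped_type=True, is_rrc=False))   # -> 0.12
    print(low_volume_adjustment(miles_to_nearest_hospital=30,
                                annual_discharges=150))     # -> 0.25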
CMS reported that CBO estimated this provision would increase Medicare program expenditures by less than $50 million annually, and only three hospitals (one located in Florida and two located in South Dakota) received a low-volume payment adjustment in fiscal year 2005. All were small (30 or fewer beds) rural hospitals classified as SCHs. Section 3125, as amended by section 10314, of PPACA temporarily revised the qualifying criteria for a low-volume hospital designation, making it easier for hospitals to receive the payment adjustment. Effective for fiscal year 2011 and fiscal year 2012, this provision decreased the required distance from the nearest hospital from 25 miles to 15 miles. In addition, it changed the maximum number of annual discharges allowed from 800 total patients to 1,600 Medicare Part A beneficiaries. The provision also required that payment adjustments for qualifying low-volume hospitals be calculated using a continuous sliding scale, paying up to an additional 25 percent to hospitals with 200 or fewer annual Medicare Part A discharges. As a result of these changes, both the number of hospitals receiving the low-volume payment adjustment and the estimated Medicare expenditures rose substantially. CMS reported that over the 2-year period, fiscal year 2011 and fiscal year 2012, the provision was estimated to cost Medicare approximately $880 million; and the number of hospitals that received a payment adjustment rose to approximately 645 (about 18 percent of IPPS hospitals) in fiscal year 2011, from 3 hospitals the year before. We found that, in fiscal year 2011, at least 40 percent of IPPS hospitals in each of 6 states received a low-volume adjustment: Wyoming (67 percent), Vermont (50 percent), New Mexico (43 percent), Minnesota (42 percent), Alabama (41 percent), and Mississippi (41 percent); and recipient hospitals were relatively small, located in rural areas, and likely to also receive a DSH payment adjustment. Section 212 of BIPA modified one aspect of the MDH classification criteria. Effective for cost reporting periods beginning April 1, 2001, it changed the data source used to determine whether at least 60 percent of a hospital's discharges were Medicare beneficiaries. This provision allowed a hospital to base this determination on two of the three most recently audited fiscal year cost reporting periods. Prior to this change, discharge data were based on cost reporting periods beginning in 1987. This provision did not initially have a significant effect on the number of qualifying hospitals or on Medicare payments. CMS estimated that a total of 139 hospitals, all of which had previously been designated as MDHs, would qualify as an MDH using this revised data source. The agency further estimated that Medicare would spend an additional $10 million in the first year of implementation. It is likely that this provision had a greater effect on the number of hospitals qualifying for an MDH classification in later years because it allowed hospitals to use recent, rather than outdated, cost reports. <2.3. Provisions Creating and Modifying Qualifying Criteria for CAHs> In 1997, Congress established the Critical Access Hospital (CAH) program, under which qualifying small rural hospitals are excluded from the IPPS and receive Medicare payment based on the reasonable costs of providing services. In effect, CAHs receive higher payments for providing services to Medicare beneficiaries than they would under the IPPS. 
Specifically, section 4201 of BBA allowed states to apply for approval to create a Medicare Rural Hospital Flexibility program (Flex Program), under which states must designate at least one hospital as a CAH. To be designated as a CAH, hospitals had to meet the following qualifying criteria: type of organization: nonprofit or public hospital; services: must provide 24-hour emergency services deemed necessary for ensuring access in each area served by the CAH; location: rural county or other rural area in states with approved Flex Programs; size: no more than 15 acute care inpatient beds; average inpatient stay: no more than 4 days (subject to certain exceptions); patient access: (a) located either more than 35 miles from the nearest hospital or CAH, or more than 15 miles in areas with mountainous terrain or only secondary roads, or (b) designated by the state as a necessary provider of health care services to residents in the area. Prior to implementation, CMS reported that approximately 51 facilities in seven states (those participating in the demonstration program that preceded the CAH program) would be eligible to become CAHs. While the CAH program was expected to grow, CMS was not able to estimate reliably how many additional states would choose to participate or the potential cost of the CAH program to Medicare. Section 403 of BBRA made a number of changes to qualifying criteria for the CAH program. First, the provision changed the inpatient length of stay requirement from a maximum of 4 days to an annual average of 4 days. Second, the provision removed the requirement that eligible hospitals must be nonprofit or public, allowing for-profit hospitals to qualify as CAHs, if so approved by their state. Third, the provision permitted a state to designate as a CAH not only a currently operating hospital, but also closed facilities or facilities that were previously hospitals but currently operate as a state-licensed health clinic or health center, if the facilities meet all other criteria. Data indicate that in the 2 years following the enactment of BBRA in 1999, 411 hospitals were newly designated as CAHs. Section 405 of MMA made additional changes to the CAH program, including raising the maximum number of acute care inpatient beds to 25 and modifying how CAHs are paid (on a reasonable cost basis). See MMA, Pub. L. No. 108-173, 405(e), (g), (h), 117 Stat. at 2266 (codified, as amended, at 42 U.S.C. 1395i-4(c)(2), (h)). Effective January 1, 2006, the provision also eliminated the ability of a state to designate a hospital as a necessary provider, so that states could no longer waive the 35-mile distance requirement to designate a hospital as a CAH. However, the provision grandfathered all CAHs that had already received their CAH status by being designated as a necessary provider. CBO estimated that these changes to the CAH program under this provision would increase Medicare program expenditures by approximately $100 million annually, according to CMS. Data show that in 2004 and 2005, the 2 years following enactment of MMA, 422 new CAHs joined the program. However, the number of new CAH designations dropped sharply in 2006 with the elimination of the necessary provider designation. (See fig. 4.) The CAH program has grown to a total of 1,328 hospitals as of 2012. The majority of CAHs have the maximum 25 inpatient beds. In addition, CAHs are largely concentrated in the central states, although all but five states have at least one CAH. We found that the states with the largest percentage of hospitals designated as CAHs are North Dakota and Montana, with about 84 percent and 79 percent, respectively. (See fig. 5.) 
Furthermore, according to MedPAC, 17 percent of CAHs are 35 or more miles from the nearest hospital, 67 percent are between 15 miles and 35 miles from the nearest hospital, and 16 percent of CAHs are fewer than 15 miles from the nearest hospital. These data indicate that not all CAHs meet current qualifying criteria. Section 1109 of HCERA authorized $400 million in Medicare payments to qualifying hospitals in low-spending counties over 2 years, fiscal year 2011 and fiscal year 2012. This provision defined qualifying hospitals as acute care hospitals located in a county that ranked within the lowest quartile of age-, sex-, and race-adjusted spending per beneficiary enrolled in fee-for-service Medicare parts A and B. CMS allocated the additional payments to each qualifying hospital in proportion to its share of Medicare inpatient payments for all qualifying hospitals, based on fiscal year 2009 IPPS payments for operating expenses. In implementing this provision, CMS made payments to about 400 hospitals, which accounted for 11 percent of all IPPS hospitals and approximately 8 percent of IPPS beds; Medicare expenditures increased $150 million in fiscal year 2011 and $250 million in fiscal year 2012; on average, qualifying hospitals had 135 to 137 beds; half of qualifying hospitals were in urban areas and half were in rural areas; and states with the most hospitals receiving this payment were New York (50 hospitals), Wisconsin (40 hospitals), Virginia (31 hospitals), Oregon (21 hospitals), and Iowa (20 hospitals). <3. IPPS Payment Adjustments or Exclusions Affected Nearly All Hospitals> We found that, in 2012, payment adjustments to, or exclusions from, the IPPS affected nearly all of the 4,783 hospitals in our review. Of these hospitals (IPPS hospitals and CAHs), 91 percent were subject to a payment adjustment under the IPPS or were excluded from the IPPS entirely. Specifically, 3,039 hospitals, or over 63 percent, qualified for at least one of the following four types of payment adjustments under the IPPS: a DSH adjustment, an IME adjustment, a wage index adjustment, or a rural provider type adjustment for an RRC, SCH, or MDH designation; 1,328 hospitals, or about 28 percent, qualified as CAHs, excluding them from the IPPS; and 416 hospitals, or about 9 percent, received IPPS payments that were unadjusted for the modifications included in our review. Among the 3,455 IPPS hospitals, the vast majority qualified for one of the four categories of upward payment adjustment in 2012. The DSH adjustment had the broadest reach, affecting payments to about 4 in 5 IPPS hospitals. Nearly 1 in 3 hospitals qualified for an IME adjustment, 1 in 3 qualified for a wage index adjustment, and almost 1 in 5 received a rural provider type adjustment. Each of these categories of increased payment benefited hospitals in both urban and rural areas. The DSH and IME adjustments applied mostly to urban hospitals' payments, whereas the wage index adjustment was applied to more rural hospitals' payments. Although rural provider type adjustments generally supported payment to rural hospitals, about 20 percent of recipients were urban hospitals. (See table 1.) Among the 3,039 IPPS hospitals receiving additional payment, the majority qualified for more than one category of payment adjustment. In 2012, roughly half of IPPS hospitals received two forms of adjustments and 13 percent qualified for three forms of adjustments. 
Two percent of IPPS hospitals qualified for four forms of adjustments, but no state had more than 10 hospitals qualifying for four forms of increased payment. The remaining IPPS hospitals, almost a third of the total, qualified for a DSH adjustment alone. By location, we found that most hospitals in urban areas qualified for one or two forms of increased payment, whereas most hospitals in rural areas qualified for two or more forms of additional pay. (See fig. 6.) Of the 416 hospitals that did not qualify for IPPS payment adjustments, nearly all were in urban areas and were distributed across most states. Generally, they were substantially smaller than the average urban hospital, typically having 98 beds compared with 224 beds. The multiple types of adjustments to Medicare payments vary in their financial effect, and can substantially affect a hospital's revenue. Take, for example, a beneficiary who undergoes coronary bypass surgery with angioplasty and has a major complication or comorbidity. When this surgery is performed at a teaching hospital in a large urban area that treats a high percentage of low-income patients, the total operating payment from Medicare comes to about $63,600. Specifically, in fiscal year 2013, the IPPS operating base rate for that case, adjusted for the local wage index, is approximately $41,500. Added to that amount is about $14,100 for IME and roughly $8,000 for Medicare DSH. Thus, the two payment adjustments increase the amount this hospital would receive for this discharge by more than 50 percent. <4. Concluding Observations> The IPPS streamlines how Medicare pays hospitals and gives hospitals an incentive to economize by paying a fixed amount, set in advance. Over time, however, numerous statutory provisions have been enacted that provide, grandfather, or extend additional payments to IPPS hospitals or exclude a substantial number of hospitals from the IPPS altogether. This piecemeal approach to modifying the original IPPS, a patchwork of individual fixes, has had the cumulative effect of most hospitals receiving modifications and add-ons to the basic payment formula that increase Medicare spending. In fact, over 90 percent of hospitals were subject to either IPPS payment adjustments or exemptions in 2012. These changes address characteristics of the hospital market such as competition for labor, challenges to rural hospitals, and the need to support Medicare-participating hospitals in certain markets. In addition, organizations such as IOM and MedPAC have recently made recommendations to strengthen the data used in geographic adjustments, and to hone the targeting of rural special payment adjustments. Taken together, these findings and recommendations suggest that, 30 years after the IPPS was implemented, the way Medicare currently pays hospitals may no longer ensure that the goals of the payment system (cost control, efficiency, and access) are being met. <5. Agency Comments> HHS reviewed a draft of this report and did not have any general comments. The agency provided technical comments, which we incorporated where appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. We will send copies of this report to the Secretary of Health and Human Services. The report will also be available at no charge on our website at http://www.gao.gov. 
If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Estimated Effect of Nationwide Budget Neutrality of Rural Floor on Hospital Payments by State, Fiscal Year 2012 and 2013 <6. Fiscal year 2012> <7. Fiscal year 2013> Appendix II: Hospitals by Provider Type Appendix III: GAO Contact and Staff Acknowledgments <8. GAO Contact> <9. Staff Acknowledgments> In addition to the contact named above, Rosamond Katz, Assistant Director; Alexander Galuten; Katherine Perry; Kathryn Richter; and Hemi Tewarson made key contributions to this report.
Why GAO Did This Study To help control the growth of hospital spending, give hospitals an incentive to provide care efficiently, and ensure beneficiary access, Congress created the IPPS in 1983. Yet, Congress can enhance Medicare payments to certain hospitals by changing the qualifying criteria for IPPS payment categories, creating and extending exceptions to IPPS rules, or by exempting certain types of hospitals from the IPPS. The Institute of Medicine and the Medicare Payment Advisory Commission have stated that such practices undermine the integrity of the IPPS. GAO was asked to review legislation that altered payments to certain hospitals. In this report, GAO (1) identified provisions of law that enhanced Medicare payments for only a subset of hospitals and (2) examined the extent to which hospitals qualified for adjustments to the IPPS or exemptions from the IPPS in 2012. To conduct this work, GAO reviewed provisions enacted from 1997 to 2012 to identify those that adjusted payments to a subset of IPPS hospitals or exempted hospitals from the IPPS. GAO analyzed data to learn the number, location, and size of hospitals affected by these provisions and budgetary estimates for the first year of implementation, where available. GAO also analyzed 2012 data on 4,783 general hospitals to determine the number and types of adjustments they received, the extent to which they qualified for multiple adjustments, and the number exempted from the IPPS. The Department of Health and Human Services reviewed a draft of this report, and provided technical comments, which we incorporated as appropriate. What GAO Found Over time, Congress has modified how Medicare reimburses certain hospitals under the inpatient prospective payment system (IPPS), which pays hospitals a flat fee per stay, set in advance, with different amounts for each type of condition. GAO identified numerous statutory provisions that individually increased Medicare payments to a subset of hospitals. Seven provisions enabled hospitals to be paid under a different geographic wage index, which is used to address variation in labor costs. Five provisions modified the classification criteria allowing IPPS hospitals to qualify for supplemental payments through the Medicare disproportionate share hospital (DSH) program or other types of special treatment. Three provisions created and modified criteria for classifying small rural providers as Critical Access Hospitals (CAH), which are exempt from IPPS and instead are paid under an alternative methodology. In general, while such provisions were designed to affect only a subset of hospitals, nearly all of the 4,783 hospitals in GAO's review qualified for an adjustment or exemption from the IPPS in 2012. About 91 percent were subject to an IPPS payment adjustment or were excluded from the IPPS entirely. Most hospitals, over 63 percent, qualified for at least one of four categories of increased payment, with DSH payments being the most common. Under the CAH program, 28 percent of hospitals were exempt from the IPPS. The remaining hospitals, 9 percent, received IPPS payments that were unadjusted for the modifications included in GAO's review. Moreover, many IPPS hospitals qualified for multiple categories of payment adjustments. These findings suggest that the way Medicare currently pays hospitals may no longer ensure that the goals of the IPPS--cost control, efficiency, and access--are being met.
<1. Background> The U.S. district courts are the trial courts of the federal court system. There are 94 federal judicial districts (at least one for each state, the District of Columbia, and four U.S. territories), organized into 12 regional circuits. Each circuit has a court of appeals whose jurisdiction includes appeals from the district and bankruptcy courts located within the circuit, as well as appeals from decisions of federal administrative agencies. The Administrative Office of the United States Courts (AOUSC) within the judicial branch carries out a wide range of services for the federal judiciary, including capital-planning. The Judicial Conference of the United States (Judicial Conference) supervises the Director of the AOUSC, is the principal policy-making body for the federal judiciary, and recommends national policies and legislation on all aspects of federal judicial administration. Federal courthouses can house a variety of appellate, district, senior district, magistrate, or bankruptcy judges, as well as other court and non-court-related tenants. Prior to 2008, the judiciary did not require judges to share courtrooms, except in situations where the courthouse was out of space. In 2008, the Judicial Conference adopted policies requiring courtroom-sharing (1) between senior district judges and (2) between magistrate judges. In 2011, the Judicial Conference adopted a courtroom-sharing policy for bankruptcy judges. These policies apply to new courthouse projects and to existing courthouses when there is a new space need that cannot otherwise be accommodated. (See app. II for more information on the judiciary's courtroom-sharing policies.) The judiciary has also been studying the feasibility of an appropriate sharing policy for district judges in courthouses with more than 10 district judges, but it has not yet finalized a policy and could not tell us when or if it expected to do so. Our 2010 report examined judiciary data on courtroom usage and found that there are additional opportunities for significant cost savings through courtroom-sharing, particularly for district judges. Appellate judges, however, have always shared courtrooms because they sit in panels of three or more. Under the judiciary's previous capital-planning process, a project's urgency was based on factors that included security and operational deficiencies in the existing courthouse and the current number of judges who do not have a permanent courtroom and chambers in the existing courthouse, plus the projected number of judges over the 10-year planning period who will not have a courtroom and chambers. From fiscal years 2005 to 2006, as a cost containment initiative, the judiciary imposed a moratorium on new courthouse construction while it reevaluated its capital-planning process. In 2008, the judiciary began using a new capital-planning process, called the Asset Management Planning (AMP) process, to assess, identify, and rank its space needs. According to judiciary officials, the AMP process addresses concerns about growing costs and incorporates best practices related to capital-planning. The AMP process includes several steps, beginning with the completion of a district-wide Long Range Facilities Plan (LRFP). Collectively, the AMP process: documents courthouse space conditions and district space needs based, in part, on the judiciary's AMP process rules and building standards as specified in the U.S. Courts Design Guide; identifies space needs on a building-specific and citywide basis; and develops housing strategies that can include construction of a new courthouse or annex and renovation projects.
The AMP process results in an urgency score for construction or renovation based primarily on the current and future need for courtrooms and chambers and the condition assessment of the existing building (see app. III). The AMP process establishes criteria for qualifying for new courthouse construction, such as requiring that an existing courthouse have a chamber for each judge and need two or more additional courtrooms. Judiciary officials told us that, unlike under the previous capital-planning process, a new courthouse could no longer be justified as part of the AMP process based solely on security or operational deficiencies. The judiciary has chosen to improve security within existing courthouses rather than replace them with new courthouses. After the Judicial Conference identifies courthouse projects, GSA conducts feasibility studies to assess alternatives for meeting the judiciary's space needs and recommends a preferred alternative. The judiciary adopts the GSA-recommended alternative, which may differ from the alternative recommended in the AMP process. For example, a project may not qualify for new courthouse construction under the AMP process, but GSA may determine through its feasibility study that new construction is the most cost-efficient, viable solution. See figure 1 for the judiciary's current process for selecting and approving new courthouse construction projects. Part of the judiciary's capital-planning, under both the previous and current processes, has been to periodically communicate its facility decisions for construction projects via a document known as the Five Year Courthouse Project Plan (5-year plan). The 5-year plan is a one-page document that lists proposed projects by fiscal year and the estimated costs for various project phases (site acquisition, design, or construction) as approved by the Judicial Conference. The judiciary uses the plan to communicate its most urgent projects to Congress and other decision makers. Previously, we found that the judiciary's 5-year plans did not reflect the most urgently needed projects and lacked key information about the projects selected, such as a justification for each project's priority level. GSA reviews its courthouse studies with the judiciary and forwards approved projects for new courthouses to the Office of Management and Budget (OMB) for review. If approved by OMB, GSA then submits requests to congressional authorizing committees for new courthouse projects in the form of detailed descriptions, or prospectuses, authorizing acquisition of a building site, building design, and construction. Following congressional authorization and the appropriation of funds for the projects, GSA manages the site, design, and construction phases. After occupancy, GSA charges federal tenants, such as the judiciary, rent for the space they occupy and for their respective share of common areas, including mechanical spaces. In fiscal year 2012, the judiciary's rent payments to GSA totaled over $1 billion for approximately 42.4 million square feet of space in 779 buildings that include 446 federal courthouses. Before Congress makes an appropriation for a proposed project, GSA submits detailed project descriptions, called prospectuses, to the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure for authorization by these committees when the proposed construction, alteration, or acquisition of a building to be used as a public building exceeds a specified dollar threshold.
For purposes of this report, we refer to these committees as authorizing committees when discussing the submission of the prospectuses and providing additional information relating to prospectuses to these committees. Furthermore, for purposes of this report, we refer to approval of these projects by these committees as congressional authorization. See 40 U.S.C. 3307. The judiciary did not expect to finish evaluating all of the courthouses until October 2015 and estimated it would take another 18 to 24 months to complete the LRFPs, dependent upon the availability of funding. <2. AMP Process Partially Aligns with Several Leading Practices but Does Not Provide Needed Information to Decision Makers> <2.1. AMP Process Partially Aligns with Several Leading Practices in Capital Planning> The AMP process, which the judiciary has applied to about 67 percent of its courthouses, represents progress by the judiciary in aligning its capital-planning process with leading capital-planning practices, but the document the judiciary uses to request courthouse construction projects lacks transparency and key information. We have previously reported that prudent capital-planning can help agencies maximize limited resources and keep capital acquisitions on budget, on schedule, and aligned with mission needs and goals. Figure 2 summarizes leading capital-planning practices and our assessment of the extent to which the AMP process aligns with those practices. For our analysis of the judiciary's planning practices, we focused on the judiciary's implementation of the concepts that underlie the planning phase of OMB and GAO guidance, including linking capital-planning to an agency's strategic goals and objectives and developing a long-term capital investment plan. Several aspects of the AMP process partially align with leading capital-planning practices, but none fully aligns, and the 5-year plan aligns only to a limited extent, which we discuss further in this report. Here are some examples to illustrate partial alignment: Strategic Linkage. The judiciary's strategic plan links to its management of capital assets, but the AMP process does not link to the strategic plan. For example, the AMP process documents we reviewed did not explain how the process helps achieve the goals and objectives in the judiciary's current strategic plan, which are organized around seven issues: providing justice; the effective and efficient management of public resources; the judiciary workforce of the future; harnessing technology's potential; enhancing access to the judicial process; the judiciary's relationships with the other branches of government; and enhancing public understanding, trust, and confidence. However, after our review, a judiciary official told us that the Long Range Facility Plans (LRFP) currently under development would include a reference to the strategic plan. Needs Assessment and Gap Identification. The AMP process has improved the judiciary's needs assessment and gap analysis by establishing a comprehensive, nationwide 328-factor study for every courthouse, whereas the previous process was not as comprehensive and only assessed courthouses when requested by a local judicial district. The AMP process evaluates the degree to which existing facilities support court operations by applying space functionality standards, security, and building condition factors. However, cost estimates supporting the judiciary's needs are incomplete, as discussed later in this report. Alternatives Evaluation.
The AMP process establishes a review and approval framework with criteria for justifying new construction, whereas none existed in the previous process. The AMP process evaluates some alternatives, such as renovating existing courthouses to meet needs, but it is unclear if the judiciary considered other options, such as courtroom-sharing in the existing courthouse. Assessing a wide range of alternatives would help the judiciary ensure that it evaluated other, less costly, approaches to bridging the performance gap before recommending new construction. Review and Approval Framework with Established Criteria for Selecting Capital Investments. The AMP process includes a review and approval framework with criteria, such as courthouses needing two or more courtrooms to qualify for a new courthouse project. However, courtroom deficits are not apparent in most projects reported in the 5-year plan. Long-Term Capital Investment Plan. Judiciary officials with whom we spoke agreed that the 5-year plan is not a long-term capital investment plan, but it is what the judiciary uses to document its request for new courthouse construction to decision makers. The one-page 5-year plan document does not reflect the depth of the AMP process, describe all other projects that the judiciary considered, or indicate how the projects chosen will help fulfill the judiciary's mission, goals, and objectives. Two courthouse projects illustrate how the AMP process has changed the way the judiciary evaluates its need for new courthouses. Specifically, two projects listed on a previous 5-year plan (covering fiscal years 2012 through 2016) were re-evaluated under AMP: San Jose, California, and Greenbelt, Maryland. Both had ranked among the top 15 most urgent projects nationwide under the previous capital-planning process, and as such, the judiciary prioritized them for new construction in 2010. However, after the judiciary evaluated the San Jose and Greenbelt projects under the AMP process, their nationwide rankings fell to 117th and 139th, respectively. Judiciary officials explained that this drop was largely because of the completion of additional AMP assessments, coupled with reduced space needs because of courtroom-sharing. Following the change in rankings, GSA and the judiciary determined that the judiciary's needs could alternatively be addressed through repair and alteration projects that reconfigure existing space. The judiciary added that its decision saved taxpayer money. As a result, at the request of the judiciary, the Judicial Conference of the United States removed the two projects from the 5-year plan. <2.2. Current 5-Year Plan Lacks Transparency, and $1-Billion Cost Estimate Is Not Comprehensive> The judiciary's current 5-year plan, the end product of the judiciary's capital-planning process, does not align with leading practices for a long-term capital investment plan in a number of ways. The plan does not provide decision makers with detailed information about proposed construction projects or how they were selected. The one-page document lists each project by city name, year, and dollar estimate for the next phase of the project's development, as shown in figure 3. The one-page plan also provides each project's urgency score from the judiciary's capital-planning process. However, the document does not specify whether the scores were developed under the old process or the AMP process.
Unlike a long-term capital investment plan, usually the end product under leading capital-planning practices, the 5-year plan lacks complete cost and funding information, linkage to the judiciary's strategic plan, and information on why projects were selected. Specifically, while courthouses provide facilities for the judiciary to accomplish goals set out in its strategic plan, such as enhancing access to the judicial process, the 5-year plan contains no mention of the strategic plan. In addition, the 5-year plan does not include a discussion of the AMP process and criteria; a schedule of when the AMP process will be completed; and details on the alternatives considered during the process, such as whether the judiciary's courtroom-sharing policy was applied prior to requesting a new courthouse project. The 5-year plan is not transparent and does not provide key funding information, such as total estimated project costs. Specifically, it lists about $1.1 billion in estimated costs, which are the funds needed for that specific 5-year period. However, these costs only include part of the project phases. The estimated cost of all project phases (site acquisition, building design, and construction) comes to $1.6 billion in 2013 dollars. In addition, while no longer included in the 5-year plan, the judiciary estimated that it would need to pay GSA $87 million annually in rent, or $1.6 billion over the next 20 years, to occupy these courthouses if constructed. Table 1 describes our analysis of the judiciary's data for the estimated cost of all phases and projected rent costs, which total almost $3.2 billion. However, even though the $3.2-billion estimate provides a more complete presentation of the project costs, that estimate could change based on GSA's redesign of projects because of changes in the judiciary's needs. In addition, the $3.2-billion estimate does not include life-cycle costs, such as furniture and GSA disposal of existing facilities, which would also have to be included for the cost estimate to be comprehensive. GAO and OMB have established that estimates of life-cycle costs are necessary for accurate capital-planning. In addition, the 5-year plan does not show the amount of funding already provided for all of the projects. Since fiscal year 1995, Congress has appropriated about $177 million of the estimated $1.6 billion needed for 10 of these projects' phases, mostly for site acquisitions and designs. None of the projects has begun construction, and only the Mobile project has received any construction funding (see fig. 4). We found that the 5-year plan does not align with the leading practice of considering the risks involved in acquiring new courthouses. Specifically, the plan does not inform stakeholders that 11 of the 12 projects require further design before construction can begin. According to GSA officials, the agency has not received funding for the design of two projects (Chattanooga and Des Moines). Of the remaining 10 projects that have design funding, 1 is in the design process and 9 are on hold. According to GSA officials, some of the projects on hold must be re-designed to accommodate policy and other requirements relating to, for example, changes such as courtroom-sharing and energy management. For example, the design of the Savannah courthouse project was completed in 1998 and now needs extensive re-design to accommodate changes mandated by policy shifts, including improved security and a reduction in the number of courtrooms needed.
GSA officials said that only the design of the Nashville project, though oversized by one floor, is likely to remain largely intact because it would be more cost-effective to rent the additional space to other tenants than to completely re-design the project. In February 2012, the judiciary submitted its 5-year plan to Congress and other decision makers. As a result, there is a risk that funding decisions could be made without complete and accurate information. Congress would benefit from having information based upon a long-term capital investment plan for several reasons. Specifically, transparency about future priorities could allow decision makers to weigh current-year budget decisions within the context of projects' expected future costs. In the case of the judiciary, which has identified a number of future courthouse projects estimated to cost several billion dollars, full transparency regarding these future priorities may spur discussion and debate about actions Congress can take to address them. Additionally, transparency regarding future capital costs would put the judiciary's priorities in context with federal spending. There is widespread agreement that the federal government faces formidable near- and long-term fiscal challenges. GAO has long stated that more transparent information and better incentives for budget decisions, involving both existing and proposed programs, could facilitate consideration of competing demands and help put U.S. finances on a more sustainable footing. <3. Most Courthouse Projects Were Not Evaluated under AMP Process and Do Not Meet AMP Criterion for New Construction> <3.1. Judiciary Has Not Evaluated Most 5-Year Plan Projects under the AMP Process> The judiciary has not applied the AMP process to 10 of the 12 construction projects on the current 5-year plan dated September 2012. These 10 projects were evaluated under the judiciary's prior capital-planning process and approved based on their urgency levels as determined under that process. Judiciary officials said that they did not want to delay the projects or force them to undergo a second capital-planning process review because the judiciary had already approved the projects. Only 2 projects on the current 5-year plan (2014 to 2018) were assessed under the AMP process: Chattanooga, Tennessee, and Des Moines, Iowa. Judiciary officials said these projects were added to the 5-year plan in September 2010 because they had the highest priority rankings of the projects that had undergone an AMP review at that time. Judiciary officials explained that these projects also had GSA feasibility studies that recommended new construction. However, the Chattanooga and Des Moines projects have not retained their top rankings as the judiciary has continued to apply the AMP process to additional courthouses. Specifically, judiciary documents show that more than a dozen other projects not included on the 5-year plan now rank above the Chattanooga and Des Moines projects, and new construction is recommended for six of them. For example, we visited the federal courthouse in Macon, Georgia, which now ranks higher than either the Chattanooga or Des Moines projects. The Macon courthouse suffers from numerous operational and security issues typical of historic courthouses, but it is not included on the 5-year plan.
As we previously noted, the judiciary also applied the AMP process to 2 other projects that were included on an older 5-year plan (2012 to 2016), San Jose and Greenbelt, and subsequently removed them after the projects received substantially lower priority rankings, as shown in appendix IV. The change in the rankings of the 4 projects calls into question the extent to which the projects remaining on the 5-year plan represent the judiciary's most urgent projects and whether proceeding with these projects while hundreds of AMP reviews remain to be done represents the most fiscally responsible path. We recognize that conducting AMP reviews of the 10 projects on the 5-year plan would involve additional costs; however, not conducting AMP reviews on these projects could involve spending billions of dollars over the next 20 years on courthouses that may not be the most urgent projects. While the AMP process only partially aligns with leading practices in capital-planning, it is a significant improvement over the capital-planning process the judiciary used to choose 10 of the 12 projects on the 5-year plan. Assessing the 10 projects with the AMP process could help ensure that projects on the 5-year plan do, in fact, represent the judiciary's most urgent projects. <3.2. Most Projects Do Not Qualify for a New Courthouse under the AMP Courtroom Criterion> We found that 5 of the projects on the list currently need additional courtrooms, and of those, only the Charlotte and Greenville projects would qualify under the AMP criterion because both need three additional courtrooms (see table 2). We did not assess whether the shortage of courtrooms alone is the most appropriate criterion for requesting new construction from GSA, but the establishment of a clear criterion adds an element of transparency that was lacking in the judiciary's previous capital-planning process. We visited two courthouses on the current 5-year plan that were selected as new construction projects under the prior capital-planning process, Savannah and Anniston, built in 1899 and 1906, respectively. These historic courthouses qualified for new construction under the previous process because of space needs and security and operational deficiencies stemming from their age, condition, and building configuration. According to judiciary and GSA officials, neither courthouse meets Design Guide standards for (1) the secure circulation of prisoners, the public, and courthouse staff and (2) the adjacency of courtrooms and judges' chambers. However, neither of these courthouses would qualify for new construction under the AMP criterion, as both have a sufficient number of existing courtrooms for all the judges. Specifically, the Savannah and Anniston courthouses each have enough courtrooms for all assigned judges to have exclusive access to their own courtroom. Savannah currently houses one district judge, one senior district judge, one magistrate judge, and one bankruptcy judge. Figure 5 shows two courtrooms in the Anniston courthouse, which currently houses one senior district judge and one bankruptcy judge. As discussed, the judiciary's courtroom-sharing policies for senior district, magistrate, and bankruptcy judges allow it to reduce the scope of its courthouse projects and contributed to the cancellation of other courthouse projects. The judiciary has also been studying a courtroom-sharing policy for district judges but has not yet finalized a policy and could not provide a date when and if it planned to do so.
Our 2010 report, based on judiciary data on courtroom scheduling and use, showed that judges of all kinds, including district judges, could share courtrooms without delaying any scheduled events, and it recommended that the judiciary expand courtroom-sharing to more fully reflect the actual scheduling and use of district courtrooms. Specifically, judiciary data showed that three district judges could share two courtrooms or that a district judge and a senior district judge could share one courtroom. If district judges shared courtrooms in this way, the judiciary would have a sufficient number of courtrooms in all of the 12 proposed projects in the 5-year plan, based on the AMP criterion. In responding to our recommendation, the judiciary stated that our 2010 report oversimplified the complex task of courtroom-sharing by assuming that judicial proceedings were more certain and predictable than they are. We addressed the uncertainty of courtroom scheduling by (1) accounting for unused scheduled time as if the courtroom were actually used and (2) providing additional unscheduled time in courtrooms. Since potential courtroom-sharing among district judges could reduce the need for additional courtroom space and affect whether projects meet the AMP criterion for qualifying for new courthouse construction, it is important for the judiciary to finalize its position and policy on courtroom-sharing, as we previously recommended. <4. Conclusion> With the development and implementation of the AMP process, the judiciary's capital-planning efforts partially align with several leading practices. The AMP process has the potential to provide a wealth of information on the judiciary's existing facilities and to assess and rank the need for new construction based on measurable criteria. However, the 5-year plan submitted for approval of several billion dollars' worth of projects, a one-page list of projects with limited and incomplete information, does not support the judiciary's request for courthouse construction projects. For example, the AMP process introduces a criterion for when new construction is warranted (when two or more courtrooms are needed), but the 5-year plan does not show how this criterion applies to the recommended projects. Furthermore, the 5-year plan has underestimated the total costs of these projects by about $2 billion because it does not include all project phases and because the judiciary no longer includes its rent costs on the 5-year plan. Additionally, construction has not begun on any of the 12 courthouse projects on the 5-year plan, and most need to be redesigned to meet current standards. Given the fiscal environment, the judiciary and the Congress would benefit from more detailed information about courthouse projects and their estimated costs than the judiciary currently provides. Such information would enable the judiciary and Congress to better evaluate the full range of real property priorities over the next few years and, should fiscal constraints so dictate, identify which should take precedence over the others. In short, greater transparency would allow for more informed decision making among competing priorities. Current fiscal challenges also require that the federal government focus on essential projects. While the judiciary has made significant strides in improving its capital-planning process, most of the 12 projects listed on the 5-year plan are products of its former process. It is possible that some of the 12 projects do not reflect the most urgent capital investment needs of the judiciary under its current criteria.
Two projects on a previous 5-year plan that were assessed under the AMP process were removed from the list and now rank well down on the judiciary's list of priorities, but the judiciary has not applied the AMP process to 10 courthouses on the current 5-year plan dated September 2012. Furthermore, 10 of the 12 projects on the current 5-year plan do not need enough additional courtrooms to qualify for new construction under the AMP courtroom criterion. In addition, there is no evidence that the judiciary considered how it could meet the need for courtrooms without new construction if district judges shared courtrooms. Although there would be some incremental costs involved with an additional 10 AMP reviews, those costs appear justified given the billions involved in moving forward with the construction of those 10 courthouses. Similar to the 2-year moratorium the judiciary placed on courthouse construction while it developed the AMP process, it is not too late to apply the AMP process to the 5-year plan projects and possibly save taxpayers from funding construction of projects that might not represent the judiciary's highest priorities under current criteria. It is critical that the judiciary accurately determine its most urgent projects because of the taxpayer cost and the years of work involved in designing and constructing new courthouses. <5. Recommendations> To further improve the judiciary's capital-planning process, enhance transparency of that process, and allow for more informed decision making related to the federal judiciary's real property priorities, we recommend that the Director of the Administrative Office of the U.S. Courts, on behalf of the Judicial Conference of the United States, take the following two actions: 1. Better align the AMP process with leading practices for capital-planning. This should include linking the AMP process to the judiciary's strategic plan and developing and sharing with decision makers a long-term capital investment plan. In the meantime, future 5-year plans should provide comprehensive information on new courthouse projects, including: a) a summary of why each project qualifies for new construction and is more urgent than other projects, including information about how the AMP process and other judiciary criteria for new courthouse construction were applied to the project; b) complete cost estimates for each project; and c) the alternatives to a new project that were considered, including courtroom-sharing, and why those alternatives were deemed insufficient. 2. Impose a moratorium on projects on the current 5-year plan until AMP evaluations are completed for them, and then request feasibility studies for the courthouse projects with the highest urgency scores that qualify for new construction under the AMP process. <6. Agency Comments and Our Evaluation> We provided copies of a draft of this report to GSA and AOUSC for review and comment. GSA and AOUSC provided technical comments that we incorporated as appropriate. Additionally, AOUSC provided written comments in which it agreed with our recommendation to link the AMP process to the judiciary's strategic plan. However, AOUSC raised a number of concerns that the subpoints related to our first recommendation on improving capital planning would duplicate other judiciary or GSA documents.
Furthermore, AOUSC disagreed with our recommendation to place a moratorium on the projects in the 5-year plan until it could perform AMP evaluations of those projects because, in its view, doing so would take years and not change the result. We continue to believe that our recommendation is sound because the projects included on the 5-year plan were evaluated under the judiciary's previous capital-planning process and evidence suggests they no longer represent the judiciary's highest priorities. Specifically, two projects on a previous 5-year plan that were assessed under the AMP process were removed from the list and now rank well down the judiciary's list of priorities. In addition, 10 of the 12 projects on the current 5-year plan do not qualify for new construction under the AMP process. In response to AOUSC's comments, we made some technical clarifications where noted, none of which materially affected our findings, conclusions, or recommendations. AOUSC's complete comments are contained in appendix V, along with our response to specific issues raised. In commenting on a draft of our report, AOUSC said it would take steps to address our first recommendation to link the AMP process to the judiciary's strategic plan, but it cited concerns about our presentation of information, the accuracy of data, and the subpoints of the first recommendation. Specifically, AOUSC disputed our characterization of the judiciary's role in the capital-planning process for new courthouses and the information provided to Congress to justify new courthouses. According to AOUSC, Congress receives extensive, detailed information on new courthouse projects from GSA, and our recommendation for the judiciary to provide more comprehensive information on courthouse projects in 5-year plans would duplicate GSA's work. AOUSC also disputed our presentation of the AMP process, stating that GAO did not consider all documents when making our conclusions. AOUSC disagreed with our recommendation for a moratorium on all projects currently on the 5-year plan because completing AMP evaluations for those projects would unnecessarily delay the projects and exacerbate existing security and structural issues with the existing courthouses. In AOUSC's view, AMP evaluations for these courthouses would take years and not alter the justification for new construction projects. AOUSC further disputed the data we used to support our conclusions about the projects on the 5-year plan and our explanation of the data's source. AOUSC also questioned our characterization of the judiciary's actions in response to recommendations in a prior GAO report. We believe our findings, analysis, conclusions, and recommendations are well supported. GAO adheres to generally accepted government auditing standards, which ensure the accuracy and relevance of the facts within this report. These standards include a layered approach to fact validation that includes supervisory review of all work papers, independent verification of the facts within the report, and the judiciary's review of the facts prior to our release of the draft report for agency comment. To the extent that the judiciary is questioning any facts, the judiciary had multiple opportunities to provide supporting documentation to substantiate its view. We believe that our description of the roles and responsibilities of the judiciary and GSA in the capital-planning process for new courthouses is correct and appropriate.
In reaching our conclusions about the information provided to Congress, we relied on documents we received from the judiciary and GSA. We continue to believe that by implementing our recommendation about providing additional information to Congress, the judiciary would improve the completeness and transparency of the information that Congress needs to justify and authorize funding of new courthouse projects. We will review AOUSC s steps, once finalized, to address our recommendation that the AMP process be linked to the judiciary s strategic plan. We continue to believe that any steps that AOUSC takes should be aligned with leading practices, including presentation of total project cost estimates and alternatives considered, such as greater courtroom sharing in existing courthouses. With regard to our recommended moratorium on projects on the current 5-year plan, we note that the AMP process represents progress by the judiciary in better aligning its capital-planning process with leading practices. Consequently, we believe that it would be worthwhile to use this improved process to ensure that all courthouse construction proposals remain the judiciary s top priorities and qualify for new construction under the AMP process. The San Jose and Greenbelt projects were approved as among the highest priorities for new construction under the old process but, after being evaluated under the AMP process, now rank far lower on the judiciary s list of priorities 117th and 139th, respectively. We also noted that regardless of whether a project is on the 5-year plan, GSA is responsible for ensuring that courthouses are adequately maintained. We relied on data provided by the judiciary and the GSA to support our analysis of whether the projects on the 5-year plan would qualify under the AMP process, and stand by our conclusions. We used the most current and complete data provided by the judiciary to evaluate the cost of these projects. We will review information provided by the judiciary and determine whether to close the recommendation from our 2010 report at the appropriate time. In response to AOUSC s comments, we clarified the report and added detail to our methodology in appendix I as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, Director of the Administrative Office of the U.S. Courts, the Administrator of GSA and other interested parties. In addition, the report will be available at no charge on GAO s website at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix VI. Appendix I: Objectives, Scope and Methodology This report addresses the following objectives: To what extent does the judiciary s capital-planning process align with leading practices and provide the information needed for informed decision making? To what extent were the courthouse projects recommended for funding in fiscal years 2014 to 2018 assessed under the judiciary s current capital-planning process? 
To evaluate the judiciary's capital-planning process, we collected information on leading capital-planning practices from the Office of Management and Budget's (OMB) Capital Programming Guide and GAO's Executive Guide and compared this information with the AMP process contained in the judiciary's Long Range Facility Plans, Facility Benefit Assessments, Citywide Benefit Assessments, Urgency Evaluations, 5-year plans, and Strategic Plan. We did not review the appropriateness of the criteria used by the judiciary in its AMP process. We reviewed documentation on the status of courthouse construction projects and information about other federal buildings occupied by the judiciary. We reviewed GSA data on actual costs of construction and tenant improvements at two courthouse projects (Las Cruces, NM, and Ft. Pierce, FL), one completed in 2010 and one completed in 2011; and GSA and judiciary estimated costs of construction for the courthouse projects on the most recent 5-year plan, covering fiscal years 2014 to 2018. To determine whether life-cycle cost estimates were provided in the 5-year plan, we assessed the judiciary's data against GAO's Cost Estimating and Assessment Guide. To determine the current dollar value of the judiciary's estimate of courthouse projects' rents, we calculated the present value of the estimated project cost based upon averages of monthly indexes from the U.S. Department of Labor, Bureau of Labor Statistics, and discounted rent using OMB's published 20-year discount rate for analyses. In addition, we interviewed judiciary officials on the AMP process and its alignment with leading capital-planning practices. To analyze the judiciary's capital-planning process, we reviewed our previous reports on capital-planning across the federal government, including the efforts by the judiciary and the Department of Veterans Affairs to communicate their urgent housing needs to Congress. To assess recent courthouse projects recommended for funding under the judiciary's current capital-planning process, we reviewed the judiciary's documents detailing the projects recommended for funding for fiscal years 2009 through 2018, called 5-year plans, and other documents on: congressional authorizations and funding appropriations for courthouse projects; judiciary information on courts and courthouses; and GSA information on federal buildings, existing and planned federal courthouses, courthouse design, and federal historic property. We interviewed judiciary and GSA officials in Washington, D.C., and at federal courthouses we selected in Anniston, AL; Macon, GA; and Savannah, GA. To observe existing courthouses, we selected Anniston and Savannah because they were evaluated under the judiciary's old capital-planning process and are on the most recent 5-year plan, covering fiscal years 2014 to 2018. We selected Macon because it was highly ranked under the judiciary's new capital-planning process and is in close proximity to Anniston and Savannah. While our observations cannot be generalized to all federal courthouses, they provide insights into physical conditions at old historic courthouses. We reviewed documentation provided by the judiciary on strategic planning, capital-planning, existing courthouse evaluations, the rating and ranking of existing courthouse deficiencies, existing and future judgeships, and courtroom-sharing by judges.
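To make the present-value methodology described above concrete, the following sketch reproduces the basic arithmetic with a standard annuity formula. It is an illustration only, not GAO's actual model: the 1 percent discount rate is an assumption chosen for the example (the report states that rent was discounted using OMB's published 20-year rate), and the inputs are the rounded figures cited in this report ($87 million in annual rent over 20 years and roughly $1.6 billion for all project phases).

ANNUAL_RENT_MILLIONS = 87.0   # judiciary's estimated annual rent payment to GSA
YEARS = 20                    # rent horizon cited in the report
DISCOUNT_RATE = 0.01          # assumed rate, for illustration only

def present_value_of_annuity(payment, rate, years):
    # Present value of a level annual payment discounted at a constant rate.
    if rate == 0:
        return payment * years
    return payment * (1 - (1 + rate) ** -years) / rate

rent_pv = present_value_of_annuity(ANNUAL_RENT_MILLIONS, DISCOUNT_RATE, YEARS)
print(f"Present value of 20 years of rent: about ${rent_pv:,.0f} million")  # roughly $1,570 million

# Adding the roughly $1.6 billion estimated for all project phases (site acquisition,
# building design, and construction) approximates the $3.2 billion total in table 1.
ALL_PHASES_MILLIONS = 1600.0
print(f"Combined estimate: about ${rent_pv + ALL_PHASES_MILLIONS:,.0f} million")

At discount rates near 1 percent, the present value of the rent stream comes out close to the $1.6 billion figure cited earlier in this report; higher rates would shrink it, which is one reason the choice of discount rate matters when weighing construction costs against long-term rent obligations.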
To determine the extent to which courthouse projects on the 5-year plan reflect the future judges needed and courtroom-sharing, we compared the judiciary's planned occupancy information to the judiciary's own guidance, our previous work on the judiciary's courtroom-sharing, and a recently proposed bill from the 112th Congress that would have required GSA to design courthouses with more courtroom-sharing. We determined the number of courtrooms in the existing courthouses and compared them to the number of courtrooms needed in the new courthouses using the judiciary's courtroom-sharing policy. We also applied the judiciary's courtroom-sharing policy for new courthouses to existing courthouses. We reviewed documentation provided by GSA on the status of courthouse construction; the status of courthouse projects on the two most recent 5-year plans; and federal buildings and courthouses occupied by the judiciary. We reviewed the judiciary's and GSA's data for completeness and determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from March 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Judiciary's Courtroom-Sharing Policy for New Construction <7. Senior District Judges> Bankruptcy judges. In court facilities with one or two bankruptcy judges, one courtroom will be provided for each bankruptcy judge. In court facilities with three or more bankruptcy judges, one courtroom will be provided for every two bankruptcy judges, rounding down when there is an odd number of judges. In addition, one courtroom will be provided for emergency matters, such as Chapter 11 first-day hearings. Appendix III: Judiciary's Asset-Management Planning Process Urgency-Evaluation Matrix for New Construction Projects
Categories (weight) and descriptions:
Current additional courtrooms needed (15%): Courtrooms needed today. Data are separated by judge type and weighted (district judges 100%, senior district judges 75%, magistrate judges 50%, bankruptcy judges 50%), with courtroom-sharing per Judicial Conference policy.
Future additional courtrooms needed (5%): Courtrooms needed within 15 years, with the same judge-type weights and courtroom-sharing per Judicial Conference policy.
Current additional chambers needed (22.5%): Chambers needed today, with the same judge-type weights and courtroom-sharing per Judicial Conference policy.
Future additional chambers needed (7.5%): Chambers needed within 15 years, with the same judge-type weights and courtroom-sharing per Judicial Conference policy.
Citywide benefit assessment result (40%): In cities where courtrooms and chambers are located in multiple facilities, a citywide benefit assessment is produced. This incorporates the individual Facility Benefit Assessment for each facility; the type and mix of facility ownership; and the fragmentation of court operations on a citywide basis. In cities with a single courthouse, the Facility Benefit Assessment is the same as the citywide assessment and covers 328 items in four main categories: building conditions (30%), space functionality (30%), security (25%), and space standards (15%).
Civil filings, historic (3%): Average annual change in the number of civil filings (1997-2011).
Civil filings, projected (1%): Projected average annual change in the number of civil filings (2012-2026).
Criminal defendants, historic (4.5%): Average annual change in the number of criminal defendants (1997-2011).
Criminal defendants, projected (1.5%): Projected average annual change in the number of criminal defendants (2012-2026).
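As a rough illustration of how the sharing rule in appendix II and the weights in appendix III fit together, the sketch below encodes the bankruptcy-judge courtroom count and a weighted urgency score. It is a simplified reading, not the judiciary's actual scoring model: the report does not describe how each category is normalized into a score, so the 0-100 scale and the sample courthouse values are assumptions, and the emergency courtroom is read as applying only to facilities with three or more bankruptcy judges.

def bankruptcy_courtrooms_needed(judges):
    # Appendix II rule for bankruptcy judges: one courtroom per judge for one or
    # two judges; for three or more, one courtroom for every two judges (rounding
    # down) plus one courtroom for emergency matters (assumed here to apply only
    # to the three-or-more case).
    if judges <= 2:
        return judges
    return judges // 2 + 1

# Appendix III category weights, expressed as fractions of the urgency score.
WEIGHTS = {
    "current_courtrooms_needed": 0.150,
    "future_courtrooms_needed": 0.050,
    "current_chambers_needed": 0.225,
    "future_chambers_needed": 0.075,
    "citywide_benefit_assessment": 0.400,
    "civil_filings_historic": 0.030,
    "civil_filings_projected": 0.010,
    "criminal_defendants_historic": 0.045,
    "criminal_defendants_projected": 0.015,
}

def urgency_score(category_scores):
    # Weighted sum of category scores; each score is assumed to be on a 0-100 scale.
    return sum(WEIGHTS[name] * category_scores.get(name, 0.0) for name in WEIGHTS)

# Hypothetical courthouse: these category values are invented for illustration only.
example = {
    "current_courtrooms_needed": 80.0,
    "future_courtrooms_needed": 60.0,
    "current_chambers_needed": 70.0,
    "future_chambers_needed": 50.0,
    "citywide_benefit_assessment": 90.0,
    "civil_filings_historic": 40.0,
    "civil_filings_projected": 40.0,
    "criminal_defendants_historic": 30.0,
    "criminal_defendants_projected": 30.0,
}

print(bankruptcy_courtrooms_needed(5))   # 3: two shared courtrooms plus one for emergencies
print(round(urgency_score(example), 1))  # 73.9 on the assumed 0-100 scale

Because the citywide benefit assessment carries 40 percent of the weight, changes in building condition and security scores move the overall urgency score far more than the filings and defendants trends, which together account for only 10 percent.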
Appendix IV: Judiciary's New Courthouse Projects for Fiscal Years 2012 to 2016 and Fiscal Years 2014 to 2018 <8. New asset-management planning process> (The table listing the projects on the two 5-year plans and their urgency scores is not reproduced here; the higher the score, the greater the space-need urgency, and more than one building was assessed for some entries.) Appendix V: Comments from the Federal Judiciary <9. GAO Comments> 1. AOUSC stated that we failed to understand the purpose of the 5-year plan, indicating that it is not a long-term capital investment plan. The draft report that we provided to AOUSC for comment indicates that the 5-year plan is not a long-term capital investment plan. However, the 5-year plan represents the only document that communicates the judiciary's recommendations related to new courthouse projects to Congress and other stakeholders. Since it is important for stakeholders to understand the context for new courthouse projects, we continue to believe that the judiciary should improve the completeness and transparency of the information the judiciary uses to justify these projects. 2. AOUSC stated that funding for the projects totaled $188.29 million, but it did not provide any supporting information for this amount. We used General Services Administration (GSA) data to determine the amount of funding appropriated for the projects on the 5-year plan, which we state to be $177 million in our report. 3. AOUSC stated that GSA already provides sufficient information to Congress on the judiciary's behalf for courthouse projects. While GSA provides information to congressional committees when seeking authorization for new courthouse projects, by that time the judiciary has already recommended the projects for new construction. The 5-year plan represents the only document that communicates the judiciary's recommendations for new construction, and it is incomplete and lacks transparency. For example, the 5-year plan underestimates the total costs of these projects by about $2 billion because it does not include all project phases and because the judiciary no longer includes its rent costs on the 5-year plan. 4. AOUSC was critical of our conclusion that the AMP process does not link to the judiciary's strategic plan. According to AOUSC, the template for future Long-Range Facilities Plans will clearly illustrate how the AMP process supports and links to the judiciary's strategic plan. We continue to welcome improvements to the judiciary's approach to strategic planning for courthouse construction. We will assess these changes when they are implemented, as part of our recommendation follow-up process. 5.
AOUSC noted that, with respect to our recommendation, imposing a moratorium and reviewing the projects on the 5-year plan under the AMP process would create a delay of up to 6 years and that 10 of the 12 projects have been on the 5-year plan since 1999 or earlier. AOUSC states in its response that the previous capital-planning process was stringent, and as a result should be respected for its policy and budgetary implications. We have previously found deficiencies in the judiciary s previous capital planning process, including that the judiciary tends to overstate the number of judges that will be located in a courthouse after 10 years. Our draft report noted that the AMP process represents progress by the judiciary in better aligning its capital-planning process with leading practices. When the judiciary applied the AMP process to two projects on a previous 5-year plan San Jose, California, and Greenbelt, Maryland neither project ranked among the judiciary s revised priorities for new construction, indeed, they ranked 117th and 139th, respectively. In addition, only two projects in the current 5-year plan qualify for new construction under the judiciary s AMP process. Shifting courthouse priorities demonstrate a process that is not yet finalized. Given the federal government s current budgetary condition, the judiciary should assure the Congress through its planning process that the courthouses prioritized for construction funding truly represent its most urgent needs. Otherwise, the government stands to potentially spend billions of dollars on courthouse construction that does not meet the judiciary s most urgent needs. Assessing all courthouses under the AMP process, given the problems of the previous process, would help assure the judiciary and the Congress that the highest priority courthouses are selected and that the government is effectively spending construction funds. 6. AOUSC stated that the declining conditions in existing courthouses on the 5- year plan place judges, staff, and the public in harm s way. Our work over a number of years has shown that many federal buildings face deteriorated conditions, a reason that federal property was included on GAO s High Risk List. The courts are not alone in this regard. Our draft report noted that GSA is responsible for ensuring that courthouses are adequately maintained. As a result, GSA addresses building maintenance issues regardless of the status of the courthouse construction program. In addition, we note that the criteria the judiciary uses to select new courthouse construction projects are its own. The AMP process established that space shortages, not facility condition, are the only criteria for requesting new courthouse construction. 7. AOUSC noted the security concerns at existing courthouses we visited that we did not independently evaluate. For additional context, we added to the report references to the judiciary s approach to improve security within existing courthouses rather than replace them with new courthouses. The judiciary s AMP process criteria are consistent with this approach, as facility security deficiencies under the AMP process are no longer a justification for new courthouse construction. 8. AOUSC attached a letter from Chief Judge Lisa Godbey Wood of the Southern District of Georgia, which we have printed in this report on pages 47 to 54. We address Judge Wood s comments separately (see comments 21- 27). 9. AOUSC stated table 2 included incorrect information and provided revisions to the table. 
We stand by the information provided in our report, which was provided by the GSA and the judiciary and was reviewed consistent with our internal controls under generally accepted government auditing standards. The AOUSC s most recent numbers relate only to one courthouse in each city, but our numbers represent all the judiciary s courtrooms in each city for which we used judiciary and GSA data. We revised our final report to clarify that the number of courtrooms in table 2 were for cities, some of which have more than one existing courthouse. For example, in Chattanooga, Tennessee, the AOUSC revised our number of courtrooms from six to four possibly because there are only four courtrooms in the Joel W. Solomon Federal Building and United States Courthouse from which the judiciary is seeking to relocate district and magistrate judge s chambers and courtrooms. However, there are six courtrooms in Chattanooga because the bankruptcy judges chambers and courtrooms are located in a leased former post office/customs house. 10. AOUSC stated that the criteria of needing two or more courtrooms in order to recommend constructing a new courthouse pertains to the housing strategy recommendations contained in a district s Long Range Facilities Plan, and that the next step is the completion of a GSA feasibility study. However, AOUSC is describing the new AMP process. The fact remains that most projects on the current 5-year plan were selected based on their evaluation under the judiciary s previous capital-planning process, which did not include the courtroom shortage criteria. As a result, those courthouses slated for new construction under the old process and those selected under the new process are not comparable and do not represent the judiciary s highest priorities. 11. AOUSC noted that when projects on the 5-year plan have a shortfall of one courtroom as opposed to two, the GSA feasibility study concluded that new courthouse construction was recommended. Our draft report observed that although a project may not qualify for new courthouse construction under the AMP process, GSA may determine through a feasibility study that new construction is the most cost-efficient, viable solution despite the fact the courthouse in question did not rise to the top in the selection process. 12. According to AOUSC, two projects were removed from the 5-year plan because their space needs had changed and not because their rankings dropped. Our draft report correctly stated that reduced space needs contributed to the removal of these projects from the 5-year plan. 13. AOUSC questioned if we reviewed any of the Long Range Facility Plans produced as part of the AMP process and the previous capital planning process. We reviewed these judiciary documents and have revised the description of our methodology discussed in appendix I to include the names of the documents related to the judiciary s capital-planning process that we reviewed while developing this report. Specifically, we added the Long Range Facility Plans, Facility Benefit Assessments, Citywide Benefit Assessments, Urgency Evaluations, and the 5-year plan. 14. AOUSC stated that our assessment that the AMP process partially aligns with the leading capital practice related to needs assessment and gap identification was a gross error. According to AOUSC, it is not the judiciary s role to generate cost estimates, and they believe that our partially aligns assessment is too low. 
While GSA is responsible for estimating the costs of courthouse projects, we continue to believe that the judiciary s capital- planning process partially not fully aligns with this leading practice. GAO and Office of Management and Budget (OMB) guidance has established that estimates of life-cycle costs are necessary for accurate capital planning. The judiciary s 5-year plan lists GSA estimated costs, but they are incomplete. Specifically, the cost estimates do not include all project phases site acquisition, building design, and construction. In addition, the judiciary no longer includes the estimated cost of rent in its 5-year plan even though they have estimated costs for all project phases and rent. We believe this omission denies stakeholders and congressional decision makers complete information on judiciary construction-program costs. In addition, our draft report notes that these estimates are not life-cycle costs, which would also have to be included for the cost estimate to be comprehensive. 15. AOUSC disagreed with our assessment that the AMP process partially aligns with the leading capital practices related to alternatives evaluation because the judiciary does evaluate options with an emphasis on the least costly option. AOUSC also indicated that we did not consider Long Range Facility Plans in making this determination. We did consider Long Range Facility Plans, and continue to believe that the judiciary s capital-planning process partially aligns with this leading practice. GAO and OMB guidance established that leading organizations carefully consider a wide range of alternatives. Our draft report noted that the AMP process evaluates some alternatives, such as renovating existing courthouses to meet needs, but the judiciary provided no evidence that it considered other viable options, such as courtroom sharing in existing courthouses, even though courtroom sharing is required in new courthouses. 16. AOUSC disagreed with our assessment that the AMP process partially aligns with the leading capital practices related to establishing a review and approval framework with established criteria for selecting capital investments because our draft report indicated that the judiciary has established such a framework. We continue to believe that the judiciary s capital-planning process partially aligns with this leading practice because while we were able to discern that there are review and approval criteria in the AMP process, we found no evidence that the judiciary s current 5-year plan applies those criteria. Specifically, the judiciary established the criterion that courthouses need to have a shortage of two or more courtrooms to qualify for a new courthouse construction project. However, 10 of the 12 projects recommended for new construction on the 5-year plan do not qualify under this criterion. 17. AOUSC stated that we used incorrect and inflated estimates for project costs. We sought to provide total project cost estimates for each project on the 5- year plan. Our draft report uses estimates that the judiciary provided for total project costs and rent, which we adjusted for inflation to the current fiscal year. In response to our statement of facts, AOUSC provided a revised table (reprinted on p. 51 of this report). However, the data in the table that AOUSC provided were incomplete and they did not include supporting documentation. Consequently, we continue to use the most current, complete estimates of the total project costs and rent available. 18. 
AOUSC stated that the estimates of total project costs were provided to it by GSA. We added GSA to the source note for table 1. 19. AOUSC stated that the judiciary has implemented changes to address recommendations from our 2010 report (GAO-10-417). GAO has a process for following up on and closing previous recommendations. We have not yet assessed the extent to which the judiciary's actions have fulfilled the recommendations from our 2010 report. We will, however, consider this and all other information from the judiciary when we determine whether to close the recommendations from our 2010 report. We plan to examine this recommendation in the summer of 2013. 20. According to AOUSC, the projects on the 5-year plan are fully justified under its previous stringent process that preceded the AMP process. However, as we have previously noted, the former process had shortcomings and, in our opinion, does not represent a process that the Congress should rely upon for making capital budget decisions. The new AMP process will, when complete, likely provide Congress with greater assurance that the judiciary's construction priorities represent the highest-priority needs. We addressed the difference in funding for the projects on the current 5-year plan in comment 2. 21. Judge Wood stated that the number of judges in Savannah may change. For each project, we used data provided by AOUSC. However, in our 2010 report (GAO-10-417), we found that the judiciary often overestimated the number of future judges it would have in planning for new courthouses. 22. According to Judge Wood, it is inappropriate to subject the Savannah courthouse to the AMP process when over $6 million has already been spent on design services. We found, and AOUSC agreed in its comments on our draft report, that the Savannah courthouse has four courtrooms and four judges. Consequently, it does not qualify for new construction under the AMP criterion. In addition, according to GSA, the original courthouse design from 1998, to which Judge Wood refers, is old and outdated. As a result, if the project moves forward, the government would need to spend additional money to redesign a new courthouse for Savannah. 23. Judge Wood noted the poor condition of the existing Savannah courthouse and the need for a repair and alterations project to address deferred maintenance issues. We toured this courthouse and noted many of the same deficiencies. Our draft report noted that regardless of whether a project is on the 5-year plan, GSA is responsible for ensuring that courthouses are adequately maintained. In addition, because the current plan for the Savannah project is to continue to use the existing courthouse and build an annex, deferred maintenance in the existing courthouse would still need to be addressed if the plan moved forward. 24. Judge Wood noted that the existing Savannah Courthouse was built in 1899 and has several deficiencies relative to Design Guide standards. Our draft report noted that some existing courtrooms may not meet Design Guide standards for size. However, as we also note, according to AMP guidance, a disparity between space in an existing facility and the Design Guide standards is not justification for facility alteration and expansion. 25. Judge Wood noted several security concerns in the existing Savannah Courthouse. See comment 7. 26. Judge Wood noted that the Savannah Courthouse project preceded the AMP process and that the courthouse needs an additional courtroom and judge's chambers.
We address the judiciary's previous capital-planning process and the judge and courtroom counts in Savannah in comments 20 and 21, respectively. 27. Judge Wood attached photos documenting some of the building condition problems at the Savannah Courthouse, and those are reprinted on pages 49 and 50. See comment 23. 28. AOUSC provided changes to the courtroom numbers in table 2 from our draft report. As we explained in comment 9, we changed the table to make clear that the courtroom count refers to the number of courtrooms citywide, not just in one courthouse. Appendix VI: GAO Contact and Staff Acknowledgments <10. GAO Contact> <11. Staff Acknowledgments> In addition to the contact named above, Keith Cunningham, Assistant Director; George Depaoli; Colin Fallon; Geoffrey Hamilton; James Leonard; Faye Morrison; and Sara Ann Moessbauer made key contributions to this report.
Why GAO Did This Study Rising costs and fiscal challenges have slowed the multibillion-dollar courthouse construction program of the judiciary and the General Services Administration (GSA). In 2006, the judiciary developed AMP to address increasing costs and incorporate best practices and has evaluated about 67 percent of its courthouses under the new system. As requested, GAO assessed changes introduced with AMP. GAO examined (1) the extent to which the AMP process aligns with leading practices and provides information needed for informed decision making and (2) the extent to which courthouse projects recommended for funding in fiscal years 2014 to 2018 were assessed under the AMP process. GAO compared the judiciary's capital-planning practices with leading practices, analyzed courthouse-planning documents, and interviewed officials from the judiciary and GSA. GAO visited three courthouses selected because they were highly ranked by the judiciary for replacement, although observations from these site visits cannot be generalized. What GAO Found The Asset Management Planning (AMP) process represents progress by the federal judiciary (judiciary) in better aligning its capital-planning process with leading capital-planning practices, but its 5-year plan for fiscal years 2014 to 2018, the document the judiciary uses to request courthouse construction projects, lacks transparency and key information on how projects qualify for new construction, the alternatives the judiciary considered, and their cost. For example, the plan lists costs for the next phase of the 12 recommended courthouse projects, which have several phases, but does not list previous funding or ongoing annual costs for the projects. As a result, the plan lists about $1 billion in costs for the 12 projects, but the projects would actually cost the federal government an estimated $3.2 billion over the next 20 years. Congress has appropriated a small share of the money needed for the projects, and most will need design changes before construction can begin. As a result, there is a risk that congressional funding decisions could be made without complete and accurate information. However, with this information, decision makers could weigh current-year budget decisions within the context of projects' expected future costs, spur discussion and debate about actions to address them, and put the judiciary's requests in context with other federal spending. Ten of the 12 recommended projects were not evaluated under the AMP process. Judiciary officials said that they did not want to delay the current projects or force them to undergo a second capital-planning process after they had already been approved. Two courthouse projects from a previous 5-year plan that were assessed under AMP were removed from the list and are now ranked behind more than 100 other courthouse construction projects. Furthermore, 10 of the 12 recommended construction projects do not qualify for a new courthouse under the AMP criterion, which requires a shortage of two or more courtrooms to justify new construction. These conditions call into question the extent to which the projects remaining on the 5-year plan represent the judiciary's most urgent projects and whether proceeding with these projects represents the most fiscally responsible proposal. While 10 additional AMP evaluations would involve some additional costs, not conducting those evaluations could mean spending $3.2 billion over the next 20 years on courthouses that may not be the most urgent projects.
<1. Background> Under federal and state laws, all federally chartered depository institutions and the vast majority of state-chartered institutions are required to have federal deposit insurance. The federal deposit insurance funds were established to restore and maintain depositors' confidence in the banking system by providing a government guarantee of deposits. This guarantee ensures that a person's money on deposit with an insured institution, within certain limits, is safe and reduces the need for depositors to assess the financial condition of their financial institution. FDIC administers the Bank Insurance Fund (BIF) and the Savings Association Insurance Fund (SAIF). Deposit accounts maintained at banks and thrifts generally are federally insured, regardless of who charters the institution. Similarly, credit unions that are federally chartered must be federally insured by the National Credit Union Share Insurance Fund (NCUSIF), which is administered by NCUA. Almost all (98 percent) credit unions are federally insured. As of December 2002, 9,688 credit unions were federally insured, with about 81 million members and $483 billion in deposits. However, in our survey of the 50 state regulators, we found that not all states require federal deposit insurance for the credit unions they charter. As of December 2002, 212 credit unions, about 2 percent of all credit unions, chose to purchase private deposit insurance. These privately insured credit unions are located in eight states and had about 1.1 million members with deposits totaling about $10.8 billion as of December 2002, a little over 1 percent of all credit union members and 2 percent of all credit union deposits. Through our survey of the 50 state regulators and subsequent discussions with state regulators, we identified nine additional states that could permit credit unions to purchase private deposit insurance. Figure 1 illustrates the states that permit or could permit private deposit insurance as of March 2003 and the number of privately insured credit unions as of December 2002. The number of privately insured credit unions and private deposit insurers has declined significantly since 1990. In 1990, 1,462 credit unions in 23 states purchased private deposit insurance from 10 different nonfederal, private insurers. At that time, deposits at these credit unions totaled $18.6 billion, 73 percent more than the total of privately insured deposits as of December 2002. Shortly after the 1991 failure of the Rhode Island Share and Depositors Indemnity Corporation (RISDIC), a private deposit insurer in Rhode Island, almost half of all privately insured credit unions converted to federal deposit insurance, voluntarily or by state mandate. As a result of the conversions from private to federal deposit insurance, most private deposit insurers have gone out of business since 1990 because of the loss of their membership, and only one company, ASI, currently offers private primary deposit insurance. ASI has a statutory charter granted by the State of Ohio. ASI is licensed by the Ohio Superintendent of Insurance and is subject to oversight by that department and Ohio's Superintendent of Credit Unions. Unlike federal deposit insurance, which is backed by the full faith and credit of the United States, ASI's insurance fund is not backed by the full faith and credit of any governmental entity.
Also, in contrast to federal deposit insurance, which covers up to $100,000 in an insured account, the coverage amount provided by ASI is subject to a $250,000 statutory cap in Ohio law. Depository institutions lacking federal deposit insurance, that is, privately insured credit unions, do not directly present a risk to the respective federal deposit insurance funds and do not pay for participation in those funds. Accordingly, they are not subject to supervision by the agencies that administer those funds. The Federal Credit Union Act contains criteria for credit unions applying for federal deposit insurance from NCUA and requires NCUA to consider a list of factors before approving an application to become federally insured. For example, NCUA must assess the credit union's financial condition, the adequacy of its reserves, the fitness of management, and the convenience and needs of the members to be served by the institution. To continue to be eligible for federal deposit insurance, credit unions must continue to comply with NCUA regulations for measures of net worth, prompt corrective action requirements, and rules governing investment and deposit activities. <1.1. Section 43 Requirements> Section 43 imposes requirements on depository institutions lacking federal deposit insurance and on private deposit insurers and assigns FTC the responsibility for enforcing compliance with these provisions. Specifically, section 43 requires depository institutions lacking federal deposit insurance to (1) include conspicuously on all periodic account statements, signature cards, passbooks, certificates of deposit, or similar instruments evidencing a deposit a notice that the institution is not federally insured and that if the institution fails, the federal government does not guarantee that depositors will get back their money; (2) include conspicuously in all advertising and at places where deposits are normally received a notice that the institution is not federally insured; and (3) obtain a written acknowledgement from depositors that the institution is not federally insured and that if the institution fails, the federal government does not guarantee that the depositor will get back their money. In addition, section 43 prohibits institutions lacking federal deposit insurance from engaging in interstate commerce unless the appropriate supervisor of the institution's charter state has determined that the institution meets all eligibility requirements for federal deposit insurance. This prohibition is referred to as the "shut-down provision." With respect to private deposit insurers, section 43 requires each insurer to (1) obtain an annual audit from an independent auditor using generally accepted auditing standards that includes a determination of whether the private deposit insurer follows generally accepted accounting principles and has set aside sufficient reserves for losses and (2) distribute copies of the audit report to each depository institution it insures and to the appropriate supervisory agency of each state in which such an institution receives deposits, within specified time frames.
With respect to FTC, section 43 (1) requires the Commission to prescribe the manner and content of disclosure required under the section in order to ensure that current and prospective customers understand the risks involved in forgoing federal deposit insurance; (2) assigns to FTC the responsibility to enforce compliance with the section under the Federal Trade Commission Act (FTC Act); (3) authorizes FTC to determine that an institution not chartered as a depository institution nonetheless is subject to the section, referred to as the "look-alike provision"; and (4) authorizes FTC, in consultation with FDIC, to exempt an institution from the shut-down provision. Since being charged with the responsibility to enforce and implement these requirements, FTC has asked Congress to prohibit it from enforcing these provisions. In response, FTC's appropriations language has, since 1993, contained provisions prohibiting it from using funds to implement these provisions. FTC has authority to enforce a variety of federal antitrust and consumer protection laws. According to FTC, it works to enhance the smooth operation of the marketplace by eliminating acts or practices that are unfair or deceptive, and its efforts have been directed toward stopping actions that threaten consumers' opportunities to exercise informed choice. The FTC Act charges FTC with responsibility for preventing the use of unfair methods of competition and unfair or deceptive acts or practices. That act, however, provides that FTC's powers generally do not extend to depository institutions (banks, thrifts, and federal credit unions), which typically are beyond FTC's authority. In addition, one section of the FTC Act has been interpreted to mean that FTC does not have jurisdiction over nonprofit corporations. <2. NCUA and State Regulators Imposed Related Disclosure and Audit Requirements> Consistent with the appropriations provisions prohibiting it from enforcing section 43, FTC has not implemented regulations or orders to prescribe the manner and content of required disclosures; to date, FTC has not brought any enforcement cases as a result of identified noncompliance with the disclosure, shut-down, and annual audit provisions. As part of this review, we also ascertained whether other laws or rules impose requirements similar to those of section 43. We found that NCUA and state regulators have imposed disclosure and audit requirements on state-chartered credit unions and private deposit insurers that, while not comparable to section 43 requirements, help achieve the objectives of section 43. For example, NCUA imposes notification requirements on federally insured credit unions seeking to convert to private deposit insurance. NCUA requires these credit unions to notify their members, in a disclosure, that if the conversion were approved, the federal government would not insure their deposits. Specifically, under the Federal Credit Union Act, if a federally insured credit union terminates federal deposit insurance or converts to nonfederal (private) insurance, the institution must give its members prompt and reasonable notice that the institution has ceased to be federally insured.
NCUA rules implement these provisions by prescribing language to be used in (1) the notices of the credit union's proposal to terminate federal deposit insurance or convert to nonfederal (private) insurance, (2) an acknowledgement on the voting ballot of the member's understanding that federal deposit insurance will terminate, and (3) the notice of the termination or conversion. Under NCUA's rules, the prescribed language is to include a statement apprising members that their accounts would no longer be federally insured. Other language to be included in the notice of a proposal to convert to private deposit insurance and on the related voting ballot is to state that NCUA's insurance is backed by the full faith and credit of the United States and that the private deposit insurance is not backed by the full faith and credit of the United States. While NCUA's disclosure requirements provide some assurance that current members of credit unions converting to private deposit insurance are notified of the lack of federal deposit insurance coverage, these NCUA regulations do not apply to institutions that were never federally insured. In addition, the disclosures contained in NCUA's required notifications are not as extensive as the disclosures required under section 43. The NCUA disclosure pertains to a specific event (termination of insurance or conversion to private deposit insurance) and is provided only to those individuals who are members of the credit union at the time of the event. Section 43, on the other hand, requires disclosure to all members who are depositors, including those individuals who become members after the credit union has terminated federal deposit insurance. Section 43 also requires that depositors acknowledge in writing that the institution is not federally insured and that no federal guarantee exists. In addition, under section 43, an institution's lack of federal deposit insurance must be stated, on an ongoing basis, in periodic account statements, signature cards, passbooks, and instruments evidencing a deposit, and in advertising and displays. In our review of Ohio's law, we noted that Ohio imposes certain disclosure requirements about the insured status of depository accounts. Ohio law requires credit union brochures that include the name of the private deposit insurer to also include a specific notice: "Members' Accounts Are Not Insured or Guaranteed by Any Government or Government-sponsored Agency." The requirements we reviewed, like Ohio law, typically do not require disclosure of the same information or in the same manner as is required by section 43. Ohio also imposes several requirements on the remaining private deposit insurer, ASI. For example, Ohio requires ASI to submit annual audited financial statements and quarterly unaudited financial statements to Ohio regulators. While this annual audit requirement is similar to the section 43 provision, Ohio does not require private deposit insurers to distribute this information to the appropriate supervisory agency of each state in which they insure deposits or to the depository institutions whose deposits they insure. <3. Compliance with Section 43 Provisions Varied; Potential Impact on Consumers Most Evident in Credit Union Noncompliance with Disclosure Requirements> Compliance with section 43 disclosure, shut-down, and annual audit requirements varied considerably.
The most likely impact on consumers from the lack of enforcement of these provisions may result from credit unions not providing adequate disclosures about not being federally insured. We found that many privately insured credit unions have not always complied with the disclosure requirements in section 43 that are designed to notify consumers that deposits in these institutions are not federally insured. While state regulators and ASI officials reported monitoring whether privately insured credit unions disclosed the lack of federal deposit insurance to depositors, we found that these actions varied and did not ensure that all credit unions complied with required disclosures. As a result, depositors at some privately insured credit unions may not be adequately informed that deposits at these institutions are not federally insured. Regarding the shut-down provision, state regulators reported to us that they did not make explicit determinations of insurability, but we found that such a determination may not provide a meaningful protection for consumers. The remaining private deposit insurer complied with the annual audit requirements, making it possible for state regulators and member credit unions to become informed about the insurer's financial condition. Therefore, the lack of enforcement of this provision appears to have had no direct effect on consumers. <3.1. The Lobbies, Materials, and Web Sites of Many Privately Insured Credit Unions Lacked Disclosures as Required under Section 43> Section 43 requires privately insured credit unions to disclose to their members (1) that deposits at these institutions are not federally insured and (2) that if the institution fails, the federal government does not guarantee that depositors will get back their money. Specifically, these institutions are required to disclose this information at places where deposits are normally received (lobbies), on signature cards, and on instruments evidencing a deposit (deposit slips). Advertising (brochures and newsletters) must also contain the statement that the institution is not federally insured. We conducted unannounced site visits to 57 locations of privately insured credit unions (49 main and 8 branch locations) in five states: Alabama, California, Illinois, Indiana, and Ohio. On our visits, we looked to see whether credit unions lacking federal deposit insurance had disclosed to their members that the institution was not federally insured and that the federal government did not guarantee their deposits. We found that many privately insured credit unions we visited did not conspicuously disclose this information. Specifically, as shown in table 1, 37 percent (21 of 57) of the locations we visited did not conspicuously post signage in the lobby of the credit union. Credit unions' compliance with this requirement varied by state. For example, six of the 21 sites visited in California (29 percent) did not display the required notices, while three of the five sites visited in Alabama (60 percent) did not display conspicuous signage in their lobbies. On our visits to these credit unions, we also obtained other available credit union materials (brochures, membership agreements, signature cards, deposit slips, and newsletters) that did not include language to notify consumers that the credit union was not federally insured, as required by section 43. Overall, 134 of the 227 pieces of material we obtained from the 57 credit union locations (59 percent) did not include the specified language.
Specifically, 20 of the 32 signature cards we obtained from 31 credit unions and 19 of the 20 deposit slips we obtained from 18 credit unions did not include the specified language (see table 2). As part of our review, we also reviewed 78 Web sites of privately insured credit unions and found that many credit union Web sites were not fully compliant with section 43 disclosure requirements. For example, 39 of the 78 sites had not included language to notify consumers that the credit union was not federally insured. Specifically, in six of the eight states we reviewed, more than half of the Web sites identified and analyzed in each state were not compliant (see table 3). While these results were not obtained from a statistically valid sample that would allow us to project the extent of compliance to all privately insured credit unions, these findings are robust enough, both in the aggregate and within each state, to raise concern about the lack of required disclosures by privately insured credit unions. <3.2. Monitoring Efforts over Disclosures by Privately Insured Credit Unions Varied> The extent to which state regulators and ASI officials monitored whether privately insured credit unions disclosed the lack of federal deposit insurance to depositors varied. State regulators in Alabama, California, Idaho, Indiana, Maryland, Nevada, and Ohio reported that during state examinations of credit unions, their examiners looked to see whether privately insured credit unions disclosed the lack of federal deposit insurance to depositors. However, according to these state regulators, state examination procedures did not include specific guidance on how to determine whether credit unions were compliant with the disclosure requirements in section 43. Also, state regulators reported that although they monitored disclosures at privately insured credit unions, they generally had not enforced these requirements. Because we observed poor compliance with section 43 disclosure requirements during our site visits, oversight by state regulators has not provided sufficient assurance that privately insured credit unions are adequately disclosing that their institutions are not federally insured. ASI officials told us that ASI had developed materials explaining the disclosure requirements of section 43 to help the credit unions it insures comply with these requirements. ASI officials reported that they provide these materials to credit unions when they convert to private deposit insurance and to other credit unions that request them. Among other things, these materials inform credit unions of the specific disclosure requirements and include samples of on-premise signage. However, our review of ASI's samples of on-premise signage found that not all of the samples included language to notify consumers that the credit union was not federally insured. ASI's on-site audit program included specific guidance on how to determine whether credit unions were compliant with the disclosure requirements in section 43. In our review of two ASI examination files, we observed that ASI officials had noted that these two credit unions in Nevada had not included language on credit union materials, such as signature cards, stating that the institution is not federally insured and that if the institution fails, the federal government does not guarantee that depositors will get back their money.
In our follow-up discussions with ASI management, they indicated that while ASI officials made some notes regarding compliance when conducting on-site exams (as in the examination files on the Nevada credit unions), they did not take action to enforce these federal requirements. <3.3. Credit Unions Do Not Appear to Have Obtained State Determinations of Insurability, but Impact on Consumers May Be Limited> The shut-down provision of section 43 prohibits depository institutions lacking federal deposit insurance from engaging in interstate commerce unless the institution's state regulator has determined the institution's eligibility for federal deposit insurance. For a credit union to be eligible for federal deposit insurance, NCUA must, among other things, assess the credit union's financial condition, the adequacy of its reserves, the fitness of management, and the convenience and needs of the members to be served by the institution. It appears that privately insured credit unions have not obtained this determination from their state regulators. One could question, however, whether the states could or should make the determination that institutions meet the standards for federal deposit insurance. Even if a state applied federal deposit insurance eligibility criteria in making the determination for credit unions, the determination may not necessarily provide a meaningful protection for consumers; however, other actions were taken to ensure the health of privately insured credit unions. Section 43 calls for a one-time eligibility determination and does not require an ongoing state assessment of the institutions' compliance with federal deposit insurance eligibility requirements. Because this is a one-time determination, it does not ensure that credit unions would remain eligible for federal deposit insurance. Other circumstances also indicate that consumers might not benefit from the eligibility determination. For example, when an institution converts from federal deposit insurance to private deposit insurance, such an eligibility determination would be redundant because the institution had been eligible for federal deposit insurance before it became privately insured. According to ASI, between 1992 and 2002, 27 credit unions converted from federal to private deposit insurance. In these cases, it is doubtful that an eligibility determination would benefit consumers. State regulators also told us that while they had not made explicit determinations that these privately insured credit unions had met eligibility requirements for federal deposit insurance, they imposed safety and soundness standards on credit unions lacking federal deposit insurance, which the regulators believed generally satisfied the criteria for federal deposit insurance. For example, these regulators reported that they applied the same examination and supervision process to all state-chartered credit unions, regardless of deposit insurance status. In addition, these states had adopted NCUA's examination program, and their examiners had received training from NCUA. However, implementation of NCUA's examination program does not fully ensure that those institutions meet all federal deposit insurance eligibility standards. For example, besides assessing a credit union's financial condition and the adequacy of its reserves when making insurability determinations, NCUA is also required to factor in membership considerations, such as the convenience and needs of the members to be served by the institution.
Some states also had an approval process for credit unions seeking to purchase private deposit insurance. Alabama, Illinois, and Ohio had written guidelines for credit unions seeking to purchase private deposit insurance. The other five states that permitted private deposit insurance did not have written guidelines, but Idaho, Indiana, and Nevada state regulators noted that they had the authority not to approve a credit union's purchase of private deposit insurance. Additionally, ASI had several strategies in place to oversee the credit unions it insured. Specifically, ASI regularly conducted off-site monitoring and conducted on-site examinations of privately insured credit unions at least every 3 years. It also reviewed state examination reports for the credit unions it insured and imposed strict audit requirements. For example, ASI required an annual CPA audit for credit unions with $20 million or more in assets, while NCUA required the annual audit only for credit unions with more than $500 million in assets. ASI also targeted its monitoring of its largest and smallest credit unions. For larger credit unions, those with more than 10 percent of ASI's total insured shares, ASI planned to conduct semiannual on-site examinations and monthly and quarterly off-site monitoring, including a review of audits and financial statements. In January 2003, five credit unions comprising about 40 percent of ASI's total assets qualified for this special monitoring. In January 2003, ASI also began a monitoring strategy intended to increase its oversight of smaller credit unions. First, ASI assigned a risk level (low, moderate, or high) to the credit unions it insured and then used this assessment to determine the extent and frequency of oversight at each credit union. In January 2003, ASI had determined that 98 credit unions qualified for this monitoring, with shares from the largest of these credit unions totaling about $23 million. Because these actions were taken to ensure the health of privately insured credit unions, the effect on consumers from the lack of enforcement of this provision may be negligible. <3.4. Remaining Private Deposit Insurer Complied with Federal Audit Requirements> The remaining private deposit insurer has complied with the audit requirements under section 43, which requires private deposit insurers to obtain an annual audit and provide it to state regulators and the management of privately insured credit unions within certain time frames. Among other things, the audit must be conducted by an independent auditor using generally accepted auditing standards and include a determination of whether the insurer follows generally accepted accounting principles and has set aside sufficient reserves for losses. The private deposit insurer must provide a copy of the report to each depository institution it insures not later than 14 days after the audit is completed. Also, the private insurer must provide a copy of the report to the appropriate supervisory agency of each state in which such an institution receives deposits not later than 7 days after the audit is completed. We found that the audits obtained by ASI for 1999, 2000, 2001, and 2002 complied with this federal requirement.
Specifically, these audits noted that the reviewed consolidated financial statements presented fairly, in all material respects, ASI's financial position and the results of its operations and cash flows for the years reviewed, in conformance with accounting principles generally accepted in the United States. Further, the appropriate state regulators and the management of some privately insured credit unions told us that ASI had provided them copies of the annual audits in accordance with the requirement. Because the private deposit insurer has obtained and distributed the audit as required, it has given state regulators and the management of privately insured credit unions the opportunity to become informed about the financial condition of the private deposit insurer. This could help ensure the safety and soundness of ASI, which, in turn, protects consumers. It appears consumers have suffered no negative impact from the nonenforcement of this provision. <4. Although There Is No Ideal Regulator to Enforce Section 43, FTC Is Best among Candidates to Enforce Provisions> In evaluating which agency should enforce section 43, we did not find an agency that was ideally suited to carry out the responsibilities set forth in the provision. Although FTC, NCUA, and FDIC officials generally agreed that consumers should receive proper notification about the insured status of their deposits, they maintained that their respective agencies should not be charged with responsibility for implementing and enforcing section 43. NCUA and FDIC oppose having any responsibilities under section 43 because such a role would result in a regulatory conflict of interest and would be inconsistent with their missions and the section's purpose. Credit union industry representatives believe that FTC is the appropriate federal agency to enforce section 43. FTC staff stated that questions about the Commission's authority under section 43 and the Commission's lack of expertise to administer the section justify removing FTC from any responsibilities under the provision. The staff asserted that other federal agencies are more qualified to carry out the section. Based on our review of these concerns, we believe FTC is the best among these candidates to enforce these provisions; however, clarifying FTC's authority and providing it with additional flexibility in administering these provisions could better ensure effective enforcement. <4.1. NCUA and FDIC Oppose Having Enforcement Responsibility under Section 43> NCUA has taken the position that it should not be responsible for enforcing section 43. In our discussions with NCUA officials, they offered several reasons why NCUA should not be charged with enforcing section 43. They expressed concern that placing the responsibility with NCUA would closely identify NCUA with uninsured credit unions and, in turn, create the potential for confusion as to whether an institution was federally insured. The officials also maintained that if NCUA were responsible for enforcing and implementing the section, the costs would be passed on to federally insured credit unions. In addition, the officials stated that NCUA regulation of a private insurer would result in a regulatory conflict of interest that might erode confidence in NCUA's authority.
They said that if the private deposit insurance system were to fail while under NCUA's purview, confidence in NCUA, as well as in federal deposit insurance for credit unions, could weaken to the point that it could have a devastating impact on the financial health of the credit union system. In our discussions with FDIC officials, they offered several reasons, similar to those presented by NCUA, why FDIC should not be charged with enforcing section 43. First, FDIC officials noted that FDIC insures the deposits at banks and savings associations but does not regulate or supervise credit unions or insure deposits at these institutions. Officials also expressed concern that placing the responsibility with FDIC would closely identify a federal agency with uninsured credit unions and, in turn, create the potential for confusion as to whether an institution was federally insured. <4.2. Industry Views on Private Deposit Insurance and the Enforcement of Section 43 Requirements> While officials from the National Association of Federal Credit Unions (NAFCU) oppose the option of private primary deposit insurance for credit unions, they believe that because private primary deposit insurance is an option, the section 43 requirements are important and FTC should enforce them, for several reasons. NAFCU officials believe that members of privately insured credit unions should be adequately informed that deposits in these institutions are not federally insured. NAFCU officials stated that the enforcement of the provisions in section 43 requires expertise in consumer protection and deceptive practices. NAFCU takes the position that FTC has this expertise and, further, that the enforcing entity does not need expertise in the safety and soundness of depository institutions. NAFCU officials also believe that federal financial regulators, such as NCUA and FDIC, are not the appropriate oversight entities for issues related to private deposit insurance because their involvement would imply federal backing. Further, the involvement of NCUA or FDIC in the enforcement of the requirements in section 43 could create conflict between the federal and private insurers. NAFCU officials commented, however, that it would be beneficial for FTC to consult with FDIC and NCUA regarding the enforcement of these requirements because of their expertise. Regarding enforcement, NAFCU officials believe that state regulators could be involved in, but not solely responsible for, enforcing certain section 43 requirements. For example, during state exams of credit unions, examiners could determine whether the credit union was compliant with the disclosure and insurability requirements of section 43 and then submit a certification to FTC. The Credit Union National Association (CUNA) and the National Association of State Credit Union Supervisors (NASCUS) support the option of private deposit insurance for credit unions and believe that the requirements in section 43 are important and that FTC should enforce them. CUNA's public position is that it supports the option of private deposit insurance because the association believes it is an integral part of the dual-chartering system for credit unions (the system allowing credit unions meaningful choice between a state and federal charter). NASCUS also supports the option of private deposit insurance for credit unions because the association thinks credit unions should have a choice when it comes to deposit insurance.
Specifically, NASCUS believes that if there were only a single insurer (such as NCUA), this would create a uniform approach, thus obviating state choice, and could lead to a rigid framework. <4.3. Tying NCUA and FDIC Insurance to the Regulation of Uninsured Entities Presents a Conflict of Interest> As the agencies charged with administering and safeguarding their respective insurance funds, NCUA and FDIC have an interest in seeing that the public does not lose confidence in the federal deposit insurance system. The section 43 disclosure requirements help protect this interest by imposing measures designed to inform depositors at nonfederally insured institutions that their deposits are not backed by the federal government. To the extent that institutions comply with section 43, there is a reduced risk that depositors in nonfederally insured institutions would mistakenly believe that their deposits are federally insured. Because section 43 protects NCUA and FDIC interests, it can be argued that those agencies should be responsible for enforcing the provision. Although that proposition has some merit, we have no reason to disagree with statements by NCUA and FDIC officials that placing both private insurers and institutions lacking federal deposit insurance under the jurisdiction of NCUA and FDIC could increase the risk of depositor confusion and create the potential for a loss of public confidence in the federal deposit insurance system. Moreover, assigning responsibility to NCUA and FDIC would mean that federally insured depository institutions would subsidize the regulation of nonfederally insured institutions. However, we recognize that deciding who pays the cost for regulating nonfederally insured institutions is a complicated issue. Some observers have asserted that if NCUA were responsible for regulating the disclosures required by section 43, a depositor's knowledge that the disclosure was prescribed by NCUA could generate confusion as to NCUA's relationship with a nonfederally insured institution. The identity of the federal agency may be of no consequence because the consumer might not understand, or even be aware of, which federal agency prescribed the disclosure requirements. However, should NCUA determine, as FTC has, that section 43 calls for substantial disclosure of the risks relating to a specific depository institution and its insurer, NCUA would risk significant exposure to conflict-of-interest charges. For example, if NCUA were to impose requirements on privately insured credit unions that states or institutions considered too stringent, its impartiality as a regulator would be questioned. The costs of compliance with such requirements could cause privately insured institutions to turn to federal deposit insurance, thus adversely affecting the private deposit insurer, NCUA's competitor. We recognize that in two instances Congress has chosen NCUA to implement laws that apply to credit unions regardless of whether they are federally insured. The Truth in Savings Act (TISA) requires that NCUA implement its provisions with respect to all credit unions, regardless of who insures them. The Home Mortgage Disclosure Act (HMDA) also charges NCUA with implementation responsibility for all credit unions, regardless of their insured status. See appendix II for an illustration of who is responsible for the enforcement of various laws at credit unions. NCUA has promulgated regulations implementing TISA and issued guidelines for credit union reporting under HMDA.
By implementing these laws, NCUA has demonstrated the capacity to regulate the operations of credit unions it does not insure. Moreover, the cost of enforcing these laws with respect to nonfederally insured credit unions is passed on to insured credit unions. It is particularly noteworthy that NCUA's TISA regulations require specific disclosures about the terms and conditions of deposit accounts at both federally and nonfederally insured institutions. However, NCUA's administration of those laws does not present the same potential or perceived conflict of interest. The requirements under those laws apply equally to federally insured and nonfederally insured institutions. In contrast, regulations under section 43 would, by definition, treat the institutions differently and expose NCUA to a regulatory conflict of interest. The regulatory conflict of interest also would exist with respect to NCUA enforcement of the audit provision. NCUA would be regulating its competition. If NCUA, like FTC, were to conclude that enforcement of the requirement calls for evaluating the conclusions of the audit or scrutinizing the financial health of the insurer, NCUA's actions would be inherently suspect. In addition to the regulatory conflict of interest, closely associating NCUA with nonfederally insured institutions could have an undesirable shadow effect. For example, if NCUA were responsible for reviewing the private insurer's audit report, NCUA would be closely associated with determinations about the financial health of the private deposit insurer. Should the insurer, which is subject to state regulation, fail to honor its insurance commitments, NCUA's credibility as a regulator would be compromised. Concerns about a regulatory conflict of interest also would accompany NCUA actions involving the shut-down requirement. The agency would be closely associated with liquidating institutions it does not insure and safeguarding deposits it does not protect. In effect, NCUA would be shutting down institutions that are members of the agency's competitor, the private deposit insurer. Similarly, NCUA enforcement of the look-alike provision could be seen as an attempt by the agency to eliminate entities that compete with federally insured credit unions. NCUA's concern that its enforcement of section 43 would require federally insured institutions to subsidize the regulation of institutions that forgo federal deposit insurance involves, in part, a question of a level playing field; that is, federally insured institutions would be forced to pay the cost of regulating competitors who may benefit from avoiding federal deposit insurance. This concern also touches on other considerations. For example, this additional cost could act as an incentive for federally insured credit unions to convert to private deposit insurance. However, who pays for the oversight of nonfederally insured institutions is a more complicated issue, because federally insured institutions could also benefit from clarifying for consumers the insurance status of these institutions, and if FTC oversees nonfederally insured institutions, taxpayers bear the costs. <4.4. FTC Opposes Having to Implement and Enforce Section 43> Section 43 specifies that FTC shall enforce compliance with its requirements and any regulations or orders issued under it. In addition, the section charges FTC with specific responsibilities.
FTC is to prescribe the manner and content of disclosure required under the section in order to ensure that current and prospective customers understand the risks involved in forgoing federal deposit insurance. Also, the section authorizes FTC, in consultation with FDIC, to exempt an institution from the shut-down provision. In addition, section 43 authorizes FTC to determine that an institution not chartered as a depository institution nonetheless can be subject to the section. FTC staff told us that because of questions about the Commission's authority under section 43 and the Commission's lack of expertise to carry out the section in accordance with the staff's perception of what the section requires, FTC is not the appropriate federal agency to enforce the section. <4.4.1. FTC Staff Said That Questions about the Commission's Authority under Section 43 Could Interfere with Its Ability to Enforce the Section> According to FTC staff, the language of section 43 charging the Commission with responsibility for enforcing the section (the charging provision) contains an ambiguity that could lead to challenges against the Commission's authority under the section. As noted above, the charging provision specifies that FTC shall enforce section 43 under the FTC Act. The FTC Act, however, limits the Commission's jurisdiction in ways that are inconsistent with FTC's responsibilities under section 43. For example, FTC and federal courts have interpreted the FTC Act to mean that the Commission has no jurisdiction over nonprofit entities, a group that includes credit unions. Another provision of the FTC Act (Section 6), which authorizes FTC to conduct investigations, require reports, and promulgate rules and regulations to carry out the FTC Act, expressly excludes the business of insurance from those authorities except under very limited circumstances. According to FTC staff, this limitation raises questions about the Commission's authority to enforce the audit provision in section 43, which applies specifically to private insurers. FTC staff said that FTC's jurisdiction with respect to the audit provision, as well as disclosures about deposit insurance, also would be subject to challenge because of limitations the McCarran-Ferguson Act imposes on federal laws that relate to the business of insurance. Under the McCarran-Ferguson Act, a federal law applicable to the business of insurance can be preempted by a state insurance law. Specifically, the McCarran-Ferguson Act precludes application of a federal statute in the face of a state law "enacted . . . for the purpose of regulating the business of insurance," if the federal measure does not specifically relate to the business of insurance and would invalidate, impair, or supersede the state's law. The act also specifies that the FTC Act is applicable to the business of insurance "to the extent that such business is not regulated by State law." According to FTC staff, this latter provision displaces application of the FTC Act where there is state regulation of the business of insurance. The staff explained that FTC's authority under section 43 is unclear because the section requires FTC to enforce the deposit insurance disclosure requirements and the audit provision under the FTC Act even though the FTC Act does not apply to insurance.
FTC staff believe that enforcement of the disclosure provisions could be subject to challenge in states that regulate deposit insurance and that enforcement of the audit provision would be subject to challenge because the State of Ohio specifically regulates the only private deposit insurer, ASI. <4.4.2. FTC Staff Raised Practical Concerns about the Commission's Ability to Carry Out Section 43> FTC staff raised several concerns about the Commission's ability to carry out its section 43 responsibilities. One concern relates to the manner in which FTC would exercise its rulemaking authority under the section. Section 43 does not specify the authority under which FTC's implementing rules should be promulgated. To the extent that the Commission's rulemaking authority under the section is subject to requirements of the FTC Act, FTC staff made two points. They noted that the Commission's general rulemaking authority under the FTC Act may be exercised only for purposes of carrying out the provisions of that act. The Commission also has special rulemaking authority under section 18 of the FTC Act with respect to unfair or deceptive acts or practices. That section contains specific procedures FTC must follow in prescribing rules that define unfair or deceptive acts or practices. Among other things, section 18 requires that Commission rules define such acts or practices with specificity, and it establishes rigorous procedures for issuing the rules. FTC staff asserted that without specific guidance from Congress as to the Commission's rulemaking authority, the Commission could face having to promulgate rules under section 43 in accordance with the requirements in section 18 of the FTC Act. They stated that because the separate rulemaking authorities involve different procedures and authorize different remedies, the absence of guidance in this area makes it difficult for FTC to carry out its rulemaking responsibilities under section 43. FTC staff also raised concerns that section 43 requires the Commission to engage in activities that are incompatible with the manner in which FTC undertakes its consumer protection mission or are beyond FTC's expertise. According to the staff, section 43 calls upon FTC to engage in activities more suitable for a supervisor of depository institutions. These include reviews of insurance company accounting practices and audits, supervisory examinations or inspections, specification of disclosures that should include the risk profiles of depository institutions and their private deposit insurers, and the regulation and possible closure and liquidation of depository institutions and other entities that could be mistaken for depository institutions (such as securities firms that offer accounts with deposit account characteristics). The staff asserted that these responsibilities call for close supervision by an agency that, unlike FTC, has the expertise, tools, and resources to assess and regulate the operations of depository institutions and is knowledgeable about risks associated with depository institutions and deposit insurance. Several provisions of section 43 underlie FTC's concern that the section calls for expertise the Commission does not have. The first is the requirement that FTC promulgate disclosure regulations to ensure that current and prospective customers understand the risks involved in forgoing federal deposit insurance.
Commission staff asserted that disclosure of those risks requires more than a standardized notice that the institution is not federally insured and that the federal government does not guarantee that depositors will get back their deposits. The staff maintained that disclosure could involve a discussion of a depository institution's financial strength and liquidity, as well as the health of the private insurer, because the risk of not having federal deposit insurance would be tied to the health of both the institution and the insurer. The staff also stated that even if disclosure did not require discussion of the safety of the particular institution and insurer, any explanation about the risks of forgoing federal deposit insurance would be beyond FTC's expertise because the Commission lacks the expertise necessary to define those risks. For example, they said that the disclosure requirement creates a dilemma: too much emphasis on the risks of forgoing federal deposit insurance could dissuade depositors from using uninsured institutions, thus weakening them, whereas too little risk disclosure could mean that such depositors would be inadequately informed. In addition, the staff asserted that the Commission lacks the ability to determine which documents and records should contain the risk disclosure. The second provision of concern to FTC is the shut-down provision. According to FTC staff, this provision would require expertise in depository institution operations and depositor protection. They maintained that enforcement of this provision could require FTC to do more than merely declare that an institution must stop doing business. They asserted that if an entity were instructed to shut down, the Commission would have to be prepared to enforce that shut-down, which would necessitate winding up the operations of the entity, a role that would require expertise in the operation of depository institutions and the protection of customer deposits. The staff also expressed a concern that section 43 fails to provide standards for FTC to consider in deciding whether an institution is eligible for an exemption from the shut-down provision. They maintained that in deciding upon an exemption the Commission likely would have to engage itself in the complexities of depository institution law. Another aspect of section 43 that FTC believes to be beyond its expertise is the look-alike definition. The definition of depository institution in section 43 includes any entity that FTC determines to be engaged in the business of receiving deposits and that could reasonably be mistaken for a depository institution by the entity's current or prospective customers. Under this authority, FTC could determine that an entity not chartered as a depository institution is subject to section 43. FTC staff asserted that the Commission lacks the expertise necessary to determine whether an entity's business constitutes receiving deposits or what would cause customers to mistake an entity for a depository institution. Any entity determined to be a look-alike and not exempted would be subject to section 43, including the requirements for disclosures regarding the lack of federal deposit insurance (even if it holds other forms of federal deposit insurance). According to FTC staff, proper implementation of this provision, in conjunction with the shut-down provision, could lead to shutting down a variety of institutions, such as securities firms and mutual funds.
FTC officials also stated that the Commission lacks the expertise necessary to enforce the audit requirement for private insurers. As mentioned previously, section 43 requires any private deposit insurer to obtain an annual audit from an independent auditor using generally accepted auditing standards. The audit must determine whether the insurer follows generally accepted accounting principles and has set aside sufficient reserves for losses. FTC staff stated that diligent enforcement would require a review of the auditor s determinations, which, in turn, would necessitate expertise and adequate resources for assessing both the quality of the audit and the financial health of the insurer. FTC staff asserted that the Commission does not possess this expertise. The staff also were of the view that financial audits do not and cannot include determinations about whether reserves set aside for losses are sufficient. The staff said that FTC does not have expertise regarding loss and reserve issues with which to determine whether some form of substitute assurances should be deemed sufficient. <4.5. FTC Best among Candidates for Enforcement Role> Although we found no agency was ideally suited to carry out the responsibilities set forth in the provision, based on our review of the concerns raised by FTC, NCUA and FDIC, we found no compelling reason to remove FTC from its responsibility as the primary agency responsible for implementing section 43. FTC s concerns about its authority and resources are rooted in an interpretation of the section that calls for an extensive federal presence in the regulation of private deposit insurance and depository institutions. The scheme of section 43, particularly in the context of federal deposit insurance, suggests that a more modest interpretation is appropriate, although modifications to the section would enhance the Commission s ability to enforce the section. <4.5.1. FTC s Concerns about Potential Challenges to Its Authority under Section 43 Can Be Addressed> Although FTC s concerns about potential challenges to its authority under section 43 are not unrealistic, it appears that the Commission has authority to implement and enforce the requirements of the provision even if the Commission would not otherwise have jurisdiction under the FTC Act or McCarran-Ferguson Act. A challenge to FTC s authority would arise from uncertainties about what Congress intended by instructing FTC to enforce the section under the FTC Act. The phrase indicates that the Commission must enforce the section under the FTC Act even though, under the FTC Act, the Commission would not have authority to enforce certain provisions of the section or take certain other regulatory actions. Interpreting section 43 to mean that FTC enforcement actions are subject to all provisions of the FTC Act would lead to unreasonable results. Among other things, FTC would be without authority to perform the actions specifically prescribed in section 43. Moreover, it is clear that Congress intended that the section would apply to credit unions because section 43 specifically addresses state-chartered credit unions in the shut- down provision. Even if FTC s authority under the FTC Act did not extend to nonprofit entities before Congress enacted section 43, such a limitation did not preclude Congress from subjecting credit unions to FTC s authority under that provision. 
We interpret section 43 as authorizing FTC to enforce the section by using the enforcement powers provided in the FTC Act and not as a limitation on FTC's authority that would defeat several purposes of the section. The McCarran-Ferguson Act provides that no Act of Congress shall be construed to invalidate, impair, or supersede any law enacted by any State for the purpose of regulating the business of insurance, or which imposes a fee or tax upon such business, unless such Act specifically relates to the business of insurance. As interpreted by the Supreme Court, this provision precludes application of a federal statute in the face of a state law enacted . . . for the purpose of regulating the business of insurance, if the federal measure does not specifically relate to the business of insurance, and would invalidate, impair, or supersede the state's law. One purpose of this provision is to protect state insurance laws against inadvertent preemption by federal law. Section 43 does not inadvertently apply to insurance. Rather, to the extent that the section specifically relates to deposit insurance and to private providers of that insurance, a state law relating to the same subject matter would be preempted. Because the audit provision is valid under the McCarran-Ferguson Act, FTC staff concerns about challenges to the Commission's authority to enforce the provision appear to be misplaced. Should FTC take an action arguably inconsistent with the role contemplated in section 43, such as regulating the safety and soundness of providers of private deposit insurance, the McCarran-Ferguson Act might serve as grounds to challenge the action. However, the McCarran-Ferguson Act does not stand as a general bar to FTC's authority to enforce the audit requirement. <4.5.2. Lack of Guidance in Section 43 for Rulemaking Procedures Can Be Addressed> The only explicit rulemaking requirement in section 43 is that FTC issue regulations or orders prescribing the manner and content of disclosure required under the section. Section 43 does not designate the procedures FTC should follow in promulgating those rules or orders. Also, to the extent that FTC has authority to issue other regulations under the section, the source of that authority is less clear. Uncertainty about FTC's rulemaking authority might complicate the Commission's ability to promulgate regulations, but these potential complications do not appear to undermine FTC's authority to carry out the section. Under the FTC Act, the Commission has two types of rulemaking authority. The Commission has general authority to make rules and regulations for the purpose of carrying out the act. In addition, FTC has special rulemaking authority that the Commission must use for issuing rules with respect to unfair or deceptive acts or practices. The special rulemaking authority requires, among other things, that the Commission define unfair or deceptive acts or practices with specificity and follow stringent rulemaking procedures. If the Commission's authority to issue regulations under section 43 is subject to the requirements of the FTC Act, then the Commission would have to rely upon its special rulemaking authority. It is unclear, however, whether the Commission's authority to issue rules under section 43 is subject to the FTC Act. If FTC Act requirements do not apply, then FTC could rely upon the less stringent requirements for informal rulemaking under the Administrative Procedure Act.
Because section 43 does not provide specific guidance for which of FTC s rulemaking authorities applies, it could affect the manner in which the Commission undertakes its rulemaking. However, the lack of guidance does not preclude the Commission from carrying out its responsibilities under the section. <4.5.3. FTC s Concern That Section 43 Enforcement Would Require More Expertise Is Generally Not Warranted> In addition to perceived jurisdictional limitations, FTC staff maintained that enforcement of the section requires expertise and resources the Commission does not have and would require FTC to take actions inconsistent with its consumer protection mission. FTC staff asserted that enforcement of the disclosure requirement and the promulgation of regulations apprising consumers of the risk of not having federal deposit insurance, as well as proper enforcement of the audit requirement and shut-down provision, require an in-depth knowledge of depository institutions and deposit insurance and FTC oversight of the safety and soundness of institutions subject to section 43. Enforcement of the disclosure provisions does not necessarily require such in-depth expertise, although FTC could benefit from consulting with other federal regulators or others to gain this expertise to more effectively enforce these provisions. The only specific rulemaking mandate in section 43 requires FTC to prescribe the manner and content of disclosure required under this section in order to ensure that current and prospective customers understand the risks involved in forgoing federal deposit insurance. As noted previously, section 43 specifically requires disclosure of two facts: (1) that the depository institution is not federally insured and (2) if the institution fails the federal government does not guarantee that depositors will get back their money. FTC staff interprets the rulemaking mandate to mean that the Commission must issue regulations or orders requiring disclosure of information that goes beyond what is specifically required under section 43. It appears that a less extreme interpretation of the disclosure requirement one that does not compromise FTC s ability to carry out the requirement would be consistent with section 43. Even if the requirement for disclosure regulations calls for more than the disclosure specifically described in section 43, it is not clear that Congress intended the regulations to require a discussion of the safety and soundness of the depository institution and its private insurer. It appears that Congress enacted the disclosure requirements in section 43 to ensure that consumers are informed about an institution s lack of federal deposit insurance. There is no indication in the section or its legislative history that Congress also intended disclosure about the risks associated with the private deposit insurer. The purpose of deposit insurance is to free depositors from having to assess an institution s safety with respect to their deposits, up to the coverage limit; deposits are protected up to that limit even if the institution becomes unsafe or unsound. With respect to the safety of deposits, risk disclosure is unnecessary. FTC staff maintains that disclosure regarding private deposit insurance should be treated differently because, unlike federal deposit insurance, private deposit insurance is subject to the risk that a private insurer may not be able to protect the deposits it insures. 
We do not take issue with FTC s observation about the potential risks of private deposit insurance. However, nothing in section 43 indicates that Congress intended that disclosures with respect to private deposit insurance should be treated any differently; nothing in the section indicates that FTC should preempt the states in assessing the safety and soundness of privately insured institutions and their insurers. In section 43 Congress deferred to the states on whether to permit the operation of privately insured depository institutions. It is reasonable to conclude that Congress anticipated that depositors at those institutions should rely upon the states to oversee the safety and soundness of private deposit insurers. Finally, we note that the section 43 requirement for disclosure regulations is similar to other laws that require FTC to regulate disclosure without regard to its expertise concerning the subject of the disclosure. For example, under the Fair Packaging and Labeling Act, FTC regulates disclosure about a broad array of commercial items defined generically as consumer commodities. Under the FTC Act, the Commission has responsibility for preventing false advertising without regard to the nature of the product. Also, FTC enforces several federal consumer protection laws applicable to financial institution disclosures, including the Truth in Lending Act, the Consumer Leasing Act, the Equal Credit Opportunity Act, and the Electronic Funds Transfer Act. Moreover, the Commission already has demonstrated that it has the ability to regulate extensively how financial institutions must make disclosures about financial transactions and customer financial privacy. <4.5.4. Certain FTC Concerns Do Raise Questions about Its Enforcement Capabilities or Applicability of Its Authority> With respect to the shut-down provision, whether FTC enforcement requires expertise in depository institutions and deposit insurance depends upon how far the Commission might seek to extend its enforcement authority. Under the most likely enforcement scenario, depository institution expertise would not be necessary. The shut-down provision prohibits any depository institution lacking federal deposit insurance from engaging in interstate commerce unless the appropriate state supervisor has determined the institution s eligibility for federal deposit insurance. Assuming that FTC were not to grant an exemption, enforcing the provision could involve an FTC enforcement action under the FTC Act to shut down the institution. However, because depository institutions subject to section 43 are state-chartered, states likely would have primary responsibility for winding up an institution once it has ceased doing business. Section 43 would not prevent the application of federal bankruptcy laws or laws administered by federal agencies. FTC staff pointed out that under some circumstances it might be appropriate for the Commission to remain involved in winding up an entity subject to shut down to ensure that deposits were protected. To the extent that the Commission might remain involved, partnering with the state would be appropriate. FTC staff also stated that FTC lacks the expertise necessary to evaluate a state s determination of an institution s eligibility for federal deposit insurance. Nothing in section 43 suggests that FTC is to oversee the states in this regard. Congress deferred to the states with respect to the determination. 
We agree with the FTC staff that the extent to which FTC can challenge a state's determination is unclear, but we see nothing in the statute contemplating FTC review of state determinations. Another of FTC's concerns about the shut-down provision, that section 43 does not provide standards for the Commission to apply in deciding whether to exempt an entity from the provision, appears to have been partially addressed by Congress when it enacted the section. Section 43 authorizes FTC to permit an exemption from the shut-down requirement in consultation with the Federal Deposit Insurance Corporation. Thus, Congress specifically did not rely on FTC's independent judgment alone should FTC consider an institution for the exemption. The section, however, does not provide guidance on the factors the Commission should consider in deciding whether an institution is eligible for an exemption. The extent to which this lack of guidance might affect FTC's enforcement of the provision is unclear. We note, however, that FTC could seek to resolve uncertainties about exempting an institution by consulting with FDIC, as contemplated by section 43. The merit of FTC's concern regarding the look-alike provision depends upon the Commission's perception of the role Congress intended it to have. Under the look-alike provision, the Commission has discretion to decide whether an entity not chartered as a depository institution nonetheless should be subject to section 43. FTC staff asserted that the Commission could exercise this authority in a way that would include various uninsured institutions where funds are deposited, including securities firms and mutual funds. Such institutions would be subject to FTC enforcement of the disclosure requirements and the shut-down provision. According to FTC staff, proper enforcement of section 43 requires the Commission to promulgate a regulation defining look-alike institutions and subjecting them to section 43. The staff asserted that because of FTC's lack of expertise regarding deposits, the Commission would have to define the look-alike entities broadly, thus subjecting a potentially vast group of entities to the section. FTC's concern in this regard overlooks the fundamental principle that a statute should not be interpreted to produce absurd results. It does not appear that Congress intended that FTC would invoke the look-alike provision broadly to include any entity that accepts deposits. For example, a reasonable interpretation of the look-alike requirement does not anticipate shutting down entire industries and entities already subject to extensive disclosure regulation under federal law, such as securities firms and mutual funds. FTC staff also expressed concerns about what role the Commission would have to take if the Commission were to shut down a business, particularly if FTC took the action under the look-alike authority. The staff stated that the Commission would lack the expertise necessary to wind down the institution and protect its customers' funds. We note that entities subject to the shut-down provision would be subject to state and federal laws governing the winding up of a business enterprise. In section 43, Congress did not indicate what, if any, role FTC should play in a shut-down scenario. However, nothing in section 43 indicates that Congress intended to preempt laws governing the winding up of an entity. FTC's concerns about monitoring compliance with the audit provision are more substantial.
The audit provision does not require FTC to test the conclusions of the audit. It appears that the Commission could carry out its responsibility simply by relying upon the auditor s attestations and checking with the appropriate parties to ensure that the audit report was properly distributed. However, as FTC staff pointed out, proper enforcement of the provision could, under certain circumstances, call for close scrutiny of the audit. According to FTC staff, because the Commission lacks expertise in this area, it might be unaware of circumstances warranting close scrutiny of the audit report. <4.6. Clarifying FTC s Authority and Providing Some Flexibility Could Ensure Effective Enforcement of Section 43> While we found that FTC was the best candidate to enforce section 43 provisions, clarifying FTC s authority and providing additional flexibility in administering the section could help address some of the Commission s concerns about its authority and ability to enforce the provision without undermining its objectives. For section 43 to be fully implemented and enforced, the following changes to the identified provisions could clarify FTC s authority and provide flexibility for more effective enforcement. Disclosure provisions: FTC staff are apprehensive about the Commission s ability to carry out this mandate, primarily because of how they interpret the risk disclosure requirement, an interpretation that contemplates a discussion of the financial health of a depository institution and its private insurer. Giving FTC the flexibility to determine what disclosure requirements should be issued and to decide on the appropriate means for enforcing them could help to alleviate the Commission s concern. For example, the Commission might choose to require nonfederally insured institutions to obtain independent certifications from state supervisors or another independent body that their institution is in compliance with the section s disclosure requirements. Also, the Commission could be given authority to coordinate with state supervisors of nonfederally insured credit unions to assist in enforcing the disclosure requirements or imposing sanctions for violations of the disclosure provisions. In addition, a requirement that FTC consult with FDIC and NCUA about disclosure requirements could ensure that disclosure under section 43 covers FDIC and NCUA concerns about the potential for confusion of private deposit insurance with federal deposit insurance, and provides FTC with access to expertise it deems necessary to establish disclosure requirements. Requiring assistance from FDIC and NCUA in fashioning an appropriate disclosure regime may help satisfy FTC concerns about its lack of expertise. Additionally, such assistance would provide the federal deposit insurance agencies with an opportunity to ensure that disclosures adequately inform depositors in a manner that reduces the possibility of confusion with federal deposit insurance and apprises them of the risks associated with the lack of federal deposit insurance. Shut-down provision: Several aspects of this provision raise regulatory concerns. First, the requirement relies upon states to make a determination that involves federal policies; specifically, whether a particular institution is eligible for federal deposit insurance. The eligibility determination includes many factors that federal regulators apply on a case-by-case basis. 
A related concern is that the provision does not indicate what criteria a state should use in determining that an institution is eligible for federal deposit insurance. In addition, the section calls upon FTC to shut down institutions that are subject to regulation by state or federal bodies that have expertise both in assessing the consequences of a shutdown and in shutting down an institution. To address these concerns, modifications to the shut-down provision could require coordination between FTC and the appropriate primary regulator of an institution in connection with a state's determination of deposit insurance eligibility, the Commission's determination of an institution's eligibility for an exemption from the provision, and the shutting down of an institution. Annual audit requirements: Section 43 clearly sets forth the requirements for a private deposit insurer with respect to the annual audit it must obtain and to whom the annual audit must be provided. However, the section does not indicate the extent of FTC review and monitoring appropriate for enforcing the provision. In this regard, an amendment to section 43 could provide FTC with specific authority to establish annual audit requirements for private insurers. With such authority, the Commission could set forth the conditions under which it would rely on the annual audit or could enter into a cooperative arrangement with the insurer's state regulators concerning reviews of the annual audit. <5. Conclusions> Depository institutions lacking federal deposit insurance are chartered and supervised by states; however, the activities of these entities involve federal interests. Congress acted on these federal interests by enacting section 43 of the FDI Act. However, issues of enforcement remain. Consistent with a prohibition in FTC's appropriations authority, the Commission has not enforced section 43 provisions. Absent enforcement, our work showed that compliance with these provisions varied significantly. Our primary concern, resulting from the lack of enforcement of section 43 provisions, is the possibility that members of state-chartered, privately insured credit unions may not be adequately informed that their deposits are not federally insured and that, should their institution fail, the federal government does not guarantee that they will get their money back. The fact that many privately insured credit unions we visited did not conspicuously disclose that the institution was not federally insured raises concerns that the congressional interest in this regard is not being fully satisfied. The lack of enforcement of the other two provisions, the shut-down and annual audit requirements, may have a less direct impact on consumers. While it appears that privately insured credit unions have not obtained a determination from their state regulators that they are eligible for federal deposit insurance, this determination may not be a meaningful protection for consumers. Since it is only a one-time requirement, it does not provide any assurance that institutions will continue to operate in a manner that keeps them eligible for federal deposit insurance. However, state regulators imposed safety and soundness standards for credit unions lacking federal deposit insurance that are similar to federal oversight standards. NCUA officials also may consider other factors when determining eligibility. ASI officials also told us that they rigorously monitor the safety and soundness of their insured institutions.
Given the related actions undertaken to help ensure the health of privately insured credit unions, the effect on consumers from the lack of enforcement of this provision may be negligible. Since we found that the remaining private deposit insurer has complied with the annual audit requirements, state regulators and the management of privately insured credit unions have had the opportunity to become informed about the financial condition of this private deposit insurer. Implementation of this provision helps ensure the safety and soundness of ASI which, in turn, helps to ensure that members of state- chartered, privately insured credit unions have a viable insurer should their credit union fail. Since the remaining private deposit insurer complied with section 43 audit requirements, it appears consumers suffered no negative impact from the nonenforcement of this provision. In evaluating which federal agency should enforce these provisions, we found the responsibilities outlined in these provisions did not fall ideally within any single agency s jurisdiction. FTC staff and officials from NCUA and FDIC opposed charging their agencies with this responsibility. NCUA and FDIC both have an interest in making sure that consumers receive adequate information about whether or not their deposits are federally insured. NCUA and FDIC also have considerable expertise in disclosures at federally insured depository institutions. However, FDIC insures the deposits at banks and savings associations but does not regulate or supervise credit unions or insure deposits at these institutions. If either FDIC or NCUA were charged with this responsibility, it could create potential confusion about federal deposit insurance and would result in a regulatory conflict of interest that could expose the credit union system to a loss of public confidence in the federal deposit insurance system. This would be inconsistent with a central purpose of the provision. Despite this conflict, the agency that enforces section 43 would benefit from coordination with NCUA and FDIC, because of their interests and expertise. Partnering with state regulators could also help FTC enforce certain section 43 requirements. For example, the Commission might choose to require nonfederally insured institutions to obtain independent certifications that their institution is in compliance with the section s disclosure requirements and that the risks of not having federal deposit insurance have been adequately disclosed. Considering that Congress deferred to the states on whether to permit the operation of depository institutions lacking federal deposit insurance, it is reasonable to conclude that Congress also relied upon the states to oversee the safety and soundness of those institutions and, accordingly, the risks to consumers of dealing with them. Although institutions lacking federal deposit insurance are chartered and regulated by the states, protecting consumers from confusion about the insurance of their deposits is consistent with the FTC s consumer protection mission. Congress also determined that the federal agency specifically charged with protecting consumers against misleading or deceptive information practices FTC should ensure that the federal interest in proper disclosure is maintained. However, Congress has also prohibited FTC from discharging its responsibilities under section 43. 
While FTC staff have raised jurisdictional concerns, as well as practical concerns about the Commission's ability to enforce these provisions, we believe that these interests can be best addressed by retaining FTC's responsibility for enforcing and implementing section 43. However, the section could be modified to reduce concerns FTC has expressed about its ability to enforce these provisions. Such modifications could allow FTC flexibility in discharging its responsibilities and enable it to call upon the expertise of the federal deposit insurers, state regulators, or others when the Commission deems it necessary without sacrificing the purposes of the section. <6. Matters for Congressional Consideration> No federal agency was the clear or obvious choice to carry out the responsibilities outlined in section 43 of the FDI Act; however, if modifications were made to these provisions, we believe that FTC would be best suited to retain responsibility for enforcing and administering these provisions. If Congress determines that FTC is the appropriate agency, then Congress should remove the prohibition on FTC's using appropriated funds to enforce these provisions. Also, Congress should clarify that FTC's authority to implement and enforce section 43 is not subject to any limitations on its jurisdiction contained in the FTC Act. To remove obstacles and provide additional flexibility for FTC's enforcement of section 43 disclosure requirements, Congress may wish to consider providing FTC the authority to consult with FDIC and NCUA when determining the manner and content of disclosure requirements to (1) provide FTC with access to expertise it deems necessary to establish disclosure requirements and (2) ensure that the required disclosures address FDIC and NCUA concerns about the potential for confusion of private deposit insurance with federal deposit insurance; providing FTC the authority to coordinate with state supervisors of nonfederally insured depository institutions to assist in enforcing the disclosure requirements; and providing FTC authority to impose sanctions for violations of the disclosure provisions. To remove obstacles and provide additional flexibility for FTC's enforcement of the section 43 shut-down provision, Congress may wish to consider requiring coordination between FTC and the appropriate primary regulator of an institution when (1) FTC considers whether to exempt an institution from the requirement to obtain a state determination that it meets eligibility requirements for federal deposit insurance; and (2) FTC seeks to shut down an institution because it has not obtained a state determination that it meets eligibility requirements for federal deposit insurance. In light of some uncertainty as to the scope of FTC's jurisdiction under the FTC Act to regulate insurance entities in matters other than antitrust, Congress may wish to consider clarifying FTC's authority regarding the annual audit provision by providing FTC with specific authority to establish requirements, such as attestation requirements, to ensure the reliability of annual audits for private insurers. <7. Agency Comments and Our Evaluation> We requested comments on a draft of this report from the heads, or their designees, of the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Federal Trade Commission. We received written comments from NCUA and FTC that are summarized below and reprinted in appendixes III and IV, respectively.
In addition, we received oral comments from the Deputy Director of Supervision and Consumer Protection at FDIC that are summarized below. We also received technical comments from NCUA and FTC that we incorporated into the report as appropriate. FDIC oral comments focused on the findings in the report dealing with FDIC and the overall report conclusions. FDIC generally agreed with the report s findings dealing with FDIC and stated that the arguments included in the report against having the FDIC enforce section 43 were generally consistent with arguments it provided to congressional staff during the drafting of the Federal Deposit Insurance Corporation Improvement Act of 1991, which led to the decision in the enacted legislation to assign FTC responsibility for enforcing compliance with the provisions discussed in this report. FDIC also stated that while time did not permit it to conduct an exhaustive legal review, it generally agreed with the report s overall conclusions. NCUA concurred with the report s conclusions that there is a need for enforcement of the consumer protection provisions in section 43 and that, for the reasons stated in our report, FTC, not NCUA or FDIC, is in the best position to enforce these provisions. NCUA also commented on FTC staff concerns expressed in this report that FTC might be challenged if it were to take action against credit unions because its enabling legislation has been interpreted to mean that it has no jurisdiction over nonprofit entities, such as credit unions. NCUA agreed with our conclusion that even if FTC s authority under the FTC Act did not extend to nonprofit entities, the FTC Act did not preclude Congress from subjecting credit unions to FTC s authority under section 43. Although NCUA agreed with this logic, it also believed that under FTC s enabling legislation FTC has jurisdiction over state-chartered credit unions. FTC disagreed with our conclusion that the Commission is the best among federal agencies to enforce section 43 provisions. FTC believed that the solution we offered does not meet the objectives of the statute and conflicted with our analyses. FTC stated that three principal objectives of section 43 are to provide some federal oversight to determine (1) the safety of deposits in institutions that are neither supervised nor insured by the federal government; (2) the financial soundness of those institutions and their state-supervised insurers; and (3) that disclosures to depositors at those depository institutions fully inform the depositors about an institution s lack of federal deposit insurance. We believe that FTC s interpretation of section 43 is inconsistent with the overall framework and purpose of the section. The regulatory scheme of section 43 indicates that Congress did not intend FTC to have a safety and soundness role. For example, Congress relied upon the states to determine whether a depository institution is eligible for federal deposit insurance even though the determination includes an assessment of an institution s safety and soundness. In addition, Congress required private deposit insurers to obtain an annual audit that satisfies certain standards, but did not require that the insurer submit the audit to FTC. Instead, section 43 requires the insurer to submit the audit to the state supervisors of institutions who have deposits insured by the entity. 
Finally, Congress's designation of FTC as the federal agency responsible for enforcing section 43 indicates that Congress did not contemplate a federal safety and soundness role. The legislative history of section 43 supports this interpretation. The Senate bill containing the original version of section 43 set forth substantially the same disclosure requirements as are contained in section 43. The bill designated FDIC and NCUA, two safety and soundness regulators, to enforce those requirements. However, in the next version of the bill, which added the audit requirement, the shut-down provision, and the look-alike provision, Congress substituted FTC as the agency charged with enforcement responsibility. The legislative history does not discuss the reasons for this change, but it is reasonable to conclude that by substituting FTC for the safety and soundness regulators, Congress opted against a federal safety and soundness role under section 43. Neither section 43 nor its legislative history indicates that Congress intended to transform FTC from a consumer protection agency into a safety and soundness regulator of state-supervised depository institutions and their state-supervised private deposit insurers. We believe that the primary objectives of section 43 are to ensure (1) that consumers are protected by receiving the disclosures and opportunity for acknowledgement specified in the section; (2) the performance of an annual audit of the deposit insurer, in accordance with generally accepted auditing standards, that attests to the insurer's adherence to generally accepted accounting principles and the sufficiency of the insurer's loss reserve; (3) the state certification relating to the shut-down provision; and (4) FTC's prudent and reasoned exercise of its authority pursuant to the look-alike provision. Our proposed solutions are consistent with this interpretation of section 43. FTC also raised concerns about our proposal that the Commission rely in part on NCUA and FDIC in connection with establishing disclosure requirements. FTC said that this recommendation would expose the Commission's formulation of disclosure requirements to the regulatory conflict of interest that would arise if NCUA and FDIC were to have primary regulatory responsibility under section 43. We believe that FTC, as a disinterested regulator with primary responsibility in this area, could neutralize any potential conflict of interest by considering the views of all parties having an interest in or expertise regarding an FTC action under section 43. FTC also contended that we significantly overestimate the Commission's expertise and experience in auditing, deposit safety and reserves, insurance regulation, assessment of financial soundness of depository institutions or insurers, and shutting down depository institutions. The Commission asserted that proper implementation of section 43 would require grafting onto the FTC, a very small agency, an entirely new deposit safety mission requiring expertise, tools, and resources that the FTC lacks and for which it has no other need. We disagree. This criticism is based on FTC's extreme view of the federal role under section 43. FTC assumes that Congress intended to transform the Commission into a regulator of depository institutions and insurers even though section 43 clearly contemplates that the states are to serve in that capacity. As stated above, we believe that section 43 calls for a more moderate role consistent with FTC's mission as a consumer protection agency.
Congress has charged FTC with disclosure-related responsibilities with respect to many industries that FTC does not regulate. FTC regulates advertising and labeling with respect to a wide variety of consumer commodities and services, yet the Commission does not appear to have expertise in the intricacies of all industries subject to those authorities. Nothing in section 43 calls for FTC to have expertise, experience, or resources to regulate the safety of depository institutions. Also, nothing in section 43 requires FTC to oversee the closure of an institution subject to the shut-down provision. The shut-down provision is self-activating, that is, it is a directive to nonfederally insured depository institutions that they must cease doing business (in interstate commerce), if they have not received an insurance eligibility determination from the state. Congress did not provide any procedure for the institutions to follow when shutting down, and Congress did not charge FTC with responsibility for administering a procedure. It should be noted that FTC has ample experience under its routine enforcement authority in having businesses shut down. Additional FTC criticisms were that the report overstates the disadvantages and ignores the advantages of NCUA implementing section 43, and that the report does not consider possible alternative assignments of responsibility. FTC s assertions about the efficiency of NCUA regulation are misguided. As we discussed in the report, assigning NCUA the responsibility for regulating its competition would present an inherent conflict of interest that could undermine NCUA s credibility as a regulator. Moreover, bringing nonfederally insured institutions within the umbrella of regulation by a federal deposit insurer is inconsistent with a central purpose of section 43, which is to ensure the separation of nonfederally insured institutions and their private deposit insurer from federal deposit insurance. The report does not discuss the potential for federal regulators other than NCUA, FDIC and FTC to implement section 43 because no other federal regulator appears to be a suitable candidate. Unlike FTC, the Federal Reserve Board has safety and soundness and related responsibilities regarding certain depository institutions. Placing section 43 responsibilities under the Board would subject nonfederally insured, state-supervised institutions to regulation by a federal supervisor of financial depository institutions. We believe that Congress, by selecting FTC to administer and enforce section 43, sought to avoid such a relationship. FTC administration of section 43 would not necessarily have the same effect. With respect to SEC, we note that requiring SEC to administer the section would unnecessarily expand the Commission s mission. In some cases a look-alike institution (an entity that takes deposits but which is not chartered as a depository institution) could be involved in a securities violation, in which case SEC could take action under the federal securities laws and would not need authority under section 43 to proceed against an entity. Charging SEC with responsibility under section 43 could blur the distinction between disclosure and audit obligations under the securities laws and those established under section 43. We note, however, FTC is not precluded from working with SEC should FTC invoke the look-alike authority. FTC also stated that we failed to assess the potential impact on consumers if the disclosure provisions are not enforced. 
An empirical analysis of the impact on consumers was not performed. Presumably, depositors would not be impacted negatively by the lack of disclosure unless (a) they believed that their deposits were federally insured because of the lack of disclosure; (b) the institution holding their deposits failed; and (c) their deposits were not protected that is, the deposits were not insured or the insurer was unable to repay the deposits of a failed institution. We note, however, that in section 43 Congress made the judgment that depositors should receive the disclosure required in that section. It is reasonable to conclude that some individuals who do not receive the benefit of that disclosure may be uncertain about the insured status of their accounts. We agree with FTC s concerns that if the section 43 enforcement authority were immediately activated a number of institutions would be faced with shutting down because they have not obtained determinations from their state supervisors of eligibility for federal insurance and that some institutions would be subject to sanctions because of disclosure failures. We anticipate that Congress would grant FTC discretion to enforce and implement section 43 and, if necessary, to provide for a phased-in approach to deal with FTC s concerns. We will provide copies of this report to the Chairman and the Ranking Minority Member on the Senate Committee on Banking, Housing, and Urban Affairs, and the Chairman and the Ranking Minority Member on the House Committee on Financial Services. Copies of this report also will be provided to the Chairman of FTC; the Chairman of FDIC; the Chairman of NCUA, and other interested parties. Copies will also be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Debra R. Johnson, Assistant Director. Please contact Ms. Johnson or me at (202) 512-8678 if you or your staff have any questions about this report. Other major contributors are acknowledged in appendix V. Appendix I: Objectives, Scope, and Methodology To respond to a mandate in the Conference Report to accompany the House Joint Resolution 2, for the Fiscal Year 2003 Consolidated Appropriations Act which directed us to study the enforcement of section 43 of the Federal Deposit Insurance Act we (1) determined the current status of enforcement of these requirements; (2) determined the extent of compliance with each requirement and the potential impact on consumers if these requirements were not enforced, and (3) evaluated which federal agency could most effectively enforce section 43. To better understand the issues around deposit insurance, we reviewed and analyzed relevant studies on federal and private deposit insurers for both credit unions and other depository institutions. In addition, we interviewed officials at the National Credit Union Administration (NCUA), the Department of the Treasury, and the Federal Deposit Insurance Corporation (FDIC) to obtain perspectives specific to federal and private deposit insurance. We also obtained views from credit union industry groups including the National Association of Federal Credit Unions, National Association of State Credit Union Supervisors, and Credit Union National Association, Inc. (CUNA). 
We limited our assessment of depository institutions lacking federal deposit insurance to state-chartered credit unions that purchase private deposit insurance because banks, thrifts, and federally chartered credit unions generally are required to purchase federal deposit insurance. As of December 2002, 214 state-chartered credit unions lacked federal deposit insurance, and all but two were privately insured. In addition, our analysis was limited to primary deposit insurance. To determine the extent to which private deposit insurance is permitted and utilized by state-chartered credit unions, we conducted a survey of state credit union regulators in all 50 states. Our survey had a 100-percent response rate. In addition to the survey, we obtained and analyzed financial and membership data of privately insured credit unions from a variety of sources (NCUA, Credit Union Insurance Corporation of Maryland, CUNA, and American Share Insurance (ASI), the only remaining provider of primary share insurance). We found this universe difficult to confirm because, in our discussions with state regulators, NCUA, and ASI officials, and our review of state laws, we identified other states that could permit credit unions to purchase private deposit insurance. To determine the regulatory differences between privately insured credit unions and federally insured state-chartered credit unions, we identified and analyzed statutes and regulations related to deposit insurance at the state and federal levels. In addition, we interviewed officials at NCUA and conducted interviews with officials at the state credit union regulatory agencies from Alabama, California, Idaho, Indiana, Illinois, Maryland, Nevada, New Hampshire, and Ohio. To determine the extent to which privately insured credit unions met federal disclosure requirements, we identified and analyzed federal consumer disclosure provisions in section 43 of the Federal Deposit Insurance Act, as amended, and conducted unannounced site visits to 57 privately insured credit unions (49 main and 8 branch locations) in Alabama, California, Illinois, Indiana, and Ohio. The credit union locations were selected based on a convenience sample using state and city location coupled with random selection of main or branch locations within each city. About 90 percent of the locations we visited were the main institution rather than a branch institution. This decision was based on the assumption that if the main locations were not in compliance, then the branch locations would probably not be in compliance either. Although these site visits and the findings they produced do not constitute a statistically valid sample of all possible main and branch locations of privately insured credit unions, and therefore cannot be used to determine the overall extent of compliance, we believe that what we found is robust enough, both in the aggregate and within each state, to raise concern about lack of disclosure in privately insured credit unions. During each site visit, using a systematic check sheet, we noted whether or not the credit union had conspicuously displayed the fact that the institution was not federally insured (on signs or stickers, for example). In addition, from these same 57 sites visited, we collected a total of 227 credit union documents that we analyzed for disclosure compliance. While section 43 also requires depository institutions lacking federal deposit insurance to disclose that they are not federally insured in personal documents, such as periodic statements, we did not collect these documents.
We also conducted an analysis of the Web sites of 78 privately insured credit unions, in all eight states where credit unions are privately insured, to determine whether disclosures required by section 43 were included. To identify these Web sites, we conducted a Web search. We attempted to locate Web sites for all 212 privately insured credit unions; however, we were able to identify only 78 Web sites. We analyzed all Web sites identified. Finally, we interviewed FTC staff to understand their role in enforcement of requirements of section 43 for depository institutions lacking federal deposit insurance. To understand how private deposit insurers operate, we conducted interviews with officials at three private deposit insurers for credit unions: ASI (Ohio), Credit Union Insurance Corporation (Maryland), and Massachusetts Credit Union Share Insurance Corporation (Massachusetts). Because ASI was the only fully operating provider of private primary deposit insurance, ASI was the focus of our review. We obtained documents related to ASI operations, such as financial statements and annual audits, and analyzed them for the auditor's opinion noting adherence to accounting principles generally accepted in the United States. To determine the extent to which ASI provided copies of its annual audits to state regulators and credit unions it insures, we interviewed state regulators in states where ASI insures credit unions and contacted the management of 26 credit unions that are insured by ASI. Additionally, to understand the state regulatory framework for ASI, we interviewed officials at the Ohio Department of Insurance and Department of Financial Institutions. To evaluate which federal agency could most effectively enforce these requirements, we interviewed FTC staff and officials from NCUA, FDIC, and various interested industry groups to discuss their perspectives and obtain their positions on enforcement of section 43 requirements. We also conducted legal research and analysis related to these provisions. We conducted our work in Washington, D.C., Alabama, California, Indiana, Illinois, Maryland, Massachusetts, Ohio, and Virginia between February and August 2003, in accordance with generally accepted government auditing standards. Appendix II: Entities That Enforce Various Laws at Credit Unions [table not reproduced; laws covered include the Real Estate Settlement Procedures Act and the Bank Secrecy Act (Currency and Foreign Transactions Reporting Act); abbreviations used in the table: DOJ, Department of Justice; FHA/VA, Federal Housing Administration/Veterans Administration; FRB, Federal Reserve Board; FTC, Federal Trade Commission; HUD, Department of Housing and Urban Development; TREAS, Treasury Department. The USA PATRIOT Act amended the Bank Secrecy Act, as well as other legislation.] Appendix III: Comments from the National Credit Union Administration Appendix IV: Comments from the Federal Trade Commission Appendix V: GAO Contacts and Staff Acknowledgments <8. GAO Contacts> <9. Acknowledgments> In addition to the persons named above, Anne Cangi, Theresa L. Chen, William Chatlos, Kimberly Mcgatlin, Donald Porteous, Emma Quach, Barbara Roesmann, and Paul Thompson made key contributions to this report.
Why GAO Did This Study This mandated report responds to Congressional concerns that provisions in section 43 of the Federal Deposit Insurance Act (FDI Act) are not being enforced. Since 1991, section 43 has required, among other things, depository institutions lacking federal deposit insurance to conspicuously disclose that deposits in these institutions are not federally insured. GAO's objectives were to (1) determine the current status of the enforcement of provisions in section 43; (2) determine the extent of compliance with each provision and the potential impact on consumers if the provisions were not enforced; and (3) evaluate which federal agency could most effectively enforce the provisions. What GAO Found The Federal Trade Commission (FTC) is responsible for enforcing compliance with the provisions in section 43 of the FDI Act. However, due to a variety of concerns, FTC has requested and appropriators have agreed to prohibit FTC from enforcing these provisions. The National Credit Union Administration (NCUA) and state regulators have imposed some related requirements on credit unions and private deposit insurers. While these requirements are not the same as those in section 43 provisions, they provide some assurances that certain actions contemplated by section 43 are being satisfied. Some privately insured credit unions GAO visited did not adequately disclose that these institutions were not federally insured; as a result, depositors at these institutions may not be fully informed that their deposits are not federally insured. For example, in unannounced site visits to 57 privately insured credit unions in Alabama, California, Illinois, Indiana, and Ohio, GAO found that required notices were not posted in 37 percent of the locations. No federal agency is ideally suited to carry out the responsibilities outlined in section 43. Although FTC, NCUA, and the Federal Deposit Insurance Corporation (FDIC) officials generally agreed that consumers should receive information about the insured status of their deposits, they strongly maintained that their respective agencies should not enforce these provisions. NCUA and FDIC officials objected to enforcing these provisions because their agencies have no direct interest in uninsured institutions and their involvement in the enforcement of these requirements could undermine the purposes of the provision. FTC staff raised jurisdictional concerns and asserted that its mission, resources, and practices were ill suited for such a role. GAO believes that clarifying FTC's authority and providing it with additional flexibility in administering these provisions represents the best option to enforce the provisions.
<1. Background> Both the Clean Water and Drinking Water SRF programs authorize EPA to provide states and local communities with independent and sustainable sources of financial assistance. This assistance is typically in the form of low- or no-interest loans for projects that protect or improve water quality and that are needed to comply with federal drinking water regulations and protect public health. Repayment of these loans replenishes the funds and provides the ability to fund future loans for additional projects. The Clean Water SRF program was established in 1987 under the Clean Water Act, which was enacted to protect surface waters, such as rivers, lakes, and coastal areas, and to maintain and restore the physical, chemical, and biological integrity of these waters. The Drinking Water SRF program was established in 1996 under the Safe Drinking Water Act, which was enacted to establish national enforceable standards for drinking water quality and to guarantee that water suppliers monitor water to ensure compliance with standards. The Recovery Act provided $6 billion for EPA's Clean Water and Drinking Water SRF programs. This amount represents a significant increase over the federal funds awarded to the non-Recovery Act, or base, SRF programs in recent years. From fiscal years 2000 through 2009, annual appropriations averaged about $1.1 billion for the Clean Water SRF program and about $833 million for the Drinking Water SRF program. In addition to increasing funds, the Recovery Act included some new requirements for the SRF programs. First, projects funded with Recovery Act SRF program funds had to be under contract, ready to proceed, within 1 year of the act's passage, or by February 17, 2010. Second, states had to use at least 20 percent of these funds as a green reserve to provide assistance for green infrastructure projects, water- or energy-efficiency improvements, or other environmentally innovative activities. Third, states had to use at least 50 percent of Recovery Act funds to provide additional subsidies for projects in the form of principal forgiveness, grants, or negative interest loans. Uses for these additional subsidies can include helping economically disadvantaged communities build water projects, although these uses are not a requirement of the act. With some variation, Congress incorporated two of these requirements, green projects and additional subsidies, into the fiscal year 2010 and 2011 base SRF program appropriations. In addition to meeting requirements from program-specific provisions, water projects receiving Recovery Act funds have to meet requirements from the act's Buy American and Davis-Bacon provisions. The Recovery Act generally requires that all of the iron, steel, and manufactured goods used in a project be produced in the United States, subject to certain exceptions. Federal agencies can issue waivers for certain projects under specified conditions, for example, if using American-made goods is inconsistent with the public interest or if the cost of goods is unreasonable; the act limits the unreasonable cost exception to those instances when inclusion of American-made iron, steel, or other manufactured goods will increase the overall project cost by more than 25 percent. Furthermore, recipients do not need to use American-made goods if they are not sufficiently available or not of satisfactory quality.
In addition, the Recovery Act applies Davis-Bacon provisions to all Recovery Act-funded projects, requiring contractors and subcontractors to pay all laborers and mechanics at least the prevailing wage rates in the local area where they are employed, as determined by the Secretary of Labor. Contractors are required to pay these workers weekly and submit weekly certified payroll records. To enhance transparency and accountability over Recovery Act funds, Congress and the administration built numerous provisions into the act, including a requirement that recipients of Recovery Act funding (including state and local governments, private companies, educational institutions, nonprofits, and other private organizations) report quarterly on a number of measures. (Recipients, in turn, may award Recovery Act funds to subrecipients, which are nonfederal entities.) These reports are referred to as recipient reports, which the recipients provide through one Web site, www.federalreporting.gov (Federalreporting.gov), for final publication through a second Web site, www.recovery.gov (Recovery.gov). Recipient reporting is overseen by the responsible federal agencies, such as EPA, in accordance with Recovery Act guidance provided by the Office of Management and Budget (OMB). Under this guidance, the federal agencies are required to conduct data quality checks of recipient data, and recipients can correct the data, before they are made available on Recovery.gov. Furthermore, additional corrections can be made during a continuous correction cycle after the data are released on Recovery.gov. A significant aspect of accountability for Recovery Act funds is oversight of spending. According to the federal standards of internal control, oversight should provide managers with current information on expenditures to detect problems and proactively manage risks associated with unusual spending patterns. In guidance issued in February 2009, OMB required each federal agency to develop a plan detailing the specific activities (including monitoring activities) that it would undertake to manage Recovery Act funds. EPA issued its first version of this plan in May 2009, as required, and updated this document as OMB issued new guidance. <2. All Recovery Act SRF Program Funds Have Been Awarded and Obligated, and with Some Exceptions, States Reported Supporting Major Infrastructure Projects and Helping Economically Disadvantaged Communities> Nationwide, the 50 states have awarded and obligated the almost $6 billion in Clean Water and Drinking Water SRF program funds provided under the Recovery Act and reported using the majority of these funds for sewage treatment infrastructure and drinking water treatment and distribution systems, according to EPA data. In the nine states we reviewed, the states used these funds to pay for infrastructure projects that help to address major water quality problems, although state officials said that in some cases, Recovery Act requirements changed their priorities or the projects selected for funding. The nine states also used their Recovery Act funding to help economically disadvantaged communities, but state officials indicated that they continue to have difficulty helping these communities. <2.1.
Nationwide, EPA Data Indicate States Awarded and Obligated the Majority of Recovery Act Water Funds for Sewage Treatment Infrastructure and Drinking Water Treatment and Distribution Systems> As of March 30, 2011, states had awarded funds for contracts and obligated the $4 billion in Clean Water SRF program funds and $2 billion in Drinking Water SRF program funds provided under the Recovery Act. <2.1.1. Requirement to Award Recovery Act Funds to Projects under Contract within 1 Year> As we reported in May 2010, EPA indicated that all 50 states met the Recovery Act requirement to award Recovery Act funds to projects under contract by February 17, 2010, 1 year after the enactment of the Recovery Act. In the 2 years since the Recovery Act was passed, states have drawn down from the Treasury approximately 79 percent, or $3.1 billion, of the Clean Water SRF program funds and approximately 83 percent, or $1.7 billion, of the Drinking Water SRF program funds. Across the nation, the states have used the almost $6 billion in Recovery Act Clean and Drinking Water SRF program funds to support more than 3,000 water quality infrastructure projects. As shown in figure 1, the states used the majority of their Recovery Act Clean Water SRF program funds to improve secondary and advanced treatment at wastewater treatment plants, as well as for projects to prevent or mitigate sanitary sewer overflows. In Montevallo, Alabama, for example, the state provided Clean Water SRF program funds to upgrade an outdated wastewater treatment plant in Shelby County that served a population of about 5,000. The upgrade added two large settlement basins to hold and treat wastewater, replacing a series of small basins (see fig. 2). The additional treatment is expected to remove nutrients, such as nitrogen and phosphorus, to help the county meet higher standards in the nearby waterways receiving the plant's discharged water. As shown in figure 3, the states used about half of their Recovery Act Drinking Water SRF program funds to construct projects to transmit and distribute drinking water, including pumps and pipelines to deliver water to customers. States used about 40 percent of their funds for projects to treat and store drinking water. In Baltimore, Maryland, for example, the state provided funds to the city to cover one of its treated water reservoirs at the Montebello drinking water treatment plant. Before it was covered, the reservoir was open to birds and other sources of contamination, and city water managers used a mesh-like material to try to keep birds from landing on or using the water. When the project is complete, the reservoir will be a huge cement tank buried under soil and vegetation (see fig. 4 for the project under construction in December 2010). According to EPA data, all states met the requirement to use at least 20 percent of their Recovery Act funding for green projects, with $1.1 billion of total Clean Water SRF program funds and $544 million of total Drinking Water SRF program funds going to green projects. According to EPA, the goal of supporting green projects is to promote green infrastructure, energy or water efficiency, and innovative ways to sustainably manage water resources. Green infrastructure refers to a variety of technologies or practices (such as green roofs, porous pavement, and rain gardens) that use or mimic natural systems to enhance overall environmental quality.
In addition to retaining rainfall and snowmelt and allowing them to seep into groundwater, these technologies can mitigate urban heat islands and sequester carbon. Figure 5 shows the amount of Clean Water and Drinking Water SRF program funds that states awarded to green projects by type of project. In Annapolis, Maryland, for example, city officials used Clean Water SRF program funds to construct a green parking lot, a project that helped retain and filter storm water runoff. (See fig. 6.) In Los Alamos, New Mexico, city officials used Clean Water SRF program funds to install facilities to recycle water at the city's wastewater treatment plant; the recycled water will be used as washwater, that is, water used in the plant to clean equipment (see fig. 7). Because New Mexico is an arid state, the reuse of water saves operating costs for the plant, as well as scarce water resources. Nationwide, the states also met the Recovery Act requirement to provide at least 50 percent of the Clean Water and Drinking Water SRF program funds as additional subsidies in the form of principal forgiveness, negative interest loans, or grants (i.e., not loans to be fully repaid). Of the total Recovery Act funds awarded, 76 percent of Clean Water SRF Recovery Act funds and 70 percent of Drinking Water SRF Recovery Act funds were distributed as additional subsidies. Figure 8 shows the total Clean Water and Drinking Water Recovery Act funds awarded by the states as principal forgiveness, negative interest loans, or grants. The remaining 24 percent of Clean Water SRF Recovery Act funds and 30 percent of Drinking Water SRF Recovery Act funds will be provided as low- or no-interest loans that will recycle back into the programs as subrecipients repay their loans. <2.2. Recovery Act Water Funds Generally Addressed Major Water Quality Problems in Nine States, although Recovery Act Requirements Changed Some State Priorities or Projects> In the nine states we reviewed, Recovery Act Clean and Drinking Water SRF program funds have been used to address some of the major clean and drinking water problems in the states. These nine states received a total of about $832 million in Recovery Act SRF program funds (about $579 million for their Clean Water SRF programs and about $253 million for their Drinking Water SRF programs). In total, these funds supported 419 clean and drinking water projects. To award SRF program funds, each of the nine states used a system to score and rank the water projects submitted by local municipalities or utilities seeking funds to address water quality problems. The projects with the most points are considered the highest priority on the list of projects for funding. For example, Nevada officials told us that groundwater contamination is their state's major clean water quality problem, which their ranking system addresses by designating the elimination of existing contamination of groundwater as one of the state's highest-scoring priorities. In addition, in most of the nine states we reviewed, compliance is a key aspect of their ranking system, allowing points to be awarded to infrastructure projects that help the states eliminate causes of noncompliance with federal or state water quality standards and permits. Officials in most of the nine states said that they generally obtain information on their water systems' compliance with federal and state water quality standards through discussions with their program compliance staff and from state databases.
Michigan, for example, assigns a significant number of points to clean water projects, such as sewage treatment works, that will help municipalities comply with enforcement actions brought against them by the state. In the nine states we reviewed, officials said that Recovery Act priorities (including the requirements for projects to be ready to proceed to contract 1 year after the passage of the Recovery Act or for green projects) either changed their priorities for ranking and funding projects or changed the projects they funded. Readiness of a project to proceed to construction requirement. In the nine states, officials either included readiness to proceed and other Recovery Act requirements in their ranking system and selected projects on that basis, or said that they did not fund, or bypassed, top-ranked projects that were not ready to proceed to construction by February 17, 2010, 1 year after the passage of the Recovery Act. For example, Washington State's two top-ranked clean water projects did not receive Recovery Act SRF program funds because they could not meet the February 2010 deadline. The projects were to decommission septic systems and construct a wastewater treatment plant to reduce phosphorus discharges to the Spokane River. In Wyoming, many of the projects that were not ready to proceed were water treatment plants, which state officials said take longer to design and plan for construction. Although these higher-ranked projects did not receive Recovery Act funds, at least two states were able to fund these projects in other ways, such as through state grants or non-Recovery Act SRF program funds. Green project requirement. Three states listed green projects separately from other projects. For example, Washington State officials who manage the Clean Water SRF program told us that they established a green projects category because they had anticipated that projects focused primarily on energy and water efficiency (green projects) would not score well under their ranking system, which focuses on water quality protection and improvements. Other states funded green projects ahead of higher-ranked projects. For example, Nevada did not fund a number of higher-ranked projects and funded a lower-ranked drinking water project that had green components. Similarly, Maryland bypassed many projects to fund the first green-ranked project on its list. Buy American and Davis-Bacon provisions. State officials identified a few projects that did not proceed because potential subrecipients either did not want to meet one or more Recovery Act requirements, such as the Buy American and Davis-Bacon provisions, or did not want to increase the cost of their projects. For example, local officials in Alabama withdrew their application for a drinking water project because the project was already contracted without Buy American and Davis-Bacon wage requirements, and an addendum to the contract to meet the regulations would have increased the project's cost. Similarly, officials in all nine states said that a few communities indicated they preferred to have their projects funded from the base program, or chose not to apply for or withdrew from the Recovery Act funding process to avoid paperwork or the additional costs associated with the act's Buy American or Davis-Bacon requirements. For example, Wyoming officials said that potential subrecipients for three clean water projects refused funding, citing time constraints or difficulty meeting Buy American requirements.
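To make the selection mechanics concrete, the sketch below mimics the score-then-bypass approach the states describe: projects are sorted by priority points, and top-ranked projects that could not be under contract by the February 17, 2010, deadline are bypassed. The project names, scores, costs, and dates are hypothetical, and real state ranking systems weigh many more factors.

```python
# Hypothetical sketch of ranking projects by priority points and bypassing
# those that cannot meet the Recovery Act readiness deadline.
from datetime import date

READINESS_DEADLINE = date(2010, 2, 17)

projects = [
    {"name": "Wastewater plant upgrade", "points": 95, "cost": 4_000_000, "ready_by": date(2010, 6, 1)},
    {"name": "Sewer line replacement",   "points": 80, "cost": 2_500_000, "ready_by": date(2010, 1, 15)},
    {"name": "Green parking lot",        "points": 60, "cost": 1_000_000, "ready_by": date(2009, 12, 1)},
]

def select_projects(projects, available_funds):
    """Fund the highest-scoring projects that can meet the readiness deadline."""
    funded, bypassed = [], []
    for p in sorted(projects, key=lambda p: p["points"], reverse=True):
        if p["ready_by"] > READINESS_DEADLINE:
            bypassed.append(p["name"])           # top ranked, but not ready in time
        elif p["cost"] <= available_funds:
            funded.append(p["name"])
            available_funds -= p["cost"]
    return funded, bypassed

print(select_projects(projects, 5_000_000))
# (['Sewer line replacement', 'Green parking lot'], ['Wastewater plant upgrade'])
```

In this toy example the highest-scoring project is bypassed for missing the deadline, much as Washington State's two top-ranked clean water projects were.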
Despite changes in priorities for ranking and funding projects or in the projects funded, officials reported that they were able to fund projects with Recovery Act funds that helped resolve their major water problems. For example, Wyoming officials told us that Recovery Act clean and drinking water funds were used to replace aging sewer and water lines, which they said was one of their major problems. Connecticut officials said that Recovery Act funding helped support four combined sewer overflow projects, which resulted in fewer discharges of partially treated sewage into the area waterways. Nevada officials told us that Recovery Act funding will help with the rehabilitation and relining of sewer ponds in four rural communities, eliminating groundwater pollution, a major problem in the state. Washington State officials who manage the Drinking Water SRF program told us that six of their Recovery Act projects addressed arsenic drinking water contamination, a major water problem in the state. <2.3. States Supported Economically Disadvantaged Communities, in Part by Using Additional Subsidies Authorized under the Act, although Officials Cited Continuing Difficulty in Helping These Communities> Although the Recovery Act did not require states to target Clean and Drinking Water SRF program funds to economically disadvantaged communities, six of the nine states that we reviewed distributed more than $123 million in clean water funds, and eight of the nine states distributed almost $78 million in drinking water funds, under the SRF Recovery Act programs to these communities. This amount represents about 24 percent of the almost $832 million in Recovery Act funds that the states were awarded. As shown in table 1, a large majority of the funds provided to these communities were provided as additional subsidies (grants, principal forgiveness, and negative interest loans). According to officials in five of the nine states we reviewed, their states provided additional subsidies to economically disadvantaged communities because the communities would otherwise have had a difficult time funding projects. For example, New Mexico officials told us that they directed additional drinking water subsidies to economically disadvantaged communities because these communities have historically lacked access to capital. Officials in Nevada told us such communities not only have a difficult time funding projects but also have some of the projects with the highest priority for addressing public health and environmental protection concerns. In addition, officials in a few other states told us that economically disadvantaged communities often lack the financial means to pay back loans from the SRF programs or lack funds to pay for the upfront costs of planning and designing a project. Officials in at least two states also said that many economically disadvantaged communities lack full-time staff to help manage the water infrastructure. Even with the additional subsidies available for projects, officials in a few states said that economically disadvantaged communities found it difficult to obtain Recovery Act funds. For example, Missouri officials told us that the Recovery Act deadline was the single most important factor hindering these communities from receiving funding.
New Mexico officials also told us that because these communities typically do not have funds to plan and develop projects, few could meet the deadline, and several projects that sought Recovery Act funds could not be awarded funding owing to the deadline. We gathered information on economically disadvantaged communities from the nine states we reviewed because EPA did not collect the information. In April 2011, the EPA Office of Inspector General (OIG) reported that EPA could not assess the overall impact of Recovery Act funds on economically disadvantaged communities because the agency did not collect data on the amount of Clean and Drinking Water SRF program funds distributed to these communities nationwide. The OIG recommended that EPA establish a system that can target program funds to its objectives and priorities, such as funding economically disadvantaged communities. <2.4. Number of FTEs Has Declined as Most Recovery Act Funds Are Spent> From the quarter ending December 2009 through the quarter ending June 2010, the number of FTEs paid for with Recovery Act SRF program funds increased each reporting quarter, from about 6,000 to 15,000 FTEs for planning, designing, and building water projects (see fig. 9). As projects were completed and funds spent, the number of FTEs funded declined to about 6,000 for the quarter ending March 2011. Following OMB guidance, states reported on FTEs directly paid for with Recovery Act funding, not the employment impact on suppliers of materials (indirect jobs) or on the local communities (induced jobs). In addition, state officials told us that, although funding varies from project to project, as much as 80 percent of a project's funding generally is used for materials (such as cement for buildings) and equipment (such as turbines, pumps, and centrifuges), and the remainder pays for labor, or FTEs. <3. EPA, States, and Other Agencies Took Actions to Monitor SRF Program Funds and Found Projects Largely Complied with Recovery Act Requirements> As Recovery Act Clean Water and Drinking Water SRF program funds have been spent over the last 2 years, EPA officials have monitored projects and spending activity and found that states have generally complied with Recovery Act requirements. Similarly, in the nine states we reviewed, state officials indicated that the site visits they made to monitor Recovery Act projects found few problems. Furthermore, state auditors in the nine states we reviewed continue to monitor and oversee the use of Recovery Act funds, and their reports showed few significant findings. <3.1. EPA's Monitoring Found That States Largely Complied with Recovery Act Requirements> Since the Recovery Act was enacted, EPA officials have reviewed all 50 states' Recovery Act Clean and Drinking Water SRF programs at least once and have found that states are largely complying with the act's requirements. In our May 2010 report, we recommended that EPA work with the states to implement specific oversight procedures to monitor and ensure subrecipients' compliance with provisions of the Recovery Act-funded Clean Water and Drinking Water SRF programs. EPA updated its oversight plan for Recovery Act funds, in part, as a response to our recommendation.
The plan describes the following monitoring actions for the Recovery Act Clean and Drinking Water SRF programs: EPA headquarters staff should visit both SRF programs in every region in fiscal years 2010 and 2011, review all states' Clean Water SRF programs and all states' Drinking Water SRF programs for these years, and provide training and technical assistance, as needed. Although the oversight plan recommends headquarters staff visit all regions in 2011, EPA officials decided instead to provide regional training on program eligibility requirements. The officials said that they had visited the regions once and saw greater benefit in providing training. EPA's Office of Wastewater Management and Office of Ground Water and Drinking Water will report bimonthly to the Assistant Administrator for Water on oversight activities. Regional staff should conduct state reviews twice a year using an EPA-provided checklist or comparable checklist, examine four project files, and conduct four transaction tests, which can be used to test if an internal control is working or if a dollar error has occurred in the processing of a transaction. In addition, regional staff are to discuss each state's inspection process and audit findings with state officials, and update headquarters staff on any findings. The regions are to submit to headquarters (1) program evaluation reports, which describe how states are managing their Recovery Act SRF funds and projects; (2) Recovery Act project review checklists, to examine compliance with Recovery Act requirements; and (3) transaction testing forms, to determine if any erroneous payments were made. Regional staff should conduct at least one site inspection of a clean water project and a drinking water project in each state each year. According to our review of the Clean and Drinking Water SRF program evaluation reports for the 50 states, EPA regional officials generally carried out the instructions in EPA's oversight plan. As of June 1, 2011, these officials had visited most state programs twice, although they visited some state programs only once or did not have documentation of the visits. During visits, officials reviewed the files for proper documentation pertaining to Davis-Bacon, Buy American, and green project requirements. Additionally, although not required to do so by the oversight plan, regional officials attempted to visit at least one clean water and one drinking water SRF Recovery Act project in every state each year. Headquarters officials said that the regional staff met this goal for drinking water projects in 2010, but they were not able to visit a clean water project in each state because of time and budget constraints. EPA headquarters officials said that they oversaw each region's activities by visiting the regional offices to review files on the states. Headquarters officials told us that when they visited regional offices, they checked whether key state documents were maintained in the region's state file, such as the Recovery Act grant application and any accompanying amendments; the state's intended use plan, which details a state's planned use of the funds, including the criteria for ranking projects and a list of ranked projects; and a copy of the grant award and conditions.
Furthermore, headquarters officials said that they used a regional review checklist to examine each region's oversight practices by, for example, determining whether the regions received and reviewed states' analyses of costs (business cases) and whether the regions ensured that the states updated key reporting data for their Recovery Act projects each quarter. Headquarters officials also said that they briefly reviewed the Drinking Water and Clean Water SRF program evaluation reports when they reviewed the regions' activities. Headquarters officials said they had imposed a 60-day time frame for completing these reports because the regional staff were not submitting the reports in a timely manner. Additionally, the EPA OIG is conducting performance audits of EPA's and states' use of Recovery Act funds for the Clean and Drinking Water SRF programs and unannounced site inspections of Recovery Act-funded projects. Between May 1, 2010, and May 1, 2011, the OIG conducted eight unannounced site visits. Six of the eight visits yielded no findings. The OIG issued recommendations for the other two projects: In a visit to Long Beach, California, the OIG found that a contractor did not fully comply with federal and state prevailing wage requirements, which resulted in underpayments to employees. The OIG recommended that EPA require the California State Water Resources Control Board to verify that the city is implementing controls to ensure compliance with prevailing wage requirements. In a visit to Astoria, Oregon, the OIG found that the city understated the number of FTEs created or retained with Recovery Act funds. In addition, the OIG found that a change order for one of four contracts awarded did not meet applicable procurement requirements. The OIG recommended that EPA Region 10 require the Oregon Department of Environmental Quality to require the city to correct the number of FTEs and report the corrected number to the federal government. The OIG also recommended that the regional administrator of EPA Region 10 require the Oregon Department of Environmental Quality to disallow the costs incurred under the change order unless Astoria was able to show that the costs met applicable Oregon requirements. Officials for EPA, the Oregon Department of Environmental Quality, and the city concurred with the corrective actions. The Chairman of the Recovery Accountability and Transparency Board testified in June 2011 that there has been a low level of fraud involving Recovery Act funds. He noted that less than half a percent of all reported Recovery Act contracts, grants, and loans had open investigations and that only 144 convictions, involving about $1.9 million of total Recovery Act funds for all programs, had resulted. As the EPA Inspector General noted in May 2011, however, fraud schemes can take time to surface. The Inspector General cited an ongoing investigation of a foreign company that received over $1.1 million in contracts for equipment to be used in wastewater treatment facilities across the United States after falsely certifying that the equipment met the Recovery Act Buy American provision. Furthermore, the Inspector General also testified that EPA Region 6 officials identified, through a hotline tip, $1 million in unallowable grant costs charged by seven subrecipients. These funds have been reprogrammed by the state for other uses. <3.2.
State Officials Said They Have Found Few Problems during Site Visits to Monitor Recovery Act Projects> EPA's oversight plan indicates that state officials should visit each project site at least once per year, and suggests that state officials review the items on EPA's state Recovery Act inspection checklist, or a similar state-specific checklist. According to the plan, state officials should complete the checklist and inform regional offices of any issues encountered in the oversight reviews, inspections, or audits. According to program officials in the nine states we reviewed, the clean and drinking water SRF projects they reviewed largely complied with Recovery Act requirements. The officials said that they inspected each Recovery Act project site at least once during the course of project construction, and sometimes more frequently, depending on the complexity of the project. These officials also said that, using the EPA or other checklist, they evaluated whether the communities or subrecipients were meeting Recovery Act reporting requirements. For example, according to the checklist, officials verified whether subrecipients submitted FTE information to the state each quarter, and whether they submitted regular reports certifying that the project remained in compliance with the Davis-Bacon provisions, based on a weekly review of payroll records. In addition, the officials used the checklist to review the contents of project files and ensure that key project documents were present, such as project-specific waivers. Using the checklist, these officials also confirmed that projects receiving green infrastructure funding properly incorporated green components. In addition, officials in Alabama, Connecticut, Nevada, and New Mexico took photographs of various project components to record compliance with the Buy American provisions. A few officials in the nine states that we reviewed said that meeting the oversight plan requirements, such as increasing the number of site visits, has been time-consuming. However, a couple of officials said that their site visits have resulted in better subrecipient compliance with Recovery Act requirements. For example, as a result of their site visits, state officials corrected a problem they had identified: subrecipients in three of the nine states we reviewed had foreign components on site. In New Mexico, officials told us that foreign components had been shipped to a project site, and that they had to replace the components before incorporating them into the project. Missouri officials said that the EPA inspection checklist had helped to identify some foreign-made components on a project site, and the components were replaced. Connecticut officials told us that they had identified a drinking water project that contained Chinese and German equipment valued at $10,000. They said that the project was already in service, making replacement costly and impractical because it would require consumers to be without water. The state is working with EPA to resolve the matter. <3.3. State Audit Reports Covering Clean and Drinking Water Programs Found Few Significant Problems> State auditors or private auditors contracted by the states helped ensure the appropriate use of Recovery Act water funds. For eight of the nine states that we reviewed, we received state or private audits that examined the Recovery Act Clean and Drinking Water SRF programs. With the following two exceptions, the auditors have reported few significant problems: Michigan.
In its audit of the Michigan Department of Environmental Quality's fiscal year 2008 and 2009 financial statements, the Michigan Office of the Auditor General reported several material weaknesses in internal controls and material noncompliance with requirements related to subrecipient monitoring and other special provisions for Recovery Act-funded expenditures. For example, for the Recovery Act Clean and Drinking Water SRF programs, the auditors found that the Michigan Department of Environmental Quality overstated the number of FTEs for the reporting period ending September 30, 2009, because its methodology for calculating FTEs was not in accordance with June 2009 OMB guidance. The auditors also found that the department did not have a process to (1) verify the accuracy of the information contained in its recipient report; (2) adequately monitor subrecipients' expenditure of Recovery Act funds for construction activities to ensure that the subrecipients complied with the Davis-Bacon provisions; and (3) adequately monitor subrecipients' expenditure of Recovery Act funds for the construction, alteration, maintenance, or repair of a public building or public work to ensure that the subrecipients complied with Buy American provisions. In response to these findings, the auditors recommended that the department improve its internal control over the SRF programs to ensure compliance with federal laws and regulations. The department partially or wholly agreed with these findings, and anticipated taking the appropriate corrective action by September 30, 2011. One Michigan official said that corrective action has been implemented for the findings that pertain to the SRF program. Washington State. In the November 2010 Financial Statements and Federal Compliance report for the Drinking Water SRF program, auditors found significant deficiencies in the Department of Health's internal control. As a result, they recommended that the Department of Health train employees on financial reporting preparation and requirements; establish and follow internal controls, including an appropriate, independent review of the financial statements and related schedules; and establish policies and procedures related to the preparation of the year-end financial statements. The Department of Health concurred with the finding, and stated that it would take appropriate action. In the corresponding report for the Clean Water SRF program, auditors found no internal control weaknesses. <4. Federal and State Agencies Continue to Oversee the Quality of Recipient Reporting Data, Including Jobs, in Seventh Round of Reporting> To meet our mandate to comment on recipient reports, we have continued monitoring recipient-reported data, including data on jobs funded. For this report, we focused our review on SRF program funds and EPA and state efforts to conduct data quality reviews and identify and remediate reporting problems. <4.1. EPA Continued Performing Data Quality Checks and Said Data Quality Is Relatively Stable> According to EPA officials, the overall quality of the states' SRF data on Recovery.gov, which EPA officials have checked each quarter, is stable. The officials said that the states' initial unfamiliarity with a newly developed reporting system has been resolved, the Federalreporting.gov help desk has improved, and guidance issued by OMB has clarified reporting issues over time.
During the seventh round of reporting, which ended on March 31, 2011, EPA officials continued to perform data quality checks as they had in previous quarters. Specifically, EPA used data from the agency's grants database, contracts database, and financial management system to compare with recipient-reported data. These systems contain authoritative data for every award made to the states, including the award identification number, award date, award amount, outlays, Treasury Account Symbol codes, and recipient names. According to EPA officials, they use the agency data to ensure that recipient-reported information for a given award corresponds with the information on EPA's official award documents. EPA staff can raise questions about any inconsistent data through the Federalreporting.gov system. State recipients may make appropriate changes to the data through the end of the reporting period and, after public release, during the continuous correction cycle. According to EPA officials, this process has resolved any questions and comments from EPA's reviews. To facilitate its oversight of state-reported data, EPA required states to use its Clean Water Benefits Reporting (CBR) system and Program Benefits Reporting (PBR) system to report on certain Recovery Act project information, such as the project name, contract date, construction start, Recovery Act funding, jobs created or retained, and project purpose and anticipated benefits. EPA officials said that they do not routinely collect state expenditure data in these systems and that they rely on regional officials to review expenditures reported by the states on Recovery.gov. We compared EPA's data on awards and funds drawn down by states with data reported by states on Recovery.gov and found only a few minor inconsistencies in the data. Similarly, in September 2010, EPA's OIG reported that the Recovery.gov data for EPA's SRF programs contained a low rate of errors. The OIG audited EPA's controls for reviewing recipient-reported data after the second round of reporting, which ended December 31, 2009, comparing EPA data on award type, award number, funding agency code, award agency code, and award amount to state-reported data on Recovery.gov. The OIG report found that EPA's controls helped lower the rate of errors for these key data and recommended some improvements to EPA's process. EPA's Clean and Drinking Water SRF program officials said that they have had few errors in the SRF data in the last three rounds of reporting. <4.2. Nine States Checked Quality of Recovery Act Data Quarterly, but Minor FTE Discrepancies Occurred> Officials in the nine states we reviewed indicated that the quality of recipient data has remained relatively stable, although we found that the states differed in how they reported state agencies' FTE data and did not report some subrecipients' FTE data. Water program officials in these states said that they check the quality of data that are reported on Federalreporting.gov and then Recovery.gov. In addition, officials in Alabama, Connecticut, Maryland, Missouri, and New Mexico said that they examined payroll data submitted by contractors to verify FTE data. In some cases, state officials said that they contact subrecipients for clarification about data that are missing or inconsistent. In addition to department-level checks, in most of the nine states we reviewed, state-level Recovery Office staff checked the data before submitting the information to Federalreporting.gov.
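The award-level comparisons described above amount to matching authoritative agency records against recipient-reported records and flagging mismatches for follow-up. The sketch below illustrates that kind of check with hypothetical award numbers, field names, and amounts; it is not EPA's actual review tooling.

```python
# Hypothetical cross-check of agency award records against recipient reports.
epa_awards = {
    "2W-00E12345": {"award_amount": 19_500_000, "recipient": "State Water Board"},
    "FS-99E67890": {"award_amount": 8_200_000,  "recipient": "State Health Dept"},
}

recipient_reports = [
    {"award_id": "2W-00E12345", "award_amount": 19_500_000, "recipient": "State Water Board"},
    {"award_id": "FS-99E67890", "award_amount": 8_000_000,  "recipient": "State Health Dept"},
]

def flag_inconsistencies(epa_awards, recipient_reports):
    """Return (award_id, field, agency_value, reported_value) for each mismatch."""
    flags = []
    for report in recipient_reports:
        record = epa_awards.get(report["award_id"])
        if record is None:
            flags.append((report["award_id"], "award_id", None, "not in agency records"))
            continue
        for field in ("award_amount", "recipient"):
            if record[field] != report[field]:
                flags.append((report["award_id"], field, record[field], report[field]))
    return flags

# A flagged award amount would be raised with the state recipient, which could
# correct the data before publication or during the continuous correction cycle.
print(flag_inconsistencies(epa_awards, recipient_reports))
# [('FS-99E67890', 'award_amount', 8200000, 8000000)]
```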
In four of the nine states (Alabama, Maryland, Missouri, and New Mexico), Recovery Office staff monitored Recovery Act implementation and performed independent data quality checks of the data reported by state agencies. According to several state officials, this reporting structure provided an additional level of review of state agency data. In Maryland, for example, officials said that their state-level reporting system relieves subrecipients of certain reporting duties. Subrecipients submitted the FTE and payroll information to Maryland's StateStat office, and staff in that office reviewed and validated the data, completed the required federal reports, and submitted them to Federalreporting.gov. Furthermore, for control purposes, only two staff members handled the information. In addition, staff in Nevada's Recovery Office conducted quality checks; however, each state agency then submitted its data directly to the appropriate federal agency. The remaining four states (Connecticut, Michigan, Washington State, and Wyoming) did not have Recovery Office staff check data quality. We found minor problems with the FTE data that some of the nine states reported. Specifically, (1) states differed in how they reported the FTEs associated with their own program staff, that is, those who conduct document reviews, site inspections, and other key program duties; and (2) three states identified missing or incorrectly reported FTE data on Recovery.gov, and these data have not been corrected. In particular: Six of the nine states reported the FTEs for their state employees who were paid with Recovery Act funds, while two states did not. Officials in Maryland and Michigan noted that they did not report all the time their state employees spent on program activities in Federalreporting.gov, although these FTEs were paid for with Recovery Act funds. EPA officials said that they provided states with OMB guidance and that OMB guidance requires states to report FTEs paid for with Recovery Act funds. Washington State officials who administer the Clean Water SRF program changed the time frame for reporting FTE data in the third round of reporting and, as a result, missed reporting one quarter of data. During the first two reporting rounds, because some subrecipients were finding it difficult to submit complete FTE data to the state by the state's deadline, staff reported data from 2 months of the current quarter and 1 month of the previous quarter. During the third reporting quarter, the state began reporting 3 months of current data. However, the state received data from a subrecipient after the deadline for reporting and did not correct the data during the correction period. As a result, officials said about 18 FTEs remain unreported. EPA officials told them to keep a record of these FTEs in case there is an opportunity to correct the data. Officials in New Mexico did not report a few FTEs for the state's Drinking Water SRF program in the first two rounds of reporting. The officials explained that the revisions were submitted to the state after the reporting period ended and therefore the FTEs were not captured in Recovery.gov. Officials in Wyoming identified incorrectly reported FTEs for two quarters. The FTEs were incorrect because the state entered the data for one clean water project for one quarter in the next quarter. As a result, one quarter's data were overstated by a few FTEs, and the other quarter's data were understated by a few FTEs.
The state official explained that the data changed after they were initially reported in Recovery.gov and were not updated during the correction period. As the bulk of Recovery Act funding is spent, EPA officials said that the states were beginning to complete their projects. Officials said that before the next reporting round begins in July 2011, they plan to issue a memorandum to states on how to complete their Recovery Act grants and when to stop reporting to Recovery.gov. During the seventh round of reporting, one state in each program indicated in Recovery.gov that the grant (including all projects that received money from the grant) was complete. EPA officials told us that as of early May 2011, 629 clean water and 383 drinking water projects had been completed across all states. Some state officials charged with coordinating state-level Recovery Act funds also said that they are winding down their activities. In Michigan, for example, the Recovery Office was originally a separate office under the Governor, but has since been moved under the Department of Management and Budget. In Nevada, the Recovery Act Director said that his office will be eliminated at the end of June 2011. At that point, the Department of Administration's centralized grant office will take responsibility for Nevada's remaining Recovery Act efforts. Similarly, officials at the New Mexico Office of Recovery and Reinvestment said that their office is currently funded through the Recovery Act State Fiscal Stabilization Fund through the end of June 2011. Because of the high-level nature of SRF recipient reporting for Recovery.gov and the availability of information in its own data systems, EPA officials do not anticipate using data from Recovery.gov. The officials said that whereas Recovery.gov summarizes information on many projects at the state level, the data from CBR and PBR are more useful for understanding states' projects than data on Recovery.gov because the internal data are provided by project and include more detail. EPA officials said that by the end of 2011 they plan to use information in these two internal systems to assess anticipated benefits of the Recovery Act SRF program funds. EPA Clean Water officials said that they would perform case studies of completed projects and assess anticipated benefits. Drinking Water officials said that they are considering three major studies, some joint with the Clean Water SRF program. These studies may include assessments of project distributions, green projects' benefits, and subsidy beneficiaries. <5. The States Identified Challenges in Implementing Recovery Act SRF Programs That Highlight Potential Future Challenges for SRF Programs> Our May 2010 report identified the challenge of maintaining accountability for Recovery Act funds and recommended improved monitoring of Recovery Act funds by EPA and the states. As we note above, our current work shows that EPA and the nine states we reviewed have made progress in addressing this challenge. Two challenges EPA and state officials identified in spending Recovery Act SRF program funds may continue as requirements introduced with the Recovery Act are incorporated into the base SRF programs. Specifically, in fiscal years 2010 and 2011, the Clean and Drinking Water SRF programs were required to include provisions for green projects and additional subsidies. Encouraging green projects.
The effort to support green projects was included in EPA's fiscal year 2010 and 2011 appropriations for the base Clean and Drinking Water SRF programs. As we discussed above, under the requirement to fund green projects in the Recovery Act, in certain cases state officials said they had to choose between a green water project and a project that was otherwise ranked higher to address water quality problems. Similarly, in our May 2010 report, we found that officials in some of the states we reviewed said that they gave preference to green projects for funding purposes, and sometimes ranked those projects above other projects with higher public health benefits. In addition to competing priorities for funding, EPA's OIG found, in its February 2010 report, that a lack of clear guidance on the green requirement caused confusion and disagreements as to which projects were eligible for green funding. Officials in two of the nine states we reviewed noted that the goal of supporting green projects was not difficult to achieve because they had already identified green projects. Officials in four other states said that while they all met the 20 percent green project goal, it was difficult to achieve, leading one official to suggest that green projects be encouraged without setting a fixed percentage of program funds. EPA officials added that they had also heard that achieving the green requirement may continue to be difficult in some states, particularly for the Drinking Water program. However, the officials also said that they were encouraging states to include green components in their drinking water projects rather than seeking solely green projects. Providing additional subsidies. The fiscal years 2010 and 2011 appropriations for the Clean and Drinking Water SRF programs also continued the requirement to provide additional subsidies in the form of principal forgiveness, negative interest loans, or grants. These provisions reduced the minimum share of funds that must be provided as additional subsidy from at least 50 percent of funds under the Recovery Act to at least 30 percent of base SRF program funds. As with the Recovery Act, the appropriations in fiscal years 2010 and 2011 do not require this additional subsidy to be targeted to any types of projects or communities with economic need, and as the recent EPA OIG report notes, there are no requirements for EPA or the states to track how these subsidies are used. The base Clean and Drinking Water SRF programs were created to be a sustainable source of funding for communities' water and wastewater infrastructure through the continued repayment of loans to states. Officials in four of the nine states we reviewed identified a potential challenge in continuing to provide a specific amount of subsidies while sustaining the Clean and Drinking Water SRF programs as revolving funds. State officials pointed out that when monies are not repaid into the revolving fund, the reuse of funds is reduced and the purpose of the revolving SRF program changes from primarily providing loans for investments in water infrastructure to providing grants. <6. Agency Comments and Our Evaluation> We provided a draft of the report to the Environmental Protection Agency for review and comment. EPA stated that it did not have any comments on our report. We are sending copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties.
In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of this review were to examine the (1) status and use of American Recovery and Reinvestment Act of 2009 (Recovery Act) Clean and Drinking Water State Revolving Funds (SRF) program funds nationwide and in selected states; (2) actions taken by federal, state, and other agencies to monitor and ensure accountability of these program funds; (3) approaches federal agencies and selected states have taken to ensure data quality, including data for jobs reported by recipients of these program funds; and (4) challenges, if any, that states have faced in implementing Recovery Act requirements for the Clean and Drinking Water SRF programs. To examine the status and use of Recovery Act funds nationwide and in selected states, we reviewed relevant Clean and Drinking Water SRF federal laws, regulations, and guidance, and examined federal and selected state program and project documentation. We interviewed Environmental Protection Agency (EPA) officials responsible for administering programs in headquarters. We also interviewed state Recovery Act officials and state program officials, in environmental and public health departments, who are responsible for revolving loan fund programs. We obtained and analyzed nationwide Recovery Act data from the EPA Clean Water SRF Benefits Reporting (CBR) system and the Drinking Water SRF Project Benefits Reporting (PBR) system for all states. These data included (1) categories of clean and drinking water infrastructure and green projects; (2) Recovery Act funds awarded and drawn down from the Treasury; (3) amount of subsidization (principal forgiveness or grants and low- or no-interest loans); and (4) number of full-time equivalents (FTE). We also obtained and analyzed key nationwide data from the EPA National Information Management System on Recovery Act funding by type of clean water project. Using these data, we summarized the amount of Recovery Act funds provided by states to clean and drinking water SRF projects by category of project (e.g., clean water sanitary sewer overflow and drinking water treatment). We assessed these data for their reliability and determined that they were reliable for our purposes. To develop a more in-depth view of the states use of Recovery Act funds for Clean and Drinking Water SRF programs, we selected a nonprobability sample of nine states we had not reviewed in our previous bimonthly reports, representing all but 1 of the 10 EPA regions. The states we selected were Alabama, Connecticut, Maryland, Michigan, Missouri, New Mexico, Nevada, Washington State, and Wyoming. For each state, we interviewed officials from the state environmental department or public health program (water program officials) to discuss their use of Recovery Act SRF program funds. 
We conducted these interviews using a data collection instrument to obtain consistent information from the states on their water problems and ranking systems for prioritizing projects for funding; the amount of funds provided to projects; the allocation of funding and subsidization to green projects, small communities, and economically disadvantaged communities; the amount of funds received and spent; and the number of FTE positions funded for each project and in total. Additionally, in Alabama, Maryland, and New Mexico, we visited a total of five clean and drinking water projects funded with Recovery Act funds. To examine the actions that federal, state, and other agencies have taken to monitor and ensure accountability for Recovery Act SRF program funds, we reviewed and analyzed relevant federal guidance and documentation, including EPA's oversight plan for Recovery Act projects. To determine whether EPA was following its oversight plan, we reviewed at least one EPA Recovery Act program evaluation report for the Clean Water and Drinking Water programs for all 50 states for fiscal years 2009 or 2010. We also reviewed EPA headquarters' reviews of regional reports that detailed the performance of regional drinking water staff as they monitored and documented the states' implementation of the Drinking Water SRF program, and we asked headquarters staff about the reviews of regional clean water staff that they conducted, but did not document. To develop a more in-depth view of the states' monitoring processes, we asked program officials in the nine states to respond to questions about their oversight activities in our data collection instrument. We then interviewed state program officials who were responsible for monitoring and oversight about their oversight activities and efforts to ensure that projects complied with Recovery Act requirements, including their processes for inspecting project sites and their procedures for collecting and reporting Recovery Act SRF program data. In addition, we interviewed Recovery Act officials in the six states that had such staff (Alabama, Maryland, Missouri, Nevada, New Mexico, and Washington) about their oversight of program staff, data quality, and federal reporting during additional interviews. Furthermore, to develop an understanding of the work that the broader audit community has completed on the Recovery Act Clean and Drinking Water SRF programs, we reviewed all relevant EPA Office of Inspector General reports that were published since we issued our previous report on the Recovery Act SRF programs in May 2010. To examine approaches federal agencies and selected states have taken to ensure data quality for jobs reported by Recovery Act recipients, we conducted work at both the national and state level. The recipient reporting section of this report responds to the Recovery Act's mandate that we comment on the estimates of jobs created or retained by direct recipients of Recovery Act funds. For our national review of the seventh submission of recipient reports, covering the period from January 1, 2011, through March 31, 2011, we continued our monitoring of errors or potential problems by repeating many of the analyses and edit checks reported in our six prior reviews covering the period from February 2009 through December 31, 2010.
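Because the jobs figures examined in these reviews are quarterly FTEs, a brief illustration of the underlying calculation may help. Under OMB's simplified reporting approach, a recipient's quarterly FTE count is, roughly, the hours worked and funded by the Recovery Act in the quarter divided by the hours in a full-time quarterly schedule; the sketch below uses assumed hours and a 40-hour, 13-week schedule purely for illustration.

```python
# Illustrative quarterly FTE calculation (assumed hours; not reported data).

def quarterly_fte(recovery_act_hours_worked: float,
                  full_time_quarterly_hours: float = 520.0) -> float:
    """One full-time schedule is assumed here to be 40 hours/week for 13 weeks."""
    return recovery_act_hours_worked / full_time_quarterly_hours

# Example: 3,120 Recovery Act-funded labor hours on a project in one quarter
# equal 6.0 FTEs under this assumption.
print(quarterly_fte(3_120))  # 6.0
```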
To examine how the quality of jobs data reported by recipients of Clean and Drinking Water SRF grants has changed over time, we compared the seven quarters of recipient reporting data that were publicly available at Recovery.gov on April 30, 2011. We performed edit checks and other analyses on the Clean and Drinking Water SRF prime recipient reports and compared funding data from EPA with funding amounts reported on the recipient reports. We also reviewed documentation and interviewed federal agency officials from EPA who are responsible for ensuring a reasonable degree of quality across their programs' recipient reports. At the state level, we interviewed state officials in the nine states we reviewed about the policies and procedures they had in place to ensure that FTE information for Recovery Act projects was reported accurately. For selected Recovery Act data fields, we asked state program officials in the nine states to review and verify EPA's Recovery Act data from CBR and PBR and provide corrected data where applicable. For the nine states, we compared state-reported Clean and Drinking Water SRF FTE data from the sixth submission of recipient reports, the period covering October 1, 2010, through December 31, 2010, with corresponding data reported on Recovery.gov. We addressed any discrepancies between these two sets of data by contacting state program officials. Our national and state work in selected states showed agreement between EPA recipient information and the information reported by recipients directly to Federalreporting.gov. In general, we consider the data used to be sufficiently reliable for purposes of this report. The results of our FTE analyses are limited to the two SRF water programs and time periods reviewed and are not generalizable to any other program's FTE reporting. To examine challenges that states have faced in implementing Recovery Act requirements, we interviewed state SRF program officials using a data collection instrument and obtained information on the challenges that state program officials told us they faced pertaining to the 20 percent green project requirement and the subsidization requirement. We conducted this performance audit from September 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Status of Prior Open Recommendations and Matters for Congressional Consideration In this appendix, we update the status of agencies' efforts to implement the 26 open recommendations and 2 newly implemented recommendations from our previous bimonthly and recipient reporting reviews. Recommendations that were listed as implemented or closed in a prior report are not repeated here. Lastly, we address the status of our Matters for Congressional Consideration. <7. Department of Energy> <7.1.
Open Recommendations> Given the concerns we have raised about whether program requirements were being met, we recommended in May 2010 that the Department of Energy (DOE), in conjunction with both state and local weatherization agencies, develop and clarify weatherization program guidance that (1) clarifies the specific methodology for calculating the average cost per home weatherized to ensure that the maximum average cost limit is applied as intended; (2) accelerates current DOE efforts to develop national standards for weatherization training, certification, and accreditation, an effort currently expected to take 2 years to complete; (3) develops a best practice guide for key internal controls that should be present at the local weatherization agency level to ensure compliance with key program requirements; (4) sets time frames for the development and implementation of state monitoring programs; and (5) revisits the various methodologies used in determining the weatherization work that should be performed, based on consideration of cost-effectiveness, and develops standard methodologies that ensure that priority is given to the most cost-effective weatherization work. To validate any methodologies created, this effort should include the development of standards for accurately measuring the long-term energy savings resulting from weatherization work conducted. In addition, given that state and local agencies have felt pressure to meet a large increase in production targets while effectively meeting program requirements and have experienced some confusion over production targets, funding obligations, and associated consequences for not meeting production and funding goals, we recommended that DOE clarify its production targets, funding deadlines, and associated consequences while providing a balanced emphasis on the importance of meeting program requirements. DOE generally concurred with these recommendations and has made some progress on implementing them. For example, to clarify the methodology for calculating the average cost per home, DOE has developed draft guidance to help grantees develop consistency in their average cost per unit calculations. The guidance further clarifies the general cost categories that are included in the average cost per home. DOE anticipates issuance of the guidance in June 2011. DOE has also taken steps to address our recommendation that it develop and clarify guidance to generate a best practice guide for key internal controls. DOE has drafted a memorandum, dated May 13, 2011, to grantees reminding them of their responsibilities to ensure compliance with internal controls and the consequences of failing to do so. The memo is currently under internal review, and DOE anticipates that it will be released in May 2011. <7.2. Open Recommendations> To better ensure that Energy Efficiency and Conservation Block Grant (EECBG) funds are used to meet Recovery Act and program goals, we recommended in April 2011 that DOE take the following actions: (1) explore a means to capture information on the monitoring processes of all recipients to make certain that recipients have effective monitoring practices and (2) solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics and verify that recipients who use DOE's estimation tool use the most recent version when calculating these metrics.
DOE generally concurred with these recommendations, stating that implementing the report's recommendations will help ensure that the program continues to be well managed and executed. DOE also provided additional information on steps it has initiated or planned in order to implement them. In particular, with respect to our first recommendation, DOE elaborated on additional monitoring practices it performs over high-dollar-value grant recipients, such as its reliance on audit results obtained in accordance with the Single Audit Act and its update to the EECBG program requirements in the Compliance Supplement to OMB Circular No. A-133. However, these monitoring practices focus only on larger grant recipients, and we believe that the program could be more effectively monitored if DOE captured information on the monitoring practices of all recipients. With respect to our second recommendation, DOE officials said that, in order to provide a reasonable estimate of energy savings, the program currently reviews energy process and impact metrics submitted each quarter for reasonableness, works with grantees to correct unreasonable metrics, and works with grantees through closeout to refine metrics. In addition, DOE officials said that they plan to take a scientific approach to overall program evaluation during the formal evaluation process at the conclusion of the program, which will occur in December 2012. However, DOE has not yet identified any specific plans to solicit information from recipients regarding the methodology they used to calculate their energy-related impact metrics or to verify that recipients who use DOE's estimation tool use the most recent version when calculating these metrics. <8. Environmental Protection Agency> <8.1. Newly Implemented Recommendation> We recommended that the Environmental Protection Agency (EPA) Administrator work with the states to implement specific oversight procedures to monitor and ensure subrecipients' compliance with the provisions of the Recovery Act-funded Clean Water and Drinking Water State Revolving Fund (SRF) programs. Partly in response to our recommendation, EPA provided additional guidance to the states regarding their oversight responsibilities, with an emphasis on enhancing site-specific inspections. Specifically, in June 2010, the agency developed and issued an oversight plan outline for Recovery Act projects that provides guidance on the frequency, content, and documentation related to regional reviews of state Recovery Act programs and regional and state reviews of specific Recovery Act projects. We found that EPA regions have reviewed all 50 states' Clean and Drinking Water SRF programs at least once since the Recovery Act was enacted and have generally carried out the oversight instructions in EPA's plan. For example, regional officials reviewed files with state documents and information to ensure proper controls over Davis-Bacon, Buy American, and other Recovery Act requirements. Regional staff also visited one drinking water project in every state but did not meet this goal for clean water projects due to time and budget constraints. We also found that EPA headquarters officials have been reviewing the regions' performance evaluation reports for states, and the officials said that they implemented a 60-day time frame for completing these reports. In the nine states that we reviewed in this report, program officials described their site visits to projects and the use of the EPA inspection checklist (or a state equivalent), in accordance with EPA's oversight plan.
State officials told us that they visit their Recovery Act projects at least once during construction and sometimes more frequently, depending on the complexity of the project. We consider these agency actions to have addressed our recommendation. <9. Department of Health and Human Services: Office of Head Start> <9.1. Open Recommendation> To oversee the extent to which grantees are meeting the program goal of providing services to children and families and to better track the initiation of services under the Recovery Act, we recommended that the Director of the Office of Head Start (OHS) collect data on the extent to which children and pregnant women actually receive services from Head Start and Early Head Start grantees. The Department of Health and Human Services (HHS) disagreed with our recommendation. OHS officials stated that attendance data are adequately examined in triennial or yearly on-site reviews and in periodic risk management meetings. Because these reviews and meetings do not collect or report data on service provision, we continue to believe that tracking services to children and families is an important measure of the work undertaken by Head Start and Early Head Start service providers. <9.2. Open Recommendation> To help ensure that grantees report consistent enrollment figures, we recommended that the Director of OHS better communicate a consistent definition of enrollment to grantees for monthly and yearly reporting and begin verifying grantees' definitions of enrollment during triennial reviews. OHS issued informal guidance on its Web site clarifying monthly reporting requirements to make them consistent with annual enrollment reporting. While this guidance directs grantees to include in enrollment counts all children and pregnant mothers who have received a specified minimum of services, it could be further clarified by specifying that counts should include only those children and pregnant mothers. According to HHS officials, OHS is considering further regulatory clarification. <9.3. Open Recommendation> To provide grantees with consistent information on how and when they will be expected to obligate and expend federal funds, we recommended that the Director of OHS clearly communicate to grantees its policy for carrying over or extending the use of Recovery Act funds from one fiscal year into the next. HHS indicated that OHS will issue guidance to grantees on obligation and expenditure requirements, as well as improve efforts to effectively communicate the mechanisms in place for grantees to meet the requirements for obligation and expenditure of funds. <9.4. Open Recommendation> To better consider known risks in scoping and staffing required reviews of Recovery Act grantees, we recommended that the Director of OHS direct OHS regional offices to consistently perform and document Risk Management Meetings and incorporate known risks, including financial management risks, into the process for staffing and conducting reviews. HHS reported that OHS is reviewing the risk management process to ensure it is consistently performed and documented in its centralized data system and that it has taken related steps, such as requiring the Grant Officer to identify known or suspected risks prior to an on-site review. <9.5.
Newly Implemented Recommendation> To facilitate understanding of whether regional decisions regarding waivers of the program's matching requirement are consistent with Recovery Act grantees' needs across regions, we recommended that the Director of OHS regularly review waivers of the nonfederal matching requirement and associated justifications. HHS reports that it has taken actions to address our recommendation. For example, OHS has conducted a review of waivers of the nonfederal matching requirement and has tracked all waivers in its Web-based data system. HHS further reports that OHS determined that the waivers are reasonably consistent across regions. <10. Department of Housing and Urban Development> <10.1. Open Recommendation> Because the absence of third-party investors reduces the amount of overall scrutiny that Tax Credit Assistance Program (TCAP) projects would receive and the Department of Housing and Urban Development (HUD) is currently not aware of how many projects lacked third-party investors, we recommended that HUD develop a risk-based plan for its role in overseeing TCAP projects that recognizes the level of oversight provided by others. HUD responded to our recommendation by saying that it will identify projects that are not funded with HOME Investment Partnerships Program (HOME) funds and projects that have a nominal tax credit award. However, HUD said it will not be able to identify these projects until it can access the data needed to perform the analysis, and it does not receive access to those data until after projects have been completed. HUD has not yet taken any action on this recommendation because it has data on only the small percentage of projects completed to date. It is too early in the process to identify projects that lack third-party investors. The agency will take action once it is able to collect the necessary information from project owners and the state housing finance agencies. <11. Department of Labor> <11.1. Open Recommendations> To enhance the Department of Labor's (Labor) ability to manage its Recovery Act and regular Workforce Investment Act (WIA) formula grants and to build on its efforts to improve the accuracy and consistency of financial reporting, we recommended that the Secretary of Labor take the following actions: (1) to determine the extent and nature of reporting inconsistencies across the states and better target technical assistance, conduct a one-time assessment of financial reports that examines whether each state's reported data on obligations meet Labor's requirements, and (2) to enhance state accountability and to facilitate states' progress in making reporting improvements, routinely review states' reporting on obligations during regular state comprehensive reviews. Labor agreed with both of our recommendations and has begun to take some actions to implement them. To determine the extent of reporting inconsistencies, Labor awarded a contract in September 2010 to perform an assessment of state financial reports to determine whether the data reported are accurate and reflect Labor's guidance on reporting of obligations and expenditures. Since then, Labor has completed interviews with all states and is preparing a report of the findings. To enhance states' accountability and facilitate their progress in making improvements in reporting, Labor has drafted guidance on the definitions of key financial terms, such as obligations, which is currently in final clearance.
After the guidance is issued, Labor plans to conduct a systemwide webinar and interactive training on this topic to reinforce how accrued expenditures and obligations are to be reported. <11.2. Open Recommendation> Our September 2009 bimonthly report identified a need for additional federal guidance in defining green jobs, and we made the following recommendation to the Secretary of Labor: To better support state and local efforts to provide youth with employment and training in green jobs, provide additional guidance about the nature of these jobs and the strategies that could be used to prepare youth for careers in green industries. Labor agreed with our recommendation and has begun to take several actions to implement it. Labor's Bureau of Labor Statistics has developed a definition of green jobs, which was finalized and published in the Federal Register on September 21, 2010. In addition, Labor continues to host a Green Jobs Community of Practice, an online virtual community available to all interested parties. As part of this effort, in December 2010, Labor hosted its first Recovery Act Grantee Technical Assistance Institute, which focused on critical success factors for achieving the goals of the grants and sustaining the impact into the future. The department also hosted a symposium on April 28-29, 2011, with the green jobs state Labor Market Information Improvement grantees. Symposium participants shared recent research findings, including efforts to measure green jobs, occupations, and training in their states. In addition, the department released a new career exploration tool called mynextmove (www.mynextmove.gov) in February 2011. This Web site includes the Occupational Information Network (O*NET) green leaf symbol to highlight green occupations. Furthermore, Labor's implementation study of the Recovery Act-funded green jobs training grants is still ongoing. The interim report is expected in late 2011. <12. Executive Office of the President: Office of Management and Budget> <12.1. Open Recommendation> To leverage Single Audits as an effective oversight tool for Recovery Act programs, we recommended that the Director of the Office of Management and Budget (OMB) (1) provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance; (2) take additional efforts to provide more timely reporting on internal controls for Recovery Act programs for 2010 and beyond; (3) evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act; (4) issue Single Audit guidance in a timely manner so that auditors can efficiently plan their audit work; (5) issue the OMB Circular No. A-133 Compliance Supplement no later than March 31 of each year; (6) explore alternatives to help ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner; and (7) shorten the time frames required for issuing management decisions by federal agencies to grant recipients. (1) To provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance, the OMB Circular No.
A-133, Audits of States, Local Governments, and Non-Profit Organizations 2010 Compliance Supplement (Compliance Supplement) required all federal programs with expenditures of Recovery Act awards to be considered programs with higher risk when performing standard risk-based tests for selecting programs to be audited. The auditor's determination of the programs to be audited is based upon an evaluation of the risks of noncompliance that could be material to an individual major program. The Compliance Supplement has been the primary mechanism that OMB has used to provide Recovery Act requirements and guidance to auditors. One presumption underlying the guidance is that smaller programs with Recovery Act expenditures could be audited as major programs when using a risk-based audit approach. The most significant risks are associated with newer programs that may not yet have the internal controls and accounting systems in place to help ensure that Recovery Act funds are distributed and used in accordance with program regulations and objectives. Since Recovery Act spending is projected to continue through 2016, we believe that it is essential that OMB provide direction in Single Audit guidance to help ensure that smaller programs with higher risk are not automatically excluded from receiving audit coverage based on their size and standard Single Audit Act requirements. In May 2011, we spoke with OMB officials and reemphasized our concern that future Single Audit guidance provide instruction that helps to ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. OMB officials agreed and stated that such guidance is included in the 2011 Compliance Supplement, which was to be issued by March 31, 2011. On June 1, 2011, OMB issued the 2011 Compliance Supplement, which contains language regarding the higher-risk status of Recovery Act programs, requirements for separate reporting of findings, and a list of Recovery Act programs to aid the auditors. We will continue to monitor OMB's efforts to provide more direct focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with higher risk have audit coverage in the area of internal controls and compliance. (2) To address the recommendation for taking additional efforts to encourage more timely reporting on internal controls for Recovery Act programs for 2010 and beyond, OMB commenced a second voluntary Single Audit Internal Control Project (project) in August 2010 for states that received Recovery Act funds in fiscal year 2010. Fourteen states volunteered to participate in the second project. One of the project's goals is to achieve more timely communication of internal control deficiencies for higher-risk Recovery Act programs so that corrective action can be taken more quickly. Specifically, the project encourages participating auditors to identify and communicate deficiencies in internal control to program management 3 months sooner than the 9-month time frame currently required under OMB Circular No. A-133. Auditors were to communicate these deficiencies through interim internal control reports by December 31, 2010. The project also requires that program management provide to the federal awarding agency, 2 months earlier than required under statute, a corrective action plan aimed at correcting any deficiencies.
Upon receiving the corrective action plan, the federal awarding agency has 90 days to provide a written decision to the cognizant federal agency for audit detailing any concerns it may have with the plan. Each participating state was to select a minimum of four Recovery Act programs for inclusion in the project. We assessed the results of the first OMB Single Audit Internal Control Project, for fiscal year 2009, and found that it was helpful in communicating internal control deficiencies earlier than required under statute. We reported that 16 states participated in the first project and that the states selected at least two Recovery Act programs for the project. We also reported that the project's dependence on voluntary participation limited its scope and coverage and that voluntary participation may also bias the project's results by excluding from analysis states or auditors with practices that cannot accommodate the project's requirement for early reporting of control deficiencies. Overall, we concluded that although the project's coverage could have been more comprehensive, the analysis of the project's results provided meaningful information to OMB for better oversight of the Recovery Act programs selected and information for making future improvements to the Single Audit guidance. OMB's second Single Audit Internal Control Project is in progress, and its planned completion date is June 2011. OMB plans to assess the project's results after its completion date. The 14 participating states have met the milestones for submitting interim internal control reports by December 31, 2010, and their corrective action plans by January 31, 2011. By April 30, 2011, the federal awarding agencies were to provide their interim management decisions to the cognizant agency for audit. We discussed the preliminary status of these interim management decisions with OMB officials and, as of May 24, 2011, only 1 of the 10 federal awarding agencies had submitted some management decisions on the auditees' corrective action plans as required by the project's guidelines. On May 24, 2011, officials from the cognizant agency for audit, HHS, reemphasized to the federal awarding agencies their responsibilities for providing management decisions in accordance with the project's due dates. In our review of the 2009 project, we noted similar concerns that federal awarding agencies submitted management decisions on proposed corrective actions in an untimely manner, and we made recommendations in this area, which are discussed later in this report. We will continue to monitor the status of OMB's efforts to implement this recommendation and believe that OMB needs to continue taking steps to encourage timelier reporting on internal controls through Single Audits for Recovery Act programs. (3) We previously recommended that OMB evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. OMB officials have stated that they are aware of the increase in workload for state auditors who perform Single Audits, given the additional funding provided to Recovery Act programs and the corresponding increase in the number of programs subject to audit requirements. OMB officials stated that they solicited suggestions from state auditors to gain further insights for developing measures to provide audit relief. However, OMB has not yet put in place a viable alternative that would provide relief to all state auditors that conduct Single Audits.
For state auditors that are participating in the second OMB Single Audit Internal Control Project, OMB has provided some audit relief by modifying the requirements under Circular No. A-133 to reduce the number of low-risk programs to be included in some project participants' risk assessments. OMB is taking initiatives to examine the Single Audit process. OMB officials have stated that they have created a workgroup that combines the Executive Order 13520 Reducing Improper Payments Section 4(b) Single Audit Recommendations Workgroup (Single Audit Workgroup) and the Circular No. A-87 Cost Principles for State, Local, and Indian Tribal Governments Workgroup (Circular No. A-87 Workgroup). The Single Audit Workgroup is composed of representatives from the federal audit community; federal agency management officials involved in overseeing the Single Audit process and programs subject to that process; representatives from the state audit community; and staff from OMB. OMB officials tasked the Single Audit Workgroup with developing recommendations to improve the effectiveness of Single Audits of nonfederal entities that expend federal funds in order to help identify and reduce improper payments. In June 2010, the Single Audit Workgroup developed recommendations, some of which are targeted toward providing audit relief to auditors who conduct audits of grantees and grants that are subject to the requirements of the Single Audit Act. OMB officials stated that the recommendations warrant further study and that the workgroup is continuing its work on them. OMB officials also stated that the Circular No. A-87 Workgroup has made recommendations that could affect Single Audits and that the two workgroups have been collaborating to ensure that the recommendations relating to Single Audit improvements are compatible and could improve the Single Audit process. The combined workgroups plan to issue a report to OMB by August 29, 2011. We will continue to monitor OMB's progress toward achieving this objective. (4) (5) With regard to issuing Single Audit guidance in a timely manner, and specifically the OMB Circular No. A-133 Compliance Supplement, we previously reported that OMB officials intended to issue the 2011 Compliance Supplement by March 31, 2011. In December 2010, OMB provided to the American Institute of Certified Public Accountants (AICPA) a draft of the 2011 Compliance Supplement, which the AICPA published on its Web site. In January 2011, OMB officials reported that the production of the 2011 Compliance Supplement was on schedule for issuance by March 31, 2011. OMB issued the 2011 Compliance Supplement on June 1, 2011. We spoke with OMB officials regarding the reasons for the delay of this important guidance to auditors. OMB officials stated that the agency's efforts were refocused toward priorities relating to the expiration of several continuing resolutions that temporarily funded the federal government for fiscal year 2011 and to the Department of Defense and Full-Year Continuing Appropriations Act, 2011, which was passed by the Congress in April 2011, averting a governmentwide shutdown. OMB officials stated that, as a result, although they had taken steps to issue the 2011 Compliance Supplement by the end of March, such as starting the process earlier in 2010 and giving agencies strict deadlines for program submissions, they were not able to issue it until June 1, 2011. We will continue to monitor OMB's progress toward achieving this objective.
(6) (7) In October 2010, OMB officials stated that, based on their assessment of the results of the project, they had discussed alternatives for helping to ensure that federal awarding agencies provide their management decisions on the corrective action plans in a timely manner, including possibly shortening the time frames required for federal agencies to provide their management decisions to grant recipients. However, OMB officials have yet to decide on the course of action that they will pursue to implement these recommendations. OMB officials acknowledged that the results of the 2009 OMB Single Audit Internal Control Project confirmed that this issue continues to be a challenge. They stated that they have met individually with several federal awarding agencies that were late in providing their management decisions in the 2009 project to discuss the measures that the agencies will take to improve the timeliness of their management decisions. As discussed earlier in this report, preliminary observations of the results of the second project indicate that several federal awarding agencies' management decisions on the corrective actions that were due April 30, 2011, also have not been issued in a timely manner. In March 2010, OMB issued guidance under memo M-10-14, item 7 (http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m1014.pdf), that called for federal awarding agencies to review reports prepared by the Federal Audit Clearinghouse regarding Single Audit findings and to submit summaries of the highest-risk audit findings by major Recovery Act program, as well as other relevant information on the federal awarding agency's actions regarding these areas. In May 2011, we reviewed selected reports prepared by federal awarding agencies that were titled Use of Single Audit to Oversee Recipient's Recovery Act Funding. Memo M-10-14 required these reports to be prepared from Federal Audit Clearinghouse reports for fiscal year 2009. The reports were developed for entities for which the auditor issued a qualified opinion, an adverse opinion, or a disclaimer of opinion. The reports identified items such as (1) significant risks to the respective program that was audited; (2) material weaknesses, instances of noncompliance, and audit findings that put the program at risk; (3) actions taken by the agency; and (4) actions planned by the agency. OMB officials have stated that they plan to use this information to identify trends that may require clarification or additional guidance in the Compliance Supplement. OMB officials also stated that they are working with the Recovery Accountability and Transparency Board on a metrics project to develop metrics for determining how federal awarding agencies are to use information available from Single Audits; these metrics can also serve as performance measures. In May 2011, we attended a presentation by the OMB workgroup that is working with the Recovery Accountability and Transparency Board on the metrics project, and we note that the project is making progress. OMB officials have stated that the metrics could be applied at the agency level, by program, to allow for analysis of Single Audit findings, along with other uses to be determined. One goal of the metrics project is to increase the effectiveness and timeliness of federal awarding agencies' actions to resolve Single Audit findings.
We will continue to monitor the progress of these efforts to determine the extent to which they improve the timeliness of federal agencies' actions to resolve audit findings so that risks to Recovery Act funds are reduced and internal controls in Recovery Act programs are strengthened. <13. Department of Transportation> <13.1. Open Recommendations> To ensure that Congress and the public have accurate information on the extent to which the goals of the Recovery Act are being met, we recommended that the Secretary of Transportation direct FHWA to take the following two actions: (1) develop additional rules and data checks in the Recovery Act Data System so that these data will accurately identify contract milestones, such as award dates and amounts, and provide guidance to states to revise existing contract data, and (2) make publicly available, within 60 days after the September 30, 2010, obligation deadline, an accurate accounting and analysis of the extent to which states directed funds to economically distressed areas, including corrections to the data initially provided to Congress in December 2009. In its response, DOT stated that it implemented measures to further improve data quality in the Recovery Act Data System, including additional data quality checks, and that it provided states with additional training and guidance to improve the quality of data entered into the system. DOT also stated that, as part of its efforts to respond to our draft September 2010 report, in which we made this recommendation on economically distressed areas, it completed a comprehensive review of projects in these areas, which it provided to GAO for that report. DOT recently posted on its Web site an accounting of the extent to which states directed Recovery Act transportation funds to projects located in economically distressed areas, and we are in the process of assessing these data. <13.2. Open Recommendation> To better understand the impact of Recovery Act investments in transportation, we believe that the Secretary of Transportation should ensure that the results of these projects are assessed and that a determination is made about whether these investments produced long-term benefits. Specifically, in the near term, we recommended that the Secretary direct FHWA and FTA to determine the types of data and performance measures they would need to assess the impact of the Recovery Act and the specific authority they may need to collect data and report on these measures. In its response, DOT noted that it expected to be able to report on Recovery Act outputs, such as the miles of road paved, bridges repaired, and transit vehicles purchased, but not on outcomes, such as reductions in travel time, nor did it commit to assessing whether transportation investments produced long-term benefits. DOT further explained that limitations in its data systems, coupled with the magnitude of Recovery Act funds relative to overall annual federal investment in transportation, would make assessing the benefits of Recovery Act funds difficult. DOT indicated that, with these limitations in mind, it is examining its existing data availability and would seek additional data collection authority from Congress if it became apparent that such authority was needed. DOT plans to take some steps to assess its data needs, but it has not committed to assessing the long-term benefits of Recovery Act investments in transportation infrastructure. We are therefore keeping our recommendation on this matter open. <14.
Matters for Congressional Consideration> <14.1. Matter> To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. We continue to believe that Congress should consider changes related to the Single Audit process. <14.2. Matter> To the extent that additional coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. We continue to believe that Congress should consider changes related to the Single Audit process. <14.3. Matter> To provide housing finance agencies (HFAs) with greater tools for enforcing program compliance, in the event the Section 1602 Program is extended for another year, Congress may want to consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. We continue to believe that Congress should consider directing the Department of the Treasury to permit HFAs the flexibility to disburse Section 1602 Program funds as interest-bearing loans that allow for repayment. Appendix III: GAO Contact and Staff Acknowledgments <15. GAO Contact> <16. Staff Acknowledgments> In addition to the individual named above, Susan Iott, Assistant Director; Tom Beall; Jillian Fasching; Sharon Hogan; Thomas James; Yvonne Jones; Jonathan Kucskar; Kirsten Lauber; Carol Patey; Cheryl Peterson; Brenda Rabinowitz; Beverly Ross; Kelly Rubin; Carol Herrnstadt Shulman; Dawn Shorey; Kathryn Smith; Jonathan Stehle; Kiki Theodoropoulos; and Ethan Wozniak made key contributions to this report.
Why GAO Did This Study The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $4 billion for the Environmental Protection Agency's (EPA) Clean Water State Revolving Fund (SRF) and $2 billion for the agency's Drinking Water SRF. The Recovery Act requires GAO to review funds made available under the act and comment on recipients' reports of jobs created and retained. These jobs are reported as full-time equivalent (FTE) positions on a Web site created for the Recovery Act, www.Recovery.gov. GAO examined (1) the status and use of Recovery Act SRF program funds nationwide and in nine states; (2) EPA and state actions to monitor the act's SRF program funds; (3) EPA's and selected states' approaches to ensuring data quality, including for jobs reported by recipients of the act's funds; and (4) challenges, if any, that states have faced in implementing the act's requirements. For this work, GAO, among other things, obtained and analyzed EPA nationwide data on the status of Recovery Act clean and drinking water funds and projects and information from a nonprobability sample of nine states that represent all but 1 of EPA's 10 regions. GAO also interviewed EPA and state officials on their experiences with the Recovery Act SRF program funds. What GAO Found The 50 states have awarded and obligated the almost $6 billion in Clean Water and Drinking Water SRF program funds provided under the Recovery Act, and EPA indicated that all 50 states met the act's requirement to have funds awarded to projects under contract within 1 year of the act's passage. States used the funds to support more than 3,000 water quality projects, and according to EPA data, the majority of the funds were used for sewage treatment infrastructure and drinking water treatment and distribution systems. Since the act was passed, states have drawn down almost 80 percent of the SRF program funds provided under the act. According to EPA data, states met the act's requirements that at least (1) 20 percent of the funds be used to support "green" projects and (2) 50 percent of the funds be provided as additional subsidies. In the nine states GAO reviewed, the act's funds paid for 419 infrastructure projects that helped address major water quality problems, but state officials said that in some cases the act's requirements changed their priorities for ranking projects or the projects they selected. In addition, although not required by the act, the nine states used about a quarter of the funds they received to pay for projects in economically disadvantaged communities, mostly in the form of additional subsidies. EPA, states, and state or private auditors took actions to monitor Recovery Act SRF program funds. For example, EPA officials reviewed all 50 states' Recovery Act SRF programs at least once and found that states were largely complying with the act's requirements. Also, in part as a response to a GAO recommendation, in June 2010 EPA updated--and is largely following--its oversight plan, which describes monitoring actions for the SRF programs. Furthermore, state officials visited sites to monitor Recovery Act projects, as indicated in the plan, and found few problems. Officials at EPA and in the nine states have also regularly checked the quality of data on Recovery.gov and stated that the quality has remained relatively stable, although GAO identified minor inconsistencies in the FTE data that states reported.
Overall, the 50 states reported that the Recovery Act SRF programs funded an increasing number of FTE positions from the quarter ending December 2009 through the quarter ending June 2010, rising from about 6,000 FTEs to 15,000 FTEs. As projects were completed and funds spent, the number declined to about 6,000 FTEs for the quarter ending March 2011. Some state officials GAO interviewed identified challenges in implementing the Recovery Act's Clean and Drinking Water SRF requirements for green projects and additional subsidies, both of which were continued, with some variation, in the fiscal year 2010 and 2011 appropriations for the SRF programs. Officials in four states said that achieving the green-funding goal was difficult, with one suggesting that the 20 percent target be changed. In addition, officials in two of these four states, as well as in two other states, noted that when monies are not repaid into the revolving funds to generate future revenue, the SRF programs' purpose changes from primarily providing loans for investments in water infrastructure to providing grants. What GAO Recommends GAO is making no recommendations in this report, which was provided to EPA for its review and comment. EPA did not comment on the report.