Registered nurses are responsible for a large portion of the health care provided in this country. RNs make up the largest group of health care providers and, historically, have worked predominantly in hospitals; in 2000, 59.1 percent of RNs were employed in hospital settings. A smaller number of RNs work in other settings such as ambulatory care, home health care, and nursing homes. Their responsibilities may include providing direct patient care in a hospital or a home health care setting, managing and directing complex nursing care in an intensive care unit, or supervising the provision of long-term care in a nursing home. Individuals usually select one of three ways to become an RN--through a 2-year associate degree, 3-year diploma, or 4-year baccalaureate degree program. Once they have completed their education, RNs are subject to state licensing requirements.

The U.S. health care system has changed significantly over the past 2 decades, affecting the environment in which nurses provide care. Advances in technology and greater emphasis on cost-effectiveness have led to changes in the structure, organization, and delivery of health care services. While hospitals traditionally were the primary providers of acute care, advances in technology, along with cost controls, shifted care from traditional inpatient settings to ambulatory or community-based settings, nursing facilities, or home health care settings. The number of staffed hospital beds declined, as did patient lengths of stay. While the number of hospital admissions declined from the mid-1980s to the mid-1990s, it increased between 1995 and 1999. At the same time, the overall acuity level of patients increased, as the conditions of those patients remaining in hospitals made them too medically complex to be cared for in another setting. The transfer of less acute patients to nursing homes and community-based care settings created additional job opportunities and increased demand for nurses.

Current evidence suggests emerging shortages of nurses available or willing to fill some vacant positions in hospitals, nursing homes, and home care. Some localities are experiencing greater difficulty than others. National data are not adequate to describe the nature and extent of these potential nurse workforce shortages, nor are data sufficiently sensitive or current to allow a comparison of the adequacy of the nurse workforce size across states, specialties, or provider types. However, total employment of RNs per capita and the national unemployment rate for RNs have declined, and providers from around the country are reporting growing difficulty recruiting and retaining the number of nurses needed in a range of settings. Another indicator that suggests the emergence of shortages is a rise in recent public sector efforts related to nurse workforce issues in many states. The national unemployment rate for RNs is at its lowest level in more than a decade, continuing to decline from 1.5 percent in 1997 to 1.0 percent in 2000. At the same time, total employment of RNs per capita declined 2 percent between 1996 and 2000, reversing steady increases since 1980. Between 1980 and 1996, the number of employed RNs per capita nationwide increased by 44 percent. At the state level, changes in per capita nurse employment from 1996 to 2000 varied widely, from a 16.2 percent increase in Louisiana to a 19.5 percent decrease in Alaska. (See appendix I.)
Overall, a decline in per capita nurse employment occurred in 26 states and the District of Columbia between 1996 and 2000. Declining RN employment per capita may be an indicator of a potential shortage. It is an imprecise measure, however, because it does not account for changes in the care needs of the population or for how many nurses, relative to other personnel, providers wish to use to meet those needs. Moreover, total employment includes not only nurses engaged in clinical or patient care activities but also those in administrative and other nondirect care positions. Data on how much nurse employment may have shifted between direct care and other positions are not available.

Recent studies suggest that hospitals and other health care providers in many areas of the country are experiencing greater difficulty in recruiting RNs. For example, a recent survey in Maryland conducted by the Association of Maryland Hospitals and Health Systems reported a statewide average vacancy rate for hospitals of 14.7 percent in 2000, up from 3.3 percent in 1997. The association reported that the last time vacancy rates were at this level was the late 1980s, during the last reported nurse shortage. A survey of providers in Vermont found that hospitals had an RN vacancy rate of 7.8 percent in 2001, up from 4.8 percent in 2000 and 1.2 percent in 1996. For 2000, California reported an average RN vacancy rate of 20 percent, and for 2001, Florida reported nearly 16 percent and Nevada reported an average rate of 13 percent.

Concerns about retaining nurses have also become more widespread. A recent survey reported that the national turnover rate among hospital staff nurses was 15 percent in 1999, up from 12 percent in 1996. Another industry survey showed turnover rates for overall hospital nursing department staff rising from 11.7 percent in 1998 to 26.2 percent in 2000. Nursing home and home health care industry surveys indicate that nurse turnover is an issue for them as well. In 1997, an American Health Care Association survey of 13 nursing home chains identified a 51-percent turnover rate for RNs and licensed practical nurses (LPNs). A 2000 national survey of home health care agencies reported a 21-percent turnover rate for RNs.

Increased attention by state governments is another indicator of concern about nurse workforce problems. According to the National Conference of State Legislatures, as of June 2001, legislation to address nurse shortage issues had been introduced in 15 states, and legislation to restrict the use of mandatory overtime for nurses in hospitals and other health care facilities had been introduced in 10 states. A variety of nurse workforce task forces and commissions have recently been established as well. For example, in May 2000, legislation in Maryland created the Statewide Commission on the Crisis in Nursing to determine the current extent and long-term implications of the growing shortage of nurses in the state.

Available data on the supply of and demand for RNs are not adequate to determine the magnitude of any current imbalance between the two with any degree of precision. Both the demand for and supply of RNs are influenced by many factors. Demand for RNs depends not only on the care needs of the population, but also on how providers--hospitals, nursing homes, clinics, and others--decide to use nurses in delivering care. Providers have changed staffing patterns in the past, employing fewer or more nurses relative to other workers such as nurse aides.
For example, following the introduction of the Medicare Prospective Payment System (PPS), hospitals increased the share of RNs in their workforces. However, in the early 1990s, in an effort to contain costs, acute care facilities restructured and redesigned staffing patterns, introducing more non-RN caregivers and reducing the percentage of RNs. While the number of RNs employed by hospitals remained relatively unchanged from 1995 to 1997, hospitals reported significant growth in RN employment in 1998 and 1999. Supply depends on the size of the pool of qualified persons and the share of them willing to work. Current participation by licensed nurses in the workforce is relatively high. Nationally, 81.7 percent of licensed RNs were employed in nursing in 2000. Although this represents a slight decline from the high of 82.7 percent reported in 1992 and 1996, this rate of workforce participation remains higher than the 76.6 to 80.0 percent rates reported in the 1980s. Moreover, some RNs are employed in nonclinical settings, such as insurance companies, reducing the number of nurses available to provide direct patient care.

Current problems with the recruitment and retention of nurses are related to multiple factors. The nurse workforce is aging, and fewer new nurses are entering the profession to replace those who are retiring or leaving. Furthermore, nurses report unhappiness with many aspects of the work environment, including staffing levels, heavy workloads, increased use of overtime, lack of sufficient support staff, and inadequate wages. In many cases this growing dissatisfaction is affecting their decisions to remain in nursing.

The decline in the number of younger people, predominantly women, choosing nursing as a career has resulted in a steadily aging RN workforce. Over the last 2 decades, as opportunities for women outside of nursing have expanded, the number of young women entering the RN workforce has declined. A recent study reported that women graduating from high school in the 1990s were 35 percent less likely to become RNs than women who graduated in the 1970s. Reductions in nursing program enrollments within the last decade attest to this narrowing pipeline. According to a 1999 Nursing Executive Center Report, between 1993 and 1996, enrollment in diploma programs dropped 42 percent and enrollment in associate degree programs declined 11 percent. Furthermore, between 1995 and 1998, enrollment in baccalaureate programs declined 19 percent, and enrollment in master's programs decreased 4 percent. The number of individuals passing the national RN licensing exam declined from 97,679 in 1996 to 74,787 in 2000, a decline of 23 percent. The large numbers of RNs that entered the labor force in the 1970s are now over the age of 40 and are not being replenished by younger RNs. Between 1983 and 1998, the number of RNs in the workforce under age 30 fell by 41 percent, compared to only a 1-percent decline in the number under age 30 in the rest of the U.S. workforce. Over the past 2 decades, the nurse workforce's average age has climbed steadily. While over half of all RNs were reported to be under age 40 in 1980, fewer than one in three were younger than 40 in 2000. As shown in figure 1, the age distribution of RNs has shifted dramatically upward. The percentage of nurses under age 30 decreased from 26 percent in 1980 to 9 percent in 2000, while the percentage age 40 to 49 grew from 20 to 35 percent.
Job dissatisfaction has also been identified as a major factor contributing to the current problems of recruiting and retaining nurses. A recent Federation of Nurses and Health Professionals (FNHP) survey found that half of the currently employed RNs who were surveyed had considered leaving the patient-care field for reasons other than retirement over the past 2 years. Over one-fourth (28 percent) of RNs responding to a 1999 survey by The Nursing Executive Center described themselves as somewhat or very dissatisfied with their jobs, and about half (51 percent) were less or much less satisfied with their jobs than they were 2 years ago. In that same survey, 32 percent of general medical/surgical RNs, who constitute the bulk of hospital RNs, indicated that they were dissatisfied with their current jobs. According to a survey conducted by the American Nurses Association (ANA), 54.8 percent of RNs and LPNs responding would not recommend the nursing profession as a career for their children or friends, while 23 percent would actively discourage someone close to them from entering the profession.

Inadequate staffing, heavy workloads, and the increased use of overtime are frequently cited as key areas of job dissatisfaction among nurses. According to the recent FNHP survey, of those RNs responding who had considered leaving the patient-care field for reasons other than retirement over the past 2 years, 56 percent indicated that they wanted a less stressful and less physically demanding job. The same survey found that 55 percent of current RNs were either just somewhat satisfied or not satisfied with their facility's staffing levels, while 43 percent of current RNs surveyed indicated that increased staffing would do the most to improve their jobs. Another survey found that 36 percent of RNs in their current job for more than 1 year were very or somewhat dissatisfied with the intensity of their work. Some providers report increased use of overtime for employees. Twenty-two percent of nurses responding to the FNHP survey said they were concerned about schedules and hours. A survey of North Carolina hospitals conducted in 2000 found significant reliance on overtime for staff nurses. Nine percent of rural hospitals reported spending more than 25 percent of their nursing budget on overtime, and, among urban hospitals, 49 percent expected to increase their use of overtime in the coming year. The trend toward increasing use of overtime is currently a major concern of nurse unions and associations.

Nurses have also expressed dissatisfaction with a decrease in the amount of support staff available to them over the past few years. More than half the RNs responding to the recent study by the American Hospital Association (AHA) did not feel that their hospitals provided adequate support services. RNs, LPNs, and others responding to a survey by the ANA also pointed to a decrease in needed support services. Current nurse workforce issues are part of a larger health care workforce shortage that includes a shortage of nurse aides.

Some nurses have also expressed dissatisfaction with their wages. While surveys indicate that increased wages might encourage nurses to stay at their jobs, money is not always cited as the primary reason for job dissatisfaction. According to the FNHP survey, of those RNs responding who had considered leaving the patient-care field for reasons other than retirement over the past 2 years, 18 percent wanted more money, versus 56 percent who were concerned about the stress and physical demands of the job.
However, the same study reported that 27 percent of current RNs responding cited higher wages or better health care benefits as a way of improving their jobs. Another study indicated that 39 percent of RNs who had been in their current jobs for more than 1 year were dissatisfied with their total compensation, but 48 percent were dissatisfied with the level of recognition they received from their employers. AHA recently reported on a survey that found that 57 percent of responding RNs said that their salaries were adequate, compared to 33.4 percent who thought their facility was adequately staffed, and 29.1 percent who said that their hospital administrations listened and responded to their concerns. Wages can have a long-term impact on the size of a workforce pool as well as a short-term effect on people's willingness to work. After several years of real earnings growth following the last nursing shortage, RN earnings growth lagged behind the rate of inflation from 1994 through 1997. In 2 of the last 3 years, however, 1998 and 2000, RN earnings growth exceeded the rate of inflation. The cumulative effect of these changes is that RN earnings have just kept pace with the rate of inflation from 1989 to 2000, as shown in figure 2.

A serious shortage of nurses is expected in the future as pressures are exerted on both demand and supply. The future demand for nurses is expected to increase dramatically when the baby boomers reach their 60s, 70s, and beyond. The population age 65 years and older will double between 2000 and 2030. During that same period, the number of women between 25 and 54 years of age, who have traditionally formed the core of the nurse workforce, is expected to remain relatively unchanged. This potential mismatch between the future supply of and demand for caregivers is illustrated by the change in the expected ratio of potential care providers to potential care recipients. As shown in figure 3, the ratio of the working-age population, age 18 to 64, to the population over age 85 will decline from 39.5 workers for each person 85 and older in 2000, to 22.1 in 2030, and 14.8 in 2040. The ratio of women age 20 to 54, the cohort most likely to be working either as nurses or nurse aides, to the population age 85 and older will decline from 16.1 in 2000 to 8.5 in 2030, and 5.7 in 2040. Unless more young people choose to go into the nursing profession, the nurse workforce will continue to age. By 2010, approximately 40 percent of the workforce will likely be older than 50. By 2020, the total number of full-time equivalent RNs is projected to have fallen 20 percent below HRSA's projections of the number of RNs that will be required to meet demand.

Providers' current difficulty recruiting and retaining nurses may worsen as the demand for nurses increases with the aging of the population. Impending demographic changes are widening the gap between the number of people needing care and the number available to provide it. Moreover, the current high levels of job dissatisfaction among nurses may also play a crucial role in determining the extent of current and future nurse shortages. Efforts undertaken to improve the workplace environment may both reduce the likelihood of nurses leaving the field and encourage more young people to enter the nursing profession. While state governments and providers have begun to address recruitment and retention issues related to the nurse workforce, more detailed data are needed to assist in planning and targeting corrective efforts.
As we agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested parties and make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7119 or Helene Toiv, Assistant Director, at (202) 512-7162. Other major contributors were Eric Anderson, Connie Peebles Barrow, Emily Gamble Gardiner, and Pamela Ruffner.
The nation's hospitals and nursing homes rely heavily on the services of nurses. Concerns have been raised about whether the current and projected supply of nurses will meet the nation's needs. This report reviews (1) whether evidence of a nursing shortage exists, (2) the reasons for current nurse recruitment and retention problems, and (3) what is known about the projected future supply of and demand for nurses. GAO found that national data are not adequate to describe the nature and extent of nurse workforce shortages, nor are data sufficiently sensitive or current to compare nurse workforce availability across states, specialties, or provider types. Multiple factors contribute to recruitment and retention problems, including the aging of the nurse workforce and the fact that fewer young people are entering the profession. A serious shortage of nurses is expected in the future as demographic pressures influence both demand and supply.
The National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to, among other things, develop a Space Launch System as a follow-on to the Space Shuttle and as a key component in expanding human presence beyond low-Earth orbit. To that end, NASA plans to incrementally develop three progressively more capable SLS launch vehicles--70-, 105-, and 130-metric ton (mt) variants. When complete, the 130-mt vehicle is expected to have more launch capability than the Saturn V vehicle, which was used for Apollo missions, and be significantly more capable than any recent or current launch vehicle. The act also directed NASA to prioritize the core elements of SLS with the goal of operational capability not later than December 2016. NASA negotiated an extension of that date, to December 2017, based on the agency's initial assessment of the tasks associated with developing the new launch vehicle, and has subsequently committed to a launch readiness date of November 2018.

In 2011, NASA formally established the SLS program. To fulfill the direction of the 2010 act, the agency plans to develop the three SLS launch vehicle capabilities, complemented by Orion, to transport humans and cargo into space. The first version of the SLS that NASA is developing is a 70-mt launch vehicle known as Block I. NASA has committed to conduct two test flights of the Block I vehicle--the first in 2018 and the second in 2021. The vehicle is scheduled to fly an uncrewed Orion some 70,000 kilometers beyond the moon during the first test flight, known as Exploration Mission-1 (EM-1), and to fly a second mission, known as Exploration Mission-2 (EM-2), beyond the moon to further test performance with a crewed Orion vehicle. After 2021, NASA intends to build 105- and 130-mt launch vehicles, known respectively as Block IA/B and Block II, which it expects to use as the backbone of manned spaceflight for decades. NASA anticipates using the Block IA/B vehicles for destinations such as near-Earth asteroids and Lagrange points and the Block II vehicles for eventual Mars missions.

Space launch vehicle development efforts are high risk from technical, programmatic, and oversight perspectives. The technical risk is inherent for a variety of reasons, including the environment in which the vehicles must operate, the complexity of technologies and designs, and the limited room for error in the fabrication and integration process. Managing the development process is complex for reasons that go well beyond technology and design. For instance, at the strategic level, because launch vehicle programs can span many years and be very costly, programs often face difficulties securing and sustaining funding commitments and support. At the program level, if the lines of communication between engineers, managers, and senior leaders are not clear, risks that pose significant threats could go unrecognized and unmitigated. If there are pressures to deliver a capability within a short period of time, programs may be incentivized to overlap development and production activities or eliminate tests, which could result in late discovery of significant technical problems that require more money and ultimately much more time to address. For these reasons, it is imperative that launch vehicle development efforts adopt disciplined practices and lessons learned from past programs.
Best practices for acquisition programs indicate that establishing baselines that match cost and schedule resources to requirements, and that rationally balance cost, schedule, and performance, is a key step in establishing a successful acquisition program. Our work has also shown that validating this match before committing resources to development helps to mitigate the risks inherent in NASA's programs. We have reported that within NASA's acquisition life cycle, resources should be matched to requirements at key decision point (KDP)-C, the review that commits the program to formal cost and schedule baselines and marks the transition from the formulation phase into the implementation phase, as seen in figure 1 below. The SLS program completed its KDP-C review in August 2014, GSDO completed its KDP-C review in September 2014, and the KDP-C review for Orion is currently scheduled for May 2015.

NASA has taken positive steps to address specific concerns we raised in July 2014 regarding aggressive schedules and insufficient funding by establishing the SLS program's committed launch readiness date as November 2018--almost a year later than originally planned. Specifically, we reported in July 2014 that NASA had yet to establish baselines that matched the SLS program's cost and schedule resources with the requirement to develop the SLS and launch the first flight test in December 2017 at the required confidence level of 70 percent. NASA policy generally requires a 70 percent joint confidence level--a calculation NASA uses to estimate the probable success of a program meeting its cost and schedule targets--for a program to proceed with final design and fabrication. At the time of our July 2014 report, NASA had delayed the review that formally commits the agency to cost and schedule baselines for SLS, originally planned for October 2013, as the agency considered future funding plans for the program. At that time, the agency's funding plan for SLS was insufficient to match requirements to resources for the December 2017 flight test at the 70 percent joint confidence level, and the agency's options for matching resources to requirements were largely limited to increasing program funding, delaying the schedule, or accepting a reduced confidence level for the initial flight test. We have previously reported that it is important for NASA to budget projects to appropriate confidence levels, as past studies have linked cost growth to insufficient reserves, poorly phased funding profiles, and, more generally, optimistic estimating practices.

We found that NASA's proposed funding levels had affected the SLS program's ability to match requirements to resources since its inception. NASA has requested relatively consistent amounts of funding of about $1.4 billion each year since 2012. According to agency officials, the program has taken steps to operate within that flat funding profile, including streamlining program office operations and asking each contractor to identify efficiencies in its production processes. Even so, according to the program's own analysis, going into the agency review to formally set baselines, SLS's top risk was that the current planned budget through 2017 would be insufficient to allow the SLS as designed to meet the EM-1 flight date.
The SLS program office calculated the risk associated with insufficient funding through 2017 as 90 percent likely to occur; furthermore, it indicated the insufficient budget could push the December 2017 launch date out 6 months and add some $400 million to the overall cost of SLS development. The cost risk was considerably greater than $400 million in the past, but according to program officials, they were able to reduce the effect because the program received more funding than requested in fiscal years 2013 and 2014. Similarly, our ongoing work on human exploration programs has found that the Orion program is currently tracking a funding risk that the program could require an additional $560 million to $840 million to meet the December 2017 EM-1 flight date. However, the agency has yet to complete the review that sets formal cost or schedule baselines for the Orion program. At this time, we have not conducted enough in-depth work on the GSDO program to comment on any specific risks the program is tracking.

In our July 2014 report we recommended, among other things, that NASA develop baselines for SLS based on matching cost and schedule resources to requirements that would result in a level of risk commensurate with its policies. NASA concurred with our findings and recommendations. In August 2014, NASA established formal cost and schedule baselines for the SLS program at the 70 percent joint confidence level for a committed launch readiness date of November 2018. Nevertheless, the program plans to continue to pursue an initial capability of SLS by December 2017 as an internal goal and has calculated a joint cost and schedule confidence level of 30 percent associated with that date. As illustrated by table 1 below, the SLS and GSDO programs are pursuing ambitious and varying target dates for the EM-1 test flight. In addition, the Orion program is currently tracking and reporting to a December 2017 date. The agency acknowledges differences in the target dates the programs are pursuing and has indicated that it will develop an integrated target launch date after all three systems hold their individual critical design reviews.

The SLS program has assigned a low confidence level--30 percent--to meeting the program's internal target date of December 2017. Even if SLS does meet that goal, however, it is unlikely that both Orion and GSDO will achieve launch readiness by that point. For example, the GSDO program only has a 30 percent confidence level associated with a later June 2018 date. Additionally, the Orion program is currently behind its planned schedule and is facing significant technical risks, and officials indicated that the program will not achieve launch readiness by December 2017. The Orion program has submitted a schedule to NASA headquarters that indicates the program is now developing plans for a September 2018 EM-1 launch, though that date is preliminary until the program establishes official cost and schedule baselines, now planned for May 2015. With the Orion and GSDO programs likely unable to meet the December 2017 date, NASA risks exhausting limited human exploration resources to achieve an aggressive SLS program schedule when those resources may be needed to resolve other issues within the human exploration effort. In other work, we have reported that in pursuing internal schedule goals, some programs have exhausted cost reserves, which has resulted in the need for additional funding to support the agency baseline commitment date once the target date is not achieved.
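The joint confidence level discussed above is, as noted, a calculation NASA uses to estimate the probable success of a program meeting its cost and schedule targets. As an illustration only, the Python sketch below shows one common way such a joint statistic can be expressed: the fraction of Monte Carlo outcomes in which simulated cost and schedule both come in at or under their committed targets. The distributions, target values, and seed here are hypothetical placeholders for illustration, not NASA's model or figures.

```python
import numpy as np

# Illustrative sketch only. A joint cost and schedule confidence statistic can be
# read as the share of simulated program outcomes that finish at or under BOTH
# the committed cost and the committed schedule. All inputs below are assumed.

rng = np.random.default_rng(seed=0)
n_runs = 100_000

# Hypothetical Monte Carlo outcomes: total cost (in $ billions) and time to
# launch readiness (in months), drawn from assumed distributions.
cost = rng.lognormal(mean=np.log(9.0), sigma=0.15, size=n_runs)
schedule = rng.normal(loc=60.0, scale=6.0, size=n_runs)

cost_target = 9.7        # hypothetical committed cost baseline, $ billions
schedule_target = 65.0   # hypothetical committed launch readiness, months

# Joint statistic: fraction of runs that meet both targets simultaneously.
jcl = np.mean((cost <= cost_target) & (schedule <= schedule_target))
print(f"Joint cost and schedule confidence level: {jcl:.0%}")
```

Under this reading, tightening the schedule target (for example, pulling the launch date earlier while holding the cost target fixed) shrinks the share of outcomes that satisfy both constraints, which is consistent with the report's observation that the earlier December 2017 internal target carries a lower confidence level than the committed November 2018 date.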
NASA's urgency to complete development and demonstrate a human launch capability as soon as possible is understandable. The United States has lacked the ability to launch humans into space since the last flight of the Space Shuttle in July 2011, and the initial goal from Congress was that NASA demonstrate a new human launch capability by 2016. Also, the SLS and GSDO programs have already slipped their committed launch readiness dates to November 2018, and Orion appears likely to follow suit. While these delays were appropriate actions on the agency's part to reduce risk, their compounding effect could have impacts on the first crewed flight--EM-2--currently scheduled for 2021.

We reported in July 2014 that NASA's metrics indicated the SLS program was on track to meet many of its design goals for demonstrating the initial capability of SLS. However, we found that the development of the core stage--SLS's fuel tank and structural backbone--represents the critical path of activities that must be completed to maintain the program's schedule as a whole. The core stage development had an aggressive schedule in order to meet the planned December 2017 first test flight. For example, the core stage faced schedule threats of nearly 5 months due to difficulty acquiring liquid oxygen fuel lines capable of meeting SLS operational requirements. The aggressiveness of, and therefore the risk associated with, the core stage schedule was reduced when the agency delayed its commitment for initial capability of SLS until November 2018. With SLS continuing to pursue a target date of December 2017, however, the aggressive core stage schedule remains a risk. Further, we reported that the program faced challenges integrating heritage hardware, which was designed for less stressful operational environments, into the SLS design. We found that these issues were not significant schedule drivers for the program, as each had, and continues to have, significant amounts of schedule reserve relative to both the target and agency baseline commitment dates for launch readiness.

The Orion program just completed its first experimental test flight--EFT-1. This flight tested Orion systems critical to crew safety, such as heat shield performance, separation events, avionics and software performance, attitude control and guidance, parachute deployment, and recovery operations. According to NASA, the data gathered during the flight will influence design decisions and validate existing computer models. Data from this flight are required to address several significant risks that the Orion program is currently tracking and that must be addressed before humans can be flown on Orion. Specifically, our ongoing work indicates that the Orion program passed its preliminary design review--a review that evaluates the adequacy of cost, schedule, and technical baselines and whether the program is ready to move forward--in August 2014 by meeting the minimum standards for all 10 success criteria. For 7 of the 10 success criteria, however, review officials highlighted known issues that could compromise Orion's success. Specifically, the review officials noted concerns about several unresolved design risks, including technical challenges with the parachute system and heat shield. For example, during parachute testing, NASA discovered that when only two of the three main parachutes are deployed, they begin to swing past each other, creating a "pendulum" effect.
This effect could cause the capsule to increase speed and to hit the water at an angle that may damage the capsule, thereby endangering the crew. Further, NASA faces choices between differing design solutions to resolve cracking issues discovered during manufacturing of the heat shield that protects the capsule during re-entry. Program officials plan to make a decision prior to the program's critical design review, based on additional testing and analysis, about how to resolve these risks with a goal of limiting design changes to the capsule's structure. Both the parachute and heat shield challenges must be resolved before EM-2 because each represents a significant risk to crew safety. Significant cost and schedule impacts could result if a redesign is required to address any of these unresolved design risks.

NASA has yet to address our concerns regarding mission planning or life-cycle cost estimates. NASA has not yet defined specific mission requirements for any variant of the SLS. The two currently scheduled flights are developmental test flights designed to demonstrate and test the capabilities of the 70-mt launch vehicle and the capability of the core stage in particular. Office of Management and Budget guidance indicates that agencies should develop long-range objectives, supported by detailed budgets and plans that identify the agency's performance gaps and the resources needed to close them. With mission requirements unspecified, NASA has not yet finalized plans for the next step in evolving the SLS and risks investing limited available resources in systems and designs that are not yet needed and missing opportunities to make early investments in developing systems that may be needed in the future. According to agency officials, beyond the two scheduled test flights, future mission destinations remain uncertain. In the absence of specific mission requirements, officials indicated the SLS program is developing current and future variants based on top-level requirements derived from NASA's Design Reference Architectures for conducting missions in line with the agency's strategic plan. NASA's 2014 strategic plan, for example, identifies sending humans to Mars as one of the agency's long-term goals; in turn, the agency's Mars Design Reference Architecture indicates that multiple missions using a vehicle with a lift capability of about 130 mt will be necessary to support that goal. We recommended based on these findings that NASA define a range of possible missions beyond the second test flight and introduce increased competition in the acquisition of hardware needed for future variants to reduce long-term costs. The agency concurred with our recommendations, but has not yet taken specific actions to address our concerns.

The long-term affordability of the human exploration programs is also uncertain because, as we found in May 2014, NASA's cost estimates for the programs do not provide any information about the longer-term, life-cycle costs of developing, manufacturing, and operating the launch vehicles. The cost estimate for SLS does not cover program costs after EM-1 or costs to design, develop, build, and produce the 105- or 130-mt variants. Though the subsequent variants will evolve from the first variant, they each represent substantial, challenging development efforts and will require billions of dollars more to complete.
For example, the 105-mt vehicle will require development of a new upper stage and upper stage engine or the development of advanced boosters, either of which will be a significant effort for the program.

If you or your staff have any questions about this testimony, please contact Cristina T. Chaplain, Director, Acquisition and Sourcing Management, at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Shelby S. Oakley, Assistant Director; Jennifer Echard; Laura Greifner; Sylvia Schatz; Ryan Stott; Ozzy Trevino; Kristin Van Wychen; and John S. Warren, Jr. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit: the SLS vehicle; the Orion capsule, which will launch atop the SLS and carry astronauts; and GSDO, the supporting ground systems. As a whole, the efforts represent NASA's largest exploration investment over the next decade, approaching $23 billion, to demonstrate initial capabilities. In May 2014, GAO found that NASA's preliminary life-cycle cost estimates for human exploration were incomplete and recommended that NASA establish life-cycle cost and schedule baselines for each upgraded block of SLS, Orion, and GSDO; NASA partially concurred. In July 2014, GAO issued a report on SLS's progress toward its first test flight and recommended that NASA match SLS's resources to its requirements and define specific missions beyond the second test flight, among other actions. NASA concurred with these recommendations. This testimony is based on GAO's May 2014 report ( GAO-14-385 ), July 2014 report ( GAO-14-631 ), and ongoing audit work related to SLS and Orion. It discusses NASA's efforts to match resources to requirements for the SLS program and developmental challenges facing the SLS and Orion programs. To conduct this work, GAO reviewed relevant design, development, cost, and schedule documents and interviewed program officials. In 2014, GAO reported on a number of issues related to the National Aeronautics and Space Administration's (NASA) human exploration programs: the Space Launch System (SLS) vehicle, the Orion Multi-Purpose Crew Vehicle (Orion), and the Ground Systems Development and Operations (GSDO). For example, in July 2014, GAO found that NASA had not matched resources to requirements for the SLS program and was pursuing an aggressive development schedule--a situation compounded by the agency's reluctance to request funding commensurate with the program's needs. In August 2014, NASA established formal cost and schedule baselines for the SLS program at the agency-required 70 percent joint cost and schedule confidence level (JCL), which satisfied one recommendation from GAO's July 2014 report. The JCL is a calculation NASA uses to estimate the probable success of a program meeting its cost and schedule targets. To satisfy the 70 percent JCL requirement, the SLS program delayed its committed launch readiness date for its first test flight from December 2017 to November 2018. The program is still pursuing December 2017 as an internal goal, or target date, for the test flight, even though NASA calculated the JCL associated with launching SLS on this date at 30 percent. Moreover, neither the Orion nor GSDO program expects to be ready for the December 2017 launch date. With these programs likely unable to meet the December 2017 date, NASA risks exhausting limited human exploration resources to achieve an accelerated SLS program schedule when those resources may be needed to resolve challenges on other human exploration programs. In addition, GAO's ongoing work has found that the Orion program is facing significant technical and funding issues. Orion just completed its first test flight, and data from this flight is required to address several risks that must be resolved before the second test flight in 2021 because they represent risks to crew safety. For example, during parachute testing, NASA discovered that when only two of the three main parachutes are deployed, they begin to swing past each other creating a "pendulum" effect. 
This effect could cause the capsule to increase speed and to hit the water at an angle that may damage the capsule, thereby endangering the crew. In addition, data from the test is necessary to inform NASA's design solution to address heat shield cracking issues, which NASA has been working to resolve since August 2013. The heat shield is integral to crew safety during re-entry.
While the majority of businesses pay the taxes withheld from employees' salaries as well as the employer's matching amounts, a significant number of businesses do not. Our review of IRS tax records showed that over 1.6 million businesses owed over $58 billion in unpaid payroll taxes to the federal government as of September 30, 2007, and over 100,000 businesses currently owe more than 2 years (8 quarters) of payroll taxes. This total includes amounts earned by employees that were withheld from their salaries to satisfy their tax obligations, as well as the employer's matching amounts, but which the business diverted for other purposes. Many of these businesses repeatedly failed to remit amounts withheld from employees' salaries. For example, 70 percent of all unpaid payroll taxes are owed by businesses with more than a year (4 tax quarters) of unpaid payroll taxes, and over a quarter of unpaid payroll taxes are owed by businesses that have tax debt for more than 3 years (12 tax quarters). Figure 1 shows the total dollar amount of payroll tax debt summarized by the number of unpaid payroll tax quarters outstanding. Using IRS's database of unpaid taxes, we were able to identify many of the industry types associated with businesses owing payroll taxes. The top industries with unpaid payroll tax debt included construction ($8.6 billion), professional services ($4.4 billion), and health care ($4 billion).

When businesses fail to remit taxes withheld from employees' salaries, the payroll tax receipts are less than the payroll taxes due, and the Social Security and Hospital Insurance Trust Funds have fewer financial resources available to cover current and future benefit payments. However, the trust funds are funded based on wage estimates and not actual payroll tax collections. Therefore, the General Fund transfers to the trust funds amounts that should be collected but are not necessarily collected, resulting in the General Fund subsidizing the trust funds for amounts IRS is unable to collect in payroll taxes from employers. As of November 1, 2007, IRS estimated that the amount of unpaid taxes and interest attributable to Social Security and Hospital Insurance taxes in IRS's $282 billion unpaid assessments balance was approximately $44 billion. This estimate represents a snapshot of the amount that needed to be provided to the Social Security and Hospital Insurance Trust Funds based on the outstanding payroll tax debt on IRS's books at the time. It does not include an estimate for tax debts that have been written off of IRS's tax records in previous years because of the expiration of the statutory collection period. Recent IRS data indicate that the cumulative shortfall increases by an additional $2 billion to $4 billion annually because of uncollected payroll taxes.

Although IRS has taken a number of steps to improve collections by prioritizing cases with better potential for collectibility, the collection of payroll taxes remains a significant problem for IRS. From 1998, when we performed our last in-depth review of payroll taxes, to September 2007, we found that while the number of businesses with payroll tax debt decreased from 1.8 million to 1.6 million, the balance of outstanding payroll taxes in IRS's inventory of tax debt increased from about $49 billion to $58 billion. Our analysis of the unpaid payroll tax inventory shows that the number of businesses with more than 20 quarters of tax debt (5 years of unpaid payroll tax debt) almost doubled between 1998 and 2007.
The number of businesses that had not paid payroll taxes for over 40 quarters (10 years or more) also almost doubled, from 86 businesses to 169 businesses. These figures are shown in table 1. Of the $58 billion in unpaid payroll taxes as of September 30, 2007, IRS categorized about $4 billion (7 percent) as going through IRS's initial notification process. Because IRS has made the collection of payroll taxes one of its highest priorities, once a case completes the notification process, it is generally sent to IRS's field collections staff for face-to-face collection action. However, IRS does not have sufficient resources to immediately begin collection actions against all of its high-priority cases. As a result, IRS holds a large number of cases in a queue awaiting assignment to a revenue officer in the field. About $7 billion (12 percent) of the unpaid payroll tax amount was being worked on by IRS revenue officers for collection, and about $9 billion (16 percent) was in a queue awaiting assignment for collection action. Most of the unpaid payroll tax inventory--$30 billion (52 percent)--was classified as currently uncollectible by IRS. IRS classifies tax debt cases as currently not collectible for several reasons, including (1) the business owing the taxes is defunct, (2) the business is insolvent after bankruptcy, or (3) the business is experiencing financial hardship. Of those unpaid payroll tax cases IRS has classified as currently not collectible, almost 70 percent were a result of the business being defunct.

Much of the unpaid payroll tax debt has been outstanding for several years. As reflected in figure 2, our analysis of IRS records shows that over 60 percent of the unpaid payroll taxes was owed for tax periods from 2002 and prior years. Prompt collection action is vital because, as our previous work has shown, as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. Further, the continued accrual of interest and penalties on the outstanding federal taxes can, over time, eclipse the original tax obligation. Additionally, as discussed previously, IRS is statutorily limited in the length of time it has to collect unpaid taxes--generally 10 years from the date the tax debt is assessed. Once that statutory period expires, IRS can no longer attempt to collect the tax. IRS records indicate that over $4 billion of unpaid payroll taxes will expire in each of the next several years because of the expiration of their statutory collection period.

Our audit of payroll tax cases identified several issues that adversely affect IRS's ability to prevent the accumulation of unpaid payroll taxes and to collect these taxes. Foremost is that IRS's approach focuses on getting businesses--even those with dozens of quarters of payroll tax debt--to voluntarily comply. We found that IRS often either did not use certain collection tools, such as liens or trust fund recovery penalties (TFRPs), or did not use them timely, and that IRS's approach does not treat the business's unpaid payroll taxes and the responsible party's penalty assessments as a single collection effort. Additionally, although the collection of unpaid payroll taxes is one of its top priorities, IRS did not have performance measures to evaluate the collection of unpaid payroll taxes or the related TFRP assessments. Finally, we found some state revenue agencies are using tools to collect or prevent the further accumulation of unpaid taxes that IRS is either legally precluded from using or that it has not yet developed.
We have previously reported that IRS subordinates the use of some of its collection tools in order to seek voluntary compliance and that IRS's repeated attempts to gain voluntary compliance often result in minimal or no actual collections. Our audit of businesses with payroll tax debt and our analysis of businesses with multiple quarters of unpaid payroll taxes again found revenue officers continuing to work with a business to gain voluntary compliance while the business continued to accumulate unpaid payroll taxes. For example, our analysis of IRS's inventory of unpaid payroll taxes found that over 10,000 businesses owed payroll taxes for 20 or more quarters--5 years or more. Failing to take more aggressive collection actions against businesses that repeatedly fail to remit payroll taxes has a broader impact than just on a single business. If left to accumulate unpaid payroll taxes, businesses can gain an unfair business advantage over their competitors at the expense of the government. As we have found previously, in at least one of our case study businesses, IRS determined that the non-compliant business obtained contracts through its ability to undercut competitors, in part because of the business's reduced costs associated with its non-payment of payroll taxes. Similarly, in another case the revenue officer noted that the business was underbidding on contracts and was using unpaid payroll taxes to offset the business's losses.

Failure to take prompt actions to prevent the further accumulation of unpaid payroll taxes can also have a detrimental impact on the business and the associated owners/officers. As we have reported in the past, non-compliant businesses can accumulate substantial unpaid taxes as well as associated interest and penalties. Over time, these unpaid balances may compound beyond the business's ability to pay--ultimately placing the business and responsible officers in greater financial jeopardy.

IRS is legally precluded from taking collection actions during certain periods, such as when a tax debtor is involved in bankruptcy proceedings. During those periods, even though IRS may not be able to take collection actions, tax debtors may continue to accumulate additional tax debt. However, IRS's focus on voluntary compliance has negatively affected IRS's collection efforts for years. Our current findings on IRS's focus on voluntary compliance are similar to those of a study performed by the Treasury Inspector General for Tax Administration (TIGTA) 8 years ago. In that study, TIGTA found that revenue officers were focused on IRS's customer service goals and therefore were reluctant to take enforcement actions. In another study performed 3 years ago, TIGTA reported that IRS allowed tax debtors to continue to delay taking action on their tax debt by failing to take aggressive collection actions. TIGTA found that IRS did not take timely follow-up action in half of the cases for which tax debtors missed specific deadlines. One official from a state taxing authority told us that the state benefited from IRS's approach because it allowed the state to collect its unpaid taxes from business tax debtors before IRS. In one of our case study businesses, although IRS successfully levied some financial assets, a mortgage holder and state and local officials seized the business's assets to satisfy the business's debts.
IRS has recently strengthened its procedures to include some specific steps for dealing with businesses that repeatedly fail to remit payroll taxes and to stress the importance of preventing the further accumulation of such payroll taxes. We found that for payroll tax debt, one of IRS's highest collection priorities, IRS does not always file tax liens to protect the government's interest in property, and when IRS does so, it does not always do so timely. Our analysis of IRS's inventory of unpaid payroll taxes as of September 30, 2007, found that IRS had not filed liens on over one-third of all businesses with payroll tax debt cases assigned to the field for collection efforts--over 140,000 businesses. IRS guidance states that filing a lien is extremely important to protect the interests of the federal government, creditors, and taxpayers in general, and that the failure to file and properly record a federal tax lien may jeopardize the federal government's priority right against other creditors. A 2005 IRS study of TFRP cases found that cases where a lien had been filed had higher average payments--about a third higher--than cases where a lien had not been filed.

Failure to file a lien can have a negative impact on tax collections. For example, IRS assessed the business owner in one of our case studies a TFRP to hold the owner personally liable for the withheld payroll taxes owed by the business. However, IRS did not assign the assessment to a revenue officer for collection and thus did not file a lien on the owner's property. Because there was no lien filed, the owner was able to sell a vacation home in Florida, and IRS did not collect any of the unpaid taxes from the proceeds of the sale.

As in the case above, IRS's case assignment policy can delay the filing of liens for payroll tax cases. Because payroll tax cases are one of IRS's top collection priorities, once the notification process is complete, IRS routes these cases to revenue officers for face-to-face collection action instead of routing them to the Automated Collection System (ACS) for telephone contact. However, IRS generally places cases in a queue of cases awaiting assignment until a revenue officer is available to work the cases. Cases can be in the queue for extended periods of time awaiting assignment to a revenue officer. For the period that a case is in the queue, revenue officers are not assigned to file liens and take other collection actions. Our analysis found that for all payroll tax cases in the queue awaiting assignment as of September 30, 2007, over 80 percent did not have a lien filed. As a result, lower priority tax cases that go through the ACS process may have liens filed faster than the higher priority payroll tax cases.

IRS has a powerful tool to hold responsible owners and officers personally liable for unpaid payroll taxes through assessing a TFRP. However, we found that IRS often takes a long time to determine whether to hold the owners/officers of businesses personally liable and, once the decision is made, to actually assess penalties against them for the taxes. In reviewing a sample of TFRP assessments selected as part of our audit of IRS's fiscal year 2007 financial statements, we found that from the time the tax debt was assessed against the business, IRS took over 2 years, on average, to assess a TFRP against the business owners/officers.
We found that revenue officers, once assigned to a payroll tax case, took an average of over 40 weeks to decide whether to pursue a TFRP against business owners/officers and an additional 40 weeks on average to formally assess the TFRP. For 5 of the 76 sampled cases, we found that IRS took over 4 years to assess the TFRP. We did not attempt to identify how frequently IRS assesses a TFRP against responsible owners/officers. However, in TIGTA's 2005 report on its review of IRS's collection field function, it noted that revenue officers did not begin the TFRP process in over a quarter of the cases it reviewed. The timely assessment of TFRPs is an important tool in IRS's ability to prevent the continued accumulation of unpaid payroll taxes and to collect these taxes. Once a TFRP is assessed, IRS can take action against both the owners/officers and the business to collect the withheld taxes. For egregious cases, such as some of those in our case studies, taking strong collection actions against the owners' personal assets may be the best way to either get the business to become tax compliant or to convince the owners to close the non-compliant business, thus preventing the further accumulation of unpaid taxes. Failure to timely assess a TFRP can result in businesses continuing to accumulate unpaid payroll taxes and lost opportunities to collect these taxes from the owners/officers of the businesses. For example, one business we reviewed had tax debt from 2000, but IRS did not assess a TFRP against the business's owner until the end of 2004. In the meantime, the owner was drawing an annual salary of about $300,000 and had sold property valued at over $800,000. Within 1 month of IRS's assessing the TFRP, the owner closed the business, which by then had accumulated about $3 million in unpaid taxes. In September 2007, IRS implemented new requirements to address the timeliness of TFRP assessments. Under the new policy, IRS is now requiring revenue officers to make the determination on whether to pursue a TFRP within 120 days of the case's being assigned and to complete the assessment within 120 days of the determination. However, the revised policy maintains a provision that allows the revenue officer to delay the TFRP determination. Additionally, the policy does not include a requirement for IRS to monitor the new standards for assessing TFRPs. IRS assigns a higher priority to collection efforts against the business with unpaid payroll taxes than against the business's responsible owners/officers. Further, it treats the TFRP assessments as a separate collection effort unrelated to the business tax debt, even though the business payroll tax liabilities and the TFRP assessments are essentially the same tax debt. As a result, once the revenue officer assigned to the business payroll tax case decides to pursue a TFRP against the responsible owners/officers, the TFRP case does not automatically remain with this revenue officer. Accordingly, IRS often does not assign the TFRP assessment to a revenue officer for collection, and when it does, it may not assign it to the same revenue officer who is responsible for collecting unpaid taxes from the business. In reviewing the sample of TFRP assessments selected as part of our audit of IRS's fiscal year 2007 financial statements, we found that half of the TFRP assessments had not been assigned to a revenue officer by the time of our audit. 
Of those that had been assigned, over half of the TFRP assessments had not been assigned to the same revenue officer who was working the related business case. Assigning the collection efforts against the business and the TFRP assessments to different revenue officers can allow the responsible owners/officers to continue using the business to fund a personal lifestyle while not remitting payroll taxes. For example, in one of our case studies the owner was assessed a TFRP, but continued to draw a six-figure income while not remitting amounts withheld from the salaries of the business's employees. For egregious cases, taking strong collection actions against the owner's personal assets may be a more effective means of either getting the business to be compliant or convincing the owner to close the non-compliant business to prevent the further accumulation of unpaid payroll taxes. IRS collection officials stated that attempting to assign the same revenue officer both the TFRP assessments and the business payroll tax case for collection would overload the revenue officers with work and result in fewer high-priority payroll tax cases being worked. This view, however, stems from separating the collection efforts of the business and the individual and not considering the business's unpaid payroll taxes and the TFRP assessment as a single case. In essence, the TFRP assessment is the same tax debt as the business's payroll tax debt; the assessment is merely another means through which IRS can attempt to collect the monies withheld from a business's employees for income, Social Security, and Hospital Insurance taxes that were not remitted to the government. This view that the payroll tax debt and the TFRP assessment are essentially the same tax debt is reinforced by IRS's practice of crediting all related parties' accounts whenever a collection is made against either assessment. Prior studies have found that IRS's practice of assigning TFRP assessments a lower priority than business cases has not been very successful in collecting the unpaid taxes. In its own 2005 study of TFRP cases, IRS reported that it had assessed over $11.9 billion in TFRP assessments (including interest) between 1996 and 2004, yet had collected only 8 percent of those assessments. IRS policies have not resulted in effective steps being taken against egregious businesses to prevent the further accumulation of unpaid payroll taxes. Our audit found thousands of businesses that had accumulated more than a dozen tax quarters of unpaid payroll tax debt. IRS policies state that revenue officers must stop businesses from accumulating payroll tax debt and instruct revenue officers to use all appropriate remedies to bring the tax debtor into compliance and to immediately stop any further accumulation of unpaid taxes. IRS policies further state that if routine case actions have not stopped the continued accumulation of unpaid payroll taxes, revenue officers should consider seizing the business's assets or pursuing a TFRP against the responsible parties. However, IRS successfully pursued fewer than 700 seizure actions in fiscal year 2007. We were unable to determine how many of those seizure actions were taken against payroll tax debtors. Regarding TFRPs, as discussed previously, IRS does not always assess the TFRPs timely, and IRS does not prioritize the TFRP assessment against the owner as highly as it does the unpaid payroll taxes of the business. 
This can result in little collection action being taken against the parties responsible for the failure to remit the withheld payroll taxes. When a business repeatedly fails to comply after attempts to collect, IRS policies state that the business should be considered an egregious offender and IRS should take aggressive collection actions, including threats of legal action that can culminate in court-ordered injunctions for the business to stop accumulating unpaid payroll taxes or face closure. However, IRS obtained fewer than 10 injunctions in fiscal year 2007 to stop businesses from accumulating additional payroll taxes. Revenue officers we spoke to believe the injunctive relief process to be too cumbersome to use effectively in its present form. One revenue officer stated that because of the difficulty in carrying out the administrative and judicial process to close a business through injunctive relief, he had not attempted to take such action in over a decade. IRS is taking some action to attempt to address this issue by piloting a Streamline Injunctive Relief Team to identify cases and develop procedures to quickly move a case from administrative procedures to judicial actions. These procedures will be used for the most egregious taxpayers when the revenue officer can establish that additional administrative procedures would be futile. Similar to IRS, all of the state tax collection officials we contacted told us that their revenue department's primary goal was to prevent businesses from continuing to flout tax laws and to stop them from accumulating additional tax debt. These officials said that after a business had been given a period of time to comply with its current tax obligations and begin paying past taxes, state tax collection officials changed their focus to one of "stopping the bleeding." As such, some have made the policy decision to seek to close non-compliant businesses. To the extent IRS is not taking effective steps to deal with egregious payroll tax offenders that repeatedly fail to comply with the tax laws, businesses may continue to withhold taxes from employees' salaries but divert the funds for other purposes. Although IRS has made the collection of unpaid payroll taxes one of its top priorities, IRS has not established goals or measures to assess its progress in collecting or preventing the accumulation of payroll tax debt. Performance measurement and monitoring, however, support resource allocation and other policy decisions to improve an agency's operations and the effectiveness of its approach. Performance monitoring can also help an agency by measuring the level of activity (process), the number of actions taken (outputs), or the results of the actions taken (outcomes). Although IRS does have a broad array of operational management information available to it, we did not identify any specific performance measures associated with payroll taxes or TFRP assessments. While IRS has caseload and other workload reports for local managers (to measure process and outputs), these localized reports are not rolled up to a national level to allow IRS managers to monitor the effectiveness or efficiency of its collection and enforcement efforts. These operational reports do contain information about unpaid payroll and TFRP case assignments, but they are used primarily to monitor workload issues, not program effectiveness. 
For example, IRS has developed some reports that identify "over-aged" cases (those that have not been resolved within a certain length of time) and that identify businesses that continue to accrue additional payroll tax debt, but these reports are designed for workload management. To report on its outcomes or the effectiveness of its operations, IRS reports on overall collection statistics and presents that information in the Management Discussion and Analysis section of its annual financial statement and in its IRS Data Book. However, IRS does not specifically address unpaid payroll taxes as a part of this reporting. IRS officials stated that they do not have specific lower-level performance measures that target collection actions or collection results for unpaid payroll taxes or TFRP assessments. Such performance measures could be useful to serve as an early warning system to management or as a vehicle for improving IRS's approach or actions. In our discussions with IRS revenue officers concerning some of the egregious payroll tax offenders included in our case studies, the officers noted that having certain additional tools available to them could allow them to more effectively deal with recalcitrant businesses. In discussions with a number of state tax collection officials, we found that several states had already developed and were effectively using the types of tools IRS revenue officers said would be beneficial to them. For example, while the Internal Revenue Code prohibits IRS from publicly disclosing federal tax information without taxpayer consent, an increasing number of states--at least 19, including New Jersey, Connecticut, Indiana, Louisiana, and California--are seeking to increase tax collections by publicizing the names of those with delinquent tax bills. In California, a recent law requires the state to annually publish the names of the top 250 personal and corporate state tax debtors with at least $100,000 in state tax debt. Public disclosure of tax debtors can be very effective. Just threatening to publish the names of tax offenders can bring some into compliance, while actually appearing on a tax offender list can bring about societal pressure to comply. In California, 26 tax debtors threatened with public disclosure stepped forward to settle their tax debts and thus avoided appearing on the list; in Connecticut, the state claims the public disclosure of tax debtors has resulted in over $100 million in collections from the first 4 years of the program. The potential for public disclosure may also encourage greater tax compliance among the general population of taxpayers, who may wish to avoid appearing on such a list. As another example, while IRS has the authority to levy a tax debtor's income and assets when there is a demand for payment and there has been a refusal or an inability to pay by the taxpayer subject to the levy, IRS officials stated that they often have difficulty using levies to collect unpaid payroll taxes. They noted that a levy may reach a bank account at a point in time when little or no money is available. They also noted, and in our case studies we found, that IRS sometimes has difficulty identifying which banks or financial institutions a tax debtor is using. This is the case because tax debtors will often change financial institutions to avoid IRS levies. However, several states use legal authorities to assist in identifying levy sources. 
States such as Kentucky, Maryland, Massachusetts, Indiana, and New Jersey have enacted legislation for matching programs or entered into agreements with financial institutions to participate in matching bank account information against state tax debts. This matching allows states to more easily identify potential levy sources and simplifies the financial institutions' obligations to respond to multiple levies. IRS is working with at least one state to investigate the potential for this matching, but IRS collection officials told us that IRS has not sought legislation or agreements with financial institutions. Our analysis of unpaid payroll tax debt found substantial evidence of abusive and potentially criminal activity related to the federal tax system by businesses and their owners or officers. We identified tens of thousands of businesses that filed 10 or more tax returns acknowledging that the business owed payroll taxes, yet failed to remit those taxes to the government. While much of the tax debt may be owed by those with little ability to pay, some abuse the tax system, willfully diverting amounts withheld from their employees' salaries to fund their business operations or their own personal lifestyle. In addition to owing payroll taxes for multiple tax periods and accumulating tax debt for years, many of the owners and officers of these businesses are repeat offenders. We identified owners who were involved in multiple businesses, all of which failed to remit payroll taxes as required. In total, IRS records indicate that over 1,500 owners/officers had been found responsible for non-payment of payroll taxes at 3 or more businesses and that 18 business owners/officers had been found responsible for not paying the payroll taxes for over 12 separate businesses. It should be noted that these numbers represent only those responsible individuals who IRS found acted willfully in the non-payment of the businesses' payroll taxes and who were assessed TFRPs--these figures do not represent the total number of repeat offenders with respect to non-payment of payroll taxes. Table 2 shows the number of individuals with TFRPs for two or more businesses. Our audits and investigations of 50 case study businesses with tax debt found substantial evidence of abuse and potential criminal activity related to the tax system. All of the case studies involved businesses that had withheld taxes from their employees' paychecks and diverted the money to fund business operations or for personal gain. Table 3 shows the results of 12 of the case studies we performed. Businesses that withhold money from their employees' salaries are required to hold those funds in trust for the federal government. Willful failure to remit these funds is a breach of that fiduciary responsibility and is a felony offense. A business's repeated failure to remit payroll taxes to the government over long periods of time affects far more than the collection of the unpaid taxes. First, allowing businesses to continue not remitting payroll taxes affects the general public's perception regarding the fairness of the tax system, a perception that may result in lower overall compliance. Second, because of the failure of businesses to remit payroll taxes, the burden of funding the nation's commitments, including Social Security and Hospital Insurance Trust Fund payments, falls more heavily on taxpayers who willingly and fully pay their taxes. 
Third, the failure to remit payroll taxes can give the non-compliant business an unfair competitive advantage because that business can use those funds that should have been remitted for taxes to either lower overall business costs or increase profits. Businesses that fail to remit payroll taxes may also underbid tax-compliant businesses, causing compliant businesses to lose business and encouraging them to become non-compliant as well. Fourth, allowing businesses to continue accumulating unpaid payroll taxes has the effect of subsidizing their business operations, thus enriching tax abusers or prolonging the demise of a failing business. Fifth and last, in an era of growing federal deficits and amidst reports of an increasingly gloomy fiscal outlook, the federal government cannot afford to allow businesses to continue to accumulate unpaid payroll tax debt with little consequence. For these reasons, it is vital that IRS use the full range of its collection tools against businesses with significant payroll tax debt and have performance measures in place to monitor the effectiveness of its actions to collect unpaid payroll taxes and prevent their further accumulation. Businesses that continue to accumulate unpaid payroll tax debt despite efforts by IRS to work with them are demonstrating that they are either unwilling or unable to comply with the tax laws. In such cases, because the decision to not file or remit payroll taxes is made by the owners or responsible officers of a business, IRS should consider strong collection action against both the business and the responsible owners or officers to prevent the further accumulation of unpaid payroll taxes and to collect those taxes that the business and owners have a legal and fiduciary obligation to pay. IRS faces difficult challenges in balancing the use of aggressive collection actions against taxpayer rights and individuals' livelihoods. However, to the extent IRS does not pursue aggressive collection actions against businesses with multiple quarters of unpaid payroll taxes, there is a significant concern as to whether IRS is serving the best interests of the federal government, the employees of the businesses involved, the perceived fairness of the tax system, or overall compliance with the tax laws. Therefore, it is incumbent upon IRS to revise its approach to include the appropriate use of the full range of available enforcement tools against egregious offenders and to develop performance measures for its efforts to prevent their businesses from accumulating tax debt. It is also incumbent upon IRS to proactively seek out and appropriately implement other tools (particularly those with demonstrated success at the state level) to enhance IRS's ability to prevent the further accumulation of unpaid payroll taxes and to collect those taxes that are owed. Although IRS does need to work with businesses to try to gain voluntary tax compliance, for businesses with demonstrated histories of egregious abuse of the tax system, IRS needs to alter its approach to focus on stopping the further accumulation of unpaid payroll tax debt by those businesses. Our companion report being released today contains six recommendations to IRS to address issues regarding its ability to prevent the further accumulation of unpaid payroll taxes and collect such taxes. 
The recommendations include (1) developing a process and performance measures to monitor collection actions taken by revenue officers against egregious payroll tax offenders and (2) developing procedures to more timely file notice of federal tax liens against egregious businesses and assess penalties to hold responsible parties personally liable for not remitting withheld payroll taxes. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee and subcommittee have at this time. For future contacts regarding this testimony, please contact Steven J. Sebastian at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO previously reported that federal contractors abuse the tax system with little consequence. While performing those audits, GAO noted that much of the tax abuse involved contractors not remitting to the government payroll taxes that had been withheld from employees' salaries. As a result, GAO was asked to review the Internal Revenue Service's (IRS) processes and procedures to prevent and collect unpaid payroll taxes and determine (1) the magnitude of unpaid federal payroll tax debt, (2) the factors affecting IRS's ability to enforce compliance or pursue collections, and (3) whether some businesses with unpaid payroll taxes are engaged in abusive or potentially criminal activities with regard to the federal tax system. To address these objectives, GAO analyzed IRS's tax database, performed case study analyses of payroll tax offenders, and interviewed collection officials from IRS and several states. IRS records show that, as of September 30, 2007, over 1.6 million businesses owed over $58 billion in unpaid federal payroll taxes, including interest and penalties. Some of these businesses took advantage of the existing tax enforcement and administration system to avoid fulfilling or paying federal tax obligations--thus abusing the federal tax system. Over a quarter of these unpaid payroll taxes are owed by businesses with more than 3 years (12 tax quarters) of payroll tax debt. Some of these business owners repeatedly accumulated tax debt from multiple businesses. For example, IRS found over 1,500 individuals to be responsible for non-payment of payroll taxes at three or more businesses, and 18 were responsible for not remitting payroll taxes for a dozen different businesses. Although IRS has powerful tools at its disposal to prevent the further accumulation of unpaid payroll taxes and to collect the taxes that are owed, IRS's current approach does not provide for their full, effective use. IRS's overall approach to collection focuses primarily on gaining voluntary compliance--even for egregious payroll tax offenders--a practice that can result in minimal or no actual collections for these offenders. Additionally, IRS has not always promptly filed liens against businesses to protect the government's interests and has not always taken timely action to hold responsible parties personally liable for unpaid payroll taxes. GAO selected 50 businesses with payroll tax debt as case studies and found extensive evidence of abuse and potential criminal activity in relation to the federal tax system. The business owners or officers in these case studies diverted payroll tax funds for their own benefit or to help fund business operations.
In general, the term "gatekeeping" refers to the responsibilities and activities that entities--VA, Education, and Labor--undertake to determine whether postsecondary educational and training programs and institutions meet federal requirements. Although the standards, procedures, and methods used by the entities may differ, the overriding purpose of gatekeeping remains the same regardless of the programs or agencies involved. To assess the overlap that occurs, it is important to first understand each of the three agencies' particular gatekeeping approaches. VA administers a number of programs designed to assist individuals in gaining access to postsecondary education or training for a specific occupation. VA generally provides its assistance in the form of payments to veterans, service persons, reservists, and certain spouses and dependents. Before an individual entitled to VA education assistance can obtain money for an education or training program, the program must be approved by an SAA, or by VA in those cases in which an SAA has not been contracted to perform the gatekeeping work. In all, 61 SAAs existed in the 50 states, the District of Columbia, and Puerto Rico during 1994. SAAs are responsible both for determining which courses should be approved and for ensuring that schools are complying with their established standards relating to the course or courses that have been approved. According to a VA official, SAAs are generally expected to make an annual supervisory visit to each school with enrolled education beneficiaries. In fiscal year 1994, about 95 percent of SAA staff performed these primary functions for academic and vocational schools, with the remaining 5 percent covering apprenticeship and other on-the-job training (OJT) programs. Contract costs paid to each SAA by VA primarily represent reimbursements to the state for salaries and travel and an allowance for administrative expenses. For budgetary purposes, costs are allocated using formula-driven guidelines and are largely dependent on such factors as projected school or training program workloads, state employee salary schedules, and the distances SAA officials must travel to inspect or supervise schools or training programs. SAA contracts have been the focus of cost-cutting activity in recent years. VA officials said that before fiscal year 1988, VA was spending about $17 to $18 million annually for SAA contracts. Starting in fiscal year 1988, the Congress set an annual funding cap of $12 million. For fiscal year 1994, the 61 SAAs requested VA funding totaling $14.4 million but received $12 million. These requests were to support a total of 164 professional staff in SAAs whose staffing ranged from 12.3 positions to less than 0.5 position. For fiscal year 1995, the Congress increased the cap to $13 million. Most of the aid associated with Education's programs is provided in the form of grants and guaranteed student loans under title IV of the Higher Education Act of 1965, as amended. In fiscal year 1994, postsecondary student aid administered by Education totaled more than $32 billion, with more than 6.6 million students receiving some form of assistance. Education's approach involves activities conducted by a gatekeeping "triad" composed of accrediting agencies, state licensing agencies, and Education itself. 
In order for students attending a school to receive title IV financial aid, the school must be (1) accredited by an entity recognized for that purpose by the Secretary of Education, (2) licensed or otherwise legally authorized to provide postsecondary education in the state in which it is located, and (3) certified to participate in federal student aid programs by Education. Each part of the gatekeeping triad has its own responsibilities. Although specific responsibilities differ, parts of the triad may be evaluating similar areas, such as aspects of a school's curriculum, students' progress, or the school's financial capability to participate in title IV programs. Accreditation is an essential step in Education's gatekeeping process, in that unaccredited schools or programs are ineligible to participate in title IV programs. The process of accreditation is a nongovernmental peer evaluation that is performed by more than 90 accrediting associations of regional or national scope. Each accrediting body applies a relevant set of standards to the institution, department, or program under review. Those that meet the standards become accredited. To participate in title IV programs, each educational institution must also have legal authority to operate in the state in which it is located. At the state level, licensing or other approval is conducted by a state agency. Each of the states has its own agency structure, and each state can choose its own set of standards. Education's own responsibilities include determining the administrative and financial capacity of schools to participate in title IV programs and monitoring the performance of accrediting and licensing bodies. In all, more than 7,500 postsecondary institutions were certified to participate in title IV student aid programs by Education in 1994. Apprenticeship programs are a focus of Labor's gatekeeping activities. Under the National Apprenticeship Act of 1937, Labor establishes and promotes labor standards to safeguard the welfare of apprentices. Eligibility for various federal programs, including VA education assistance to veterans attending apprenticeship programs, is conditioned upon conformance to these standards. The standards require, for example, that an apprenticeship program (1) provide for periodic review and evaluation of the apprentice's progress in job performance and related instruction and (2) prepare appropriate progress records documenting such reviews. Labor's Bureau of Apprenticeship and Training determines whether a program conforms to Labor's standards. If the program is found to be in conformance, it can be "registered," either by Labor or by a state apprenticeship agency or council that Labor has recognized. After examining gatekeepers' activities, comparing their assessment standards, and conducting other analyses, we determined that most SAA activity overlapped work done by others. More specifically, an estimated 87 percent of SAA staff time, costing about $10.5 million of the $12 million spent by VA in fiscal year 1994, was spent reviewing and approving courses at academic and vocational schools that were also accredited by Education-approved agencies (see fig. 1). An estimated 3 percent of SAA staff time, costing about $400,000, was spent assessing apprenticeships, but we could not readily determine whether this activity overlapped Labor's efforts. The remaining portion of SAA staff time, costing about $1.1 million, was spent on gatekeeping functions that did not overlap the efforts of other entities. 
Most SAA activity occurred at academic and vocational schools that had been accredited by nationally recognized accrediting agencies--part of the activity of Education's gatekeeping triad. In fiscal year 1994, SAAs reviewed and approved 6,294 academic and vocational schools that had been accredited by accrediting agencies. These schools were also potentially subject to the two other parts of Education's gatekeeping triad. We examined how likely it was that these schools had also been certified by Education itself. We selected a judgmental sample of five states (Mississippi, Vermont, Washington, West Virginia, and Wyoming) and the District of Columbia. For these six jurisdictions, we obtained (1) a list from VA of 273 SAA-approved vocational and academic schools that had also been accredited and (2) a list from Education of all schools that were Education-certified. In all, 255 (93 percent) of the schools on the VA list were also Education-certified. While SAA reviews may differ somewhat from those conducted by Education gatekeepers, SAAs and Education use similar standards for approving education and training programs. Both VA and Education base their standards for approving or certifying schools and courses on federal laws and regulations. We identified 15 key standards in the law and regulations that academic and vocational schools must meet to be approved by SAAs (see app. IV). We compared these key standards with those used by accrediting bodies, states, and Education and found them to be similar (see app. V). Examples follow. A school seeking SAA approval must have a policy that gives veterans appropriate credit for previous education and training. Of the seven accrediting agencies whose standards we reviewed, five required schools to have such a policy, and the policies were similar. Schools seeking SAA approval must also demonstrate that they have sufficient financial resources to ensure their proper operation and to fulfill their commitment to provide quality education for their students. Both Education and accreditation agencies had similar requirements concerning financial resources. The possibility exists that SAA reviews of apprenticeship programs also overlap Labor's gatekeeping efforts. The law requires SAA approval of an apprenticeship if a student in the program is to receive VA educational assistance. Before approving such a program, an SAA must determine that the training establishment and its apprentice courses are in conformance with Labor's standards of apprenticeship. However, VA regulations do not require that an SAA-approved apprenticeship program be registered by Labor. While the potential for overlap exists, we were unable to determine if it actually occurred because data were not available to determine whether SAA-approved programs were also registered by Labor. About 9 percent of SAAs' staff effort did not overlap other gatekeeping efforts. This portion of SAA activity fell into two categories: approval of unaccredited schools and programs, and approval of OJT programs other than apprenticeships. Unaccredited institutions. Under the law, SAAs may approve courses of study at unaccredited institutions, thereby making veterans eligible to receive assistance for attending. By contrast, Education's regulations generally require schools to be accredited before they are certified, thereby making students eligible for title IV programs. As of September 30, 1994, SAAs had approved courses of study for veterans at 534 unaccredited academic and vocational schools. 
The SAA staff that reviewed and approved these schools--about 7 percent of SAA staff--did not duplicate Education's efforts. Other OJT programs. SAAs also review and approve other OJT programs that do not qualify as apprenticeship programs and that are not subject to review and registration by Labor. SAAs' efforts to assess other OJT programs thus did not overlap Labor's gatekeeping efforts. We estimate that for fiscal year 1994, these approvals took about 2 percent of SAA staff time. The substantial amount of overlap that occurred between SAA and other gatekeepers' efforts raises questions about whether SAA efforts should continue at their current level. We estimated that 87 percent of the approval effort expended by SAAs related to schools and programs also subject to accreditation by Education-approved entities. Also, in our review of six jurisdictions, 93 percent of the accredited schools were also certified by Education to participate in title IV student aid programs. School certification involves applying standards that are similar to those used by SAAs. On its face, an SAA review of courses of study at an Education-certified school would appear to add only marginal value. The same may be true for SAA reviews of apprenticeship programs, though the lack of information precludes us from determining if overlap exists with Labor's oversight. We believe an opportunity exists for reducing federal expenditures by over $10 million annually through the elimination of overlapping SAA gatekeeping efforts. VA and SAA efforts would be better focused on such activities as reviewing courses offered by unaccredited schools, for which no other form of federal oversight currently exists. The Congress may wish to consider whether it is necessary for VA to continue contracting with SAAs to review and approve educational programs at schools that have already been reviewed and certified by Education. We requested comments on a draft of this report from the Secretaries of Education and Veterans Affairs. Education provided several clarifying and technical suggestions, which we incorporated where appropriate. In general, VA said that it has reservations about relying upon Education's gatekeeping system to ensure the integrity and quality of education and training programs made available to VA education program beneficiaries. VA's two principal comments were that (1) the draft report did not elaborate on the specific mechanisms or organizational elements within Education that are in place to ensure that the requirements of title 38 of the U.S. Code are met and (2) it is questionable whether accreditation, in the absence of funding for the state postsecondary review entities (SPRE) program, will accomplish the approval, monitoring, and supervisory requirements of the laws governing VA education programs. In the report, we do discuss Education's gatekeeping triad composed of accrediting agencies, state licensing agencies, and Education itself, which performs the same basic function as SAAs for many of the same schools. Under title 38, the essential responsibility of SAAs is to determine which courses should be approved and to ensure that schools are complying with their established standards relating to the courses that have been approved before an individual entitled to VA education assistance can obtain money for an education or training program. 
Education's gatekeeping triad does similar work: assessing whether schools and training programs offer education of sufficient quality for students to receive federal financial assistance under title IV of the Higher Education Act, as amended. In fiscal year 1994, the Department of Education provided more than $32 billion in financial aid to 6.6 million students. The SPRE program has never been fully operational, and only nine states' SPREs had been approved by Education as of September 30, 1995. Thus, the elimination of SPRE funding should have little impact on the operation of the gatekeeping triad. In addition, before the SPRE program was initiated, the majority of education and training programs approved by SAAs were offered by schools that were also accredited and certified by Education's gatekeeping system. And, as illustrated in this report, we found that both VA and Education gatekeepers apply similar standards in determining educational program acceptability at the same schools. VA also said that the role states and SAAs perform in approving education and training programs should continue and that it believes that such a function should not be centralized at the federal level. However, as noted in our report, just as the SAA functions are not totally centralized at the federal level, neither are the gatekeeping efforts of Education's triad, which relies on the nonfederal work of accrediting entities and state licensing bodies to perform an important portion of the school approval work. The full text of VA's comments appears in appendix VI of this report. Copies of this report are being sent to the Chairman and Ranking Minority Member, House Committee on Veterans' Affairs; the Secretaries of Veterans Affairs, Education, and Labor; appropriate congressional committees; and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions regarding this report. Major contributors include Joseph J. Eglin, Jr., Assistant Director; Charles M. Novak; Daniel C. Jacobsen; and Robert B. Miller. To determine the functions of SAAs, we reviewed various VA and SAA documents, including regulations, policies, procedures, contracts, budget submissions, training manuals, and congressional testimony. We also held discussions with VA, SAA, and National Association of State Approving Agencies officials. On the basis of these efforts and additional discussions with officials from Education and Labor, we confirmed that the work of Education and Labor gatekeepers would be most appropriate to compare with SAA gatekeeping work. As an indicator of overlapping or duplicative functions, we analyzed SAAs' gatekeeping activities for fiscal year 1994 to determine the extent that schools with SAA-approved courses of study were also reviewed as part of Education's gatekeeping system. Since much of the SAA data we needed for analysis were not centrally available from VA, the VA central office gathered the information we requested from its regional offices and provided it to us. We did not verify the accuracy of this information. VA was unable to readily provide a listing of SAA-approved apprenticeship programs or to determine whether such approved programs were also registered by Labor. Therefore, we had no basis on which to determine the existence or the extent of overlapping functions between SAAs and Labor for apprenticeship programs. 
As an indicator of the similarities between Education and VA gatekeeping work, we identified, from the law and VA regulations, key standards used by SAAs in reviewing schools and educational courses and compared them with standards used by Education in evaluating schools for participation in title IV programs. The focus of our review was overlapping and duplicative functions between SAAs and other entities; we were not asked to analyze the effectiveness of these functions. SAAs' approval work supports VA's largest education benefits programs: the Montgomery G.I. Bill, the Post-Vietnam Era Veterans' Educational Assistance, and the Survivors' and Dependents' Educational Assistance programs. In fiscal year 1994, these programs served 453,973 trainees at an estimated cost of about $1 billion (see table II.1), an average of $2,223 per trainee. The Montgomery G.I. Bill, which covers veterans, military personnel, and selected reservists, is the largest program and accounts for over 85 percent of the total funds expended. VA categorizes the types of training allowed under its educational programs as academic--degree and certain professional programs at institutions of higher learning; vocational--noncollege degree, vocational, or technical diploma or certificate programs; apprenticeship--OJT typically requiring a minimum of 2,000 hours' work experience supplemented by related classroom instruction, leading to journeyman status in a skilled trade; and other OJT--typically requiring supervised job instruction for a period of not less than 6 months and not more than 2 years, leading to a particular occupation. During fiscal year 1994, over 91 percent of VA education beneficiaries received academic training at institutions of higher learning (see fig. II.1). The focus of accrediting bodies is to determine the quality of education or training provided by the institutions or programs they accredit. In general, institutions of higher education are permitted to operate with considerable independence and autonomy. As a consequence, American educational institutions can vary widely in the character and quality of their programs. To ensure a basic level of quality, the practice of accreditation arose in the United States as a means of conducting nongovernmental peer evaluation of educational institutions and programs. Private educational associations of regional or national scope have adopted standards reflecting the qualities of a sound educational program and have developed procedures for evaluating institutions or programs to determine whether they are operating at basic levels of quality. Educational accreditation can be institutional or specialized. Institutional accreditation involves assessing the educational quality of an entire institution; this type of accreditation is used when each of an institution's parts is seen as contributing to the achievement of the institution's objectives. At the end of fiscal year 1994, the Secretary of Education recognized nine institutional accrediting commissions or agencies, covering six geographical regions of the country, as qualified to perform accreditation. In addition, eight national institutional accrediting commissions or agencies were recognized by the Secretary. Specialized, or programmatic, accreditation usually applies to particular programs, departments, or schools. Most of the specialized accrediting agencies review units within higher education institutions that have been institutionally accredited. 
At the end of fiscal year 1994, 74 specialized accrediting agencies were also recognized by the Secretary as qualified to perform accreditation throughout the nation. State licensing agencies authorize educational institutions to operate within their borders. Schools must be licensed by the state in which they are located in order to participate in the title IV program. In addition to licensing agencies, several states have created SPREs under the Higher Education Amendments of 1992, in part, to reduce program fraud and abuse. Under the 1992 amendments, the federal government provided funding for states that chose to create SPREs to produce a more active and consistent state role in the gatekeeping structure. SPREs are charged with developing review standards, in consultation with institutions in the state, for approval by the Secretary of Education. SPREs then use these standards as criteria for reviewing educational institutions referred to them by the Secretary. Those institutions that do not satisfy SPRE review standards may be required to comply or cease participating in title IV programs. The future of SPREs is in doubt because their funding was rescinded by the 104th Congress (P.L. 104-19). As the federal representative in the gatekeeping triad, the role of Education is varied. First, Education is responsible for determining the administrative and financial capacity of institutions to participate in title IV programs. It also determines whether each applicant school has met all eligibility requirements (including accreditation and state licensing) before it certifies the school for participation in title IV programs. Finally, Education monitors and oversees the responsibilities of the other two triad members by (1) recognizing and publishing a list of those accrediting agencies the Secretary believes are reliable authorities as to the quality of education or training offered by institutions of higher education and ensuring that these agencies have appropriate standards for conducting their accreditation work and (2) evaluating and approving (or disapproving) each SPRE's review standards and referring specific educational institutions to a SPRE for review. We identified from the law and regulations the following key standards that VA and SAAs used in reviewing education and training programs at participating schools. 1. Information in school catalogs is to cover such things as enrollment requirements; student progress (that is, grading and absences) and conduct; refunds; schedule of charges; course outlines; faculty; and school calendar. 2. Schools are to maintain adequate records of and enforce policies on student progress and conduct, including attendance records for nondegree programs. 3. Schools are to maintain records of and proper credit for students' previous education. 4. Schools or courses are to be accredited by a nationally recognized agency. Alternatively, course quality, content, and length are to be consistent with similar courses of other schools, with recognized accepted standards. 5. Course credit is to be awarded in standard semester or quarter hours or by college degree, or courses are to lead to a vocational objective and certificate of completion. 6. Space, equipment, facilities, and instructional material should be adequate. 7. Schools should have a sufficient number of adequately educated and experienced personnel. 8. Schools' personnel are to be of good reputation and character. 9. Schools are to be financially sound. 10. 
Schools should maintain a pro rata refund policy for student tuition and charges. 11. Schools' advertising, sales, and enrollment practices should not be erroneous, deceptive, or misleading. 12. Schools must comply with various government safety codes and regulations. 13. Schools' courses of study must have had a 2-year period of operation prior to enrollment of students receiving VA program benefits (except training establishment courses). 14. A school is precluded from approval when more than 85 percent of its enrolled students are having their costs paid in part by the school or VA. 15. Under certain conditions, courses offered at a school branch or extension may be approved in combination with courses offered at the parent facility. We reviewed the standards of seven accrediting bodies as representative of the 91 accreditors that were recognized nationally by the Secretary of Education at the end of fiscal year 1994. Four accrediting bodies were specialized program accreditors covering the entire nation, and three were institutional accreditors covering various regions of the country. The seven accrediting bodies' standards we reviewed follow. The Accrediting Bureau of Health Education Schools' Manual for Allied Health Education Schools, 5th edition, 1989. The Bureau accredits private and proprietary postsecondary health education institutions and specialized programs (primarily certificate or associate degree) for medical assistant and medical laboratory technician. The American Assembly of Collegiate Schools of Business' Achieving Quality and Continuous Improvement Through Self-Evaluation and Peer Review: Standards for Accreditation in Business Administration and Accounting, April 1994. The Assembly accredits any institutionally accredited collegiate institution offering degrees in business administration and accounting. The American Culinary Federation Educational Institute Accrediting Commission's Policies, Procedures, and Standards, April 1994. The Commission accredits programs that award postsecondary certificates or associate degrees in the culinary arts or food service management areas at accredited institutions or to nationally registered apprenticeship programs. The Computer Science Accreditation Commission of the Computing Sciences Accreditation Board's Criteria for Accrediting Programs in Computer Science in the United States, June 1992. The Board accredits 4-year baccalaureate programs in computer science. The Middle States Association of Colleges and Schools Commission on Higher Education's Characteristics of Excellence in Higher Education: Standards for Accreditation, February 1994 (five states and the District of Columbia, Puerto Rico, and the Virgin Islands). The Commission accredits degree-granting institutions of higher education. The North Central Association of Colleges and Schools Commission on Institutions of Higher Education's Handbook of Accreditation, September 1994 (19 states). The Commission accredits degree-granting institutions of higher education. The Northwest Association of Schools and Colleges Commission on Colleges' Accreditation Handbook, 1994 edition (seven states). The Commission accredits institutions, rather than specific programs, whose principal programs lead to formal degrees, associate and higher. We reviewed the state review standards for SPREs that are provided in federal regulation 34 C.F.R., part 667, subpart C. The standards we reviewed included the following rules and procedures that Education uses. 
To determine whether an educational institution qualifies in whole or in part as an eligible higher education institution under the Higher Education Act: 34 C.F.R., part 600. To determine a higher education institution's financial responsibility: 34 C.F.R. 668.15, and to determine its administrative capability: 34 C.F.R. 668.16. To ensure that accrediting agencies are, for the Higher Education Act and other federal purposes, reliable authorities as to the quality of education or training offered by the higher education institutions or programs they accredit: 34 C.F.R., part 602.
Pursuant to a congressional request, GAO determined the extent to which state approving agencies' (SAA) assessment activities overlap the efforts of other agencies. GAO found that: (1) $10.5 million of the $12 million paid to SAAs in 1994 was spent to conduct assessments already performed by the Department of Education; (2) these assessments involved reviews of accredited academic and vocational schools; (3) the remaining SAA assessment activities did not overlap the activities of other agencies, since they involved on-the-job training programs and unaccredited schools; and (4) although SAAs use evaluation standards that differ from those of other reviewing agencies, SAA activity should be reduced to schools and programs not subject to Department of Education approval.
CMS's method of adjusting payments to MA plans to reflect beneficiary health status has changed over time. Prior to 2000, CMS adjusted MA payments based only on beneficiary demographic data. From 2000 to 2003, CMS adjusted MA payments using a model that was based on a beneficiary's demographic characteristics and principal inpatient diagnosis. In 2004, CMS began adjusting payments to MA plans based on the CMS-HCC model. HCCs, which represent major medical conditions, are groups of medical diagnoses where related groups of diagnoses are ranked based on disease severity and cost. The CMS-HCC model adjusts MA payments more accurately than previous models because it includes more comprehensive information on beneficiaries' health status. The CMS-HCC risk adjustment model uses enrollment and claims data from Medicare FFS. The model uses beneficiary characteristic and diagnostic data from a base year to calculate each beneficiary's risk score for the following year. For example, CMS used MA beneficiary demographic and diagnostic data for 2007 to determine the risk scores used to adjust payments to MA plans in 2008. CMS estimated that 3.41 percent of 2010 MA beneficiary risk scores was attributable to differences in diagnostic coding between MA and Medicare FFS since 2007. To calculate this percentage, CMS estimated the annual difference in disease score growth between MA and Medicare FFS beneficiaries for three different groups of beneficiaries who were either enrolled in the same MA plan or in Medicare FFS from 2004 to 2005, 2005 to 2006, and 2006 to 2007. CMS accounted for differences in age and mortality when estimating the difference in disease score growth between MA and Medicare FFS beneficiaries for each period. Then, CMS calculated the average of the three estimates. To apply this estimate to 2010 MA beneficiaries, CMS multiplied the average annual difference in risk score growth by its estimate of the average length of time that 2010 MA beneficiaries had been continuously enrolled in MA plans over the previous 3 years, and CMS multiplied this result by 81.8 percent, its estimate of the percentage of 2010 MA beneficiaries who were enrolled in an MA plan in 2009 and therefore were exposed to MA coding practices. CMS implemented this same 3.41 percent adjustment in 2011 and has announced that it will do so again in 2012. We found that diagnostic coding differences exist between MA plans and Medicare FFS and that these differences had a substantial effect on payments to MA plans. We estimated that risk score growth due to coding differences over the previous 3 years was equivalent to $3.9 billion to $5.8 billion in payments to MA plans in 2010 before CMS's adjustment for coding differences. Before CMS reduced 2010 MA beneficiary risk scores, we found that these scores were at least 4.8 percent, and perhaps as much as 7.1 percent, higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS--that is, higher as a result of diagnostic coding differences (see fig. 1). Our estimates suggest that, after accounting for CMS's 3.4 percent reduction to MA risk scores in 2010, MA risk scores were still too high by at least 1.4 percent, and perhaps as much as 3.7 percent, equivalent to between $1.2 billion and $3.1 billion in payments to MA plans. Our two estimates were based on different assumptions about the impact of coding differences over time. We found that the annual impact of coding differences for our study population increased from 2005 to 2008. 
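The arithmetic behind CMS's adjustment calculation described earlier in this section is the product of three factors: the average annual difference in disease score growth, the average length of continuous MA enrollment, and the share of beneficiaries exposed to MA coding practices. The sketch below lays out that structure. It is illustrative only: the annual growth differences are the 2004 to 2007 figures CMS provided to us (cited later in this report), but the average enrollment length is an assumed placeholder, and the sketch omits details such as expressing the result as a percentage of average risk scores, so it does not reproduce CMS's 3.41 percent adjustment.

# Illustrative sketch of the structure of CMS's coding-difference adjustment.
# The enrollment-length value is an assumed placeholder; the result is not
# CMS's actual 3.41 percent figure.

# Annual differences in disease score growth between MA and Medicare FFS
# beneficiaries for 2004-05, 2005-06, and 2006-07 (figures CMS provided to us).
annual_growth_differences = [0.015, 0.015, 0.026]

# Assumed average number of years that 2010 MA beneficiaries had been
# continuously enrolled in MA plans over the previous 3 years (placeholder).
avg_years_enrolled = 2.4

# Share of 2010 MA beneficiaries who were enrolled in an MA plan in 2009 and
# were therefore exposed to MA coding practices (figure cited in the text).
share_exposed = 0.818

# Step 1: average the annual differences in disease score growth.
avg_annual_difference = sum(annual_growth_differences) / len(annual_growth_differences)

# Step 2: scale by average continuous enrollment and by the share of
# beneficiaries exposed to MA coding practices.
adjustment = avg_annual_difference * avg_years_enrolled * share_exposed

print(f"Illustrative coding-difference adjustment: {adjustment:.4f}")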
Based on this increasing trend through 2008, we projected risk score growth for the period 2008 to 2010 and obtained the higher estimate, 7.1 percent, of the cumulative impact of differences in diagnostic coding between MA and FFS. However, coding differences may reach an upper bound when MA plans code diagnoses as comprehensively as possible, so we produced the lower estimate of 4.8 percent by assuming that the impact of coding differences on risk scores remained constant and was the same from 2008 to 2010 as it was from 2007 to 2008. Plans with networks may have greater potential to influence the diagnostic coding of their providers, relative to plans without networks. Specifically, when we restricted our analysis to MA beneficiaries in plans with provider networks (HMOs, PPOs, and plans offered by PSOs), our estimates of the cumulative effect of differences in diagnostic coding between MA and FFS increased to an average of 5.5 or 7.8 percent of MA beneficiary risk scores in 2010, depending on the projection assumption for 2008 to 2010. Altering the year by which MA coding patterns had "caught up" to FFS coding patterns, from our original assumption of 2007 to 2005, had little effect on our results. Specifically, we estimated the cumulative impact of coding differences from 2005 to 2010 and found that our estimates for all MA plans increased slightly to 5.3 or 7.6 percent, depending on the projection assumption from 2008 to 2010. Our analysis estimating the cumulative impact of coding differences on 2010 MA risk scores suggests that this cumulative impact is increasing. Specifically, we found that from 2005 to 2008, the impact of coding differences on MA risk scores increased over time (see app. 1, table 1). Furthermore, CMS also found that the impact of coding differences increased from 2004 to 2008. While we did not have more recent data, the trend of coding differences through 2008 suggests that the impact of coding differences in 2011 and 2012 could be larger than in 2010. CMS analysis provided to us showed annual risk score growth due to coding differences to be 0.015 from 2004 to 2005, 0.015 from 2005 to 2006, 0.026 from 2006 to 2007, and 0.038 from 2007 to 2008. CMS's estimate of the impact of coding differences on 2010 MA risk scores was smaller than our estimate due to the collective impact of three methodological differences described below. For its 2011 and 2012 adjustments, the agency continued to use the same estimate of the impact of coding differences it used in 2010, which likely resulted in excess payments to MA plans. Three major differences between our and CMS's methodology account for the differences in our 2010 estimates. First, CMS did not include data from 2008. CMS initially announced the adjustment for coding differences in its advance notice for 2010 payment before 2008 data were available. While 2008 data became available prior to the final announcement of the coding adjustment, CMS decided not to incorporate 2008 data into its final adjustment. In its announcement for 2010 payment, CMS explained that it took a conservative approach for the first year that it implemented the MA coding adjustment. Incorporating 2008 data would have increased the size of CMS's final adjustment. Second, CMS did not take into account the increasing impact of coding differences over time. 
However, without 2008 data, the increasing trend of the annual impact of coding differences is less apparent, which supports the agency's decision to use the average annual impact from 2004 to 2007 as a proxy for the annual impact from 2007 to 2010. Third, CMS only accounted for differences in age and mortality between the MA and FFS study populations. We found that accounting for additional beneficiary characteristics explained more variation in disease score growth, and consequently improved the accuracy of our risk score growth estimate. CMS did not update its estimate in 2011 and 2012 with more current data, even though data were available. CMS did not include 2008 data in its 2010 estimate due to its desire to take a conservative approach for the first year it implemented a coding adjustment, and the agency did not update its estimate for 2011 or 2012 due to concerns about the many MA payment changes taking place. While keeping the same level of adjustment for 2011 and 2012 maintains stability and predictability in MA payment rates, it also allows the accuracy of the adjustment to diminish each year. Including more recent data would have improved the accuracy of CMS's 2011 and 2012 estimates because more recent data are likely to be more representative of the year in which an adjustment was made. By not updating its estimate with more current data, CMS also did not account for the additional years of cumulative coding differences in its estimate: 4 years for 2011 (2007 to 2011) and 5 years for 2012 (2007 to 2012). While CMS stated in its announcement for 2011 payment that it would consider accounting for additional years of coding differences, CMS officials told us they were concerned about incorporating additional years using a linear methodology because it would ignore the possibility that MA plans may reach a limit at which they could no longer code diagnoses more comprehensively. We think it is unlikely that this limit has been reached. Given the financial incentives that MA plans have to ensure that all relevant diagnoses are coded, the fact that CMS's 3.41 percent estimate is below our low estimate of 4.8 percent, and the increasing use of electronic health records to capture and maintain diagnostic information, any such limit is likely to be reached only after more than the 3 years of coding differences CMS accounted for in its 2011 and 2012 estimates. In addition to not including more recent data, CMS did not incorporate the impact of the upward trend in coding differences on risk scores into its estimates for 2011 and 2012. Based on the trend of increasing impact of coding differences through 2008, shown in both CMS's and our analysis, we believe that the impact of coding differences on 2011 and 2012 MA risk scores is likely to be larger than it was on 2010 MA risk scores. In addition, less than 1.4 percent of MA enrollees in 2011 were enrolled in a plan without a network, suggesting that our slightly larger results based only on MA plans with a network are more accurate estimates of the impact of coding differences in 2011 and 2012. By continuing to implement the same 3.41 percent adjustment for coding differences in 2011 and 2012, we believe CMS likely substantially underestimated the impact of coding differences in 2011 and 2012, resulting in excess payments to MA plans.
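The gap between holding the annual impact constant and letting the upward trend continue, which underlies the concern discussed above, can be illustrated with a short sketch. The four observed values are the annual differences CMS provided for 2004 to 2005 through 2007 to 2008, used here purely for illustration; the simple linear extrapolation is an assumption, not the method either CMS or we used.

```python
# Illustrative contrast between a constant-impact assumption and a continued-trend
# assumption for years after the last observed data point. The observed values are
# the annual differences CMS provided (2004-05 through 2007-08); the linear
# extrapolation below is an assumption for illustration only.

observed = [0.015, 0.015, 0.026, 0.038]

# Constant assumption: later years simply repeat the last observed annual impact.
constant_future = [observed[-1]] * 2                      # stand-ins for 2008-09 and 2009-10

# Trend assumption: extend the average year-over-year increase of the observed series.
avg_step = (observed[-1] - observed[0]) / (len(observed) - 1)
trend_future = [round(observed[-1] + avg_step * (i + 1), 4) for i in range(2)]

print("constant:", constant_future)   # [0.038, 0.038]
print("trend:   ", trend_future)      # larger each year, so the cumulative total grows faster
```

Whatever the exact extrapolation method, an adjustment that ignores the upward trend will fall further behind the cumulative impact with each additional payment year.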
Risk adjustment is important to ensure that payments to MA plans adequately account for differences in beneficiaries' health status and to maintain plans' financial incentive to enroll and care for beneficiaries regardless of their health status or the resources they are likely to consume. For CMS's risk adjustment model to adjust payments to MA plans appropriately, diagnostic coding patterns must be similar in MA plans and Medicare FFS. We confirmed CMS's finding that differences in diagnostic coding caused risk scores for MA beneficiaries to be higher than those for comparable Medicare FFS beneficiaries in 2010. This finding underscores the importance of continuing to adjust MA risk scores to account for coding differences and ensuring that these adjustments are as accurate as possible. If an adjustment for coding differences is too low, CMS would pay MA plans more than it would pay providers in Medicare FFS to provide health care for the same beneficiaries. We found that CMS's 3.41 percent adjustment for coding differences in 2010 was too low, resulting in an estimated $1.2 billion to $3.1 billion in excess payments to MA plans attributable to coding differences. By not updating its methodology in 2011 or in 2012, CMS likely underestimated the impact of coding differences on MA risk scores to a greater extent in these years, resulting in excess payments to MA plans. If CMS does not update its methodology, excess payments due to differences in coding practices are likely to increase. To help ensure appropriate payments to MA plans, the Administrator of CMS should take steps to improve the accuracy of the adjustment made for differences in diagnostic coding practices between MA and Medicare FFS. Such steps could include, for example, accounting for additional beneficiary characteristics, including the most current data available, identifying and accounting for all years of coding differences that could affect the payment year for which an adjustment is made, and incorporating the trend of the impact of coding differences on risk scores. CMS provided written comments on a draft of this report, which are reprinted in appendix II. In its comments, CMS stated that it found our methodological approach and findings informative and suggested that we provide some additional information about how the coding differences between MA and FFS were calculated. In response, we added additional details to appendix I about the regression models used, the calculations used to generate our cumulative impact estimates, and the trend line used to generate our high estimate. CMS did not comment on our recommendation for executive action. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, interested congressional committees, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This appendix explains the scope and methodology that we used to address our objective: to determine the extent to which differences, if any, in diagnostic coding between Medicare Advantage (MA) plans and Medicare fee-for-service (FFS) affect risk scores and payments to MA plans in 2010. To determine the extent to which differences, if any, in diagnostic coding between MA plans and Medicare FFS affected MA risk scores in 2010, we used Centers for Medicare & Medicaid Services (CMS) enrollment and risk score data from 2004 to 2008, the most current data available at the time of our analysis, and projected the estimated impact to 2010. For three periods (2005 to 2006, 2006 to 2007, and 2007 to 2008), we compared actual risk score growth for beneficiaries in our MA study population with the estimated risk score growth the beneficiaries would have had if they were enrolled in Medicare FFS. Risk scores for a given calendar year are based on beneficiaries' diagnoses in the previous year, so we identified our study population based on enrollment data for 2004 through 2007 and analyzed risk scores for that population for 2005 through 2008. Our MA study population consisted of a retrospective cohort of MA beneficiaries. We included MA beneficiaries who were enrolled in health maintenance organization (HMO), preferred provider organization (PPO), and private fee-for-service (PFFS) plans as well as plans offered by provider-sponsored organizations (PSOs). Specifically, we identified the cohort of MA beneficiaries who were enrolled in MA for all of 2007 and followed them back for the length of their continuous enrollment to 2004. In addition, for beneficiaries who were enrolled in Medicare FFS and switched to MA in 2005, 2006, or 2007, we included data for 1 year of Medicare FFS enrollment immediately preceding their MA enrollment. Our MA study population included three types of beneficiaries, each of which we analyzed separately for each period: MA joiners--beneficiaries enrolled in Medicare FFS for the entire first year of each period and then enrolled in MA for all of the following year; MA plan stayers--beneficiaries enrolled in the same MA plan for the first and second year of the period; and MA plan switchers--beneficiaries enrolled in one MA plan for the first year of the period and a second MA plan in the following year. Our control population consisted of a retrospective cohort of FFS beneficiaries who were enrolled in FFS for all of 2006 and 2007. We followed these beneficiaries back to 2004 and included data for all years of continuous FFS enrollment. For both the study and control populations, we excluded data for years during which a beneficiary (1) was diagnosed with end-stage renal disease (ESRD); (2) resided in a long-term care facility for more than 90 consecutive days; (3) died prior to July 1, 2008; (4) resided outside the 50 states, Washington, D.C., and Puerto Rico; or (5) moved to a new state or changed urban/rural status. We calculated the actual change in disease score--the portion of the risk score that is based on a beneficiary's coded diagnoses--for the MA study population for the following three time periods (in payment years): 2005 to 2006, 2006 to 2007, and 2007 to 2008. To estimate the change in disease scores that would have occurred if those MA beneficiaries were enrolled continuously in FFS, we used our control population to estimate a regression model that described how beneficiary characteristics influenced change in disease score.
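A highly simplified sketch of that counterfactual regression step appears below, ahead of the detailed variable list that follows. The simulated data, the five generic predictors, and the use of ordinary least squares are illustrative assumptions; the model actually estimated uses the full set of CMS-HCC variables described next.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Simulated FFS control records: five predictors standing in for the CMS-HCC
# variables (age, sex, HCC indicators, Medicaid status, disability entitlement,
# location dummies) and the observed change in disease score.
n_ffs = 1000
X_ffs = rng.normal(size=(n_ffs, 5))
true_coefs = np.array([0.02, -0.01, 0.05, 0.01, 0.03])
y_ffs = X_ffs @ true_coefs + rng.normal(0, 0.05, n_ffs)

# Fit the model on the FFS control population.
model = LinearRegression().fit(X_ffs, y_ffs)

# Simulated MA study records: same predictors, plus extra disease-score growth
# standing in for more comprehensive MA diagnostic coding.
n_ma = 500
X_ma = rng.normal(size=(n_ma, 5))
actual_ma_growth = X_ma @ true_coefs + 0.02 + rng.normal(0, 0.05, n_ma)

# Predicted growth is what these beneficiaries would have shown under FFS coding;
# the mean gap is the portion of growth attributed to coding differences.
predicted_ffs_growth = model.predict(X_ma)
coding_effect = float(np.mean(actual_ma_growth - predicted_ffs_growth))
print(f"estimated disease-score growth attributable to coding: {coding_effect:.3f}")
```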
In the regression model, we used change in disease score (year 2 - year 1) as our dependent variable and included age, sex, hierarchical condition categories (HCC), HCC interaction variables, Medicaid status, and whether the original reason for Medicare entitlement was disability as independent variables, as they are specified in the CMS-HCC model. We also included one urban and one rural variable for each of the 50 states, Washington, D.C., and Puerto Rico as independent variables to identify beneficiary residential location. Then we used these regression models and data on beneficiary characteristics for our MA study population to estimate the change in disease scores that would have occurred if those MA beneficiaries had been continuously enrolled in FFS. We identified the difference between the actual and estimated change in disease scores as attributable to coding differences between MA and FFS because the regression model accounted for other relevant factors affecting disease score growth (see table 1). To convert these estimates of disease score growth due to coding differences into estimates of the impact of coding differences on 2010 MA risk scores, we divided the disease score growth estimates by the average MA risk score in 2010. Because 2010 risk scores were not available at the time we conducted our analysis, we calculated the average MA community risk score for the most recent data available (risk score years 2005 through 2008) and projected the trend to 2010 to estimate the average 2010 MA risk score. We projected these estimates of the annual impact of coding differences on 2010 risk scores through 2010 using two different assumptions. One projection assumed that the annual impact of coding differences on risk scores was the same from 2008 to 2010 as it was from 2007 to 2008. The other projection assumed that the trend of increasing coding difference impact over 2005 to 2008 continued through 2010 (see fig. 2). To calculate the cumulative impact of coding differences on MA risk scores for 2007 through 2010, we summed the annual impact estimates for that period and adjusted each impact estimate to account for beneficiaries who disenrolled from the MA program before 2010. The result is the cumulative impact of coding differences from 2007 to 2010 on MA risk scores in 2010. We separately estimated the cumulative impact of coding differences from 2007 to 2010 on MA risk scores in 2010 for beneficiaries in MA plans with provider networks (HMOs, PPOs, and PSOs) because such plans may have a greater ability to affect provider coding patterns. We also performed an additional analysis to determine how sensitive our results were to our assumption that coding patterns for MA and FFS were similar in 2007. CMS believes that MA coding patterns may have been less comprehensive than FFS when the CMS-HCC model was implemented, and that coding pattern differences caused MA risk scores to grow faster than FFS; therefore, there may have been a period of "catch-up" before MA coding patterns became more comprehensive than FFS coding patterns. While the length of the "catch-up" period is not known, we evaluated the impact of assuming that the "catch-up" period was shorter and that MA and FFS coding patterns were similar in 2005. Specifically, we evaluated the impact of analyzing two additional years of coding differences by estimating the impact of coding differences from 2005 to 2010.
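A compact sketch of the cumulative-impact step just described follows. All numeric inputs are hypothetical placeholders; the actual annual impact estimates, retention shares, and projected average 2010 risk score are not reproduced here.

```python
# Illustrative sketch of the cumulative-impact calculation described above.
# All numbers are hypothetical placeholders.

# Annual disease-score growth attributable to coding differences, keyed by the
# year in which the growth occurred (2007-08, 2008-09, and 2009-10).
annual_impact = {2008: 0.020, 2009: 0.025, 2010: 0.030}

# Share of each year's growth that still applies to 2010 MA enrollees, i.e., an
# adjustment for beneficiaries who disenrolled from MA before 2010.
share_retained = {2008: 0.70, 2009: 0.82, 2010: 1.00}

projected_avg_2010_risk_score = 1.05  # hypothetical projection from the 2005-2008 scores

cumulative_disease_score_impact = sum(
    annual_impact[y] * share_retained[y] for y in annual_impact
)
pct_of_2010_risk_score = cumulative_disease_score_impact / projected_avg_2010_risk_score
print(f"cumulative impact: {pct_of_2010_risk_score:.1%} of the average 2010 MA risk score")
```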
To quantify the impact of both our and CMS's estimates of coding differences on payments to MA plans in 2010, we used data on MA plan bids--plans' proposed reimbursement rates for the average beneficiary-- which are used to determine payments to MA plans. We used these data to calculate total risk-adjusted payments for each MA plan before and after applying a coding adjustment, and then used the differences between these payment levels to estimate the percentage reduction in total projected payments to MA plans in 2010 resulting from adjustments for coding differences. Then we applied the percentage reduction in payments associated with each adjustment to the estimated total payments to MA plans in 2010 of $112.8 billion and accounted for reduced Medicare Part B premium payments received by CMS, which offset the reduction in MA payments (see table 2). The CMS data we analyzed on Medicare beneficiaries are collected from Medicare providers and MA plans. We assessed the reliability of the CMS data we used by interviewing officials responsible for using these data to determine MA payments, reviewing relevant documentation, and examining the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of our study. In addition to the contact named above, Christine Brudevold, Assistant Director; Alison Binkowski; William Black; Andrew Johnson; Richard Lipinski; Elizabeth Morrison; and Merrile Sing made key contributions to this report.
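The payment-impact step described earlier in this appendix reduces to simple arithmetic once a percentage reduction in payments has been derived from plan bids. The sketch below is illustrative only: the $112.8 billion total comes from the text, while the reduction percentages and the Part B premium offset share are hypothetical placeholders.

```python
# Illustrative arithmetic for converting a percentage payment reduction into dollars.
# Only the $112.8 billion total comes from the report; the reduction percentages and
# the Part B premium offset share are hypothetical placeholders.

total_ma_payments_2010 = 112.8e9   # estimated total payments to MA plans in 2010

def net_payment_reduction(pct_reduction, part_b_offset_share=0.10):
    """Gross payment reduction, less the offsetting drop in Part B premium receipts."""
    gross = total_ma_payments_2010 * pct_reduction
    return gross * (1 - part_b_offset_share)

for pct in (0.03, 0.05):   # hypothetical payment-reduction percentages
    print(f"{pct:.0%} reduction -> ${net_payment_reduction(pct) / 1e9:.1f} billion net")
```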
The Centers for Medicare & Medicaid Services (CMS) pays plans in Medicare Advantage (MA)--the private plan alternative to Medicare fee-for-service (FFS)--a predetermined amount per beneficiary adjusted for health status. To make this adjustment, CMS calculates a risk score, a relative measure of expected health care costs, for each beneficiary. Risk scores should be the same among all beneficiaries with the same health conditions and demographic characteristics. Policymakers raised concerns that differences in diagnostic coding between MA plans and Medicare FFS could lead to inappropriately high MA risk scores and payments to MA plans. CMS began adjusting for coding differences in 2010. GAO (1) estimated the impact of any coding differences on MA risk scores and payments to plans in 2010 and (2) evaluated CMS's methodology for estimating the impact of these differences in 2010, 2011, and 2012. To do this, GAO compared risk score growth for MA beneficiaries with an estimate of what risk score growth would have been for those beneficiaries if they were in Medicare FFS, and evaluated CMS's methodology by assessing the data, study populations, study design, and beneficiary characteristics analyzed. GAO found that diagnostic coding differences exist between MA plans and Medicare FFS. Using data on beneficiary characteristics and regression analysis, GAO estimated that before CMS's adjustment, 2010 MA beneficiary risk scores were at least 4.8 percent, and perhaps as much as 7.1 percent, higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS. The higher risk scores were equivalent to $3.9 billion to $5.8 billion in payments to MA plans. Both GAO and CMS found that the impact of coding differences increased over time. This trend suggests that the cumulative impact of coding differences in 2011 and 2012 could be larger than in 2010. In contrast to GAO, CMS estimated that 3.4 percent of 2010 MA beneficiary risk scores were attributable to coding differences between MA plans and Medicare FFS. CMS's adjustment for this difference avoided $2.7 billion in excess payments to MA plans. CMS's 2010 estimate differs from GAO's in that CMS's methodology did not include more current data, did not incorporate the trend of the impact of coding differences over time, and did not account for beneficiary characteristics other than age and mortality, such as sex, health status, Medicaid enrollment status, beneficiary residential location, and whether the original reason for Medicare entitlement was disability. CMS did not update its coding adjustment estimate in 2011 and 2012 to include more current data, to account for additional years of coding differences, or to incorporate the trend of the impact of coding differences. By continuing to implement the same 3.4 percent adjustment for coding differences in 2011 and 2012, CMS likely underestimated the impact of coding differences in 2011 and 2012, resulting in excess payments to MA plans. GAO's findings underscore the importance of both CMS continuing to adjust risk scores to account for coding differences and ensuring that those adjustments are as complete and accurate as possible. In its comments, CMS stated that it found our findings informative. CMS did not comment on our recommendation. 
GAO recommends that CMS should improve the accuracy of its MA risk score adjustments by taking steps such as incorporating adjustments for additional beneficiary characteristics, using the most current data available, accounting for all relevant years of coding differences, and incorporating the effect of coding difference trends.
4,353
711
The criminal justice process--from arrest through correctional supervision--in any jurisdiction is generally complex and typically involves a number of participants, including police, prosecutors, defense attorneys, courts, and corrections agencies. Because of the large number of agencies involved, coordination among agencies is necessary for the process to function as efficiently as possible within the requirements of due process. That is, all involved agencies need to work together to ensure proper and efficient system operations, identify any problems that emerge, and decide how best to balance competing interests in resolving these problems. The unique structure and funding of D.C.'s criminal justice system, in which federal and D.C. jurisdictional boundaries and dollars are blended, creates additional coordination challenges. As shown in table 1, the D.C. criminal justice system consists of four D.C. agencies principally funded through local D.C. funds, six federal agencies, and three D.C. agencies principally funded through federal appropriations. According to most officials we interviewed and our own analyses, an overarching problem within the D.C. criminal justice system has been the lack of coordination among all participating agencies. Typically, federal and nonfederal criminal justice systems include the following stages: (1) arrest and booking, (2) charging, (3) initial court appearance, (4) release decision, (5) preliminary hearing, (6) indictment, (7) arraignment, (8) trial, (9) sentencing, and (10) correctional supervision. Most stages require the participation of several agencies that need to coordinate their activities for the system to operate efficiently while also meeting the requirements of due process. That is, all involved agencies need to work together to ensure that their roles and operations mesh well with those of other agencies and to identify any problems that emerge and decide how best to resolve them. Table 2 shows the stages in D.C.'s criminal justice system and the agencies that participate in each stage. As shown in the table, 7 of the 10 stages typically involve multiple agencies with different sources of funding, which results in different reporting structures and different oversight entities. For example, as many as six agencies--one D.C. agency (MPDC), three federal agencies (the U.S. Attorney's Office for the District of Columbia (USAO), U.S. Marshals Service, and D.C. Pretrial Services Agency), and two federally funded D.C. agencies (Superior Court and the Public Defender Service (Defender Service))--need to coordinate their activities before the arrestee's initial court appearance for a felony offense can occur. At the latter stages of the system, an offender's sentencing and correctional supervision may require the participation of as many as eight agencies--one D.C.-funded agency (the Department of Corrections (DOC)), five federal agencies (USAO, Federal Bureau of Prisons (BOP), U.S. Marshals Service, U.S. Parole Commission, and the Court Services and Offender Supervision Agency (Court Services)), and two federally funded D.C. agencies (Superior Court and Defender Service). At any stage, the participation of other agencies might also be required. In addition, the reporting and funding structure for these participating agencies often differs. For example, USAO, the U.S. Marshals Service, BOP, and the U.S. Parole Commission ultimately report to the U.S.
Attorney General and are funded by the appropriations subcommittee that funds the Department of Justice; MPDC and the Office of the Corporation Counsel (Corporation Counsel) ultimately report to the D.C. Mayor; and Superior Court, Defender Service, Pretrial Services, and Court Services are independent of both D.C. and the U.S. Department of Justice, submit their budgets to Congress, and are funded by the appropriations subcommittee for D.C. According to most officials we interviewed and our analyses, an overarching problem within the D.C. criminal justice system has been the lack of coordination among all participating agencies. Agency officials pointed to several major problem areas, each the subject of recent studies that have identified coordination issues. The areas included scheduling of court cases, which has resulted in the inefficient use of officer, attorney, and court personnel time; information technology, which uses more than 70 different systems that are not linked to facilitate the sharing of information; correctional supervision, in which poor communication among agencies has led to monitoring lapses with tragic consequences; and forensics, in which the sharing of responsibilities among agencies increases the possibility of evidentiary mishaps resulting from lapses in coordination. The scheduling of court cases has had adverse effects on several criminal justice agencies involved in case processing. As shown in table 2, MPDC, prosecutors, Defender Service, U.S. Marshals Service, Pretrial Services, Court Services, and Superior Court could be involved in the court-related processing of a case from the preliminary hearing to the trial and subsequent sentencing. Representatives from several of these agencies are typically required to be present at court trials and hearings. Because specific court times are not established, individuals who are expected to appear in court are required to be present when the court first convenes in the morning. These individuals might be required to wait at the courthouse for some period of time for the case to be called, if (1) more trials or hearings are scheduled than can be conducted, (2) any one of the involved individuals is not present or prepared, or (3) the case is continued for any number of reasons. MPDC recorded that during calendar year 1999 its officers spent 118 full-time staff years in court-related activities such as preliminary hearings and trials. While MPDC officials stated that officers often spent many hours at court waiting for cases to be called, data were not available on the proportion of the 118 full-time staff years that were attributable to actual court time compared to the time spent waiting for cases to be called, including cases that were rescheduled. The Criminal Justice Coordinating Council (CJCC) selected the Council for Court Excellence and the Justice Management Institute to conduct a detailed study of criminal justice resource management issues, with particular emphasis on court case processing and the utilization of police resources. In its March 2001 report, the Council for Court Excellence and the Justice Management Institute concluded that major changes were needed in the D.C. criminal justice caseflow system to improve the system's efficiency. Among other things, the report found inefficiencies and counterproductive policies at every stage in case processing. The report also concluded that little use was being made of modern technology in the arrest, booking, papering, and court process that could improve system operations.
The Council for Court Excellence and the Justice Management Institute identified priority areas for system improvements, such as redesigning court procedures in misdemeanor cases, improving the methods used to process cases from arrest through initial court appearance by automating the involved processes, and improving the systems used to notify police officers about court dates. Congress provided $1 million for fiscal year 2001 to implement some of the recommended case management initiatives, such as a differentiated case management system for misdemeanors and traffic offenses, the papering pilot project between MPDC and Corporation Counsel, and a mental health pilot treatment project for appropriate, nonviolent pretrial release defendants in coordination with the D.C. Commission on Mental Health Services. D.C.'s criminal justice system is complex, with more than 70 different information systems in use among the various participating agencies. These systems are not linked in a manner that permits timely and useful information sharing among disparate agencies. For example, it is very difficult to obtain data to determine the annual amount of time MPDC officers spend meeting with prosecutors about cases in which prosecutors eventually decide not to file charges against the arrestee. We determined that such an analysis would require data about: (1) MPDC arrests, (2) MPDC officer time and attendance, (3) charges filed by USAO or Corporation Counsel, and (4) Superior Court case dispositions. Such data are currently maintained in separate systems with no reliable tracking number that could be used to link the information in each system for a specific case and no systematic exchange of information. This lack of shared information diminishes the effectiveness of the entire criminal justice system. For example, according to a CJCC official, there is no immediate way for an arresting officer to determine whether an arrestee is on parole or for an arrestee's community supervision officer to know that the parolee had been arrested. Such information could affect both the charging decision and the decision whether or not to release an arrestee from an MPDC holding cell. In 1999, CJCC attempted to address problems with D.C. criminal justice information systems by preparing, among other things, an Information Technology Interagency Agreement that was adopted by CJCC members. The agreement recognized the need for immediate improvement of information technology in the D.C. criminal justice system and established the Information Technology Advisory Committee (ITAC) to serve as the governing body for justice information system development. ITAC recognized that it was difficult for a single agency involved in the criminal justice system to access information systems maintained by other agencies, and pursued developing a system that would allow an agency to share information with all other criminal justice agencies, while maintaining control over its own system. ITAC devised a District of Columbia Justice Information System (JUSTIS). In July 2000, CJCC partnered with the D.C. Office of the Chief Technology Officer in contracting with a consulting firm to design JUSTIS based on modern dedicated intranet and Web browser technology. When completed, JUSTIS is to allow each agency to maintain its current information system, while allowing the agency to access selected data from other criminal justice agencies. 
Effective correctional supervision, which includes probation, incarceration, and post-prison parole or supervised release for convicted defendants, requires effective coordination among participating agencies. In D.C., the stage of the criminal justice system referred to as correctional supervision involves several agencies, including (1) Superior Court, which sentences convicted defendants and determines whether to revoke a person's release on community supervision; (2) Court Services, which monitors offenders on community supervision; (3) DOC, which primarily supervises misdemeanants sentenced to D.C. Jail or one of several halfway houses in D.C.; (4) BOP, which supervises felons incarcerated in federal prisons; (5) the U.S. Parole Commission, which determines the prison release date and conditions of release for D.C. inmates eligible for parole; and (6) the U.S. Marshals Service, which transports prisoners. Gaps in coordination among agencies may lead to tragic consequences, such as those that occurred in the case of Leo Gonzales Wright, who committed two violent offenses while under the supervision of D.C.'s criminal justice system. Wright, who was paroled in 1993 after serving nearly 17 years of a 15-to-60 year sentence for armed robbery and second degree murder, was arrested in May 1995 on automobile theft charges, which were later dismissed. In June 1995, Wright was arrested for possession with intent to distribute cocaine. However, he was released pending trial for the drug arrest, due in part to miscommunication among agencies. Wright subsequently committed two carjackings, murdering one of his victims. He was convicted in U.S. District Court for the District of Columbia and is currently serving a life without parole sentence in federal prison at Leavenworth, KS. The outcry over the Wright case resulted in two studies, including a comprehensive review of the processing of Wright's case prepared for the U.S. Attorney General by the Corrections Trustee in October 1999. The report included 24 recommendations to help ensure that instances similar to the Wright case do not occur. In July 2000, the Corrections Trustee issued a progress report on the implementation of recommendations from the October 1999 report. According to the Corrections Trustee, while not all recommendations in the October 1999 report have been fully implemented, progress has been made in addressing several of them. For example, with funds provided by the Corrections Trustee, DOC has purchased a new jail-management information system for tracking inmates and implemented a new policy on escorted inmate trips. In addition, in January 2000, the Corrections Trustee began convening monthly meetings of an Interagency Detention Work Group, whose membership largely parallels that of CJCC. The group and its six subcommittees have focused on such issues as the convicted felon designation and transfer process, and parole and halfway house processing. In addition to the studies and the actions of the Corrections Trustee, CJCC and Court Services are addressing the monitoring and supervision of offenders. CJCC has begun to address the issues of halfway house management and programs that monitor offenders. Court Services is developing a system in which sanctions are imposed whenever individuals violate conditions of probation or parole. Forensics is another area where lack of coordination can have adverse effects. D.C.
does not have a comprehensive forensic laboratory to complete forensic analysis for use by police and prosecutors. Instead, MPDC currently uses other organizations such as the FBI, the Drug Enforcement Administration, the Bureau of Alcohol, Tobacco and Firearms, and a private laboratory to conduct much of its forensic work. MPDC performs some forensic functions such as crime scene response, firearms testing, and latent print analysis. The Office of the Chief Medical Examiner, a D.C. agency, performs autopsies and certain toxicological tests, such as the testing for the presence of drugs in the body. Coordination among agencies is particularly important because several organizations may be involved in handling and analyzing a piece of evidence. For example, if MPDC finds a gun with a bloody latent fingerprint at a crime scene, the gun would typically need to be examined by both MPDC and the FBI. In order to complete the analysis, multiple forensic disciplines (e.g., DNA or firearm examiners) would need to examine the gun. If the various forensic tests were coordinated in a multidisciplinary approach, forensic examiners would be able to obtain the maximum information from the evidence without the possibility of contaminating it. Such contamination could adversely affect the adjudication and successful resolution of a criminal investigation. In April 2000, the National Institute of Justice (NIJ) issued a report on the D.C. criminal justice system's forensic capabilities. The report concluded that D.C. had limited forensic capacity and that limitations in MPDC prevented the effective collection, storage, and processing of crime scene evidence, which ultimately compromised the potential for successful resolution of cases. NIJ-identified deficiencies included, among other things: lengthy delays in processing evidence; ineffective communications in the collection, processing, and tracking of evidence from the crime scene; and ineffective communications between forensic case examiners and prosecutors. The NIJ report supported the development of a centralized forensic laboratory that would be shared by MPDC and the D.C. Office of the Chief Medical Examiner. The report did not examine the costs to build a comprehensive forensic laboratory. In his fiscal year 2002 proposed budget, the Mayor has allocated $7.5 million for the development of a forensics laboratory that is designed to be a state-of-the-art, full-service crime laboratory, medical examiner/morgue facility, and public health laboratory that meets all applicable National Lab Standards. We did not independently evaluate the costs and benefits of a comprehensive forensic laboratory. However, such a facility could potentially improve coordination by housing all forensic functions in one location, eliminating the need to transport evidence among multiple, dispersed locations. A principal area where D.C.'s unique structure has led to coordination problems is case processing that occurs from the time of arrest through initial court appearance. As shown in table 2, as many as six agencies need to coordinate before an arrested person's initial court appearance for a felony offense can occur. However, we identified several aspects of the current process where a lack of coordination posed problems. For example, unlike many other major metropolitan jurisdictions, prosecutors in D.C. 
require an officer who is knowledgeable about the facts of the arrest to meet personally with them before they determine whether to formally charge an arrestee with a felony or misdemeanor crime. This process is called papering. During calendar year 1999, papering required the equivalent of 23 full-time officers devoted solely to these appearances, ultimately reducing the number of officers available for patrol duty by an equal amount. Efforts in 1998 and 1999 to revise the papering process failed in part because the costs and benefits of the changes under consideration were perceived by one or more participating agencies to be unevenly distributed. We focused our review on offenses prosecuted by the USAO because during 1999 they accounted for over 85 percent of MPDC officer hours expended on papering. USAO's requirement that MPDC officers personally meet with prosecutors in order to make a charging decision appears to be unusual, particularly for misdemeanors. A 1997 Booz-Allen and Hamilton survey found that in 30 of 38 responding jurisdictions (51 were surveyed), police officers were not required to meet with prosecutors until court (i.e., trial), and in 3 cities officers were not required to appear in person until the preliminary hearing. In addition, we reviewed the charging processes in Philadelphia and Boston. Neither of these cities required face-to-face meetings with prosecutors for processing most cases. According to USAO officials, the current papering process is critical for USAO to make an initial charging decision correctly. Both USAO and MPDC officials said that the paperwork submitted to USAO for charging decisions has been of uneven quality. In the past decade, several attempts have been made to change the initial stages of case processing in D.C. These efforts--which were made by MPDC, Corporation Counsel, and USAO, in conjunction with consulting firms--involved projects in the areas of night papering, night court, and officerless papering. However, the involved agencies never reached agreement on all components of the projects, and each of the projects was ultimately suspended. The Chief of MPDC has publicly advocated the establishment of some type of arrangement for making charging decisions during the evening and/or night police shifts. Regarding night papering and night court, both USAO and Corporation Counsel are currently open to paper cases only during typical workday hours, that is, generally from about 8:00 a.m. to 5:00 p.m., Monday through Saturday. Night papering could permit officers on evening and night shifts to generally present their paperwork to prosecutors during their shifts. Night court refers to conducting certain court proceedings, such as the initial court appearance, during a late evening or night shift. Night papering would require USAO and Corporation Counsel charging attorneys to work evening hours, and night court would involve a much broader commitment of D.C. Superior Court resources as well as the participation of other agencies. Officerless papering would not require an officer to appear in person before the prosecutor, and provisions could be made for the prosecutor to contact the officer to clarify issues, as needed. In March 2001, MPDC and Corporation Counsel began an officerless papering pilot program for 17 minor offenses prosecuted by Corporation Counsel.
In the absence of an automated system for completing and transmitting the forms required for documenting arrests and making charging decisions, simple entry errors resulting from entering the same information multiple times can hamper the initial stages of case processing. USAO has cited such problems as one reason that officers should be required to meet face to face with prosecutors for papering decisions. To the extent that the police do not have a reliable process for reviewing and ensuring the completeness and accuracy of the paperwork submitted to prosecutors, USAO is likely to continue to resist efforts to institute officerless papering. Even if these issues were to be successfully addressed, the distribution of costs among the participants in any revised system would still likely pose an obstacle to change. The costs of the current system of processing cases from arrest through initial court appearance are borne principally by MPDC--primarily a locally funded D.C. agency--not USAO or D.C. Superior Court, both of which are federally funded. On the other hand, instituting night papering would likely reduce MPDC's costs, while increasing the costs borne by USAO, Corporation Counsel, and/or D.C. Superior Court, depending upon the approach taken. CJCC is the primary venue in which D.C. criminal justice agencies can identify and address interagency coordination issues. Its funding and staffing have been modest--about $300,000 annually with four staff. CJCC has functioned as an independent entity whose members represent the major organizations within the D.C. criminal justice system. According to many criminal justice officials we spoke with, during its nearly 3-year existence, CJCC has had some success in improving agency coordination, mostly in areas where all participants stood to gain from a coordinated approach to a problem. In problem areas where a solution would help one agency possibly at the expense of another, CJCC has been less successful mainly because it lacked the authority to compel agencies to address the issues. However, on balance, CJCC has provided a valuable independent forum for discussions of issues affecting multiple agencies. The D.C. Control Board did not fund CJCC for fiscal year 2001, and CJCC's sole remaining staff member is funded by a grant. It is not known whether CJCC will continue to formally exist, and if it exists, how it will be funded, whether it will have staff, and whether it will remain independent or under the umbrella of another organization, such as the D.C. Mayor's office. Recently, the Mayor included $169,000 in his fiscal year 2002 proposed budget to fund CJCC. While we welcome the Mayor's support for CJCC, we believe that for CJCC to be most successful it must be viewed as independent by participating agencies. CJCC has not been required to formally report on its activities, including areas of focus, successes, and areas of continuing discussion and disagreement. The transparency provided by an annual report would help to spotlight areas of accomplishment and continuing disagreement and could assist with oversight by those responsible for funding individual CJCC members. As of November 2000, CJCC and other agencies involved in the D.C. criminal justice system reported 93 initiatives for improving the operation of the system. Most of these initiatives were ongoing; consequently, their impact had not yet been evaluated. 
However, we found numerous instances where participating agencies did not agree on an initiative's goals, status, starting date, participating agencies, or results to date. This lack of agreement underscores a lack of coordination among the participating agencies that could reduce the effectiveness of these initiatives. Every criminal justice system faces coordination challenges. However, the unique structure and funding of the D.C. criminal justice system, in which federal and D.C. jurisdictional boundaries and dollars are blended, creates additional challenges. CJCC has played a useful role in addressing such coordination challenges, especially in areas where agencies perceived a common interest. However, CJCC's uncertain future could leave D.C. without benefit of an independent entity for coordinating the activities of its unique criminal justice system. Funding CJCC through any participating agency diminishes its stature as an independent entity in the eyes of a number of CJCC's member agencies, reducing their willingness to participate. Without a requirement to report successes and areas of continuing discussion and disagreement to each agency's funding source, CJCC's activities, achievements, and areas of disagreement have generally been known only to its participating agencies. This has created little incentive to coordinate for the common good, and all too often agencies have simply "agreed to disagree" without taking action. Furthermore, without a meaningful role in cataloging multiagency initiatives, CJCC has been unable to ensure that criminal justice initiatives are coordinated among all affected agencies to help eliminate duplicative efforts and maximize their effectiveness. In our March 30, 2001, report, we recommended that Congress consider (1) funding an independent CJCC--with its own director and staff--to help coordinate the operations of the D.C. criminal justice system, because congressional funding ensures that CJCC will retain its identity as an independent body with no formal organizational or funding link to any of its participating members, and (2) requiring CJCC to report annually to Congress, the Attorney General, and the D.C. Mayor on its activities, achievements, and issues not yet resolved and why.
Every criminal justice system faces coordination challenges. However, the unique structure and funding of the D.C. criminal justice system, in which federal and D.C. jurisdictional boundaries and dollars are blended, creates additional challenges. The Criminal Justice Coordinating Council (CJCC) has played a useful role in addressing such coordination challenges, especially in areas in which agencies perceived a common interest. However, CJCC's uncertain future could leave D.C. without benefit of an independent entity for coordinating the activities of its unique criminal justice system. Funding CJCC through any participating agency diminishes its stature as an independent entity in the eyes of several CJCC member agencies, reducing their willingness to participate. Without a requirement to report successes and areas of continuing discussion and disagreement to each agency's funding source, CJCC's activities, achievements, and areas of disagreement have generally been known only to its participating agencies. This has created little incentive to coordinate for the common good, and all too often agencies have simply "agreed to disagree" without taking action. Furthermore, without a meaningful role in cataloging multiagency initiatives, CJCC has been unable to ensure that criminal justice initiatives are coordinated among all affected agencies to help eliminate duplicative efforts and maximize their effectiveness. This testimony summarizes a March 2001 report (GAO-01-187).
5,355
291
Geospatial information describes entities or phenomena that can be referenced to specific locations relative to the Earth's surface. For example, entities such as houses, rivers, road intersections, power plants, and national parks can all be identified by their locations. In addition, phenomena such as wildfires, the spread of the West Nile virus, and the thinning of trees due to acid rain can also be identified by their geographic locations. A geographic information system (GIS) is a system of computer software, hardware, and data used to capture, store, manipulate, analyze, and graphically present a potentially wide array of geospatial information. The primary function of a GIS is to link multiple sets of geospatial data and display the combined information as maps with many different layers of information. Each layer of a GIS map represents a particular "theme" or feature, and one layer could be derived from a data source completely different from the others. Typical geospatial data layers (themes) include cadastral-- describing location, ownership, and other information about real property; digital orthoimagery--containing images of the Earth's surface that have the geometric characteristics of a map and image qualities of a photograph; and hydrography--describing water features such as lakes, ponds, streams and rivers, canals, oceans, and coastlines. As long as standard processes and formats have been used to facilitate integration, each of these themes could be based on data originally collected and maintained by a separate organization. Analyzing this layered information as an integrated whole can significantly aid decision makers in considering complex choices, such as where to locate a new department of motor vehicles building to best serve the greatest number of citizens. Figure 1 portrays the concept of data themes in a GIS. Federal, state, and local governments and the private sector rely on geographic information systems to provide vital services to their customers. These various entities independently provide information and services, including maintaining land records for federal and nonfederal lands, property taxation, local planning, subdivision control and zoning, and direct delivery of many other public services. These entities also use geographic information and geographic information systems to facilitate and support delivery of these services. Many federal departments and agencies use GIS technology to help carry out their primary missions. For example, the Department of Health and Human Services uses GIS technology for a variety of public health functions, such as reporting the results of national health surveys; the Census Bureau maintains the Topologically Integrated Geographic Encoding and Referencing (TIGER) database to support its mission to conduct the decennial census and other censuses and surveys; and the Environmental Protection Agency maintains a variety of databases with information about the quality of air, water, and land in the United States. State governments also rely on geospatial information to provide information and services to their citizens. For example, the state of New York hosts a Web site to provide citizens with a gateway to state government services at http://www.nysegov.com/map-NY.cfm. Using this Web site, citizens can access information about state agencies and their services, locate county boundaries and services, and locate major state highways. 
Many other states, such as Oregon (http://www.gis.state.or.us/), Virginia (http://www.vgin.virginia.gov/index.html), and Alaska (http://www.asgdc.state.ak.us/), provide similar Web sites and services. Local governments use GISs for a variety of activities. For example, local fire departments can use geographic information systems to determine the quickest and most efficient route from a firehouse to a specific location, taking into account changing traffic patterns that occur at various times of day. Additionally, according to a March 2002 Gartner report, New York City's GIS was pivotal in the rescue, response, and recovery efforts after the September 11, 2001, terrorist attacks. The city's GIS provided real-time data on the area around the World Trade Center so that the mayor, governor, federal officials, and emergency response agencies could implement critical rescue, response, and recovery activities. Local governments often possess more recent and higher resolution geospatial data than the federal government, and in many cases private-sector companies collect these data under contract to local government agencies. The private sector plays an important role in support of government GIS activities because it captures and maintains a wealth of geospatial data and develops GIS software. Private companies provide services such as aerial photography, digital topographic mapping, digital orthophotography, and digital elevation modeling to produce geospatial data sets that are designed to meet the needs of governmental organizations. Figure 2 provides a conceptual summary of the many entities--including federal, state, and local governments and the private sector--that may be involved in geospatial data collection and processing relative to a single geographic location or event. Figure 3 shows the multiple data sets that have been collected by different agencies at federal, state, and local levels to capture the location of a segment of roadway in Texas. As we testified last year, the federal government has for many years taken steps to coordinate geospatial activities, both within and outside of the federal government. These steps include the issuance of OMB Circular A-16 and Executive Order 12906 and the enactment of the E-Government Act of 2002. In addition to its responsibilities for geospatial information under the E-Government Act, OMB has specific oversight responsibilities regarding federal information technology (IT) systems and acquisition activities--including GIS--to help ensure their efficient and effective use. These responsibilities are outlined in the Clinger-Cohen Act of 1996, the Paperwork Reduction Act of 1995, and OMB Circular A-11. Table 1 provides a brief summary of federal guidance related to information technology and geospatial information. In addition to activities associated with federal legislation and guidance, OMB's Administrator, Office of Electronic Government and Information Technology, testified before the Subcommittee last June that the strategic management of geospatial assets would be accomplished, in part, through development of a robust and mature federal enterprise architecture. In 2001, the lack of a federal enterprise architecture was cited by OMB's E-Government Task Force as a barrier to the success of the administration's e-government initiatives. In response, OMB began developing the Federal Enterprise Architecture (FEA), and over the last 2 years it has released various versions of all but one of the five FEA reference models.
According to OMB, the purpose of the FEA, among other things, is to provide a common frame of reference or taxonomy for agencies' individual enterprise architecture efforts and their planned and ongoing investment activities. Costs associated with collecting and maintaining geographically referenced data and systems for the federal government are significant. Specific examples of the costs of collecting and maintaining federal geospatial data and information systems include FEMA's Multi-Hazard Flood Map Modernization Program--estimated to cost $1 billion over the next 5 years; Census's TIGER database--modernization is estimated to have cost over $170 million between 2001 and 2004; Agriculture's Geospatial Database--acquisition and development reportedly cost over $130 million; Interior's National Map--development is estimated to cost about $88 million through 2008; the Department of the Navy's Primary Oceanographic Prediction and Oceanographic Information systems--development, modernization, and operation were estimated to cost about $32 million in fiscal year 2003; and NOAA's Coastal Survey--expenditures for geospatial data are estimated to cost about $30 million annually. In addition to the costs for individual agency GISs and data, the aggregated annual cost of collecting and maintaining geospatial data for all NSDI-related data themes and systems is estimated to be substantial. According to a recent estimate by the National States Geographic Information Council (NSGIC), the cost to collect detailed data for five key data layers of the NSDI--parcel, critical infrastructure, orthoimagery, elevation, and roads--is about $6.6 billion. The estimate assumes that the data development will be coordinated among federal, state, and local government agencies, and the council cautions that without effective coordination, the costs could be far higher. Both Executive Order 12906 and OMB Circular A-16 charge FGDC with responsibilities that support coordination of federal GIS investments. Specifically, the committee is designated the lead federal executive body with responsibilities including (1) promoting and guiding coordination among federal, state, tribal, and local government agencies, academia, and the private sector in the collection, production, sharing, and use of spatial information and the implementation of the NSDI; and (2) preparing and maintaining a strategic plan for developing and implementing the NSDI. Regarding coordination with federal and other entities and development of the NSDI, FGDC has taken a variety of actions. It established a committee structure with participation from federal agencies and key nonfederal organizations such as NSGIC and the National Association of Counties, and it established several programs to help ensure greater participation from federal agencies as well as other government entities. In addition, key actions taken by FGDC to develop the NSDI include implementing the National Geospatial Data Clearinghouse and establishing a framework of data themes. In addition to FGDC's programs, two other efforts are under way that aim to coordinate and consolidate geospatial information and resources across the federal government--the Geospatial One-Stop initiative and The National Map project. Geospatial One-Stop is intended to accelerate the development and implementation of the NSDI to provide federal and state agencies with a single point of access to map-related data, which in turn will enable consolidation of redundant geospatial data.
OMB selected Geospatial One-Stop as one of its e-government initiatives, in part to support development of an inventory of national geospatial assets, and also to support reducing redundancies in federal geospatial assets. In addition, the portal includes a "marketplace" that provides information on planned and ongoing geospatial acquisitions for use by agencies that are considering acquiring new data to facilitate coordination of existing and planned acquisitions. The National Map is being developed and implemented by the U.S. Geological Survey (USGS) as a database to provide core geospatial data about the United States and its territories, similar to the data traditionally provided on USGS paper topographic maps. USGS relies heavily on partnerships with other federal agencies as well as states, localities, and the private sector to maintain the accuracy and currency of the national core geospatial data set as represented in The National Map. According to Interior's Assistant Secretary--Policy, Management, and Budget, FGDC, Geospatial One-Stop, and The National Map are coordinating their activities in several areas, including developing standards and framework data layers for the NSDI, increasing the effectiveness of the clearinghouse, and making information about existing and planned data acquisitions available through the Geospatial One-Stop Web site. Regarding preparing and maintaining a strategic plan for developing and implementing the NSDI, in 1994, FGDC issued a strategic plan that described actions federal agencies and others could take to develop the NSDI, such as establishing data themes and standards, training programs, and partnerships to promote coordination and data sharing. In April 1997, FGDC published an updated plan--with input from many organizations and individuals having a stake in developing the NSDI--that defined strategic goals and objectives to support the vision of the NSDI as defined in the 1994 plan. No further updates have been made. As the current national geospatial strategy document, FGDC's 1997 plan is out of date. First, it does not reflect the recent broadened use of geospatial data and systems by many government agencies. Second, it does not take into account the increased importance that has been placed on homeland security in the wake of the September 11, 2001, attacks. Geospatial data and systems have an essential role to play in supporting decision makers and emergency responders in protecting critical infrastructure and responding to threats. Finally, significant governmentwide geospatial efforts--including the Geospatial One-Stop and National Map projects--did not exist in 1997, and are therefore not reflected in the strategic plan. In addition to being out of date, the 1997 document lacks important elements that should be included in an effective strategic plan. According to the Government Performance and Results Act of 1993, such plans should include a set of outcome-related strategic goals, a description of how those goals are to be achieved, and an identification of risk factors that could significantly affect their achievement. The plans should also include performance goals and measures, with resources needed to achieve them, as well as a description of the processes to be used to measure progress. While the 1997 NSDI plan contains a vision statement and goals and objectives, it does not include other essential elements. 
These missing elements include (1) a set of outcome-related goals, with actions to achieve those goals, that would bring together the various actions being taken to coordinate geospatial assets and achieve the vision of the NSDI; (2) key risk factors that could significantly affect the achievement of the goals and objectives; and (3) performance goals and measures to help ensure that the steps being taken result in the development of the National Spatial Data Infrastructure. FGDC officials, in consultation with the executive director of Geospatial One-Stop, USGS, and participating FGDC member agencies, have initiated a "future directions" effort to begin the process of updating their existing plan. However, this activity is just beginning, and there is no time frame as to when a new strategy will be in place. Until a comprehensive national strategy is in place, the current state of ineffective coordination is likely to remain, and the vision of the NSDI will likely not be fully realized. OMB Circular A-16 directs federal agencies to coordinate their investments to facilitate building the NSDI. The circular lists 11 specific responsibilities for federal agencies, including (1) preparing, maintaining, publishing, and implementing a strategy for advancing geographic information and related spatial data activities appropriate to their mission, in support of the NSDI; (2) using FGDC standards, including metadata and other appropriate standards, to document spatial data with relevant metadata; and (3) making metadata available online through a registered NSDI-compatible clearinghouse site. In certain cases, federal agencies have taken steps to coordinate their specific geospatial activities. For example, the Forest Service and Bureau of Land Management collaborated to develop the National Integrated Land System (NILS), which is intended to provide land managers with software tools for the collection, management, and sharing of survey data, cadastral data, and land records information. At an estimated cost of about $34 million, a single GIS--NILS--was developed that can accommodate the shared geospatial needs of both agencies, eliminating the need for each agency to develop a separate system. However, despite specific examples of coordination such as this, agencies have not consistently complied with OMB's broader geospatial coordination requirements. For example, only 10 of 17 agencies that provided reports to FGDC reported having published geospatial strategies as required by Circular A-16. In addition, agencies' spatial data holdings are generally not compliant with FGDC standards. Specifically, the annual report shows that, of the 17 agencies that provided reports to FGDC, only 4 reported that their spatial data holdings were compliant with FGDC standards. Ten agencies reported being partially compliant, and 3 agencies provided answers that were unclear as to whether they were compliant. Finally, regarding the requirement for agencies to post their data to the National Geospatial Data Clearinghouse, only 6 of the 17 agencies indicated that their data or metadata were published through the clearinghouse, 10 indicated that their data were not published, and 1 indicated that some data were available through the clearinghouse. According to comments provided by agencies to FGDC in the annual report submissions, there are several reasons why agencies have not complied with their responsibilities under Circular A-16, including the lack of performance measures that link funding to coordination efforts. 
According to the Natural Resources Conservation Service, few incentives exist for cross-agency cooperation because budget allocations are linked to individual agency performance rather than to cooperative efforts. In addition, according to USGS, agencies' activities and funding are driven primarily by individual agency missions and do not address interagency geospatial coordination. In addition to the information provided in the annual report, Department of Agriculture officials said that no clear performance measures exist linking funding to interagency coordination. OMB has recognized that potentially redundant geospatial assets need to be identified and that federal geospatial systems and information activities need to be coordinated. To help identify potential redundancies, OMB's Administrator of E-Government and Information Technology testified in June 2003 that the agency uses three key sources of information: (1) business cases for planned or ongoing IT investments, submitted by agencies as part of the annual budget process; (2) comparisons of agency lines of business with the Federal Enterprise Architecture (FEA); and (3) annual reports compiled by FGDC and submitted to OMB. However, none of these major oversight processes have been effective tools to help OMB identify major redundancies in federal GIS investments. In their IT business cases, agencies must report the types of data that will be used, including geospatial data. According to OMB's branch chief for information policy and technology, OMB reviews these business cases to determine whether any redundant geospatial investments are being funded. Specifically, the process for reviewing a business case includes comparing proposed investments, IT management and strategic plans, and other business cases, in an attempt to determine whether a proposed investment duplicates another agency's existing or already-approved investment. However, business cases submitted to OMB under Circular A-11 do not always include enough information to effectively identify potential geospatial data and systems redundancies because OMB does not require such information in agency business cases. For example, OMB does not require that agencies clearly link information about their proposed or existing geospatial investments to the spatial data categories (themes) established by Circular A-16. Geospatial systems and data are ubiquitous throughout federal agencies and are frequently integrated into agencies' mission-related systems and business processes. Business cases that focus on mission-related aspects of agency systems and data may not provide the information necessary to compare specific geospatial investments with other, potentially similar investments unless the data identified in the business cases are categorized to allow OMB to more readily compare data sets and identify potential redundancies. For example, FEMA's fiscal year 2004 business case for its Multi-Hazard Flood Map Modernization project indicates that topographic and base data are used to perform engineering analyses for estimating flood discharge, developing floodplain mapping, and locating areas of interest related to hazards. However, FEMA does not categorize these data according to standardized spatial data themes specified in Circular A-16, such as elevation (bathymetric or terrestrial), transportation, and hydrography. As a result, it is difficult to determine whether the data overlap with other federal data sets. 
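To illustrate why such categorization matters, the short sketch below shows one way that business-case data, once tagged with the standard Circular A-16 themes, could be compared across agencies to flag candidates for closer review. The sketch is written in Python; the agency names, investment titles, and theme labels are hypothetical examples loosely drawn from this testimony, and the logic is illustrative only rather than a description of any actual OMB review tool.

# Illustrative sketch only: it assumes hypothetical investment records and a
# simplified set of Circular A-16 themes; it is not drawn from any actual
# OMB or FGDC system.
from collections import defaultdict

# Hypothetical business-case entries, each tagged with the data themes its
# geospatial data would fall under.
investments = [
    {"agency": "FEMA", "investment": "Flood Map Modernization",
     "themes": {"elevation", "hydrography", "transportation"}},
    {"agency": "USGS", "investment": "National Elevation Dataset",
     "themes": {"elevation"}},
    {"agency": "DOD", "investment": "Installation elevation data",
     "themes": {"elevation"}},
]

def potential_redundancies(records):
    """Group investments by data theme and flag themes funded by more than one agency."""
    by_theme = defaultdict(list)
    for rec in records:
        for theme in rec["themes"]:
            by_theme[theme].append((rec["agency"], rec["investment"]))
    # A theme appearing in several agencies' business cases is a candidate
    # for closer review, not proof of duplication.
    return {theme: entries for theme, entries in by_theme.items() if len(entries) > 1}

for theme, entries in potential_redundancies(investments).items():
    print(f"Theme '{theme}' appears in {len(entries)} investments:")
    for agency, name in entries:
        print(f"  - {agency}: {name}")

In practice, a flagged overlap would simply prompt further analysis; appearing under the same theme does not by itself demonstrate duplication.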
Categorizing data according to the standard data themes is an important step toward coordinating those data. Without such categorization, information about agencies' planned or ongoing use of geospatial data in their business cases cannot be effectively assessed to determine whether that use could be integrated with other existing or planned federal geospatial assets. An FEA is being constructed that, once it is further developed, may help identify potentially redundant geospatial investments. According to OMB, the FEA will comprise a collection of five interrelated reference models designed to facilitate cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration within and across federal agencies. According to recent GAO testimony on the status of the FEA, although OMB has made progress on the FEA, it remains a work in progress and is still maturing. OMB has identified multiple purposes for the FEA. One purpose cited is to inform agencies' individual enterprise architectures and to facilitate their development by providing a common classification structure and vocabulary. Another stated purpose is to provide a governmentwide framework that can increase agencies' awareness of IT capabilities that other agencies have or plan to acquire, so that agencies can explore opportunities for reuse. Still another stated purpose is to help OMB decision makers identify opportunities for collaboration among agencies through the implementation of common, reusable, and interoperable solutions. We support the FEA as a framework for achieving these ends. According to OMB's branch chief for information policy and technology, OMB reviews all new investment proposals against the federal government's lines of business in its Business Reference Model to identify those investments that appear to have some commonality. Many of the model's lines of business include areas in which geospatial information is of critical importance, including disaster management (the cleanup and restoration activities that take place after a disaster); environmental management (functions required to monitor the environment and weather, determine proper environmental standards, and address environmental hazards and contamination); and transportation (federally supported activities related to the safe passage, conveyance, or transportation of goods and people). The Service Component Reference Model includes specific references to geospatial data and systems. It is intended to identify and classify IT service components (i.e., applications) that support federal agencies and promote the reuse of components across agencies. The model includes 29 types of services--including customer relationship management and the visualization service, which defines capabilities that support the conversion of data into graphical or picture form. One component of the visualization service is associated with mapping, geospatial, elevation, and global positioning system services. Identification of redundant investments under the visualization service could provide OMB with information that would be useful in identifying redundant geospatial systems investments. Finally, the Data and Information Reference Model would likely be the most critical FEA element in identifying potentially redundant geospatial investments. According to OMB, this model will categorize the government's information along general content areas and describe data components that are common to many business processes or activities. 
Although the FEA includes elements that could be used to help identify redundant investments, it is not yet sufficiently developed to be useful in identifying redundant geospatial investments. While the Business and Service Component reference models have aspects related to geospatial investments, the Data and Information Reference Model may be the critical element for identifying agency use of geospatial data because it is planned to provide standard categories of data that could support comparing data sets among federal agencies. However, this model has not yet been completed and thus is not in use. Until the FEA is completed and OMB develops effective analytical processes to use it, it will not be able to contribute to identifying potentially redundant geospatial investments. OMB Circular A-16 requires agencies to report annually to OMB on their achievements in advancing geographic information and related spatial data activities appropriate to their missions and in support of the NSDI. To support this requirement, FGDC has developed a structure for agencies to use to report such information in a consistent format and for aggregating individual agencies' information. Using the agency reports, the committee prepares an annual report to OMB purportedly identifying the scope and depth of spatial data activities across agencies. For the fiscal year 2003 report, agencies were asked to respond to several specific questions about their geospatial activities, including (1) whether a detailed strategy had been developed for integrating geographic information and spatial data into their business processes, (2) how they ensure that data are not already available prior to collecting new geospatial data, and (3) whether geospatial data are a component of the agency's enterprise architecture. However, additional information that is critical to identifying redundancies was not required. For example, agencies were not requested to provide information on their specific GIS investments or the geospatial data sets they collected and maintained. According to the FGDC staff director, the annual reports are not meant to provide an inventory of federal geospatial assets. As a result, they cannot provide OMB with sufficient information to identify redundancies in federal geospatial investments. Further, because not all agencies provide reports to FGDC, the information that OMB has available to identify redundancies is incomplete. According to OMB's program examiner for the Department of the Interior, OMB does not know how well agencies are complying with the reporting requirements in Circular A-16. Until the information reported by agencies is consistent and complete, OMB will not be able to effectively use it to identify potential geospatial redundancies. According to OMB officials responsible for oversight of geospatial activities, the agency's methods have not yet led to the identification of redundant investments that could be targeted for consolidation or elimination. The OMB officials said they believe that, with further refinement, these tools will be effective in the future in helping them identify redundancies. In addition, OMB representatives told us that they are planning to institute a new process to collect more complete information on agencies' geospatial investments by requiring agencies to report all such investments through the Geospatial One-Stop Web portal. 
OMB representatives told us that reporting requirements for agencies would be detailed in a new directive that OMB expects to issue by the end of summer 2004. Without a complete and up-to-date strategy for coordination or effective investment oversight by OMB, federal agencies continue to acquire and maintain duplicative data and systems. According to the initial business case for the Geospatial One-Stop initiative, about 50 percent of the federal government's geospatial data investment is duplicative. Such duplication is widely recognized. Officials from federal and state agencies and OMB have all stated that unnecessarily redundant geospatial data and systems exist throughout the federal government. The Staff Director of FGDC agreed that redundancies continue to exist throughout the federal government and that more work needs to be done to specifically identify them. DHS's Geospatial Information Officer also acknowledged redundancies in geospatial data acquisitions at his agency, and said that DHS is working to create an enterprisewide approach to managing geospatial data in order to reduce redundancies. Similarly, state representatives to the National States Geographic Information Council have identified cases in which they have observed multiple federal agencies funding the acquisition of similar data to meet individual agency needs. For example, USGS, FEMA, and the Department of Defense (DOD) each maintain separate elevation data sets: USGS's National Elevation Dataset, FEMA's flood hazard mapping elevation data program, and DOD's elevation data regarding Defense installations. FEMA officials indicated that they obtained much of their data from state and local partners or purchased them from the private sector because data from those sources better fit their accuracy and resolution requirements than elevation data available from USGS. Similarly, according to one Army official, available USGS elevation data sets generally do not include military installations, and even when such data are available for specific installations, they are typically not accurate enough for DOD's purposes. As a result, DOD collects its own elevation data for its installations. In this example, if USGS elevation data-collection projects were coordinated with FEMA and DOD to help ensure that the needs of as many federal agencies as possible were met through the project, potentially costly and redundant data-collection activities could be avoided. According to the USGS Associate Director for Geography, USGS is currently working to develop relationships with FEMA and DOD, along with other federal agencies, to determine where these agencies' data-collection activities overlap. In another example, officials at the Department of Agriculture and the National Geospatial-Intelligence Agency (NGA) both said they have purchased data sets containing street-centerline data from commercial sources, even though the Census Bureau maintains such data in its TIGER database. According to these officials, they purchased the data commercially because they had concerns about the accuracy of the TIGER data. The Census Bureau is currently working to enhance its TIGER data in preparation for the 2010 census, and a major objective of the project is to improve the accuracy of its street location data. However, despite Agriculture and NGA's use of street location data, Census did not include either agency in the TIGER enhancement project plan's list of agencies that will be affected by the initiative. 
Without better coordination, agencies such as Agriculture and NGA are likely to continue to need to purchase redundant commercial data sets in the future. In summary, although various cross-government committees and initiatives, individual federal agencies, and OMB have each taken actions to coordinate the government's geospatial investments across agencies and with state and local governments, agencies continue to purchase and maintain uncoordinated and duplicative geospatial investments. Without better coordination, such duplication is likely to continue. In order to improve the coordination of federal geospatial investments, our report recommends that the Director of OMB and the Secretary of the Interior direct the development of a national geospatial data strategy with outcome-related goals and objectives; a plan for how the goals and objectives are to be achieved; identification of key risk factors; and performance measures. Our report also recommends that the Director of OMB develop criteria for assessing the extent of interagency coordination on proposals for potential geospatial investments. Based on these criteria, funding for potential geospatial investments should be delayed or denied when coordination is not adequately addressed in agencies' proposals. Finally, our report provides specific recommendations to the Director of OMB in order to strengthen the agency's oversight actions to more effectively coordinate federal geospatial data and systems acquisitions and thereby reduce potentially redundant investments. Mr. Chairman, this concludes my testimony. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information regarding this statement, please contact me at (202) 512-6240 or by e-mail at [email protected]. Other key contributors to this testimony included Neil Doherty, John de Ferrari, Michael P. Fruitman, Michael Holland, Steven Law, and Elizabeth Roach.
The collection, maintenance, and use of location-based (geospatial) information are essential to federal agencies carrying out their missions. Geographic information systems (GIS) are critical elements used in the areas of homeland security, healthcare, natural resources conservation, and countless other applications. GAO was asked to review the extent to which the federal government is coordinating the efficient sharing of geospatial assets, including through Office of Management and Budget (OMB) oversight. GAO's report on this matter, Geospatial Information: Better Coordination Needed to Identify and Reduce Duplicative Investments (GAO-04-703), is being released today. GAO's testimony focuses on the extent to which the federal government is coordinating the sharing of geospatial assets, including through oversight measures in place at OMB, in order to identify and reduce redundancies in geospatial data and systems. OMB, cross-government committees, and individual federal agencies have taken actions to coordinate geospatial investments across agencies and with state and local governments. However, these efforts have not been fully successful because (1) a complete and up-to-date strategic plan is missing--the existing strategic plan for coordinating national geospatial resources and activities is out of date and lacks specific measures for identifying and reducing redundancies; (2) federal agencies are not consistently complying with OMB direction to coordinate their investments; and (3) OMB's oversight methods have not been effective in identifying or eliminating instances of duplication, in part because OMB has not collected consistent, key investment information from all agencies. Consequently, agencies continue to independently acquire and maintain potentially duplicative systems. This costly practice is likely to continue unless coordination is significantly improved.
The ability to produce the information needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers and the Congress has been a long-standing challenge for federal agencies. To help address this challenge, many agencies are in the process of replacing their core financial systems as part of their financial management system improvement efforts. Although the implementation of any major system is not a risk-free proposition, organizations that follow and effectively implement disciplined processes can reduce these risks to acceptable levels. The use of the term acceptable levels acknowledges the fact that any systems acquisition has risks and will suffer the adverse consequences associated with defects. However, effective implementation of the disciplined processes reduces the potential for risks to occur and helps prevent those that do occur from having any significant adverse impact on the cost, timeliness, and performance of the project. A disciplined software development and acquisition process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. Although there is no standard set of practices that will ever guarantee success, several organizations, such as the Software Engineering Institute (SEI) and the Institute of Electrical and Electronic Engineers (IEEE), as well as individual experts have identified and developed the types of policies, procedures, and practices that have been demonstrated to reduce development time and enhance effectiveness. The key to having a disciplined system development effort is to have disciplined processes in multiple areas, including project planning and management, requirements management, configuration management, risk management, quality assurance, and testing. Effective processes should be implemented in each of these areas throughout the project life cycle because change is constant. Effectively implementing the disciplined processes necessary to reduce project risks to acceptable levels is hard to achieve because a project must effectively implement several best practices, and inadequate implementation of any one practice may significantly reduce or even eliminate the positive benefits of the others. Successfully acquiring and implementing a new financial management system requires a process that starts with a clear definition of the organization's mission and strategic objectives and ends with a system that meets specific information needs. We have seen many system efforts fail because agencies started with a general need, such as improving financial management, but did not define in precise terms (1) the specific problems they were trying to solve, (2) what their operational needs were, and (3) what specific information requirements flowed from these operational needs. Instead, they plunged into the acquisition and implementation process in the belief that these specifics would somehow be defined along the way. The typical result was that systems were delivered well past anticipated milestones; failed to perform as expected; and, accordingly, were overbudget because of required costly modifications. Undisciplined projects typically show a great deal of productive work at the beginning of the project, but the rework associated with defects begins to consume more and more resources. In response, processes are adopted in the hopes of managing what later turns out to have been unproductive work. 
Generally, these processes are "too little, too late" because sufficient foundations for building the systems were not established or not established adequately. Experience has shown that projects for which disciplined processes are not implemented at the beginning are forced to implement them later when it takes more time and the processes are less effective. A major consumer of project resources in undisciplined efforts is rework (also known as thrashing). Rework occurs when the original work has defects or is no longer needed because of changes in project direction. Disciplined organizations focus their efforts on reducing the amount of rework because it is expensive. Experts have reported that fixing a defect during the testing phase costs anywhere from 10 to 100 times the cost of fixing it during the design or requirements phase. Projects that are unable to successfully address their rework will eventually only be spending their time on rework and the associated processes rather than on productive work. In other words, the project will continually find itself reworking items. We found that HHS had adopted some best practices in its development of UFMS. The project had support from senior officials and oversight by independent experts, commonly called independent verification and validation (IV&V) contractors. We also view HHS' decision to follow a phased implementation to be a sound approach. However, at the time of our review, HHS had not effectively implemented several disciplined processes essential to reducing risks to acceptable levels and therefore key to a project's success, and had adopted other practices that put the project at unnecessary risk. HHS officials told us that they had carefully considered the risks associated with implementing UFMS and that they had put in place strategies to manage these risks and to allow the project to meet its schedule within budget. However, we found that HHS had focused on meeting its schedule to implement the first phase of the new system at the Centers for Disease Control and Prevention (CDC) in October 2004, to the detriment of disciplined processes and thus had introduced unnecessary risks that may compromise the system's cost, schedule, and performance. We would now like to briefly highlight a few of the key disciplined processes that HHS had not fully implemented at the time of our review. These matters are discussed in detail in our report. Requirements management. Requirements are the specifications that system developers and program managers use to design, develop, and acquire a system. Requirements management deficiencies have historically been a root cause of systems that do not meet their cost, schedule, and performance objectives. Effective requirements management practices are essential for ensuring the intended functionality will be included in the system and are the foundation for testing. We found significant problems in HHS' requirements management process and that HHS had not developed requirements that were clear and unambiguous. Testing. Testing is the process of executing a program with the intent of finding errors. Without adequate testing, an organization (1) is taking a significant risk that substantial defects will not be detected until after the system is implemented and (2) does not have reasonable assurance that new or modified systems will function as planned. 
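The cost escalation cited above can be made concrete with a simple calculation. The following Python sketch compares the relative rework cost of a project that catches most defects during requirements and design with one that catches most defects during testing; the defect counts and base cost are hypothetical, and the 10x and 100x multipliers are simply the low and high ends of the expert range cited above.

# Illustrative arithmetic only: the 10x-100x multipliers reflect the range of
# expert estimates cited above; the defect counts and base cost are hypothetical.
BASE_COST = 1.0  # relative cost of fixing a defect during requirements/design

def rework_cost(defects_found_early, defects_found_in_test, test_multiplier):
    """Total relative rework cost for a given split of early vs. late defect detection."""
    return (defects_found_early * BASE_COST
            + defects_found_in_test * BASE_COST * test_multiplier)

for multiplier in (10, 100):  # low and high ends of the cited range
    disciplined = rework_cost(90, 10, multiplier)    # most defects caught early
    undisciplined = rework_cost(10, 90, multiplier)  # most defects caught in testing
    print(f"multiplier {multiplier}x: disciplined={disciplined:.0f}, "
          f"undisciplined={undisciplined:.0f} (ratio {undisciplined / disciplined:.1f}x)")

Even at the low end of the cited range, the late-detection project spends several times as much on rework, which is why disciplined requirements management and early testing matter.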
We found that HHS faced challenges in implementing a disciplined testing program, because, first of all, it did not have an effective requirements management system that produced clear, unambiguous requirements upon which to build its testing efforts. In addition, HHS had scheduled its testing activities, including those for converting data from existing systems to UFMS, late in the implementation cycle leaving little time to correct defects identified before the scheduled deployment in October 2004. Project management and oversight using quantitative measures. We found that HHS did not have quantitative metrics that allowed it to fully understand (1) its capability to manage the entire UFMS effort; (2) how problems in its management processes would affect the UFMS cost, schedule, and performance objectives; and (3) the corrective actions needed to reduce the risks associated with the problems identified with its processes. Such quantitative measures are essential for adequate project management oversight. Without such information, HHS management can only focus on the project schedule and whether activities have occurred as planned, not on whether the substance of the activities achieved their system development objectives. As we note in our report, this is not an effective practice. Risk management. We noted that HHS routinely closed its identified risks on the premise that they were being addressed. Risk management is a continuous process to identify, monitor, and mitigate risks to ensure that the risks are being properly controlled and that new risks are identified and resolved as early as possible. An effective risk management process is designed to mitigate the effects of undesirable events at the earliest possible stage to avoid costly consequences. In addition, HHS' effectiveness in managing the processes associated with its data conversion and UFMS interfaces will be critical to the success of this project. For example, CDC's ability to convert data from its existing systems to the new system will be crucial to helping ensure that UFMS will provide the kind of data needed to manage CDC's programs and operations. The adage "garbage in garbage out" best describes the adverse impact. Furthermore, HHS expects that once UFMS is fully deployed, the system will need to interface with about 110 other systems, of which about 30 system interfaces are needed for the CDC deployment. Proper implementation of the interfaces between UFMS and the other systems it receives data from and sends data to is essential for providing data integrity and ensuring that UFMS will operate as it should and provide the information needed to help manage its programs and operations. Compounding these UFMS-specific problems are departmentwide weaknesses we have previously reported in information technology (IT) investment management, enterprise architecture, and information security. Specifically, HHS had not established the IT management processes needed to provide UFMS with a solid foundation for development and operation. Such IT weaknesses increase the risk that UFMS will not achieve planned results within the estimated budget and schedule. We will now highlight the IT management weaknesses that HHS must overcome: Investment management. HHS had weaknesses in the processes it uses to select and control its IT investments. 
Among the weaknesses we previously identified, HHS had not (1) established procedures for the development, documentation, and review of IT investments by its review boards or (2) documented policies and procedures for aligning and coordinating investment decision making among its investment management boards. Until HHS addresses weaknesses in its selection or control processes, IT projects like UFMS will face an increased likelihood that the projects will not be completed on schedule and within estimated costs. Enterprise architecture. While HHS is making progress in developing an enterprise architecture that incorporates UFMS as a central component, most of the planning and development of the UFMS IT investment had occurred without the guidance of an established enterprise architecture. An enterprise architecture is an organizational blueprint that defines how an entity operates today and how it intends to operate in the future and invest in technology to transition to this future state. Our experience with other federal agencies has shown that projects developed without the constraints of an established enterprise architecture are at risk of being duplicative, not well integrated, unnecessarily costly to maintain and interface, and ineffective in supporting missions. Information security. HHS had not yet fully implemented the key elements of a comprehensive security management program and had significant and pervasive weaknesses in its information security controls. The primary objectives of information security controls are to safeguard data, protect computer application programs, prevent unauthorized access to system software, and ensure continued operations. Without adequate security controls, UFMS cannot provide reasonable assurance that the system is protected from loss due to errors, fraud and other illegal acts, disasters, and incidents that cause systems to be unavailable. Finally, we believe it is essential that an agency take the necessary steps to ensure that it has the human capital capacity to design, implement, and operate a financial management system. We found that staff shortages and limited strategic workforce planning have resulted in the project not having the resources needed to effectively design, implement, and operate UFMS. We identified the following weaknesses: Staffing. HHS had not filled positions in the UFMS Program Management Office that were identified as needed. Proper human capital planning includes identifying the workforce size, skills mix, and deployment needed for mission accomplishment and to create strategies to fill the gaps. Scarce resources could significantly jeopardize the project's success and have led to several key UFMS deliverables being significantly behind schedule. Strategic workforce planning. HHS had not yet fully developed key workforce planning tools, such as the CDC skills gap analysis, to help transform its workforce so that it can effectively use UFMS. Strategic workforce planning focuses on developing long-term strategies for acquiring, developing, and retaining an organization's total workforce (including full- and part-time federal staff and contractors) to meet the needs of the future. Strategic workforce planning is essential for achieving the mission and goals of the UFMS project. By not identifying staff with the requisite skills to operate such a system and by not identifying gaps in needed skills and filling them, HHS has not optimized its chances for the successful implementation and operation of UFMS. 
To address the range of problems we have just highlighted, our report includes 34 recommendations that focus on mitigating the risks associated with this project. We made 8 recommendations related to the initial deployment of UFMS at CDC that are specifically tied to implementing critical disciplined processes. In addition, we recommended that until these 8 recommendations are substantially addressed, HHS delay the initial deployment. The remaining 25 recommendations were centered on developing an appropriate foundation for moving forward and focused on (1) disciplined processes, (2) IT security controls, and (3) human capital issues. In its September 7, 2004, response to a draft of our report, HHS disagreed regarding management of the project and whether disciplined processes were being followed. In its comments, HHS characterized the risk in its approach as the result, not of a lack of disciplined processes, but of an aggressive project schedule. From our perspective, this project demonstrated the classic symptoms of a schedule-driven effort for which key processes had been omitted or shortcutted, thereby unnecessarily increasing risk. As we mentioned at the outset of our testimony, this is a multiyear project with an estimated completion date in fiscal year 2007 and a total estimated cost of over $700 million. With a project of this magnitude and importance, we stand by our position that it is crucial for the project to adhere to disciplined processes that represent best practices. Therefore, in order to mitigate its risk to an acceptable level, we continue to believe it is essential for HHS to adopt and effectively implement our 34 recommendations. In commenting on our draft report, HHS also indicated that actions had either been taken, were under way, or were planned that address a number of our recommendations. In addition, HHS subsequently contacted us on September 23, 2004, to let us know that it had decided to delay the implementation of a significant amount of functionality associated with the CDC deployment from October 2004 until April 2005 in order to address the issues that had been identified with the project. HHS also provided us with copies of IV&V reports and other documentation that had been developed since our review. Delaying implementation of significant functionality at CDC is a positive step forward given the risks associated with the project. This delay, by itself, will not reduce the risk to an acceptable level, but will give HHS a chance to implement the disciplined processes needed to do so. HHS will face a number of challenges in the upcoming 6 months to address the weaknesses in its management of the project that were discussed in our report. At a high level, the key challenge will be to implement an event driven project based on effectively implemented disciplined processes, rather than a schedule-driven project. It will be critical as well to address the problems noted in the IV&V reports that were issued during and subsequent to our review. If the past is prologue, taking the time to adhere to disciplined processes will pay dividends in the long term. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this statement, please contact Jeffrey C. Steinhoff, Managing Director, Financial Management and Assurance, who may be reached at (202) 512-2600 or by e-mail at [email protected], or Keith A. 
Rhodes, Chief Technologist, Applied Research and Methodology Center for Engineering and Technology, who may be reached at (202) 512-6412 or by e-mail at [email protected]. Other key contributors to this testimony include Kay Daly, Michael LaForge, Chris Martin, and Mel Mench.
GAO has previously reported on systemic problems the federal government faces in achieving the goals of financial management reform and the importance of using disciplined processes for implementing financial management systems. As a result, the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform, asked GAO to review and evaluate the agencies' plans and ongoing efforts for implementing financial management systems. The results of GAO's review of the Department of Health and Human Services' (HHS) ongoing effort to develop and implement the Unified Financial Management System (UFMS) are discussed in detail in the report Financial Management Systems: Lack of Disciplined Processes Puts Implementation of HHS' Financial System at Risk (GAO-04-1008). In this report, GAO makes 34 recommendations focused on mitigating risks associated with the project. In light of this report, the Subcommittee asked GAO to testify on the challenges HHS faces in implementing UFMS. HHS had not effectively implemented several disciplined processes, which are accepted best practices in systems development and implementation, and had adopted other practices that put the project at unnecessary risk. Although the implementation of any major system is not a risk-free proposition, organizations that follow and effectively implement disciplined processes can reduce these risks to acceptable levels. While GAO recognized that HHS had adopted some best practices related to senior-level support, oversight, and phased implementation, GAO noted that HHS had focused on meeting its schedule to the detriment of disciplined processes. GAO found that HHS had not effectively implemented several disciplined processes to reduce risks to acceptable levels, including requirements management, testing, project management and oversight using quantitative measures, and risk management. Compounding these problems are departmentwide weaknesses in information technology management processes needed to provide UFMS with a solid foundation for development and operation, including investment management, enterprise architecture, and information security. GAO also identified human capital issues that significantly increase the risk that UFMS will not fully meet one or more of its cost, schedule, and performance objectives, including staffing and strategic workforce planning. HHS stated that it had an aggressive implementation schedule, but disagreed that a lack of disciplined processes is placing the UFMS program at risk. GAO firmly believes that if HHS continues to follow an approach that is schedule-driven and shortcuts key disciplined processes, it is unnecessarily increasing its risk. GAO stands by its position that adherence to disciplined processes is crucial, particularly with a project of this magnitude and importance. HHS indicated that it plans to delay deployment of significant functionality associated with its UFMS project for at least 6 months. This decision gives HHS a good opportunity to effectively implement disciplined processes to enhance the project's chances of success.
Over the last 15 years, the federal government's increasing demand for IT has led to a dramatic rise in the number of federal data centers and a corresponding increase in operational costs. According to OMB, the federal government had 432 data centers in 1998 and more than 1,100 in 2009. Operating such a large number of centers is a significant cost to the federal government, including costs for hardware, software, real estate, and cooling. For example, according to the Environmental Protection Agency, the electricity cost to operate federal servers and data centers across the government is about $450 million annually. According to the Department of Energy, data center spaces can consume 100 to 200 times more electricity than a standard office space. According to OMB, reported server utilization rates as low as 5 percent and limited reuse of data centers within or across agencies lend further credence to the need to restructure federal data center operations to improve efficiency and reduce costs. Concerned about the size of the federal data center inventory and the potential to improve the efficiency, performance, and the environmental footprint of federal data center activities, OMB, under the direction of the Federal CIO, established FDCCI in February 2010. This initiative's four high-level goals are to promote the use of "green IT" by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. As part of FDCCI, OMB required the 24 agencies to identify a senior, dedicated data center consolidation program manager to lead their agency's consolidation efforts. In addition, agencies were required to submit an asset inventory baseline and other documents that would result in a plan for consolidating their data centers. The asset inventory baseline was to contain detailed information on each data center and identify the consolidation approach to be taken for each one. It would serve as the foundation for developing the final data center consolidation plan. The data center consolidation plan would serve as a technical road map and approach for achieving the targets for infrastructure utilization, energy efficiency, and cost efficiency. While OMB is primarily responsible for FDCCI, the agency designated two agency CIOs to be executive sponsors to lead the effort within the Federal CIO Council, the principal interagency forum to improve IT- related practices across the federal government. In addition, OMB identified two additional organizations to assist in managing and overseeing FDCCI: The GSA FDCCI Program Management Office is to support OMB in the planning, execution, management, and communications for FDCCI. The Data Center Consolidation Task Force is comprised of the data center consolidation program managers from each agency. According to its charter, the Task Force is critical to supporting collaboration across the FDCCI agencies, including identifying and disseminating key pieces of information, solutions, and processes that will help agencies in their consolidation efforts. In an effort to accelerate federal data center consolidation, OMB has directed agencies to use cloud computing as an approach to migrating or replacing systems with Internet-based services and resources. 
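Before turning to OMB's cloud computing policy, the utilization and electricity figures cited above suggest a rough illustration of the savings rationale behind FDCCI. The Python sketch below uses the reported $450 million annual electricity cost and 5 percent utilization rate; the 40 percent target utilization and the assumption that power consumption scales with the number of servers are hypothetical simplifications chosen only for illustration.

# Back-of-the-envelope sketch only. The $450 million electricity figure and the
# 5 percent utilization rate come from the EPA and OMB figures cited above; the
# target utilization and the proportional-scaling assumption are hypothetical.
annual_electricity_cost = 450_000_000  # reported cost to power federal servers and data centers
current_utilization = 0.05             # reported server utilization as low as 5 percent
target_utilization = 0.40              # assumed utilization after virtualization/consolidation

# If workloads were packed onto fewer, better-utilized servers, the server count
# (and, under this simplification, the power draw) could shrink in proportion
# to the utilization gain.
server_reduction = 1 - current_utilization / target_utilization
potential_power_savings = annual_electricity_cost * server_reduction

print(f"Assumed server reduction: {server_reduction:.0%}")
print(f"Illustrative electricity savings: ${potential_power_savings / 1e6:.0f} million per year")

Actual savings would depend on facility overhead, migration costs, and which workloads can be consolidated, which is why agencies were asked to develop detailed inventories and consolidation plans.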
In December 2010, in its 25 Point IT Reform Plan, OMB identified cloud computing as having the potential to play a major part in achieving operational efficiencies in the federal government's IT environment. According to OMB, cloud computing brings a wide range of benefits, including that it is (1) economical--a low initial investment is required to begin and additional investment is needed only as system use increases, (2) flexible--computing capacity can be quickly and easily added or subtracted, and (3) fast--long procurements are eliminated, while providing a greater selection of available services. To help achieve these benefits, OMB issued a "Cloud First" policy that required federal agencies to increase their use of cloud computing whenever a secure, reliable, and cost-effective cloud solution exists. In prior reports--GAO, Information Technology Reform: Progress Made but Future Cloud Computing Efforts Should be Better Planned, GAO-12-756 (Washington, D.C.: July 11, 2012), and GAO, Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing, GAO-10-513 (Washington, D.C.: May 27, 2010)--we noted that agencies reported challenges in implementing cloud computing, including meeting federal security requirements unique to government agencies, such as continuous monitoring and maintaining an inventory of systems. Agencies also noted that, because of the on-demand, scalable nature of cloud services, it can be difficult to define specific quantities and costs and, further, that these uncertainties make contracting and budgeting difficult due to the fluctuating costs associated with scalable and incremental cloud service procurements. Finally, agencies cited other challenges associated with obtaining guidance and acquiring knowledge and expertise, among other things. More recently, in March 2013, OMB issued a memorandum documenting the integration of FDCCI with the PortfolioStat initiative. Launched by OMB in March 2012, PortfolioStat requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending, demonstrate how their IT investments align with their missions and business functions, and make decisions on eliminating duplication. OMB's March 2013 memorandum discusses OMB's efforts to further the PortfolioStat initiative by incorporating several changes, such as consolidating previously collected IT-related plans, reports, and data submissions. The memorandum also establishes new agency reporting requirements and related time frames. Specifically, agencies are no longer required to submit the data center consolidation plans previously required under FDCCI. Rather, agencies are to submit information to OMB via three primary means--an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. In July 2012, we issued a report on the status of FDCCI and found that, while agencies' 2011 inventories and plans had improved as compared to their 2010 submissions, significant weaknesses still remained. Specifically, while all 24 agencies reported on their inventories to some extent, only 3 had submitted a complete inventory. The remaining 21 agency submissions had weaknesses in several areas. For example, while most agencies provided complete information on their virtualization efforts, network storage, and physical servers, 18 agencies did not provide complete data center information, such as data center type, gross floor area, and target date for closure. 
In particular, several agencies fully reported on gross floor area and closure information, but partially reported data center costs. In addition, 17 agencies did not provide full information on their IT facilities and energy usage. For example, the Department of Labor partially reported on total data center IT power capacity and average data center electricity usage and did not report any information on total data center power capacity. We also noted that 3 agencies had submitted their inventory using an outdated format, in part, because OMB had not publicly posted its revised guidance. Figure 1 provides an assessment of the completeness of agencies' 2011 inventories, by key element. Officials from several agencies reported that some of the required information was unavailable at certain data center facilities. We reported that, because the continued progress of FDCCI is largely dependent on accomplishing goals built on the information provided by agency inventories, it will be important for agencies to continue to work on completing their inventories, thus providing a sound basis for their savings and utilization forecasts. In addition, while all 24 agencies submitted consolidation plans to OMB, only 1 had submitted a complete plan. For the remaining 23 agencies, selected elements were missing from each plan. For example, among the 24 agencies, all provided complete information on their qualitative impacts, and nearly all included a summary of the consolidation approach, a well-defined scope for data center consolidation, and a high-level timeline for consolidation efforts. However, most notably, 21 agencies did not fully report their expected cost savings; of those, 13 agencies provided partial cost savings information and 8 provided none. Among the reasons that this information was not included, a Department of Defense official told us that it was challenging to gather savings information from all the department's components, while a National Science Foundation official told us the information was not included because the agency had not yet realized any cost savings and so had nothing to report. Other significant weaknesses were that many agencies' consolidation plans did not include a full cost-benefit analysis with aggregate year-by-year investment and cost savings calculations through fiscal year 2015, a complete master program schedule, and quantitative goals, such as complete savings and utilization forecasts. Figure 2 provides an assessment of the completeness of agencies' 2011 consolidation plans, by key element. Officials from several agencies reported that the plan information was still being developed. We concluded that, in the continued absence of completed consolidation plans, agencies are at risk of implementing their respective initiatives without a clear understanding of their current state and proposed end state and not being able to realize anticipated savings, improved infrastructure utilization, or energy efficiency. We also found that while agencies were experiencing data center consolidation successes, they were also encountering challenges. While almost 20 areas of success were reported, the 2 most often cited focused on virtualization and cloud services as consolidation solutions, and working with other agencies and components to find consolidation opportunities. 
Further, while multiple challenges were reported, the two most common challenges were both specifically related to FDCCI data reporting required by OMB: obtaining power usage information and providing a quality data center asset inventory. We further reported that, to assist agencies with their data center consolidation efforts, OMB had sponsored the development of an FDCCI total cost of ownership model that was intended to help agencies refine their estimated costs for consolidation; however, agencies were not required to use the cost model as part of their cost estimating efforts. We stated that, until OMB requires agencies to use the model, agencies will likely continue to use a variety of methodologies and assumptions in establishing consolidation estimates, and it will remain difficult to summarize projections across agencies. Accordingly, we reiterated our prior recommendation that agencies complete missing plan and inventory elements and made new recommendations to OMB to publicly post guidance updates on the FDCCI website and to require agencies to use its cost model. OMB generally agreed with our recommendations and has since taken steps to address them. More specifically, OMB posted its 2012 guidance for updating data center inventories and plans, as well as guidance for reporting consolidation progress, to the FDCCI public website. Further, the website has been updated to provide prior guidance documents and OMB memoranda. In addition, OMB's 2012 consolidation plan guidance requires agencies to use the cost model as they develop their 2014 budget request. We and other federal agencies have documented the need for initiatives to develop performance measures to gauge progress. According to government and industry leading practices, performance measures should be measurable, outcome-oriented, and actively tracked and reported. For FDCCI, OMB originally established goals for data center closures and the expected cost savings. Specifically, OMB expected to consolidate approximately 40 percent of the total number of agency data centers and achieve $3 billion in cost savings by the end of 2015, and established the means of measuring performance against those goals through several methods. The 24 agencies have collectively made progress towards OMB's data center consolidation goal to close 40 percent, or approximately 1,253 of the 3,133 data centers, by the end of 2015. To track their progress, OMB requires agencies to report quarterly on their completed and planned performance against that goal via an online portal. After being reviewed for data quality and security concerns, the GSA FDCCI Program Management Office makes the performance information available on the federal website dedicated to providing the public with access to datasets developed by federal agencies, http://data.gov. As of February 2013, agencies had collectively reported closing a total of 420 data centers by the end of December 2012, and were planning to close an additional 396 data centers--for a total of 816--by September 2013. While the number of data centers that agencies are planning to close from October 2013 through December 2015 (the planned completion date of FDCCI) is not reported on http://data.gov, OMB's July 2012 quarterly report to Congress on the status of federal IT reform efforts contains other information on agencies' data center closure plans. Among other things, the report states that agencies have collectively committed to closing a total of 968 data centers by the end of 2015. 
According to OMB staff from the Office of E-Government and Information Technology, this figure represents the number of commitments reported by agencies, as compared to the initiative's overall goal of closing 1,253 data centers by December 2015. The agencies have not identified the remaining 285 consolidation targets to achieve that goal. OMB's January 2013 quarterly report to Congress does not provide any new information about either planned or completed agency data center closures. See figure 3 for a graphical depiction of agencies' progress against OMB's data center consolidation goal. However, OMB has not measured agencies' progress against the cost savings goal of $3 billion by the end of 2015. According to a staff member from OMB's Office of E-Government and Information Technology, as of November 2012, the total savings to date had not been tracked but were believed to be minimal. The staff member added that, although data center consolidation involves reductions in costs for existing facilities and operations, it also requires investment in new and upgraded facilities and, as a result, any current savings are often offset by the reinvestment of those funds into ongoing consolidation efforts. Finally, the staff member stated that OMB recognizes the importance of tracking cost savings and is working to identify a consistent and repeatable method for tracking cost savings as part of the integration of FDCCI with PortfolioStat, but stated that there was no time frame for when this would occur. The lack of initiativewide cost savings data makes it unclear whether agencies will be able to achieve OMB's projected savings of $3 billion by the end of 2015. In previous work, we found that agencies' cost savings projections were incomplete and, in some cases, unreliable. Specifically, in July 2012, we reported that most agencies had not reported their expected cost savings in their 2011 consolidation plans. Officials from several agencies reported that this information was still being developed. Notwithstanding these weaknesses, we found that agencies collectively reported anticipating about $2.4 billion in cumulative cost savings by the end of 2015 (the planned completion date of FDCCI). With less than 3 years remaining to the 2015 FDCCI deadline, almost all agencies still need to complete their inventories and consolidation plans and continue to identify additional targets for closure. Because closing facilities is a significant driver in realizing consolidation savings, the time required to realize planned cost savings will likely extend beyond the current 2015 time frame. With at least one agency not planning on realizing savings until after 2015 and other agencies having not yet reported on planned savings, there is an increased likelihood that agencies will either need more time to meet the overall FDCCI savings goal or that there are additional savings to be realized in years beyond 2015. Until OMB tracks cost savings data, the agency will be limited in its ability to determine whether or not FDCCI is on course toward achieving planned performance goals. Additionally, extending the horizon for realizing planned cost savings could provide OMB and FDCCI stakeholders with input and information on the benefits of consolidation beyond OMB's initial goal. We have previously reported that oversight and governance of major IT initiatives help to ensure that the initiatives meet their objectives and performance goals.
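For readers who want to trace the closure and savings arithmetic discussed above, the short sketch below recomputes the key figures from the numbers reported in this statement. The script and its variable names are purely illustrative; it does not query http://data.gov or any OMB reporting system.

```python
# Recompute the FDCCI closure and savings figures cited in this statement.
# All inputs are the numbers reported above; nothing is fetched from data.gov.

total_data_centers = 3133                         # data centers identified by agencies
closure_goal = round(0.40 * total_data_centers)   # OMB goal: close about 40 percent

closed_by_dec_2012 = 420          # closures reported through December 2012
planned_by_sep_2013 = 396         # additional closures planned by September 2013
committed_by_2015 = 968           # total closures agencies committed to by end of 2015

print("Closure goal:", closure_goal)                                                # ~1,253
print("Closed or planned by Sep 2013:", closed_by_dec_2012 + planned_by_sep_2013)   # 816
print("Commitments short of the goal:", closure_goal - committed_by_2015)           # 285

savings_goal = 3.0       # OMB savings goal, in billions of dollars, by end of 2015
projected_savings = 2.4  # agencies' collective projection from their 2011 plans
print(f"Projected savings gap: ${savings_goal - projected_savings:.1f} billion")    # $0.6 billion
```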
When an initiative is governed by multiple entities, the roles and responsibilities of those entities should be clearly defined and documented, including the responsibilities for coordination among those entities. We have further reported, and OMB requires, that an executive-level body be responsible for overseeing major IT initiatives. Among other things, we have reported that this body should have documented policies and procedures for management oversight of the initiative, regularly track progress against established performance goals, and take corrective actions as needed. Oversight and governance of FDCCI is the responsibility of several organizations--the Task Force, the GSA FDCCI Program Management Office, and OMB. Some roles and responsibilities for these organizations are documented in the Task Force charter and OMB memoranda, while others are described in OMB's January 2013 quarterly report to Congress or have been communicated by agency officials. See table 1 for a listing of the FDCCI oversight and governance entities and their key responsibilities. The Task Force, the GSA FDCCI Program Management Office, and OMB have performed a wide range of FDCCI responsibilities. For example, the Task Force holds monthly meetings to, among other things, communicate and coordinate consolidation best practices and to identify policy and implementation issues that could negatively impact the ability of agencies to meet their goals. Further, the Task Force has assisted agencies with the development of their consolidation plans by discussing lessons learned during its monthly meetings and disseminating new OMB guidance. GSA has collected responses to OMB-mandated document deliveries, including agencies' consolidation inventories and plans, on an annual basis. In addition, GSA has collected data related to FDCCI data center closure updates, disseminated the information publically on the consolidation progress dashboard on http://data.gov, and provided ad hoc and quarterly updates to OMB regarding these data. Lastly, as the executive-level body, OMB issued FDCCI policies and guidance in a series of memoranda that, among other things, required agencies to provide an updated data center asset inventory at the end of every third quarter and an updated consolidation plan at the end of every fourth quarter. In addition, OMB launched a publically available electronic dashboard to track and report on agencies' consolidation progress. However, oversight of FDCCI is not being performed in other key areas. For example, the Task Force has not provided oversight of the agency consolidation peer review process. According to officials, the purpose of the peer review process is for agencies to get feedback on their consolidation plans and potential improvement suggestions from a partner agency with a data center environment of similar size and complexity. While the Task Force documented the agency pairings for 2011 and 2012 reviews, it did not provide agencies with guidance for executing their peer reviews, including information regarding the specific aspects of agency plans to be reviewed and the process for providing feedback. As a result, the peer review process did not ensure that significant weaknesses in agencies' plans were being identified. As previously mentioned, in July 2012, we reported that all of the agencies' plans were incomplete except for one.
In addition, we noted that three agencies had submitted their June 2011 inventory updates, a required component of consolidation documentation, in an incorrect format--an outdated template. The GSA FDCCI Program Management Office has not executed its responsibilities related to analyzing agencies' inventories and plans and reviewing these documents for errors. In July 2012, we reported on agencies' progress toward completing their inventories and plans and found that only three agencies had submitted a complete inventory and only one agency had submitted a complete plan, and that most agencies did not fully report cost savings information and eight agencies did not include any cost savings information. The lack of cost savings information is particularly important because, as previously noted, initiativewide cost savings have not been determined--a shortcoming that could potentially be addressed if agencies had submitted complete plans that addressed cost savings realized, as required. Although OMB is the approval authority of agencies' consolidation plans, it has not approved agencies' submissions on the basis of their completeness. In an October 2010 memorandum, OMB stated that its approval of agencies' consolidation plans was in progress and would be completed by December 2010. However, OMB did not issue a subsequent memorandum indicating that it had approved agencies' plans, or an updated time frame for completing its review. This is important because, in July 2011 and July 2012, we reported that agencies' consolidation plans had significant weaknesses and that nearly all were incomplete. OMB has not reported on agencies' progress against its key performance goal of achieving $3 billion in cost savings by the end of 2015. Although the 2012 Consolidated Appropriations Act included a provision directing OMB to submit quarterly progress reports to the Senate and House Appropriations Committees that identify savings achieved through governmentwide IT reform efforts, OMB has not yet reported on cost savings realized for FDCCI. Instead, the agency's quarterly reports had only described planned FDCCI-related savings and stated that future reports will identify savings realized. As of the January 2013 report, no such savings have been reported. These weaknesses in oversight are due, in part, to OMB not ensuring that assigned responsibilities are being executed. Improved oversight could better position OMB to assess progress against its cost savings goal and minimize agencies' risk of not realizing anticipated cost savings. OMB's recent integration of FDCCI and PortfolioStat made significant changes to data center consolidation oversight and reporting requirements. According to OMB's March 2013 memorandum, to more effectively measure the efficiency of an agency's data center assets, agency progress will no longer be measured solely by closures. Instead, agencies will also be measured by the extent to which their data centers are optimized for total cost of ownership by incorporating metrics for energy, facility, labor, and storage, among other things. In addition, OMB stated that the Task Force will categorize agencies' data center populations into two categories--core and non-core data centers--for which the memorandum does not provide specific definitions. Additionally, as previously discussed, agencies are no longer required to submit the data center consolidation plans previously required under FDCCI. 
Rather, agencies are to submit information to OMB via three primary means--an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. Using these tools, an agency is to report on, among other things, its approach to optimizing its data centers; the state of its data center population, including the number of core and non-core data centers; the agency's progress on closures; and the extent to which an agency's data centers are optimized for total cost of ownership. However, OMB's memorandum does not fully address the revised goals and reporting requirements of the combined initiative. Specifically, OMB stated that its new goal is to close 40 percent of non-core data centers but, as previously mentioned, the definitions for core and non-core data center were not provided. Therefore, the total number of data centers to be closed under OMB's revised goal cannot be determined. In addition, although OMB has indicated which performance measures it plans to use going forward, such as those related to data center energy and labor, it has not documented the specific metrics for agencies to report against. The memorandum indicates that these will be developed by the Task Force, but does not provide a time frame for when this will be completed. Lastly, although OMB has previously stated that PortfolioStat is expected to result in savings of approximately $2.5 billion through 2015, its memorandum does not establish a new cost savings goal for FDCCI, nor does it refer to the previous goal of saving $3 billion. Instead, OMB states that all cost savings goals previously associated with FDCCI will be integrated into broader agency efforts to reshape their IT portfolios, but does not provide a revised savings estimate. The lack of a new cost savings goal will further limit OMB's ability to determine whether or not the new combined initiative is on course toward achieving its planned objectives. In addition, several important oversight responsibilities related to data center consolidation have not been addressed. For example, with the elimination of the requirement to submit separate data center consolidation plans under the new combined initiative, the memorandum does not discuss whether either the Task Force or the GSA Program Management Office will continue to be used in their same oversight roles for review of agencies' documentation. In addition, while the memorandum discusses OMB's responsibility for reviewing agencies' draft strategic plans, it does not discuss the responsibility for approving them. In the absence of defined oversight assignments and responsibilities, it cannot be determined how OMB will have assurance that agencies' plans meet the revised program requirements and, moving forward, whether these plans support the goals of the combined initiative. In our report being released today, we are making recommendations to better ensure that FDCCI achieves expected cost savings and to improve executive-level oversight of the initiative. 
Specifically, we are recommending that the Director of OMB direct the Federal CIO to track and annually report on key data center consolidation performance measures, such as the size of data centers being closed and cost savings to date; extend the time frame for achieving cost savings related to data center consolidation beyond the current 2015 horizon, to allow time to meet the initiative's planned cost savings goal; and establish a mechanism to ensure that the established responsibilities of designated data center consolidation oversight organizations are fully executed, including responsibility for the documentation and oversight of the peer review process, the review of agencies' updated consolidation inventories and plans, and approval of updated consolidation plans. The Federal CIO stated that the agency concurred with the first and third recommendations. Regarding the second recommendation, OMB neither agreed nor disagreed. However, the Federal CIO stated that, as the FDCCI and PortfolioStat initiatives proceed and continue to generate savings, OMB will consider whether updates to the current time frame are appropriate. In summary, more than 3 years into FDCCI, agencies have made progress in their efforts to close data centers. However, many key aspects of the integration of FDCCI and PortfolioStat, including new data center consolidation and cost savings goals, have not yet been defined. Further compounding this lack of clarity, total cost savings to date from data center consolidation efforts have not been determined, creating uncertainty as to whether OMB will be able to meet its original cost savings goal of $3 billion by the end of 2015. In the absence of tracking and reporting on cost savings and additional time for agencies to achieve planned savings, OMB will be challenged in ensuring that the initiative, under this new direction, is meeting its established objectives. Recognizing the importance of effective oversight of major IT initiatives, OMB directed that three oversight organizations--the Task Force, the GSA FDCCI Program Management Office, and OMB--be responsible for federal data center consolidation oversight activities. These organizations have performed a wide range of FDCCI responsibilities, including facilitating collaboration among agencies and developing tools to assist agencies in their consolidation efforts. However, other key oversight activities have not been performed. Most notably, the lack of formal guidance for consolidation plan peer review and approval increases the risk that missing elements will continue to go undetected and that agencies' efforts will not fully support OMB's goals. Further, OMB's March 2013 memorandum does not address whether the Task Force and GSA's Program Management Office will continue their oversight roles, which does not help to mitigate this risk. Finally, while OMB has put in place initiatives to track consolidation progress, consolidation inventories and plans are not being reviewed for errors and cost savings are not being tracked or reported. The collective importance of these activities to federal data center consolidation success reinforces the need for oversight responsibilities to be fulfilled in accordance with established requirements. Chairman Mica, Ranking Member Connolly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at [email protected]. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Justin Booth, Nancy Glover, and Jonathan Ticehurst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2010, as the focal point for information technology management across the government, OMB's Federal Chief Information Officer launched the Federal Data Center Consolidation Initiative--an effort to consolidate the growing number of federal data centers. In July 2011 and July 2012, GAO evaluated 24 agencies' progress and reported that nearly all of the agencies had not completed a data center inventory or consolidation plan and recommended that they do so. GAO was asked to testify on its report, being released today, that evaluated agencies' reported progress against OMB's planned consolidation and cost savings goals, and assessed the extent to which the oversight organizations put in place by OMB for the Federal Data Center Consolidation Initiative are adequately performing oversight of agencies' efforts to meet these goals. In this report, GAO assessed agencies' progress against OMB's goals, analyzed the execution of oversight roles and responsibilities, and interviewed OMB, GSA, and Data Center Consolidation Task Force officials about their efforts to oversee agencies' consolidation efforts. The 24 agencies participating in the Federal Data Center Consolidation Initiative made progress towards the Office of Management and Budget's (OMB) goal to close 40 percent, or 1,253 of the 3,133 total federal data centers, by the end of 2015, but OMB has not measured agencies' progress against its other goal of $3 billion in cost savings by the end of 2015. Agencies closed 420 data centers by the end of December 2012, and have plans to close an additional 548 to reach 968 by December 2015--285 closures short of OMB's goal. OMB has not determined agencies' progress against its cost savings goal because, according to OMB staff, the agency has not determined a consistent and repeatable method for tracking cost savings. This lack of information makes it uncertain whether the $3 billion in savings is achievable by the end of 2015. Until OMB tracks and reports on performance measures such as cost savings, it will be limited in its ability to oversee agencies' progress against key goals. Pursuant to OMB direction, three organizations--the Data Center Consolidation Task Force, the General Services Administration (GSA) Program Management Office, and OMB--are responsible for federal data center consolidation oversight activities; while most activities are being performed, there are still several weaknesses in oversight. Specifically, while the Data Center Consolidation Task Force has established several initiatives to assist agencies in their consolidation efforts, such as holding monthly meetings to facilitate communication among agencies, it has not adequately overseen its peer review process for improving the quality of agencies' consolidation plans. The GSA Program Management Office has collected agencies' quarterly data center closure updates and made the information publically available on an electronic dashboard for tracking consolidation progress, but it has not fully performed other oversight activities, such as conducting analyses of agencies' inventories and plans. OMB has implemented several initiatives to track agencies' consolidation progress, such as establishing requirements for agencies to update their plans and inventories yearly and to report quarterly on their consolidation progress. However, the agency has not approved the plans on the basis of their completeness or reported on progress against its goal of $3 billion in cost savings.
The weaknesses in oversight of the data center consolidation initiative are due, in part, to OMB not ensuring that assigned responsibilities are being executed. Improved oversight could better position OMB to assess progress against its cost savings goal and minimize agencies' risk of not realizing expected cost savings. In March 2013, OMB issued a memorandum that integrated the Federal Data Center Consolidation Initiative with the PortfolioStat initiative, which requires agencies to conduct annual reviews of their information technology investments and make decisions on eliminating duplication, among other things. The memorandum also made significant changes to the federal data center consolidation effort, including the initiative's reporting requirements and goals. Specifically, agencies are no longer required to submit the consolidation plans previously required under the initiative, and the memorandum does not identify a cost savings goal. In its report, GAO recommended that OMB's Federal Chief Information Officer track and report on key performance measures, extend the time frame for achieving planned cost savings, and improve the execution of important oversight responsibilities. OMB agreed with two of GAO's recommendations and plans to evaluate the remaining recommendation related to extending the time frame.
DOD's Real Property Management Program is governed by statute and DOD regulations, directives, and instructions that establish real property accountability and financial reporting requirements. These laws, regulations, directives, and instructions--a selection of which are discussed below--require DOD and the military departments to maintain a number of data elements about their facilities to help ensure efficient property management and thus help identify potential facility consolidation opportunities. This guidance includes Department of Defense Directive 4165.06, Real Property (Oct. 13, 2004, certified current Nov. 18, 2008); Army Regulation 405-70; the Naval Facilities Engineering Command P-78; and Air Force Policy Directive 32-10. The guidance requires, among other things, that real property records be accurate and be managed efficiently and economically. It also requires the military departments to maintain a complete and accurate real property inventory with up-to-date information, to annually certify that the real property inventory has been reconciled, and to ensure that all real property holdings under the military departments' control are being used to the maximum extent possible. Appendix II describes some of the guidance from DOD and the military departments and includes excerpts of the related requirements to manage real property. In managing the real property under their control, the military departments are responsible for implementing real property policies and programs to, among other things, hold or make plans to obtain the land and facilities they need for their own missions and for other DOD components' missions that are supported by the military departments' real property. Additionally, the military departments are required to (1) budget for and financially manage so as to meet their own real property requirements; (2) accurately inventory and account for their land and facilities; and (3) maintain a program monitoring the use of real property to ensure that all holdings under their control are being used to the maximum extent possible consistent with both peacetime and mobilization requirements. The military departments' processes for managing and monitoring the utilization of facilities generally occur at the installation level. According to OSD guidance, inventories are to be conducted every 5 years except for those real property assets designated as historic, which are to be reviewed and physically inventoried every 3 years. According to DOD Instruction 4165.70, the military departments' real property administrators are accountable for maintaining a current inventory count of the military departments' facilities and up-to-date information regarding, among other things, the status, condition, utilization, present value, and remaining useful life of each real property asset. Inventory counts and associated information should be current as of the last day of each fiscal year. When DOD's real property is no longer needed for current or projected defense requirements, it is DOD's policy to dispose of it. In addition, DOD Instruction 4165.70 requires the military departments to periodically review their real property holdings, both land and facilities, to identify unneeded and underused property. The three military departments maintain a number of real property databases that are to be used to manage real property assets for the Army, Navy, Marine Corps, and the Air Force as shown in table 1.
OSD's Base Structure Report Fiscal Year 2013 Baseline (OSD's Base Structure Report) is a summary of DOD's real property inventory and a "snapshot" of DOD's real property data collected as of September 30, 2012, and serves as the beginning balance for fiscal year 2013. The report identifies DOD's real property assets, including buildings, structures, and linear structures, worldwide. Table 2 shows the total assets, percentages, and plant replacement values of real property assets for each of the military departments and the Washington Headquarters Services. OSD compiles and maintains the department's real property assets inventory in a single database, called the Real Property Assets Database. OSD's Real Property Assets Database contains specific reporting data on the military departments' real property records and is considered the single authoritative source for all DOD real property inventory. OSD's objectives for the Real Property Assets Database are to comply with current DOD business architecture, support the DOD standardized real property requirements, and implement DOD Instruction 4165.14: Real Property Inventory and Forecasting. The Real Property Assets Database is the source used for OSD's annual real property reporting that includes the Federal Real Property Profile report and OSD's Base Structure Report. OSD's Base Structure Report is a snapshot of real property assets as of September 30 of the previous fiscal year and serves as the baseline for each contemporaneous fiscal year. It is a consolidated summary of the three military departments' real property inventory data, submitted annually. The three military departments' real property inventory records, which are the source for compiling DOD's real property records on an annual basis, are uploaded to OSD's Real Property Assets Database. Additionally, the Secretaries of the military departments are to certify annually that the real property inventory records have been reconciled. In September 2011, we found that as of September 30, 2010, DOD's Real Property Assets Database reported utilization data for fewer than half of DOD's total inventory of facilities and that much of the data were outdated and did not reflect the true usage of the structures. OSD stated at the time that utilization data in its database did not cover the full DOD inventory because the primary focus of the department's efforts to collect and record such data had been in response to reporting requirements from the Federal Real Property Council, which requires annual reports on utilization of five categories of buildings for the Federal Real Property Profile. However, OSD annually reports all of its real property in its Base Structure Report. Further, we found that when utilization-rate data were recorded in OSD's database, the recorded entry often did not reflect the true usage of the facilities. For example, we found that in fiscal year 2010 the real property data for the Air Force reported a utilization rate of 0 percent for 22,563 buildings that were reported to be in an active status.
As a result, we recommended that the Secretary of Defense direct the Deputy Under Secretary of Defense for Installations and Environment to (1) develop and implement a methodology for calculating and recording utilization data for all types of facilities and to modify their processes for updating and verifying the accuracy of reported utilization data to reflect a facility's true status and (2) develop strategies and measures to enhance the management of DOD's excess facilities after the current demolition program ends, taking into account external factors that might affect future disposal efforts. OSD partially concurred with our first recommendation because it stated that it had some actions already underway to address the recommendation. However, at that time, OSD did not specify what actions it had undertaken to date or the time frames for completing efforts to improve the collection and reporting of utilization data. DOD concurred with our second recommendation, but did not provide any details or specific time frames for efforts to address it. As of June 2014, according to OSD officials, they have not fully implemented these two recommendations. Our body of work on results-oriented management has shown that successful organizations in both the public and private sectors use results-oriented management tools to help achieve desired program outcomes. These tools, or principles, derived from the Government Performance and Results Act (GPRA) of 1993, provide agencies with a management framework for effectively implementing and managing programs and shift program-management focus from measuring program activities and processes to measuring program outcomes. The framework can include various management tools, such as long-term goals, performance goals, and performance measures, which can assist agencies in measuring performance and reporting results. Our prior work has also shown that organizations need effective strategic management planning in order to identify and achieve long-term goals. We have identified key elements that should be incorporated into strategic plans to help establish a comprehensive, results-oriented management framework for programs within DOD. Further, our prior body of work has also shown that organizations conducting strategic planning need to develop a comprehensive, results-oriented management framework to remain operationally effective, efficient, and capable of meeting future requirements. A results-oriented management framework provides an approach whereby program effectiveness is measured in terms of outcome metrics. Approaches to such planning vary according to agency-specific needs and missions; however, irrespective of the context in which planning is done, our prior work has shown that such a strategic plan should contain the following seven critical elements: (1) a comprehensive mission statement; (2) long-term goals; (3) strategies to achieve the goals; (4) use of metrics to gauge progress; (5) identification of key external factors that could affect the achievement of the goals; (6) a discussion of how program evaluations will be used; and (7) stakeholder involvement in developing the plan. In our analysis of OSD's Real Property Assets Database over the past 4 fiscal years, we found that although the department has made some progress in improving its real property records, OSD continued to collect incomplete utilization data for its real property assets.
Specifically, we found that OSD's methodology for calculating and recording utilization data has not changed since our September 2011 report and the data continue to be incomplete and do not encompass all of DOD's assets. OSD guidance requires that utilization rates be included for all categories of its real property asset records. The percentage of total real property assets with a reported utilization rate increased from 46 percent to 53 percent over the past 4 fiscal years, as shown in table 3. For example, as of September 30, 2013, we found that facility utilization data were missing for 245,281 of DOD's 524,189 assets--that is, about 47 percent of its total real property assets. Although the percentage of facilities not reporting any utilization rate decreased since 2011, OSD's fiscal year 2013 database still shows that almost half of DOD's total real property asset records do not include a utilization rate. Further, related to accuracy of the data, we found a number of real property assets reporting a zero utilization rate, which may indicate either inaccurate records or some type of consolidation opportunity. We used three data fields to determine whether a facility's utilization was consistently reported in OSD's Real Property Assets Database. Specifically, we used the following three criteria--a utilization rate reported as "zero" (indicating the facility was not being utilized), a status reported as "active" (indicating the facility was being utilized), and the type of asset described as a "building." We found that as of September 30, 2013, OSD reported 7,596 buildings across the four military services with inconsistent or inaccurate reported utilization, as shown in table 4. We then assessed these facilities and found that 30 percent (2,255 of the 7,596 facilities) were also described as "utilized" in the Real Property Assets Database. Having a utilization rate of zero while being in an active status and described as utilized shows potential inconsistencies or inaccuracies in the data. We analyzed the inconsistencies across the four services and found the following: The Army reported 6,391 real property records with a zero utilization rate, but 1,734 (about 27 percent) of those buildings were described as utilized; of the remainder, 37 buildings (about 0.6 percent) were described as underutilized and 4,620 (about 72 percent) had no utilization description. The Navy's 13 buildings and the Marine Corps' 18 buildings that were reported with a zero utilization rate had no utilization description. Of the Air Force's 1,174 buildings reported with a zero utilization rate, 521 (about 44 percent) were described as utilized and 653 (about 56 percent) had no utilization description. Our analysis also showed that OSD has made some improvements in addressing some other inaccuracies in the utilization rates in its real property records. For example, we found that OSD corrected its real property records for those reported with a utilization rate greater than 100 percent. Specifically, our analysis showed that OSD had previously reported 2,270; 2,093; and 999 real property records with a utilization rate greater than 100 percent in fiscal years 2010, 2011, and 2012, respectively. In fiscal year 2013, OSD had addressed this inaccuracy and reported no real property records with a utilization rate greater than 100 percent.
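The completeness and consistency checks described above amount to simple filters over the inventory records. The sketch below illustrates the approach on a small, hypothetical extract; the column names (utilization_rate, status, asset_type, utilization_description) are assumptions for illustration and may not match the actual field names in OSD's Real Property Assets Database.

```python
import pandas as pd

# Hypothetical extract of real property records; real field names and coding
# schemes in OSD's Real Property Assets Database may differ.
records = pd.DataFrame({
    "service": ["Army", "Army", "Navy", "Air Force"],
    "asset_type": ["building", "building", "building", "building"],
    "status": ["active", "active", "active", "active"],
    "utilization_rate": [0, 85, 0, None],                  # percent; None = not reported
    "utilization_description": ["utilized", "utilized", None, None],
})

# Completeness: share of records with no reported utilization rate.
missing_rate = records["utilization_rate"].isna().mean()
print(f"Records missing a utilization rate: {missing_rate:.0%}")

# Consistency: a zero utilization rate on an active building is suspect, and a
# zero rate on a record also described as "utilized" is a likely inaccuracy.
zero_active_buildings = records[
    (records["utilization_rate"] == 0)
    & (records["status"] == "active")
    & (records["asset_type"] == "building")
]
likely_inaccurate = zero_active_buildings[
    zero_active_buildings["utilization_description"] == "utilized"
]
print("Zero-rate active buildings:", len(zero_active_buildings))
print("Of those, also described as utilized:", len(likely_inaccurate))
```

Filters of this kind, applied to the full database, would produce counts comparable to those summarized in tables 3 and 4.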
As another example, according to OSD's real property inventory data element dictionary, the utilization rate for its real property records should be reported as a whole number from 0 percent to 100 percent. Our analysis found that, since fiscal year 2010, OSD has been making progress in addressing the utilization rates that were not reported as whole numbers and that, overall, the total number of real property records in OSD's Real Property Assets Database reporting a utilization rate that is not a whole number has steadily decreased over the past 4 fiscal years. As with our analysis of OSD's Real Property Assets Database, we found that the military departments do not collect and maintain accurate real property records in their respective databases, which limits the use of the databases as a tool to identify consolidation opportunities. We found, first, that at all 11 of the military service installations we visited, according to the installation officials, the utilization data are not systematically updated, but instead are updated when (1) there is a request for space; (2) a facility is consolidated or remodeled; (3) an area is being reviewed for potential military construction projects; (4) there may be a transfer of personnel at the installation; or (5) there is a periodic review of their real property holdings, both land and facilities, to identify unneeded and underused property. Real property officials at all 11 of the military service installations we visited told us that evaluating the utilization of facilities requires physical inspections to verify and validate the accuracy of the utilization data within their real property inventory records. For example, according to Army Regulation 405-70, Army installations are required to perform an annual utilization survey and report findings of unused, underutilized, or excess real property. The Navy and the Air Force do not have a similar requirement for annual utilization surveys. The Army regulation requires a report containing a list of unused or underutilized buildings by facility classes and category code, building number, total gross square feet, gross square feet available, type of construction (permanent, semi-permanent, or temporary), and disposition. However, the real property officials at the three Army installations we visited told us that they had not completed the annual utilization surveys for their installations, because they did not have the manpower, the time to accomplish what they characterized as a time-consuming task on an annual basis, or the resources to pay a contractor to accomplish the task. Second, we found during our discussions with service headquarters officials and visits to installations that those real property inventory records that are maintained in the military departments' authoritative real property inventory databases are not always accurate. For example: Army headquarters officials demonstrated a recently developed program called the Army's Quality Assurance Reporting Tool, which is used to detect inaccuracies within its real property inventory database at the installation level. In August 2013, Army officials showed us more than 45,000 errors of all types within the real property database for one of the installations we planned to visit. As of August 2013, Army headquarters officials provided us with a listing from one of their real property databases showing the dates when the installation facilities were reviewed.
Based on our analysis of the list of facility review dates, we found significant anomalies. For example, we found that the list of facility review dates included such erroneous entries as the years 0012, 0013, 0201, 0212, 0213, 1012, 1776, 1777, 1839, 1855, 1886, 1887, 1888, 1889, 2020, 2030, 2114, 2114, 2201, and 3013. We told Army headquarters officials about these particular facility review dates, and they responded that they would correct them. Table 5 below shows our analysis of the Army's review dates, building count, and percentage reviewed. In order to determine if established internal control procedures over the Air Force's real property were operating effectively, a Real Property Assertion Team consisting of representatives from Headquarters Air Force, Civil Engineering, Asset Accountability and Optimization, the Deputy Assistant Secretary, Accounting and Financial Operations, and independent contractors was assembled. The team found that the authoritative real property inventory system provided inaccurate data and could not support audit readiness assertions over real property assets. An Air Force Audit Agency report on one of the installations we visited included five recommendations to develop and implement oversight procedures to validate the accuracy of the Air Force's real property data. Military installation-level officials at all 11 locations we visited told us that they use the departments' databases as a tool to help identify space requirements and potential consolidation opportunities; however, incomplete and inaccurate data limit the usefulness of the databases to do so. Specifically, according to these installation-level officials, because the utilization data currently contained in their databases are often missing, out of date, or inaccurate, the installations rely on physical verifications of facilities' utilization to identify consolidation opportunities. The installation-level officials stated that these physical verifications are performed as a result of requests for space or other common real property management processes, such as changes to mission and personnel at the installation. For example, at the 11 installations we visited, we found that consolidations had been performed in the past reactively in response to events, such as new or changing mission requirements, changes to force structure, or requests for facility space. Overall, the four military services use similar criteria and methodologies to address changes in mission requirements or requests for space at an installation. The installations' civil engineers, real property planners, and facility specialists analyze the installations' mission requirements and the space that is authorized to fulfill those missions in order to determine different potential courses of action for use of installation facilities. The installations are required by DOD Instruction 4165.14 to perform physical inventories every 5 years for real property and every 3 years for historical real property. Thus, according to the military installation-level officials, they generally complete 20 percent of the inventories each year, including verifying and correcting real property record data such as the utilization rate.
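Basic electronic validity checks would flag entries like the review dates listed above. The sketch below is only an illustration of such checks, not a reproduction of the Army's Quality Assurance Reporting Tool or any OSD system; the sample values, the plausible-year window, and the whole-number rule (drawn from OSD's data element dictionary, discussed above) are assumptions stated in the comments.

```python
# Illustrative validity checks for facility review dates and utilization rates.
# The values below are examples; the plausible-year window is an assumption.

review_years = ["2012", "2013", "0012", "1776", "3013", "2114", "1995"]
PLAUSIBLE_MIN, PLAUSIBLE_MAX = 1900, 2014   # data were current as of fiscal year 2013

bad_years = [y for y in review_years
             if not (PLAUSIBLE_MIN <= int(y) <= PLAUSIBLE_MAX)]
print("Implausible review years:", bad_years)        # ['0012', '1776', '3013', '2114']

# OSD's data element dictionary calls for utilization rates reported as whole
# numbers from 0 to 100; flag anything out of range or not a whole number.
utilization_rates = [0, 85, 100, 47.5, 120, -3]
bad_rates = [r for r in utilization_rates
             if not (0 <= r <= 100) or r != int(r)]
print("Out-of-range or non-whole-number rates:", bad_rates)   # [47.5, 120, -3]
```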
We analyzed OSD's Real Property Assets Database as of September 30, 2013, to determine whether some of the data fields could be used to identify potential consolidation opportunities. In our analysis, we found, among the 11 locations we visited, that there were 12 real property assets or facilities with data fields indicating potential consolidation opportunities; these facilities were located at 3 of the locations we visited. At the first location, with 7 such facilities (including 4 office facilities), according to the real property officer, one of the office facilities was demolished in December 2013 and the real property record removed in January 2014. Another of the office facilities was demolished in November 2007, and the real property record should have been removed, yet it was present in DOD's September 30, 2013, real property records--reflecting an error that has been ongoing for more than 6 years. In addition, the real property officer noted that the remaining 2 office facilities that had reported zero utilization rates could be identified as potential consolidation opportunities, but had not been identified until we pointed out our findings to the official. The 3 other facilities at that location (which were not offices) were marked for demolition. The second location had 1 facility, and, according to the Public Works official, this facility is 100 percent utilized and the real property record was reported correctly in the Army's General Fund Enterprise Business System. However, the official noted that this facility had two real property unique identifier numbers--reflecting an error in DOD's Real Property Assets Database, which had not been found until we identified it. The third location had 4 facilities, and, according to the real property officer, 2 of the facilities were put on the installation's demolition list as of February 2014 and the other 2 facilities have usable space that is being considered for reuse by other activities needing space. OSD and the military departments have taken some steps to make improvements to the completeness and accuracy of their data since 2011; however, based on our analysis of OSD's Real Property Assets Database, the data continue to be incomplete and inaccurate. In September 2011, we recommended that DOD develop and implement a methodology for calculating and recording utilization data for all types of facilities, and modify processes to update and verify the accuracy of reported utilization data to reflect a facility's true status. As previously discussed, DOD partially concurred with the recommendation and stated that it recognized the need for further improvements in the collection and reporting of utilization data across the department. Further, DOD stated at the time that it had already begun some efforts to improve utilization data, but it did not specify what actions it had completed to date or the time frames for completing efforts to improve collection and reporting of utilization data. Fully implementing our September 2011 recommendation would help provide reasonable assurance that the utilization data are complete and accurate, which could also help better position the military services to identify consolidation opportunities and realize the potential attendant cost avoidance from no longer maintaining and operating more facility space than needed.
OSD does not have a strategic plan to manage DOD's real property efficiently and facilitate the department in identifying opportunities for consolidating unutilized or underutilized facilities. According to DOD Directive 4165.06, it is DOD policy that DOD real property shall be managed to promote the most efficient and economic use of DOD real property assets and in the most economical manner, consistent with defense requirements. In addition, our prior work has shown that organizations need sound strategic management planning in order to identify and achieve long-range goals and objectives. Our prior work also identified critical elements that should be incorporated into strategic plans to establish a comprehensive, results-oriented management framework. A results-oriented management framework includes a strategic plan with, among other things, long-term goals, strategies to achieve the goals, and metrics or performance measures to gauge progress toward meeting the goals. While OSD has established a directive and a number of instructions for the management of real property, including for the maintenance of data elements about their facilities, OSD has neither developed a strategic plan nor established department-wide goals, strategies to achieve the goals, or metrics to gauge progress for how it intends to manage its real property in the most economical and efficient manner. Two critical elements of a strategic plan are the establishment of long-term goals and a description of strategies to achieve those goals. Such goals could be focused on correcting inaccurate and incomplete facility utilization-rate data in OSD's Real Property Assets Database to provide better visibility on the status of the utilized, unutilized, and underutilized facilities. Another goal could be to identify opportunities for consolidating unutilized or underutilized facilities in order to effectively and efficiently use facilities as well as to reduce operation and maintenance costs in a time of declining defense budgets. Further, OSD has not established department-wide metrics for assessing progress related to real property management. Such metrics could be used to gauge progress in the efficient utilization of DOD's current real property inventory. For example, a metric could be established for the military departments to complete a 100 percent inventory of all their real property at their respective installations within a specific time frame in order to baseline the number of utilized, unutilized, and underutilized facilities, which could help them to identify consolidation opportunities. OSD officials acknowledged that there is currently no OSD strategic plan that clearly establishes long-term goals, strategies to achieve the goals, and the use of metrics to gauge progress to manage DOD's real property, because DOD has focused on other priorities. However, real property management is a long-standing issue and DOD's real property assets represent significant resources, as well as the opportunity for cost savings through the consolidation or disposal of unutilized or underutilized inventory. Without a strategic plan that includes long-term goals, strategies to achieve the goals, and metrics to gauge progress, it will be difficult for OSD to effectively manage its facilities, and it may be missing additional consolidation opportunities and therefore may not be utilizing its facilities to the fullest extent.
OSD has made some progress in improving the completeness and accuracy of its facility utilization data in its Real Property Assets Database. However, the data remain incomplete and inaccurate at the OSD and military service levels. We continue to believe that fully implementing our 2011 recommendation to develop and implement a methodology for calculating and recording utilization data for all types of facilities, and to modify processes to update and verify the accuracy of reported utilization data to reflect a facility's true status, would help provide reasonable assurance that the utilization data are complete and accurate. Further, OSD's lack of a strategic plan to facilitate the department's management of its real property puts OSD and the military departments at risk for missing consolidation opportunities. As part of a results-oriented management framework, such a strategic plan should contain, among other things, long-term goals; strategies to achieve the goals; and the use of metrics to gauge progress. Without an OSD strategic plan, OSD and the military departments will be challenged in managing their real property in an efficient and economical manner, as required, and in identifying utilized, unutilized, or underutilized facilities as well as consolidation opportunities. To better enable DOD to manage its real property inventory effectively and efficiently, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Installations and Environment to establish a strategic plan as part of a results-oriented management framework that includes, among other things, long-term goals, strategies to achieve the goals, and use of metrics to gauge progress to manage DOD's real property and to facilitate DOD's ability to identify all unutilized or underutilized facilities for potential consolidation opportunities. We provided a draft of this report to DOD for official review and comment. In its comments, DOD concurred with our recommendation and stated that a strategy review is currently underway with initial guidance and initiatives to be identified by the close of the calendar year. DOD also provided technical comments which we incorporated in our report as appropriate. DOD's written comments are reproduced in their entirety in appendix III. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; Deputy Under Secretary of Defense for Installations and Environment; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which the Office of the Secretary of Defense (OSD) has improved the completeness and accuracy of facility utilization data in its Real Property Assets Database and the extent to which the military services use the data contained in their respective real property inventory databases to identify potential consolidation opportunities, we obtained selected data fields containing the military services' real property records from OSD's Real Property Assets Database.
We selected the same data fields we had used as part of our methodology and analysis for our September 2011 report. Specifically, we analyzed the utilization-rate data fields for fiscal years 2010 through 2013--the most recent full year available at the time of this review--to determine whether more complete utilization-rate data had been entered since our previous review of the fiscal year 2010 data. We assessed the reliability of the Department of Defense's (DOD) real property inventory data by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable to assess the trends of the utilization data reported in OSD's Real Property Assets Database for fiscal years 2010 through 2013. We also reviewed our prior work on excess and underutilized real property to understand issues previously identified with real property management. We gathered and analyzed documentation, such as a DOD directive and instructions as well as military department regulations, reflecting OSD's and the military departments' management of real property and how OSD used the data contained in its Real Property Assets Database to identify unutilized or underutilized facilities or potential consolidation opportunities. We interviewed officials in the Office of the Under Secretary of Defense for Installations & Environment; each of the three military departments, which include the four military services; and the military service installations we visited, and discussed their processes to manage real property. We selected 11 active military installations to visit to include installations from the four services and to reflect those with high numbers of buildings. While the results of our interviews and visits cannot be generalized to all installations, they provided perspectives on how installations manage their real property. Using OSD's Real Property Assets Database and the following data fields--the utilization rate, the status as "active," and the property description as "building"--we used a fourth data field, which described the asset as "utilized," to determine any inconsistencies that might exist among these data fields. Using these four criteria, we reviewed the real property records for the 11 installations we visited to identify the extent to which other consolidation opportunities, if any, may exist on the installations, as well as potential inconsistencies and inaccuracies. We contacted and received information from DOD representatives, as delineated in table 6. To determine the extent to which OSD has a strategic plan to manage DOD's real property efficiently and to facilitate the identification of unutilized and underutilized facilities, we obtained and analyzed documentation, such as the Office of the Deputy Under Secretary of Defense for Installations and Environment report, 2013 Accomplishments and 2014 Goals and Objectives, a DOD directive and instructions, and military department regulations for the management of real property. We also reviewed OSD's Real Property Assets Database as of September 30, 2013--the most recent data available at the time of our review--to identify what facilities, if any, were reported as being unutilized or underutilized and ascertain how OSD implemented its policy and guidance to manage real property in the most economical manner.
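For context on what "electronic testing for obvious errors in accuracy and completeness" can look like in practice, the sketch below shows two representative tests over a hypothetical inventory extract: duplicated real property unique identifiers (one such duplicate is described earlier in this report) and records missing required fields. It is a generic illustration under assumed column names, not the specific test suite we ran.

```python
import pandas as pd

# Hypothetical inventory extract; the actual schema and field names may differ.
inventory = pd.DataFrame({
    "rpuid": ["A-001", "A-002", "A-002", "N-107"],   # real property unique identifier
    "status": ["active", "active", "active", None],
    "utilization_rate": [100, 0, 0, None],
})

# Test 1: each real property unique identifier should appear only once.
duplicate_ids = inventory[inventory.duplicated(subset="rpuid", keep=False)]
print("Identifiers shared by more than one record:", sorted(duplicate_ids["rpuid"].unique()))

# Test 2: completeness of required fields.
required_fields = ["status", "utilization_rate"]
incomplete = inventory[inventory[required_fields].isna().any(axis=1)]
print("Records missing required fields:", len(incomplete))
```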
We discussed the policies and guidance used in managing these facilities with officials in the Office of the Under Secretary of Defense for Installations & Environment and compared OSD's efforts and guidance to the DOD directive and instructions for real property management and the results-oriented management framework as a best practice for strategic planning. We conducted this performance audit from July 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Defense's (DOD) Real Property Management Program is governed by statute and DOD regulations, directives, and instructions that establish real property accountability and financial reporting requirements. Table 7 describes some of the guidance from DOD and the military departments and includes excerpts of the related requirements to manage real property. In addition to the contact named above, Harold Reich (Assistant Director), James Ashley, Ronnie Bergman, Pat Bohan, Tracy Burney, Cynthia Grant, Mary Catherine Hult, Cheryl Weissman, and Michael Willems made key contributions to this report.
GAO has designated DOD's Support Infrastructure Management as a high-risk area in part due to challenges DOD faces in reducing excess infrastructure. DOD manages a global real property portfolio of over 557,000 facilities that DOD estimates to be valued at about $828 billion as of September 30, 2012. In September 2011, GAO found that DOD was limited in its ability to reduce excess inventory because OSD did not maintain accurate and complete data on the utilization of its facilities in its Real Property Assets Database. House Report 113-102 mandated GAO to review DOD efforts to improve these data. This report examines the extent to which OSD has (1) improved the completeness and accuracy of facility-utilization data in its Real Property Assets Database and the military departments' use of data to identify consolidation opportunities, and (2) a strategic plan to manage DOD's real property efficiently and to facilitate the identification of unutilized and underutilized facilities. GAO analyzed OSD's real property data from fiscal years 2010 through 2013, visited 11 active DOD installations from the four services to reflect those with high numbers of buildings, and interviewed officials. While not generalizable, the interviews provided perspectives about facility utilization. The Office of the Secretary of Defense (OSD) has made some improvements, but OSD's utilization data continue to be incomplete and inaccurate, and data limitations affect the military departments' use of their databases to identify consolidation opportunities. GAO's analysis found that the percentage of total real property assets with a reported utilization rate in OSD's Real Property Assets Database increased from 46 to 53 percent over the past 4 fiscal years. OSD made some improvements in addressing inaccuracies in the utilization rates in its real property records, such as correcting records for those facilities reported with a utilization rate greater than 100 percent. The military departments use databases to a certain degree to identify opportunities to consolidate facilities, but primarily only in response to specific events, such as requests for space. Officials at all 11 installations GAO visited stated that inaccurate and incomplete data in the departments' databases limited their ability to identify such opportunities. In September 2011, GAO recommended that the Department of Defense (DOD) develop and implement a methodology for calculating and recording utilization data, and modify processes to update and verify the accuracy of reported data. OSD partially concurred, stating that it already had some actions underway to address the recommendation. However, at that time, OSD did not specify what actions it had undertaken. Moreover, the recommendation has not yet been fully implemented. Fully implementing GAO's recommendation would help provide reasonable assurance that the utilization data are complete and accurate and better position the department to use the databases to identify consolidation opportunities. OSD does not have a strategic plan, with goals and metrics, to manage DOD's real property efficiently and facilitate identifying opportunities for consolidating unutilized or underutilized facilities. According to a DOD directive, it is DOD policy that DOD real property shall be managed to promote the most efficient and economic use of DOD real property assets, and in the most economical manner consistent with defense requirements.
However, OSD officials stated that there is currently no OSD strategic plan to manage DOD's real property, nor are there established department-wide goals, strategies to achieve those goals, or metrics to gauge progress toward managing its real property in the most efficient manner. Such goals could focus on correcting inaccurate and incomplete facility utilization data to provide better visibility on the status of facilities and to identify opportunities for consolidating unutilized or underutilized facilities and reducing operations and maintenance costs. GAO's prior work has shown that organizations need sound strategic planning to identify and achieve long-range goals and objectives. Without a strategic plan, it will be difficult for OSD to manage its facilities effectively and utilize them efficiently. GAO recommends that OSD establish a strategic plan to identify unutilized and underutilized facilities. In written comments on a draft of the report, DOD concurred with the recommendation.
Federal organizations relocate their civilian employees to help them accomplish their many varied and unique missions. Organizations carry out their missions through a civilian workforce of nearly 2 million employees assigned to offices in locations throughout the United States, its territories and possessions, and various foreign countries. The Secretary of State determines the length of an overseas tour for Foreign Service Officers. The tour of duty overseas for Department of Defense (DOD) employees is prescribed by DOD's Joint Travel Regulations (2 JTR). When civilian employees relocate in the interest of the government, the relocation is to be authorized before they actually move, and they generally have up to 2 years, and can request a third year, from the date that they report to their new location to complete the relocation and receive reimbursement for the associated costs. Therefore, the actual relocation may not take place in the fiscal year that it is authorized. Also, expenses associated with the relocation may be paid to the employee over the 2- to 3-year period. Two federal laws provide government organizations with the primary authority to pay the travel and related expenses of relocating a civilian employee: the Administrative Expenses Act of 1946, as amended, 5 U.S.C. §§ 5701-5742, and the Foreign Service Act of 1980, 22 U.S.C. 4081. GSA's Federal Travel Regulation (FTR), 41 C.F.R., chapters 301 to 304, implements the provisions of the Administrative Expenses Act. FTR, chapter 302, governs the travel and relocation expenses of civilian employees, except those in the Foreign Service. Based on authority provided in the Foreign Service Act, travel and relocation expenses for Foreign Service Officers are prescribed by the Secretary of State in the Foreign Service Travel Regulations. These regulations are contained in volume 6 of the Foreign Affairs Manual (6 FAM). Once any civilian employee is located in a foreign area, his or her travel allowances and differentials are set by the Secretary of State in the Standardized Regulations (Government Civilians, Foreign Areas). Both the Department of State's and GSA's travel regulations authorize federal organizations to pay basically the same expenses for relocations within the United States. These expenses include transportation of individuals, per diem, subsistence, transportation and storage of household and personal effects, and real estate expenses. The key difference is that Foreign Service Officers are not entitled to relocation income tax allowances. Overseas, the Standardized Regulations apply to both Foreign Service Officers and other civilian employees, and generally provide them with the same allowances. These allowances include living quarters allowance, temporary quarters subsistence allowance, and cost of living allowance. One difference is that Foreign Service Officers are entitled to separation travel, which is relocation to any place in the United States that they choose upon retirement regardless of where they are located when they retire. On the other hand, other civilian employees returning from overseas are only entitled to reimbursement for travel and relocation expenses to their home of record. Another difference is that Foreign Service Officers may be authorized rest and recuperation travel when assigned to a hardship post.
In order to receive reimbursement for relocation expenses/allowances to which they are entitled, both civilian employees and Foreign Service Officers must sign a service agreement to remain with the government for 12 months after the date that they report to their new duty station, mission, or agency, unless they leave the government for reasons beyond their control and that are acceptable to the agency. An employee who violates the agreement must repay the government the amount it spent to relocate him or her. Neither FTR nor FAM specifically defines the term relocation. For the purposes of this report, we define relocation as (1) the transfer, in the interest of the government, of an existing civilian employee or appointee from one office, mission, or agency to another for permanent duty; (2) the moving of a new eligible appointee from his or her actual residence in one location to his or her first office or mission in another location; (3) the return of an existing eligible civilian employee or appointee who is separated from an overseas office or mission to his or her actual residence; and (4) the return of an existing eligible career appointee on retirement from an office or mission to his or her elected residence within the United States, its territories, or possessions. Collecting exact cost information for relocation travel is difficult. Office of Management and Budget (OMB) Circular No. A-11, Preparation and Submission of Budget Estimates, and Circular No. A-34, Instructions on Budget Execution, require that federal organizations record obligations and expenditures by object class according to the nature of the services or articles procured. There are no object classes dedicated solely to recording relocation travel obligations and expenditures. Rather, relocation obligations and expenditures are captured in at least four different object classes, along with obligations and expenses that are not related to relocation travel. These object classes include (1) 12.1, civilian personnel benefits; (2) 21.0, travel and transportation of persons; (3) 22.0, transportation of things; and (4) 25.7, operation and maintenance of equipment (related to storage of household goods). As a result, relocation obligations and expenditures cannot be extracted from OMB budget/object class data. Instead, relocation obligations and expenditures must be obtained from each federal organization through queries of its automated systems or examination of its travel records. As the government's travel manager, GSA's Office of Governmentwide Policy is responsible for establishing governmentwide civilian travel and relocation policy, updating FTR, gathering travel and relocation costs, and providing leadership to develop sound travel and relocation policy. GSA was required by the Federal Civilian Employee and Contractor Travel Expenses Act of 1985, 5 U.S.C. § 5707(c), to periodically, but at least every 2 years, submit to the Director of OMB an analysis of, among other things, estimated total agency payments for employee relocation. GSA is to survey a sampling of agencies, each of which spent more than $5 million on travel and transportation payments in the prior fiscal year. This provision was to expire with the administrator's submission of the analysis that included fiscal year 1991. The Treasury, Postal Service and General Government Appropriations Act of 1995, Pub. L. No. 103-329 (Sept. 30, 1994), reinstated this provision with no future expiration date.
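The object-class limitation described above can be illustrated with a small, purely hypothetical example: because each of the four object classes mixes relocation and non-relocation spending, summing object-class totals cannot isolate relocation costs, and only records that tag individual transactions as relocation-related can.

```python
# Illustrative only: hypothetical line items showing why object-class totals
# cannot isolate relocation costs. Each object class mixes relocation and
# non-relocation spending.

line_items = [
    {"object_class": "21.0", "relocation": True,  "amount": 12_000},  # en route travel for a transfer
    {"object_class": "21.0", "relocation": False, "amount": 30_000},  # routine temporary-duty travel
    {"object_class": "22.0", "relocation": True,  "amount": 8_500},   # shipment of household goods
    {"object_class": "22.0", "relocation": False, "amount": 15_000},  # freight for office equipment
    {"object_class": "25.7", "relocation": True,  "amount": 1_200},   # storage of household goods
    {"object_class": "12.1", "relocation": False, "amount": 40_000},  # other personnel benefits
]

by_object_class = {}
for item in line_items:
    by_object_class[item["object_class"]] = (
        by_object_class.get(item["object_class"], 0) + item["amount"]
    )

relocation_total = sum(i["amount"] for i in line_items if i["relocation"])

print("object-class totals:", by_object_class)        # relocation costs are buried in these
print("actual relocation total:", relocation_total)   # recoverable only from tagged records
```

This is why the relocation figures in this report had to be collected organization by organization rather than derived from OMB budget data.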
GSA collected the required travel information for fiscal years 1989, 1990, and 1991. However, GSA only analyzed the travel information for fiscal year 1989. GSA's Office of Governmentwide Policy recently distributed its survey to collect travel information, including relocation travel, for fiscal year 1996. To provide the requested civilian employee relocation information, we developed and distributed a questionnaire to 120 federal organizations. We asked the organizations to report their total number of and cost for civilian employee relocations. We also asked whether they had a rotational policy that resulted in civilian employee relocations. We received responses from 119 (or 99 percent) of the 120 organizations surveyed. The Department of Commerce's Economic Development Administration, which had a civilian workforce of less than 400 employees, did not provide a response. The names of the organizations that we surveyed are listed in appendix I. To develop the questionnaire and ensure its completeness, we researched FTR and OMB Circular No. A-11 to identify the allowances for relocation expenses and the object classes that federal organizations use to record relocation obligations, respectively. We drafted the questionnaire with the assistance of our staff knowledgeable in federal travel and relocation practices. We pretested the questionnaire with the following six organizations: the Bureau of the Census, Defense Educational Activity, Department of State, Drug Enforcement Administration, U.S. Marine Corps, and Office of Personnel Management. Using the pretest results, we revised the questionnaire to help ensure that our questions were interpreted correctly and that the requested relocation information was available. We did not independently verify the accuracy of the civilian employee relocation information that the federal organizations provided or assess the appropriateness of their relocations or the associated cost because of time constraints and the number of organizations surveyed. However, we reviewed each questionnaire for clarity and completeness and followed up with the organization's contact person in those instances in which the responses were unclear or incomplete. To provide information on rotational policies that resulted in civilian employee relocations, we obtained copies of these policies from the pertinent federal organizations. We reviewed the policies to understand their purposes, their rotational requirements, and which employees were affected. Additionally, we interviewed cognizant officials to discuss the policies in greater detail, clarify specific issues, and determine current use of the policies. Appendix II contains a more detailed description of our objectives, scope, and methodology. We did our work in Washington, D.C., from June 1996 to June 1997 in accordance with generally accepted government auditing standards. Because it was impractical for us to obtain comments from all 119 federal organizations, we requested comments on a draft of this report from the Director of OMB and the Administrator of GSA. Their comments are discussed at the end of this letter. Most of the federal organizations that responded to our survey reported authorizing over 130,000 relocations and the other organizations reported making over 40,000 relocations during fiscal years 1991 through 1995. A small percentage of the organizations reported the majority of the relocations. 
Over half of the relocations authorized or made were reported by 7 percent and 9 percent of the organizations, respectively. In addition, while the total number of relocations authorized and the total number of relocations made fluctuated yearly across the organizations that reported data for all 5 fiscal years, there was moderate overall change between fiscal years 1991 and 1995. Ninety-seven federal organizations that responded to our survey reported that they authorized 132,837 civilian employees to relocate at the government's expense from fiscal year 1991 through fiscal year 1995. However, the total number of relocations authorized is probably understated because 7 of the 97 organizations did not, for various reasons, provide this relocation information for all 5 fiscal years. Also, one organization did not report relocations authorized by one of its components for fiscal year 1991. As shown in figure 1, seven organizations accounted for 52 percent (69,072) of the reported relocations authorized. Among the seven organizations, the number of relocations authorized ranged from 17,881 by the Department of State to 5,509 by the Forest Service. (Appendix III shows the number of relocations authorized for each fiscal year reported by the federal organizations.) Because not all of the 97 federal organizations that reported relocations authorized provided relocation information for all of their components for all 5 fiscal years, we could not determine the total change in relocations authorized. However, 89 organizations did provide relocation information for all 5 fiscal years. Across these organizations, total relocations authorized fluctuated yearly. In fiscal year 1991, total relocations authorized were about 25,600; they continually declined to a low of about 20,080 in fiscal year 1993. Thereafter, total relocations authorized began to increase, and in fiscal year 1995 reached about 25,370. Overall, total relocations authorized decreased less than 1 percent from fiscal year 1991 to fiscal year 1995. The 23 other federal organizations that responded to our survey reported that they made 40,252 civilian employee relocations from fiscal year 1991 through fiscal year 1995. The total number of relocations reportedly made was probably understated because 4 of the 23 organizations--including the Departments of the Army and Energy and the National Oceanic and Atmospheric Administration, which were among those that made the most relocations--did not provide complete relocation information for all 5 fiscal years. As shown in table 1, the Departments of the Army and the Navy accounted for 21,947 (about 55 percent) of the reported civilian employee relocations made. (Appendix IV shows the number of relocations made for each fiscal year reported by the federal organizations.) Nineteen federal organizations reported relocations made for all 5 fiscal years. Across these organizations, total relocations made varied yearly. Relocations made increased from 3,468 in fiscal year 1991 to 3,759 in fiscal year 1992. In fiscal year 1993, relocations made decreased to a low of 3,426. But total relocations made increased to 3,622 in fiscal year 1994 and rose to 3,902 in fiscal year 1995. Overall, total relocations made increased about 12.5 percent from fiscal year 1991 to fiscal year 1995. Although relocations reportedly made increased over the 5-year period, this increase was not distributed evenly among the 19 federal organizations.
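The year-over-year comparisons above reduce to simple percent-change arithmetic. The short sketch below uses only the endpoint figures reported in this section (the intermediate fiscal years are omitted because the trend statements rest on the 1991 and 1995 levels).

```python
def percent_change(start, end):
    """Percent change from a starting value to an ending value."""
    return (end - start) / start * 100

# Relocations authorized, 89 organizations reporting all 5 fiscal years.
authorized_fy1991, authorized_fy1995 = 25_600, 25_370
print(f"authorized: {percent_change(authorized_fy1991, authorized_fy1995):.1f}%")  # about -0.9%

# Relocations made, 19 organizations reporting all 5 fiscal years.
made_fy1991, made_fy1995 = 3_468, 3_902
print(f"made: {percent_change(made_fy1991, made_fy1995):.1f}%")  # about +12.5%
```

These results match the "decreased less than 1 percent" and "increased about 12.5 percent" figures cited above.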
Two organizations--the Defense Logistics Agency (DLA) and the Tennessee Valley Authority (TVA)--reported the greatest changes in relocations made. DLA's reported relocations made rose about 221 percent, from 301 civilian relocations made in fiscal year 1991 to 965 in fiscal year 1995. According to a DLA official, the number of civilian relocations made increased substantially during this period due to base realignments and closures and Defense Management Review decisions. These decisions resulted in DLA acquiring control of all DOD supply depots and their supporting civilian employees. DLA consolidated these depots, reducing their number from 31 to 23, and relocated employees from closing depots to gaining depots. DLA also consolidated its 9 contract management districts into 2 districts, which led to additional civilian relocations. TVA's reported relocations made decreased by 52 percent, from 1,026 in fiscal year 1991 to 490 in fiscal year 1995. TVA did not provide an explanation for this decrease. The changes in the number of relocations made reported by DLA and TVA generally offset each other. Collectively, the 17 remaining organizations displayed about a 14-percent overall increase in relocations made during this period. Most of the federal organizations that responded to our survey reported obligating over $3 billion for relocations, and the other organizations reported expending over $350 million for relocations during fiscal years 1991 through 1995. Again, a small percentage of the organizations reported the majority of the costs. Over half of the total relocation obligations were reported by 8 percent of the organizations, and 70 percent of the total relocation expenditures were reported by 13 percent of the organizations. Across the organizations that provided data for all 5 fiscal years, total relocation obligations and total relocation expenditures varied yearly. When adjusted for inflation, there was a noticeable increase in the total reported relocation obligations and a larger increase in total relocation expenditures. However, the majority of the increase in total relocation expenditures was due to one organization. Ninety-seven federal organizations reported that they obligated about $3.4 billion for employee relocation expenses for fiscal years 1991 through 1995. Fourteen of the 97 organizations did not provide information for all 5 fiscal years, which probably resulted in an understatement of the funds reported obligated. Also, nine organizations did not provide obligations for certain relocation expense categories, and one organization did not provide fiscal year 1994 relocation obligations for its regional offices. As shown in figure 2, eight organizations accounted for over 53 percent (about $1.8 billion) of the total reported obligations for employee relocation expenses. (Each federal organization's reported relocation obligations are located in appendix V.) From fiscal year 1991 through fiscal year 1995, the total reported relocation obligations fluctuated yearly across the 83 federal organizations that provided relocation obligations for all 5 fiscal years. In constant 1995 dollars, total relocation obligations continually decreased from $652.1 million in fiscal year 1991 to $546.9 million in fiscal year 1993. But total relocation obligations increased in fiscal year 1994 and rose to $759.6 million in fiscal year 1995. Overall, total relocation obligations increased about 16 percent from fiscal year 1991 to fiscal year 1995.
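The obligation trends above are expressed in constant 1995 dollars, which requires deflating each year's nominal obligations by a price index. The following is a minimal sketch of that adjustment; the deflator values and the nominal example amount are hypothetical, because the report does not state which price index or index values were used.

```python
# Hypothetical deflators indexed to 1995 = 1.000; the actual index and values
# used for the constant-dollar figures in this report are not stated here.
deflator = {1991: 0.904, 1992: 0.925, 1993: 0.947, 1994: 0.967, 1995: 1.000}

def to_constant_1995_dollars(nominal_amount, fiscal_year):
    """Convert a nominal-dollar amount (in millions) to constant 1995 dollars."""
    return nominal_amount / deflator[fiscal_year]

# Illustrative: a nominal $589.5 million obligated in fiscal year 1991 would be
# roughly $652 million in constant 1995 dollars under these assumed deflators.
print(round(to_constant_1995_dollars(589.5, 1991), 1))

# Percent change in constant-dollar obligations, using the figures cited above.
print(round((759.6 - 652.1) / 652.1 * 100, 1))  # about a 16 percent rise
```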
This increase was not greatly influenced by one organization or a small group of organizations. The 23 other federal organizations reported that for fiscal years 1991 through 1995 they expended over $362.8 million to relocate their civilian employees. Reported relocation expenditures were probably understated because one organization did not report fiscal year 1992 relocation expenditures for one of its components. In addition, 2 of the 23 organizations did not provide expenditures for all expense categories. As shown in table 2, the Departments of Energy and the Navy and the U.S. Information Agency accounted for over $254 million (70 percent) of the total reported expenditures to relocate civilian employees during this period. (Appendix VI shows the reported relocation expenditures for each fiscal year by federal organization.) Annually, total reported relocation expenditures increased across the 22 federal organizations that provided relocation expenditures for all 5 fiscal years and included each of their components. In constant 1995 dollars, total relocation expenditures increased from $45.7 million in fiscal year 1991 to $46 million in fiscal year 1992. In fiscal year 1993, total relocation expenditures increased to $50.3 million; in fiscal year 1994, increased to $81 million; and in fiscal year 1995, rose to $86 million. Overall, total relocation expenditures increased about 88 percent from fiscal year 1991 to fiscal year 1995. The Navy accounted for this increase because its reported relocation expenditures more than quadrupled during this period. Navy's relocation expenditures reportedly rose about 367 percent, from $11 million in fiscal year 1991 to $51.4 million in fiscal year 1995, in constant 1995 dollars. According to a Navy official, expenditures for civilian relocations increased substantially during this period due to the increase in the number of relocations caused by base realignment and closure decisions. During this period Navy closed or began closing and realigning 114 bases. Excluding the Navy from the total expenditures, the 21 remaining organizations' total reported relocation expenditures decreased by less than 1 percent, from $34.7 million in fiscal year 1991 to $34.6 million in fiscal year 1995. Fifteen federal organizations reported that they had rotational policies that required some of their civilian employees to relocate on a prescribed schedule. Nine of the 15 organizations reported that they had these policies because they assign their civilian employees to overseas locations and must comply with federal regulations or a treaty that limits such employees' tours of duty. The six remaining organizations reported that they had these policies either to (1) maintain the safety and security of their civilian employees who may be assigned to dangerous/hazardous locations, (2) maintain their civilian employees' objectivity when inspecting or auditing specific locations, or (3) enhance the job-related knowledge and experiences of their civilian employees, regardless of where they are assigned. In addition, the 15 federal organizations estimated the annual percentage of their civilian employee relocations that were due to their rotational policies. Among these organizations, their estimated annual percentages ranged from 100 to less than 1. Using the organizations' estimated percentages, we calculated the estimated impact these policies had on the number of civilian employee relocations authorized and made by the 15 organizations. 
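This estimate amounts to weighting each organization's reported relocations by its estimated rotational percentage and summing the results, as described in more detail below. A minimal sketch with hypothetical per-organization values follows; the actual survey-based results appear in tables 3 and 4.

```python
# Hypothetical inputs: each organization's reported relocations authorized and
# its estimated share attributable to a rotational policy.
orgs = [
    {"authorized": 10_000, "rotational_share": 1.00},
    {"authorized": 4_000,  "rotational_share": 0.25},
    {"authorized": 1_500,  "rotational_share": 0.05},
]

rotational_authorized = sum(o["authorized"] * o["rotational_share"] for o in orgs)
total_reported_authorized = 132_837  # total reported by the 97 organizations

share = rotational_authorized / total_reported_authorized * 100
print(f"estimated rotational relocations authorized: {rotational_authorized:,.0f}")
print(f"share of all reported relocations authorized: {share:.1f}%")
```

Applied to the organizations' actual reported figures, this calculation yields the estimates reported below: about 24,671 relocations authorized (roughly 18.6 percent of the total) and about 2,792 relocations made (roughly 6.9 percent of the total) attributable to rotational policies.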
Specifically, we multiplied each organization's percentage by either its reported number of relocations authorized or made. As shown in table 3, 11 organizations' (including some Navy components') rotational policies led to an estimated 24,671 civilian employees being authorized to relocate during fiscal years 1991 through 1995. These relocations authorized--triggered by rotational policies--accounted for about 18.6 percent of the total reported relocations authorized. As shown in table 4, five federal organizations' (including some Navy components') rotational policies resulted in an estimated 2,792 civilian employees being relocated during the same 5-year period. These relocations made--triggered by rotational policies--accounted for about 6.9 percent of the total relocations made that were reported by the organizations we surveyed. On June 5, 1997, we requested comments on a draft of this report from the Director of OMB and the Administrator of GSA. On June 11, 1997, GSA officials, including the Acting Director, Travel & Transportation Management Policy Division, provided oral comments. In general, GSA officials characterized the report as a useful resource that will assist them in fulfilling GSA's legislative requirement to biannually survey agencies and report on, among other things, the estimated cost of civilian employee relocations. GSA officials also provided updated information on the status of GSA's biannual survey and technical comments. On June 11, 1997, OMB staff within the Justice and GSA Branch provided their views on the draft report, which were technical in nature and involved clarification issues. GSA's and OMB's technical comments were incorporated in the report where appropriate. Copies of this report will be sent to the Ranking Minority Members of your Committees; the Administrator of the General Services Administration; the Director of the Office of Management and Budget; all federal organizations included in this report; and other interested parties. Copies will also be made available to others upon request. If you have any questions concerning this report, please call me on (202) 512-4232 or Gerald P. Barnes, Assistant Director, on (202) 512-4228. Major contributors are listed in appendix VII. Animal and Plant Health Inspection Service Cooperative, State, Research, Education & Extension Service Food Safety and Inspection Service Grain Inspection, Packers, and Stockyards Administration Office of the Chief Financial Officer Office of the Inspector General National Institute of Standards and Technology National Oceanic and Atmospheric Administration National Telecommunications and Information Administration (continued) Administration for Children and Families Agency for Health Care Policy & Research Centers for Disease Control and Prevention Health Resources & Services Administration Substance Abuse & Mental Health Services Administration Office of Surface Mining Reclamation and Enforcement U.S. 
Fish and Wildlife Service (continued) Offices, boards, and divisions National Highway Traffic Safety Administration Research and Special Programs Administration Bureau of Alcohol, Tobacco, and Firearms Bureau of Engraving and Printing Bureau of the Public Debt Federal Law Enforcement Training Center Office of the Comptroller of the Currency Departmental Offices and Office of the Inspector General Department of Housing and Urban Development (continued) As agreed, our objectives were to provide information for the executive branch departments and largest independent agencies on (1) the total number of civilian employees who were relocated at the federal government's expense, (2) the total cost of these relocations to the government, and (3) the agencies that had rotational policies requiring their civilian employees to relocate. To provide the requested relocation information, we developed and distributed a questionnaire to the 14 executive branch departments and the 18 largest independent agencies. Relocation travel at most of the 14 executive branch departments is decentralized, and subordinate agencies/bureaus/administrations controlled their own relocations. Thus, we requested that a separate questionnaire be completed by each federal organization that had control over its relocations. As a result, the questionnaire was distributed to a total of 120 federal organizations. These federal organizations employed about 1.9 million civilian employees, representing 96 percent of the federal civilian workforce as of September 1995. We received responses from 119 of the 120 federal organizations. The Department of Commerce's Economic Development Administration, which had a workforce of less than 400 employees, did not provide a response. Appendix I lists the federal organizations we surveyed. To develop the questionnaire and ensure its completeness, we researched FTR and OMB Circular No. A-11, Preparation and Submission of Budget Estimates, to identify the allowances for relocation expenses and the object classes that federal organizations are to use in reporting relocation obligations. We drafted the questionnaire with the assistance of our staff knowledgeable in federal travel and relocation practices. We pretested the questionnaire with six federal organizations: the Bureau of the Census, Defense Educational Activity, Department of State, Drug Enforcement Administration, U.S. Marine Corps, and the Office of Personnel Management. Using the pretest results, we revised the questionnaire to help ensure that our questions were interpreted correctly and that the requested relocation information was available. Federal organizations are not required to track or keep relocation information in any specific way. During pretesting, we found that organizations maintained relocation travel information at different organizational levels and used different categories to track the information. Organizations had to go through varying levels of effort to provide the information that we requested. Some organizations had centralized automated systems that required them to write special programs to extract the information. Some of the organizations with automated systems had to retrieve the earlier years of information from archives and then run special programs to extract the information we requested.
Other organizations did not have centralized systems or reporting requirements for this type of information and had to query a number of local offices, which in turn had to go through automated or paper records to obtain the information. We also know of at least one organization that had to go through paper records and manually tabulate the number and cost of its relocations. The organizations generally took from 1 to 3 months to complete the questionnaire. Since federal organizations maintained relocation information at different levels and used different categories for tracking purposes, our questionnaire was carefully designed to collect the best and most complete information possible from each federal organization on its number and cost of relocations. The questionnaire allowed organizations to report their relocation information based on the categories they used. As a result, for the number of relocations, 97 organizations reported relocations that they authorized and the other 23 organizations reported the relocations that they made. Similarly, the costs of relocations were reported by 97 organizations using obligations, while the other 23 organizations reported expenditures. To help the organizations report complete cost data, we developed a list of the expense categories related to relocation travel. We developed this list based on our research of FTR and discussions with knowledgeable officials in several federal organizations. Our survey asked the federal organizations to include costs incurred in all of these expense categories and to indicate if there were categories of expenses for which they could not provide cost data. While federal organizations are not required to track or keep relocation data in a specific way, they are required to maintain travel records for 6 years that contain information on reimbursements for individuals. Based on your request for relocation information over the last several years, our questionnaire was designed to collect relocation information for fiscal years 1990 through 1995. However, at the time we sent the questionnaires to the federal organizations, they were required to have data for fiscal years 1991 through 1996, and many organizations could not provide the data for 1990. Therefore, our report presents information for fiscal years 1991 through 1995. Although most federal organizations were able to provide the requested information for fiscal years 1991 through 1995, the total numbers and costs of relocations are understated in three respects. First, 15 organizations were not able to provide any information for 1 or more years for one or two of the four reporting categories. Second, 10 federal organizations reported that they could not provide any cost information for one or more of the expense categories. Lastly, 6 organizations said that the information they reported did not include data from all components for at least 1 year. Federal organizations' reasons for not being able to provide the requested information included (1) records were inaccessible due to asbestos contamination; (2) records were incomplete due to office or base closures or realignments; (3) records had been sent to off-site storage; (4) accounting systems had changed during the period; and (5) relocation-related travel expenses could not be separated from other travel expenses. We did not independently verify the accuracy of the relocation information that the federal organizations provided because of time constraints and the number of federal organizations surveyed.
However, we reviewed each questionnaire for clarity and completeness and followed up with the federal organization's contact personnel in those instances in which the response(s) was unclear or incomplete. To provide information on rotational policies that resulted in civilian employee relocations, we obtained copies of these policies from the pertinent federal organizations. We reviewed the policies to understand their purposes, their rotational requirements, and which employees were affected. Additionally, we interviewed cognizant officials to discuss the policies in greater detail, clarify specific issues, and determine current use of the policies. We did our work in Washington, D.C., from June 1996 to June 1997 in accordance with generally accepted government auditing standards. We did not request comments on this report from the heads of the 119 federal organizations that responded to our survey because it was impractical. We requested comments on a draft of this report from the Director of OMB and the Administrator of GSA. GSA provided oral comments, which are discussed in this report. In addition, GSA and OMB provided technical comments, which are incorporated in the report where appropriate. Appendixes III through V present, by federal organization and fiscal year, the relocations authorized, the relocations made, and the obligations reported (nominal dollars). Table notes for these appendixes: UA: data were not available. Fiscal year 1991 does not include data from Europe. Not all components of NOAA reported for each fiscal year, most notably the National Weather Service. According to an Army official, relocations made reported for fiscal year 1991 are underreported. Fiscal year 1992 data were not available from the Bonneville Power Administration. Information reported on a calendar-year basis. Obligations reported do not include nontemporary storage of household goods expenses. Obligations reported for fiscal years 1994 and 1995 do not include enroute travel expenses. Obligations reported do not include overseas renewal agreement expenses. Questionnaire was sent to multiple installations for completion, but not all installations were able to report obligations for all categories of expenses. Obligations for fiscal year 1994 do not include regional data. Obligations reported for fiscal years 1991 to 1995 do not include relocation service contract expenses. Obligations reported do not include miscellaneous moving expenses.
Obligations reported do not include transportation and storage of household goods, mobile homes, and vehicle expenses. Appendix VI presents expenditures reported by fiscal year (nominal dollars), by federal organization. Table notes: Fiscal year 1992 data were not available from the Bonneville Power Administration. Expenditures reported do not include overseas renewal agreement expenses. Information reported on a calendar-year basis. Expenditures reported do not include enroute travel expenses. Major contributors to this report: Gerald P. Barnes, Assistant Director; Maria Edelstein, Evaluator-in-Charge; Shirley Bates, Evaluator; Martin DeAlteriis, Social Science Analyst; Stuart Kaufman, Social Science Analyst; Hazel Bailey, Evaluator (Communications Analyst); Robert Heitzman, Senior Attorney.
Pursuant to a congressional request, GAO provided information on the number of civilian employees relocated during fiscal years (FY) 1991 through 1995 and the associated costs of these relocations, focusing on: (1) the total number of civilian employees who were relocated at the federal government's expense; (2) the total cost of these relocations to the government; (3) the agencies that had rotational policies requiring their civilian employees to relocate; and (4) trends for the number and cost of civilian employee relocations during this period. GAO noted that, for FY 1991 through 1995: (1) 97 federal organizations reported authorizing about 132,800 relocations, and 23 other organizations reported making about 40,200 relocations; (2) a small number of organizations accounted for the bulk of the relocations authorized or made; (3) while the total numbers of relocations authorized and made fluctuated yearly across the organizations that provided data for all 5 fiscal years, there was moderate change in these totals between FY 1991 and 1995; (4) across the organizations that provided data for all 5 fiscal years, the total number of relocations authorized decreased by less than 1 percent (89 organizations) and the total number of relocations made increased by about 12.5 percent (19 organizations) from FY 1991 to 1995; (5) 97 federal organizations reported obligating about $3.4 billion for relocations, and 23 other organizations reported expending about $363 million for relocations; (6) a small number of organizations accounted for the bulk of the relocation obligations or expenditures; (7) across the organizations that provided data for all 5 fiscal years, total relocation obligations varied and total relocation expenditures increased yearly; (8) there was noticeable change in these totals between FY 1991 and 1995; (9) in constant 1995 dollars, total relocation obligations increased about 16 percent (83 organizations) and total relocation expenditures increased about 88 percent (22 organizations) from FY 1991 to 1995; (10) for the 22 organizations, this increase was due to the Department of the Navy's expenditures; (11) excluding the Navy's expenditures, the 21 remaining organizations' total expenditures decreased by less than 1 percent during the period; (12) 15 federal organizations reported that they had mandatory rotational policies requiring some of their employees to rotate on a prescribed schedule; (13) most of these organizations attributed their policies to federal regulations that limit overseas tours of duty; and (14) based on data provided by these 15 organizations, GAO estimated that these rotational policies accounted for about 19 percent of the total relocations reported as authorized and about 7 percent of the total relocations reported as made during this period.
We found weaknesses in the implementation of NASA's export control policy and procedures concerning the Center Export Administrator (CEA) function and foreign national access, which increase the risk of unauthorized access to export-controlled technology. Variations in CEA Position, Function, and Resources: NASA's export control policy gives the CEA responsibility for ensuring compliance of all Center program activities with U.S. export control laws and regulations and states that the position should be "senior-level," but does not define what "senior-level" means. NASA headquarters export control officials define senior-level as a person at the GS-15 level or in the senior executive service; however, we found that no CEAs were at the senior executive service level, three were GS-15s, and the CEAs at the remaining seven centers were at the GS-14 and GS-13 levels. In addition, NASA's export control NPR (NASA Procedural Requirements) does not contain a provision on the placement of the export control function and CEA within the center's organizational structure. At some centers where they were several levels removed from the Center Director, CEAs stated that this placement makes it difficult to maintain authority and visibility to staff, to communicate concerns to center management, and to obtain the resources necessary to carry out their export control responsibilities. Conversely, a CEA at another center stated that his placement as Special Assistant to the Center Director creates a supportive environment to incorporate export controls into the project management processes and to require and provide export control training for the majority of center staff. NASA headquarters' export control officials, as well as several CEAs, noted that limitations in staff resources and time spent on export control functions make it difficult to carry out the full range of export control duties, such as improving center export control procedures or providing a more robust export control training program. However, NASA's export control NPR does not discuss the allocation of resources for the export control function or for the CEA within the center, and, according to NASA headquarters' export control officials, each Center Director has the discretion of how to allocate resources to the export control function. As a result, we found variation among the centers in the staff resources assigned to the export control function, as shown in figure 1. Moreover, we found indications that the resources assigned to export controls at centers did not always appear to be commensurate with the export control workload. Specifically, 8 of the 10 centers had two or fewer civil servant staff to carry out export control activities for hundreds to thousands of foreign national visits, Scientific and Technical Information (STI) reviews, international agreements, and technical assistance agreements. For example, at one center in 2013, two civilian export control officials working less than full time on export control activities were responsible for reviewing and providing any needed export control access restrictions for over 3,000 foreign national visitors and conducting STI reviews for over 2,000 publications. NASA's procedural requirements for STI require that all STI intended for release outside of NASA or presented at internal meetings where foreign persons may be present undergo technical, legal, and export control reviews, among others, to ensure that information is not unintentionally released through publication.
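One way to judge whether export control staffing is commensurate with workload, in the spirit of the comparison above, is to compute review actions per civil servant assigned to the function. The sketch below is illustrative only: the center names are hypothetical, and only the first center's figures mirror the single-center example cited above.

```python
# Hypothetical center-level data; only center "A" mirrors the example cited in
# this statement (two officials, over 3,000 foreign national visitors, and
# over 2,000 STI reviews in 2013).
centers = {
    "A": {"export_control_staff": 2, "foreign_national_visits": 3_000, "sti_reviews": 2_000},
    "B": {"export_control_staff": 4, "foreign_national_visits": 1_200, "sti_reviews": 900},
}

for name, c in centers.items():
    workload = c["foreign_national_visits"] + c["sti_reviews"]
    per_staff = workload / c["export_control_staff"]
    print(f"center {name}: {per_staff:,.0f} review actions per export control staff member")
```

A comparison of this kind across all 10 centers would make outliers, such as centers with very high per-staff workloads, readily visible to headquarters.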
See figure 2 for export control workload by center for fiscal year 2013. The CEA at one of the centers stated that the time to complete required review activities leaves little time to improve procedures or provide more robust training. To address the variations in authority, placement, and resources of the CEAs, we recommended NASA establish guidance defining the appropriate level and placement for the CEA function and assess the CEA workload to determine appropriate resources needed at each Center. NASA concurred, indicating plans to update existing guidance and to explore strategies to enhance support for the export control function. Weaknesses in Foreign National Access: Throughout fiscal year 2013, NASA centers and Headquarters approved over 11,000 foreign national visits for periods ranging from less than 30 days to greater than 6 months. NASA's security procedure requires screening of all foreign national visitors prior to gaining approval for access to any NASA facility. However, we identified instances in which NASA security procedures for foreign national access were not followed, which were significant given the potential impact on national security or foreign policy from unauthorized access to NASA technologies. Specifically, at one center, export control officials' statements and our review of documentation identified instances between March and July of 2013 in which foreign nationals fulfilled the role of sponsors for other foreign nationals by identifying the access rights to NASA technology for themselves and other foreign nationals for one NASA program. This is not in compliance with NASA's security procedures, which provide that only NASA civil servants or JPL employees who are U.S. citizens can act as sponsors for foreign nationals, which is one step in NASA's process of approving and activating foreign national access. This center is taking action to address this issue and, as of December 2013, it developed a new approval process and criteria for foreign nationals requesting access to center automated databases and made revisions to center policies for information systems and foreign national access. We identified planned corrective actions at this and other Centers related to the management of foreign national access and, in our April report, we recommended that NASA develop plans with specific time frames to monitor these corrective actions to ensure their effectiveness. NASA concurred and indicated that it plans to take action to increase the effectiveness of its existing procedures and implement improvements. We found that NASA headquarters export control officials and some CEAs faced challenges in providing effective oversight. In particular, the lack of a comprehensive inventory of export-controlled technologies and the ineffective use of available oversight tools limit their ability to identify and address risks. Lack of a Comprehensive Inventory of Export-Controlled Technologies: NASA headquarters export control officials and CEAs lack a comprehensive inventory of the types and locations of export-controlled technologies at the centers, limiting their ability to identify internal and external risks to export control compliance. Five CEAs told us that they do not know the types and locations of export-controlled technologies, but rather rely on NASA program and project managers to have knowledge of this information.
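An inventory of the kind discussed here need not be elaborate. The following is a minimal sketch, with hypothetical records and field names rather than NASA's actual data model, of the types, locations, and control categories such an inventory could capture, and of the kind of query (which facilities hold controlled items) that foreign national visit reviews would rely on.

```python
# Hypothetical inventory records: the fields and values are illustrative, not
# NASA's actual data model.
inventory = [
    {"item": "propulsion test article", "center": "Center A", "building": "4601",
     "control": "ITAR"},
    {"item": "Earth-science sensor data", "center": "Center A", "building": "2201",
     "control": "EAR"},
    {"item": "public outreach materials", "center": "Center A", "building": "1100",
     "control": None},
]

def controlled_buildings(records, center):
    """Buildings at a center that house export-controlled items, e.g., to flag
    areas needing access restrictions before a foreign national visit."""
    return sorted({r["building"] for r in records
                   if r["center"] == center and r["control"] is not None})

print(controlled_buildings(inventory, "Center A"))  # ['2201', '4601']
```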
NASA's export control NPR provides that NASA Center Program and Project Managers, in collaboration with CEAs, are to identify and assess export-controlled technical data. Additionally, NASA Center Project Managers are required by NASA's export control NPR to provide appropriate safeguards to ensure export-controlled items and technical data are marked or identified prior to authorized transfer to foreign parties consistent with export control requirements. The CEA and security chief at one center told us that they requested a plan identifying where export-controlled and sensitive technologies are located within a research branch in order to facilitate foreign national visit requests. According to the branch manager, he was unable to provide this information, stating it would be too cumbersome to map out all of that information and try to restrict access to the areas with sensitive technologies. Assessing areas of vulnerability, including identifying and assessing export-controlled items, could better ensure that consistent procedures are practiced. NASA's lack of a comprehensive inventory of its export-controlled technologies is a longstanding issue that the NASA Inspector General identified as early as 1999. Three centers recently began efforts to identify export-controlled technologies at their centers--one of which involves coordination with the center counterintelligence officer. Specifically, at this center, the counterintelligence office collaborated with the CEA to conduct a sensitive technology survey--designed to identify the most sensitive technologies at the center--to better manage risks by developing protective measures for these technologies in the areas of counterintelligence, information technology security, and export controls. Such approaches, implemented NASA-wide, could enable the agency to take a more risk-based approach to oversight by targeting existing resources to identify the most sensitive technologies and then ensure the locations of such technologies are known and protected. To implement a risk-based approach, we recommended NASA build off of existing information sources, such as assessments by NASA's counterintelligence office, to identify targeted technologies. In its response, NASA highlighted plans to implement a risk-based approach that would include CEAs, program managers, and counterintelligence officials. Underutilization of Oversight Tools: NASA's oversight tools, including annual audits, export control conferences with CEAs, and voluntary disclosures, have identified deficiencies, but NASA headquarters has not addressed them. Specifically, we found that seven centers have unresolved findings, recommendations, or observations spanning a period from 2005 to 2012, in areas including export control awareness, management commitment, resources, training, foreign national visitor processes, and disposal of property. At five centers, responding to audit findings and implementing recommendations required that the CEA coordinate with other offices and programs across the center beyond the CEA's control. The remaining two centers cited resource constraints, organizational priorities, and insufficient coordination with center management as barriers to implementing corrective actions and resolving recommendations. NASA's current procedures do not address coordination among offices at a center to address findings from annual audits.
Further, NASA headquarters export control officials hold annual export control program reviews with the CEAs to discuss export control changes and CEA concerns and recommendations for the program. At NASA's 2013 annual review, the CEAs presented NASA headquarters export control officials with a list of comments regarding the export control program, many of which echo the issues raised in our April 2014 report, such as CEA position and resources, foreign national access, and awareness of export-controlled technologies. NASA headquarters' export control officials stated that they agree with the issues raised by the CEAs but acknowledged that they have not fully addressed the CEA concerns from the most recent program review in March 2013 and have not developed specific plans to do so. In fact, we found that over the last 3 years, NASA headquarters export control officials provided only one policy update or other direction to address export control concerns raised by the CEAs. In our April report, we made two recommendations to address underutilization of the audit and program review tools. To ensure implementation of audit findings, we recommended that NASA direct Center Directors to oversee implementation of the audit findings. Similarly, we recommended that NASA develop a plan, including timeframes, to ensure CEA issues and suggestions for improvement are addressed. NASA concurred and plans to revise existing guidance. NASA may also be missing an opportunity to use voluntary disclosures to help improve export control compliance. NASA's export control NPR provides that it is every NASA employee's personal responsibility to comply with U.S. export control laws and regulations; and further provides the Departments of State and Commerce's regulatory requirements for voluntary self disclosure of noncompliance in export activities, even if the errors were inadvertent. NASA's headquarters' export control program officials told us that few or no voluntary disclosures might indicate a weakness in a center's export control program. We found little usage of the voluntary disclosure process at the NASA centers: a total of 13 voluntary disclosures divided among four of the NASA centers since 2011, and potential noncompliance ranged from failure to file a record of shipment to Germany to potential foreign national exposure to a program's technical data. The remaining six NASA centers have not submitted voluntary disclosures since 2011. We found that a similar event may lead to a voluntary disclosure at one center but not another and that CEA approaches toward voluntary disclosures at some centers may affect NASA's ability to identify and report potential violations of export control regulations. To ensure consistency in reporting potential export control violations, in our April 2014 report, we recommended that NASA re-emphasize to CEAs the requirements on how and when to notify headquarters. NASA concurred and plans to revise and develop additional guidance. As stated above, NASA concurred with all of our recommendations and stated that our findings and recommendations complement results from the recent reviews by the NASA's Inspector General and the National Academy of Public Administration. Further, NASA stated in its response to each of these reviews that it plans to adopt a more comprehensive, risk-based approach to enhance its export control program. 
Subsequent to our report, the NASA Administrator issued an email to all employees reiterating the importance of the export control program and announcing plans to expand the online and in-person export control training. This is an important step as it sets a tone from the top and could help ensure the centers apply consistent approaches. However, it will be important for NASA to be vigilant in assessing actions taken to help ensure effective implementation and to avoid a relapse into the former practices. Collectively, improvements in all of these areas can help NASA strike an effective balance between protecting the sensitive export-controlled technologies and information it creates and uses, and supporting international partners and disseminating important scientific information as broadly as possible. Mr. Chairmen, Ranking Members, and members of the subcommittees, this concludes my prepared remarks. I would be happy to answer any questions that you may have. For questions about this statement, please contact Belva Martin at (202) 512-4841, or at ([email protected]). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include William Russell, Assistant Director; Caryn Kuebler, Analyst-in-Charge; Marie Ahearn; Lisa Gardner; Laura Greifner; Amanda Parker; and Roxanna Sun. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
NASA develops sophisticated technologies and shares them with its international partners and others. U.S. export control regulations require NASA to identify and protect its sensitive technology; NASA delegates implementation of export controls to its 10 research and space centers. Recent allegations of export control violations at two NASA centers have raised questions about NASA's ability to protect its sensitive technologies. GAO was asked to review NASA's export control program. This report assessed (1) NASA's export control policies and how centers implement them, and (2) the extent to which NASA Headquarters and CEAs apply oversight of center compliance with its export control policies. To do this, GAO reviewed export control laws and regulations, NASA export control policies, and State and Commerce export control compliance guidance. GAO also reviewed NASA information on foreign national visits and technical papers and interviewed officials from NASA and its 10 centers as well as from other agencies. Weaknesses in the National Aeronautics and Space Administration (NASA) export control policy and implementation of foreign national access procedures at some centers increase the risk of unauthorized access to export-controlled technologies. NASA policies provide Center Directors wide latitude in implementing export controls at their centers. Federal internal control standards call for clearly defined areas of authority and establishment of appropriate lines of reporting. However, NASA procedures do not clearly define the level of center Export Administrator (CEA) authority and organizational placement, leaving it to the discretion of the Center Director. GAO found that 7 of the 10 CEAs are at least three levels removed from the Center Director. Three of these 7 stated that their placement detracted from their ability to implement export control policies by making it difficult to maintain visibility to staff, communicate concerns to the Center Director, and obtain resources; the other four did not express concerns about their placement. However, in a 2013 meeting of export control officials, the CEAs recommended placing the CEA function at the same organizational level at each center for uniformity, visibility, and authority. GAO identified and the NASA Inspector General also reported instances in which two centers did not comply with NASA policy on foreign national access to NASA technologies. For example, during a 4-month period in 2013, one center allowed foreign nationals on a major program to fulfill the role of sponsors for other foreign nationals, including determining access rights for themselves and others. Each instance risks damage to national security. Due to access concerns, the NASA Administrator restricted foreign national visits in March 2013, and directed each center to assess compliance with foreign national access and develop corrective plans. By June 2013, six centers identified corrective actions, but only two set time frames for completion and only one planned to assess the effectiveness of actions taken. Without plans and time frames to monitor corrective actions, it will be difficult for NASA to ensure that actions are effective. NASA headquarters export control officials and CEAs lack a comprehensive inventory of the types and location of export-controlled technologies and NASA headquarters officials have not addressed deficiencies raised in oversight tools, limiting their ability to take a risk-based approach to compliance. 
Export compliance guidance from the regulatory agencies of State and Commerce emphasizes the importance of identifying controlled items and continuously assessing risks. NASA headquarters officials acknowledge the benefits of identifying controlled technologies, but stated that current practices, such as foreign national screening, are sufficient to manage risk and that they lack resources to do more. Recently identified deficiencies in foreign national visitor access discussed above suggest otherwise. Three CEAs have early efforts under way to better identify technologies, which could help focus compliance on areas of greatest risk. For example, one CEA is working with NASA's Office of Protective Services Counterintelligence Division to identify the most sensitive technologies at the center to help tailor oversight efforts. Such approaches, implemented NASA-wide, could enable the agency to better target existing resources to protect sensitive technologies. In April 2014, GAO recommended that the NASA Administrator establish guidance to better define the CEA function, establish time frames to implement foreign national access corrective actions and assess results, and establish a more risk-based approach to oversight, among other actions. NASA concurred with all of GAO's recommendations and provided information on actions taken or planned to address them.
DEA establishes quotas for the maximum amount of each basic class of schedule I and II controlled substances--such as amphetamine or morphine--that can be produced each year in the United States. DEA also establishes quotas for individual manufacturers, who must apply to DEA to obtain quotas for specific classes of controlled substances. The CSA and DEA's implementing regulations specify dates by which DEA must propose and establish its quotas. The quotas that DEA establishes each year are required to provide for the estimated medical, research, and industrial needs of the United States. In setting quotas, DEA considers information from many sources including manufacturers' production history and anticipated needs from manufacturers' quota applications and past histories of quota granted for each substance from YERS/QMS, which is DEA's system for tracking and recording quota applicants and decisions. Both DEA and FDA have important responsibilities in preventing and responding to shortages of drugs containing controlled substances subject to quotas. In addition to preventing diversion, DEA works to ensure that an adequate and uninterrupted supply of controlled substances is available for legitimate medical and other needs. As part of its mission, FDA works to prevent, alleviate, and resolve drug shortages. The Food and Drug Administration Safety and Innovation Act (FDASIA), enacted in 2012, contains provisions that require DEA and FDA to coordinate their respective efforts during shortages of drugs containing controlled substances subject to quotas. When FDA is notified of a supply disruption of certain drugs that contain controlled substances subject to quotas, FDASIA requires that FDA request that DEA increase quotas applicable to that controlled substance, if FDA determines that it is necessary. Similarly, when FDA has determined that a drug subject to quotas is in shortage in the United States, manufacturers may submit quota applications requesting that DEA authorize additional quota for that substance. FDASIA requires that DEA respond to these requests from manufacturers within 30 days. The CSA requires businesses, entities, or individuals that import, export, manufacture, distribute, dispense, conduct research with respect to, or administer controlled substances to register with the DEA. As of December 2014, there were over 1.5 million registered distributors, pharmacies, and practitioners; more than 1.4 million of these registrants were practitioners. DEA registrants must comply with a variety of requirements imposed by the CSA and its implementing regulations. For example, a registrant must keep accurate records and maintain inventories of controlled substances, among other requirements, in compliance with applicable federal and state laws. Additionally, all registrants must provide effective controls and procedures to guard against theft and diversion of controlled substances. Examples of some of the specific regulatory requirements for distributors, pharmacists, and practitioners include the following: Distributors: Registrants must design and operate a system to disclose suspicious orders of controlled substances, and must inform the DEA field division office in the registrant's area of suspicious orders when the registrant discovers them. 
Pharmacists: While the responsibility for proper prescribing and dispensing of controlled substances rests with the prescribing practitioner, the pharmacist who fills the prescription holds a corresponding responsibility for ensuring that the prescription was issued in the usual course of professional treatment for a legitimate purpose. Practitioners: Practitioners are responsible for the proper prescribing and dispensing of controlled substances for legitimate medical uses. A prescription for a controlled substance must be issued for a legitimate medical purpose by an individual practitioner acting in the usual course of that person's professional practice. It is important for registrants to adhere to their responsibilities under the CSA because they play a critical role in the prescription drug supply chain, which is the means through which prescription drugs are ultimately delivered to patients with legitimate medical needs. Although prescription drugs are intended for legitimate medical uses, as shown in figure 1, the prescription drug supply chain may present opportunities for the drugs to be abused and diverted. For example, an individual may visit multiple practitioners posing as a legitimate patient, referred to as a doctor shopper, to obtain prescriptions for drugs for themselves or others. In an example of diversion, criminal enterprises may rob distributors and pharmacies of prescription drugs to sell to others for a profit. Confidential informants provide information and take action at the direction of law enforcement agencies to further investigations. Agencies may rely on confidential informants in situations in which it could be difficult to use an undercover officer. To help ensure appropriate oversight of informants, The Attorney General's Guidelines Regarding the Use of Confidential Informants (the Guidelines) set forth detailed procedures and review mechanisms to ensure that law enforcement agencies exercise their authorities appropriately and with adequate oversight. Adherence to the Guidelines is mandatory for DOJ law enforcement agencies, including DEA. The Guidelines require each DOJ law enforcement agency to develop agency-specific policies regarding the use of informants, and the DOJ Criminal Division is tasked with reviewing these agency-specific policies to ensure that the policies comply with the Guidelines. The Guidelines require that, prior to using a person as an informant, agencies vet informants to assess their suitability for the work and that agents conduct a continuing suitability review for the informant at least annually thereafter. Additionally, the Guidelines permit agencies to authorize informants to engage in activities that would otherwise constitute crimes under federal, state, or local law if someone without such authorization engaged in these same activities. For example, in the appropriate circumstance, an agency could authorize an informant to purchase illegal drugs from someone who is the target of a drug-trafficking investigation. Such conduct is termed "otherwise illegal activity." The Guidelines include certain requirements for authorizing otherwise illegal activity and restrictions on the types of activities an agency can authorize. In our February 2015 report, we found that DEA had not effectively administered the quota process, nor had DEA and FDA established a sufficiently collaborative relationship to address shortages of drugs containing controlled substances subject to quotas.
Since then, DEA has taken some actions to address the seven recommendations we made in our February 2015 report with respect to the agency's administration of the quota process and efforts to address drug shortages, but DEA has fully implemented only two of the seven recommendations. As we reported in February 2015, DEA had not proposed or established quotas within the time frames required by its regulations for any year from 2001 through 2014. DEA officials attributed this lack of compliance to inadequate staffing and noted that the agency's workload with respect to quotas had increased substantially. Manufacturers who reported quota-related shortages cited late quota decisions as causing or exacerbating shortages of their drugs. We could not confirm whether DEA's lack of timeliness in establishing quotas had caused or exacerbated shortages because of concerns about the reliability of DEA's data, among other things. However, we concluded that, by not promptly responding to manufacturers' quota applications, DEA may have hindered manufacturers' ability to manufacture drugs that contain schedule II controlled substances that may help prevent or resolve a shortage. Additionally, our February 2015 report found that DEA had weak internal controls, which jeopardized the agency's ability to effectively manage the quota process. Specifically: DEA did not have adequate controls to ensure the reliability of YERS/QMS, which it used to track manufacturers' quota applications and record its quota decisions. DEA officials described some data checks of YERS/QMS, such as managers verifying that information entered into the system was accurate. However, the agency did not have systematic quality checks to ensure that the data were accurate, and the checks it had in place were not sufficient. This lack of systematic data checks was also concerning because we estimated that 44 percent of YERS/QMS records in 2011 and 10 percent in 2012 had errors. DEA officials said that 2011 was the first year manufacturers applied for quotas electronically and they expected data from 2012 and beyond to be more accurate. DEA lacked critical management information because it did not have performance measures related to setting quotas. In the absence of such performance measures, we concluded that DEA was missing important information for program managers to use when making decisions about program resources, and the agency could not effectively demonstrate program results. DEA did not monitor or analyze YERS/QMS data to assess the performance of the quota process. Absent such analysis, DEA was unable to evaluate its responses to manufacturers' quota applications or to understand the nature of its workload. DEA did not have reasonable assurance that the quotas it set were in accordance with its requirements and could not ensure continuity of its operations, as it did not have protocols, policies, training materials, or other documentation to manage the quota process. Instead, the agency said it relied on its regulations and the CSA to serve as guidance on how to conduct these activities. However, the need for detailed policies, procedures, and practices is particularly important because the process of setting quotas is very complex, requiring staff to weigh data from at least five different sources that may have contradictory information. To address these deficiencies, our February 2015 report recommended that DEA take four actions to ensure it is best positioned to administer the quota process.
Specifically, we recommended that DEA (1) strengthen its internal controls of YERS/QMS, (2) establish performance measures related to quotas, (3) monitor and analyze YERS/QMS data, and (4) develop internal policies for processing quota applications and setting quotas. In commenting on our report, DEA did not explicitly agree or disagree with these four recommendations. As of June 2016, DEA has taken some actions to address these recommendations. Specifically, in response to our first recommendation, the agency stated that it implemented a series of system-generated flags in YERS/QMS that verify the information manufacturers enter into their quota applications and identify entries made by DEA staff that warrant further review within the agency. Additionally, in October 2015, DEA said that it would compare a random sample of manufacturers' applications and DEA's responses in YERS/QMS on a quarterly basis starting in fiscal year 2016. In June 2016, DEA provided the results of its review of 146 YERS/QMS records from March through May 2016, which identified a nearly nonexistent error rate (.01 percent). Because of these actions, we believe that DEA has implemented this recommendation. In response to our second recommendation, DEA stated in October 2015 that it would develop performance standards that outline time frames for when manufacturers should expect DEA to respond to their quota applications, as well as develop web-based training to help manufacturers improve the quality of the information submitted to the agency. However, in June 2016, DEA stated that developing performance measures specific to the quota process would not be feasible because actions affecting quotas are outside of the agency's control. Instead, DEA focused on training manufacturers about the quota process to improve the accuracy and quality of their quota applications by holding additional trainings in April 2016 and developing web-based training. The agency plans to finish developing the web-based training in fiscal year 2017. Although training is an important step in improving the information being submitted to DEA, it is also important that DEA establish measures to assess its performance in achieving its mission of ensuring an adequate and uninterrupted supply of controlled substances, as it does for its diversion-related mission. As a result, we do not believe DEA's actions are fully responsive to our recommendation. In response to our third recommendation, DEA stated that it streamlined its process for reviewing manufacturers' quota applications, which led to a significant reduction in the agency's response times. For example, DEA said that it is now responding to manufacturers' quota applications within four weeks. As of June 2016, the agency plans to continue monitoring and analyzing the quality of the YERS/QMS data and DEA's timeliness in responding to quota applications. We are currently awaiting documentation about DEA's analysis of YERS/QMS data in relation to the agency's timeliness in responding to manufacturers' quota applications and will update the status of this recommendation as applicable. Lastly, in response to our fourth recommendation, in June 2016, DEA said that it established internal policies for the quota process and is in the process of updating its employee training materials for new staff to help ensure that each staff member has the information needed to issue quotas in accordance with the CSA and DEA's regulations. 
DEA agreed to provide the materials to us when they are completed, and we will assess the status of this recommendation at that time. Our February 2015 report also identified several barriers that may hinder DEA and FDA from effectively coordinating with each other during shortages of drugs containing controlled substances subject to quotas. For example: We found that DEA and FDA sometimes disagreed about what constitutes a shortage because the two agencies defined drug shortages differently. FDA defined a drug shortage as a period of time when the demand or projected demand for the drug within the United States exceeds the supply of the drug. In contrast, DEA officials told us that there is no shortage, from DEA's perspective, as long as there is quota available to manufacture a given controlled substance, regardless of which particular manufacturers are producing the product and which strengths or formulations are available. We concluded that, because the agencies had not reached agreement about what constitutes a drug shortage, it was unclear whether the two agencies would be able to successfully coordinate should a shortage of a drug containing a controlled substance subject to a quota occur. We also found that DEA lacked policies, procedures, or other means to coordinate with FDA about shortages of a controlled substance related to quotas. FDA established such policies and procedures in September 2014, but DEA officials said the agency did not plan to establish formal policies and procedures to coordinate the agency's response to FDA. While FDASIA directs DEA to respond within 30 days to manufacturers that request additional quota pertaining to a shortage of a schedule II drug, the law does not specify how quickly DEA must respond to a request from FDA. A time frame for DEA to respond would be particularly important because a request from FDA means FDA has determined that there is a shortage of a life-sustaining drug and that an increase in quota is necessary to address it. Further, both agencies told us that they were subject to restrictions on exchanging the proprietary information they receive from drug manufacturers, information that may be helpful to prevent or address shortages. At the time our report was issued in February 2015, the agencies had been working for more than 2 years to develop an updated memorandum of understanding (MOU) to share such information. To address these barriers to effective coordination, we made three recommendations. First, we recommended that DEA and FDA promptly update the MOU between the two agencies. Second, we recommended that either in the MOU or a separate agreement, DEA and FDA specifically outline what information they will share and the time frames for sharing such information in response to a potential or existing drug shortage. Third, we recommended that DEA expeditiously establish formal policies and procedures to coordinate with FDA, as directed by FDASIA, with respect to expediting shortage-related quota applications. In commenting on a draft of our report, DEA did not explicitly agree or disagree with these three recommendations. The Department of Health and Human Services agreed with the two recommendations we made to FDA. In March 2015, FDA and DEA updated the MOU to establish procedures regarding the exchange of proprietary and other sensitive information between DEA and FDA, which fully addresses one of our three recommendations. According to DEA, the two agencies have shared information under the auspices of the MOU at least six times in fiscal year 2016.
Although the MOU established procedures for sharing information, it calls for the development of separate plans to specify precisely what information is to be shared, and who it is to be shared with. In October 2015, DEA said that it had met with FDA to determine the specific procedures by which information regarding drug shortages shall be exchanged, and a draft of such a work plan has been circulated between the two agencies for comment. As of June 2016, DEA expects the work plan to be completed no later than December 2016. DEA also noted that the work plan will contain formal policies and procedures to facilitate coordination with FDA, as directed by FDASIA. As a result, the two related recommendations remain open at this time. In June 2015, we reported that DEA provided information to its registrants regarding their roles and responsibilities for preventing abuse and diversion through conferences, training, and other initiatives. We also found that DEA provided additional resources, such as manuals for specific registrant groups and DEA's Know Your Customer guidance for distributors. However, based on our generalizable survey of four DEA registrant groups, we reported that many registrants were not aware of these resources or they would like additional guidance, information, or communication from DEA to better understand their roles under the CSA. We recommended that DEA take three actions to address registrants' concerns. DEA has made some progress, but additional actions are needed to fully address our recommendations. In June 2015, we reported that DEA periodically hosted events such as conferences or meetings for various components of its registrant population during which the agency provided information about registrants' CSA roles and responsibilities for preventing abuse and diversion. We found that DEA was also often a presenter at various conferences at the national, state, or local level, which registrants could attend. We asked distributors whether representatives of their facility attended DEA's 2013 Distributor Conference, and asked individual pharmacies and chain pharmacy corporate offices whether they or other representatives of their pharmacy (or pharmacy chain) had attended a Pharmacy Diversion Awareness Conference (PDAC). Based on our surveys, we estimated that 27 percent of distributors and 17 percent of individual pharmacies had participated in the DEA-hosted events, while 63 percent (20 of 32) of chain pharmacy corporate offices we surveyed had participated in a PDAC. Of the large percentages of distributors and pharmacies that did not participate in these conferences, many cited lack of awareness as the reason. For example, an estimated 76 percent of individual pharmacies that had not attended a PDAC and 35 percent of distributors that had not attended the 2013 Distributor Conference cited lack of awareness as a reason for not participating. Our June 2015 report also stated that DEA had created various resources, such as guidance manuals and a registration validation tool, which registrants could use to understand or meet their roles and responsibilities under the CSA. However, based on our surveys, we found that many registrants were not using these resources because they were not aware that they existed. For example, DEA had created guidance manuals for pharmacists and practitioners to help them understand how the CSA and its implementing regulations pertain to these registrants' professions. These documents were available on DEA's website. 
In 2011, DEA released guidance for distributors containing suggested questions a distributor should ask customers prior to shipping controlled substances (referred to as the Know Your Customer guidance). Additionally, DEA offered a registration validation tool on its website so that registrants, such as distributors and pharmacies, could determine if a pharmacy or practitioner had a valid, current DEA registration. However, our survey results suggested that many registrants were not using these resources that could help them better understand and meet their CSA roles and responsibilities because they were unfamiliar with them. For example, of particular concern were the estimated 53 percent of individual pharmacies that were not aware of either DEA's Pharmacist's Manual or the registration validation tool, and the 70 percent of practitioners that were not aware of DEA's Practitioner's Manual, and were therefore not using these resources. The lack of awareness among registrants of DEA resources and conferences suggested that DEA may not have an adequate means of communicating with its registrant populations. Further, with so many registrants unaware of DEA's conferences and resources, we reported that DEA lacked assurance that registrants had sufficient information to understand and meet their CSA responsibilities. Therefore, we recommended that DEA identify and implement means of cost-effective, regular communication with distributor, pharmacy, and practitioner registrants, such as through listservs or web-based training. DEA agreed that communication from DEA to the registrant population was necessary and vital. As of April 2016, DEA reported that it had taken steps towards addressing this recommendation. In particular, DEA reported that it was in the process of developing web-based training modules for all of its registrant population, and was considering the best way to implement a listserv to disseminate information to its various registrant types. We plan to continue to monitor the agency's efforts in this area, and this recommendation remains open. As we reported in June 2015, some responses to our registrant survey indicated that additional guidance for distributors regarding suspicious orders monitoring and reporting, as well as more regular communication, would be beneficial. In response to an open-ended question in our survey about how DEA could improve its Know Your Customer document, the guidance document DEA has provided to distributors, half of distributors (28 of 55) that offered comments said that they wanted more guidance from DEA. Additionally, just over one-third of distributors (28 of 77) reported that DEA's Know Your Customer document was slightly or not at all helpful. Furthermore, in response to an open-ended question about what additional interactions they would find helpful to have with DEA, more than half of the distributors that offered comments (36 of 55) said that they needed more communication or information from, or interactions with, DEA. Some of the specific comments noted that distributors would like more proactive communication from DEA that was collaborative in nature, rather than being solely violation- or enforcement-oriented. Some of the additional communication and interactions proposed by distributors included quarterly meetings with the local field office and more training or conferences related to their regulatory roles and responsibilities. 
Also, while DEA had created guidance manuals for pharmacists and practitioners, the agency had not developed a guidance manual or comparable document for distributors. DEA officials told us that they believed the information in agency regulations was sufficient for distributors to understand their CSA responsibilities for suspicious orders monitoring and reporting. DEA officials also said that they met routinely with distributors, that distributors had fewer requirements compared to other registrant types, and that they did not believe such guidance was necessary. Additionally, DEA officials said that while distributors wanted specific instructions on how to avoid enforcement actions, DEA could not do that because circumstances that lead to enforcement actions (e.g., individual business practices) vary. However, as we stated in our June 2015 report, a guidance document for distributors similar to the one offered for pharmacies and practitioners could help distributors further understand and meet their roles and responsibilities under the CSA for preventing diversion, though the document may not need to be as detailed. Specifically, we concluded that although DEA may not be able to provide guidance that will definitively answer the question of what constitutes a suspicious order or offer advice about which customers to ship to, DEA could, for example, provide guidance around best practices in developing suspicious orders monitoring systems. DEA could also enhance its proactive communication with distributors--which could be done, for example, via electronic means if additional in-person outreach would be cost prohibitive. Such steps are key to addressing distributors' concerns, because without sufficient guidance and communication from DEA, distributors may not be fully understanding or meeting their roles and responsibilities under the CSA for preventing diversion. Additionally, in the absence of clear guidance from DEA, our survey data showed that many distributors were setting thresholds on the amount of certain controlled substances that can be ordered by their customers (i.e., pharmacies and practitioners), which could negatively affect pharmacies and ultimately patients' access. For example, we estimated that 62 percent of individual pharmacies did business with distributors that put thresholds on the quantity of controlled substances they could order, and we estimated that 25 percent of individual pharmacies have had orders cancelled or suspended by distributors. Responses to our surveys also showed that some pharmacies wanted updated or clearer guidance, as well as more communication and information, from DEA. For example, we found that DEA's Pharmacist's Manual was last updated in 2010, and since that time DEA had levied large civil fines against some pharmacies. Some pharmacy associations reported these fines had caused confusion in the industry about pharmacists' CSA roles and responsibilities. In their responses to an open-ended question in our survey about DEA's Pharmacist's Manual, some chain pharmacy corporate offices (7 of 18) said that the manual needed updates or more detail, some chain pharmacy corporate offices (5 of 18) reported other concerns with the manual, and some individual pharmacies (13 of 33) said that the manual needed improvement, such as more specifics.
For example, several chain pharmacy corporate offices commented that the manual needed to be updated to reflect changes in DEA enforcement practices or regulations (e.g., the rescheduling of hydrocodone from a schedule III to a schedule II drug). The need for clearer guidance for pharmacists was also suggested by some chain pharmacy corporate offices' responses to a question about DEA field office consistency. Specifically, when asked how consistent the responses of staff in different field offices had been to their inquiries about pharmacists' roles and responsibilities, nearly half of chain pharmacy corporate offices (8 of 19) that had contact with multiple DEA field offices said that staff responses were slightly or not at all consistent. In an open-ended response to this question, one chain pharmacy corporate office noted that in its interactions with different DEA field offices throughout the country it had received widely varying interpretations of DEA requirements that affected the chain's day-to-day operations, such as requirements for theft/loss reporting of controlled substances and requirements for prescribers to be reported when the prescriber fails to provide a written prescription. These responses from chain pharmacy corporate offices about field office inconsistencies suggested that the existing pharmacy guidance may not be clear even to some DEA field office officials. Additionally, the desire for more or clearer guidance and more communication from DEA was a common theme in the responses offered by both individual pharmacies and chain pharmacy corporate offices to the open-ended questions in our survey related to DEA interactions. For example, in response to an open-ended question about what additional interactions they would find helpful to have with DEA headquarters or field office staff, nearly all of the chain pharmacy corporate offices that offered comments (15 of 18) said that they wanted more guidance or clearer interpretation of the guidance from DEA, more communication with DEA, or a more proactive, collaborative relationship with DEA. In addition, nearly a third of individual pharmacies (18 of 60) that offered open-ended answers to a question about any new guidance, resources, or tools that DEA should provide to help them understand their roles and responsibilities said that they would like more proactive communication from DEA through methods such as a newsletter or e-mail blast. To help address the concerns raised by some distributor and pharmacy registrants, we recommended that DEA solicit input from distributors, or associations representing distributors, and develop additional guidance for distributors regarding their roles and responsibilities for suspicious orders monitoring and reporting. We also recommended that DEA solicit input from pharmacists, or associations representing pharmacists, about updates and additions needed to existing guidance for pharmacists, and revise or issue guidance accordingly. In commenting on our report, DEA raised concerns about the recommendation to solicit input from distributors and stated that, short of providing arbitrary thresholds to distributors, it cannot provide more specific suspicious orders guidance because the variables that indicate a suspicious order differ among distributors and their customers.
In April 2016, DEA provided information about ongoing efforts to educate distributors about their roles and responsibilities for monitoring and reporting suspicious orders, such as their Distributors' Conferences, and noted that it plans to host yearly training for distributors. However, DEA did not mention any plans to develop and distribute additional guidance for distributors. We continue to believe that a guidance document similar to the one offered for pharmacies and practitioners could help distributors further understand and meet their role and responsibilities under the CSA. Specifically, although DEA may not be able to provide guidance that will definitively answer the question of what constitutes a suspicious order or offer advice about which customers to ship to, DEA could, for example, provide guidance around best practices in developing suspicious orders monitoring systems. In the absence of clear guidance from DEA, our survey data show that many distributors are setting thresholds on the amount of certain controlled substances that can be ordered by their customers (i.e., pharmacies and practitioners), which can negatively impact pharmacies and ultimately patients' access. We plan to continue to monitor the agency's efforts in this area, and this recommendation remains open. With respect to our recommendation that DEA solicit input from pharmacists, in commenting on our report, DEA described actions it would take to partially address the recommendation, including updating the Pharmacist's Manual to reflect two subject matter area changes related to the rescheduling of hydrocodone and new drug disposal regulations. However, at that time, DEA did not comment about providing any additional guidance to pharmacists related to their roles and responsibilities in preventing abuse and diversion under the CSA. In April 2016, DEA reported that it continues to work with the National Association of Boards of Pharmacy regarding issues raised during stakeholder discussions, which resulted in a March 2015 consensus document published by stakeholders entitled "Stakeholders' Challenges and Red Flag Warning Signs Related to Prescribing and Dispensing Controlled Substances." DEA also described other ways in which the agency works with pharmacists or associations representing pharmacists, such as during regional one-day Pharmacy Diversion Awareness Conferences, and noted that it was still working to update the Pharmacist's Manual regarding changes related to the rescheduling of hydrocodone and new drug disposal regulations. DEA also commented that it would continue to update or issue guidance as warranted, but again, did not indicate that it had updated, or planned to update, existing guidance to pharmacists related to their roles and responsibilities in preventing abuse and diversion under the CSA. We plan to continue to monitor the agency's efforts in this area, as well, and consequently this recommendation remains open. In September 2015, we reported that DEA's confidential informants policy required agents to consider most of the factors identified in the Attorney General's Guidelines for conducting initial suitability reviews prior to using a person as an informant. Furthermore, in accordance with the Guidelines, DEA's policy required that a continuing suitability review be conducted at least annually. 
However, we determined that DEA's policy was either partially consistent with or did not address some provisions in the Guidelines regarding oversight of informants' authorized illegal activities. We recommended that DEA update its policy and corresponding monitoring processes to address these provisions from the Guidelines. As of June 2016, DEA had made progress, but had not fully implemented our recommendation. In September 2015, we reported that DEA's policy was partially consistent with the Guidelines' requirements to provide written instructions to an informant regarding the parameters of the authorized otherwise illegal activity and to have the informant sign an acknowledgment of these instructions. Additionally, regarding the Guidelines' provisions on the suspension or revocation of authorization for an informant to engage in otherwise illegal activity, DEA's policy was consistent with the provision for revoking authorization in cases where DEA has reason to believe that an informant is not in compliance with the authorization. However, DEA's policy did not address circumstances unrelated to the informant's conduct in which DEA may, for legitimate reasons, be unable to comply with precautionary measures necessary for overseeing otherwise illegal activity. At the time of our review, DEA officials told us that they did not authorize informants to participate in otherwise illegal activity without agent supervision, and, therefore, these officials said they believe this requirement would not be applicable to DEA. However, we found that DEA's policy did not explicitly state that direct supervision by an agent is required for all instances of an informant's participation in otherwise illegal activity. Additionally, regardless of the circumstances for suspending or revoking an authorization for otherwise illegal activity, DEA's policy did not require the informant to sign a written acknowledgment that the authorization had been suspended or revoked. As a result, we recommended that DEA, with assistance and oversight from the DOJ Criminal Division, update its policy and corresponding monitoring procedures to explicitly address the Guidelines' provisions on oversight of informants' illegal activities. DOJ concurred with this recommendation and has coordinated with DEA on updating the agency's policy. According to an April 2016 memo, the Criminal Division has reviewed a revised version of DEA's agents manual, which contains DEA's policies and practices regarding confidential informants, and the Criminal Division determined that the revised manual is fully consistent with the Guidelines. Based on follow-up discussions with DOJ, as of June 2016, DEA's Office of the Chief Counsel was preparing the language needed to incorporate the new policy and expects to complete this process in summer 2016. At that time, we plan to review the updated policy to determine whether DEA has fully implemented our recommendation. Chairman Grassley, Ranking Member Leahy, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact Diana C. Maurer at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals who made key contributions to this statement include Kristy Love (Assistant Director), Karen Doran, Alana Finley, Sally Gilley, Rebecca Hendrickson, Lisa Lusk, Geri Redican-Bigott, Christina Ritchie, Kelly Rolfes-Haase, and Sarah Turpin. Key contributors for the previous work on which this testimony is based are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DEA administers and enforces the CSA to help ensure the availability of controlled substances, including certain prescription drugs, for legitimate use while limiting their availability for abuse and diversion. The CSA requires DEA to set quotas that limit the amount of certain substances that are available in the United States. The CSA also requires those handling controlled substances to register with DEA. In addition, DEA works to disrupt and dismantle major drug trafficking organizations and uses confidential informants to help facilitate its investigative efforts. This testimony addresses DEA's efforts to address prior GAO recommendations concerning: (1) administration of the quota process, (2) information provided to registrants on their roles and responsibilities under the CSA, and (3) compliance with guidelines regarding confidential informants. This statement is based on findings from three GAO reports issued during 2015, and selected status updates from DEA through June 2016. In its prior work, GAO analyzed quota data, surveyed DEA registrants, reviewed DEA policy documents and interviewed DEA officials. For selected updates, GAO reviewed DEA documentation and held discussions with agency officials. In three reports issued during 2015, GAO made eleven recommendations to the Drug Enforcement Administration (DEA) related to administering the quota process for controlled substances, providing information and guidance to registrants, and complying with guidelines for overseeing confidential informants. As of June 2016, DEA had taken some actions to address these recommendations but had fully implemented only two of them. Administering the quota process. In February 2015, GAO found that DEA had not effectively administered the quota process that limits the amount of certain controlled substances available for use in the United States. For example, manufacturers apply to DEA for quotas needed to make drugs annually. GAO found that DEA did not respond within the time frames required by its regulations for any year from 2001 through 2014, which, according to some manufacturers, caused or exacerbated shortages of drugs. GAO recommended that DEA take seven actions to improve its management of the quota process and to address drug shortages. In March 2015, DEA implemented one recommendation to finalize an information sharing agreement with the Food and Drug Administration regarding drug shortages. In June 2016, DEA implemented a second recommendation strengthening internal controls in the quota system. DEA has not fully implemented the other five recommendations. In October 2015, DEA identified steps it planned to take, including developing performance standards for responsiveness to manufacturers, but has not yet completed these actions. Providing information to registrants. In June 2015, based on four nationally representative surveys of DEA registrants, GAO reported that many registrants were not aware of various DEA resources, such as manuals for pharmacists and practitioners. In addition, some distributors, individual pharmacies, and chain pharmacy corporate offices wanted improved guidance from, and additional communication with, DEA about their roles and responsibilities under the Controlled Substances Act (CSA). GAO recommended that DEA take three actions to increase registrants' awareness of DEA resources and to improve the information DEA provides to registrants. 
In April 2016, DEA reported that it had taken some steps towards addressing these recommendations, such as developing web-based training and updating the Pharmacist's Manual to reflect new regulations. However, DEA did not mention plans to develop and distribute additional guidance for distributors or pharmacies and therefore has not yet fully implemented GAO's recommendations. Compliance with confidential informant guidelines. In September 2015, GAO reported that DEA's confidential informant policies were not fully consistent with provisions in the Attorney General's Guidelines. For example, DEA did not fully address the requirements to provide the informant with written instructions about authorized illegal activity and to require a signed acknowledgment from the informant. GAO recommended that DEA update its policy and corresponding monitoring processes to explicitly address these particular provisions in the Guidelines. According to an April 2016 memo and subsequent follow-up, DEA has revised its policy accordingly, and it is undergoing internal processing, which is expected to be completed in summer 2016. Until GAO can review the new policy and verify that it complies with the Guidelines, this recommendation remains open. GAO previously made eleven recommendations to DEA related to the quota process, guidance to registrants, and confidential informants. DEA generally agreed with and has begun taking actions to address the recommendations, and has so far fully implemented two.
The Nuclear Waste Policy Act of 1982, as amended, establishes a comprehensive policy and program for the safe, permanent disposal of commercial spent nuclear fuel and other highly radioactive wastes in one or more geologic repositories. The act charges DOE with (1) establishing criteria for recommending sites for repositories; (2) "characterizing" (investigating) the Yucca Mountain site to determine its suitability for a repository; (3) if the site is found suitable, recommending it to the President, who would submit a recommendation to the Congress if he agreed that the site was qualified; and (4) seeking permission from NRC to construct and operate a repository at the approved site. Under the Nuclear Waste Policy Act, users of nuclear-power-generated electricity pay $0.001 per kilowatt-hour into a Nuclear Waste Fund, which may be used only to pay for the siting, licensing, and construction of a nuclear waste repository. In fiscal year 2006, DOE reported that the fund had $19.4 billion. DOE also reported that it had spent about $11.7 billion (in fiscal year 2006 dollars) from project inception in fiscal years 1983 through 2005 and estimated that an additional $10.9 billion (in fiscal year 2006 dollars) would be incurred from fiscal years 2006 to 2017 to build the repository. Since the early 1980s, DOE has studied the Yucca Mountain site to determine whether it is suitable for a high-level radioactive waste and spent nuclear fuel repository. For example, DOE completed numerous scientific studies of water flow and the potential for rock movement near the mountain, including the likelihood that volcanoes and earthquakes will adversely affect the repository's performance. To allow scientists and engineers greater access to the rock being studied, DOE excavated two tunnels for studying the deep underground environment: (1) a 5-mile main tunnel that loops through the mountain, with several research areas or alcoves connected to it, and (2) a 1.7-mile tunnel that crosses the mountain, allowing scientists to study properties of the rock and the behavior of water near the potential repository area. Since July 2002, when the Congress approved the President's recommendation of the Yucca Mountain site for the development of a repository, DOE has focused on preparing its license application. In October 2005, DOE announced a series of changes in the management of the project and in the design of the repository to simplify the project and improve its safety and operation. Previously, DOE's design required radioactive waste to be handled at least four separate times by transporting the waste to the Yucca Mountain site, removing the waste from its shipping container, sealing it in a special disposal container, and moving it into the underground repository. The new repository design relies on uniform canisters that would be filled and sealed before being shipped, reducing the need for direct handling of most of the waste prior to being placed in the repository. As a result, DOE will not have to construct several extremely large buildings costing millions of dollars for handling radioactive waste. In light of these changes, DOE has been working on revising the designs for the repository's surface facilities, developing the technical specifications for the canisters that will hold the waste, and revising its draft license application. 
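To make the fee and cost figures above concrete, the following is a minimal sketch, in Python, of the underlying arithmetic. The fee rate, fund balance, and DOE spending figures come from the report; the annual generation quantity is an illustrative assumption used only to show how the per-kilowatt-hour fee accrues, not a figure from this report.

```python
# Minimal sketch of the Nuclear Waste Fund arithmetic described above.
# FEE_PER_KWH and the DOE cost and balance figures come from the report;
# ASSUMED_ANNUAL_GENERATION_KWH is an illustrative assumption.

FEE_PER_KWH = 0.001                       # dollars paid into the fund per kilowatt-hour
ASSUMED_ANNUAL_GENERATION_KWH = 800e9     # illustrative: 800 billion kWh of nuclear generation per year

# Illustrative annual payment into the fund at the assumed generation level
annual_contribution = FEE_PER_KWH * ASSUMED_ANNUAL_GENERATION_KWH

# Figures reported by DOE (in fiscal year 2006 dollars)
fund_balance_fy2006 = 19.4e9              # reported fund balance, fiscal year 2006
spent_through_fy2005 = 11.7e9             # spent from project inception through fiscal year 2005
estimated_fy2006_to_fy2017 = 10.9e9       # estimated additional cost, fiscal years 2006-2017

total_projected_cost = spent_through_fy2005 + estimated_fy2006_to_fy2017

print(f"Illustrative annual fee revenue: ${annual_contribution / 1e9:.1f} billion")
print(f"Projected repository cost through fiscal year 2017: ${total_projected_cost / 1e9:.1f} billion")
print(f"Reported fund balance in fiscal year 2006: ${fund_balance_fy2006 / 1e9:.1f} billion")
```

Under these assumptions the fee would generate roughly $0.8 billion per year, and the report's spending figures imply a total projected cost of about $22.6 billion through fiscal year 2017.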
In accordance with NRC regulations, before filing its license application, DOE must first make all documentary material that is potentially relevant to the licensing process electronically available via NRC's Internet-based document management system. This system, known as the Licensing Support Network, provides electronic access to millions of documents related to the repository project. DOE is required to initially certify to NRC that it has made its documentary material available no later than 6 months in advance of submitting the license application. NRC, Nevada, and other parties in the licensing process must also certify their documentary material was made available following DOE's initial certification. This information will then be available to the public and all the parties participating in the licensing process. OCRWM currently expects to certify its material in the Licensing Support Network by December 21, 2007. In addition, OCRWM expects to complete the necessary designs and have the draft license application ready for DOE management's review by February 29, 2008. NRC is charged with regulating the construction, operation, and decommissioning phases of the project and is responsible for ensuring that DOE satisfies public health, safety, and environmental regulatory requirements. Once DOE files the license application, NRC will begin a four-stage process to review the application and decide whether to (1) authorize construction of the repository, (2) authorize construction with conditions, or (3) deny the application. As shown in figure 1, this process includes the following steps: Acceptance review. NRC plans to take up to 180 days to examine the application for completeness to determine whether the license application has all of the information and components NRC requires. If NRC determines that any part of the application is incomplete, it may either reject the application or require that DOE furnish the necessary documentation. NRC will docket the application once it deems the application complete, indicating its readiness for a detailed technical review. Technical review. The detailed technical review, scheduled for 18 to 24 months, will evaluate the soundness of the scientific data, computer modeling, analyses, and preliminary facility design. The review will focus on evaluating DOE's conclusions about the ability of the repository designs to limit exposure to radioactivity, both during the construction and operation phase of the repository (known as preclosure) and during the phase after the repository has been filled, closed, and sealed (known as postclosure). If NRC discovers problems with the technical information used to support the application, it may conduct activities to determine the extent and effect of the problem. As part of this review, NRC staff will prepare a safety evaluation report that details staff findings and conclusions on the license application. Public hearings. NRC will also convene an independent panel of judges--called the Atomic Safety and Licensing Board--to conduct a series of public hearings to address contested issues raised by affected parties and review in detail the related information and evidence regarding the license application. Upon completion, the board will make a formal ruling (called the initial decision) resolving matters put into controversy. This initial decision can then be appealed to the NRC commissioners for further review. NRC commission review.
In the likely event of an appeal, the NRC commissioners will review the Atomic Safety and Licensing Board's initial decision. In addition, outside of the adjudicatory proceeding, they will complete a supervisory examination of those issues contested in the proceeding to consider whether any significant basis exists for doubting that the facility will be constructed or operated with adequate protection of the public health and safety. The commissioners will also review any issues about which NRC staff must make appropriate findings prior to the authorization of construction, even if they were not contested in the proceeding. However, until DOE submits a license application, NRC's role has involved providing regulatory guidance; observing and gathering information on DOE activities related to repository design, performance assessment, and environmental studies; and verifying site characterization activities. These prelicensing activities are intended to identify and resolve potential licensing issues early to help ensure that years of scientific work are not found to be inadequate for licensing purposes. DOE and NRC have interacted on the repository since 1983. In 1998, they entered into a prelicensing interaction agreement that provides for technical and management meetings, data and document reviews, and the prompt exchange of information between NRC's on-site representatives and DOE project personnel. Consistent with this prelicensing interaction agreement and NRC's regulations, NRC staff observe and review activities at the site and other scientific work as they are performed to allow early identification of potential licensing issues for timely resolution at the staff level. EPA also has a role in the licensing process--setting radiation exposure standards for the public outside the Yucca Mountain site. In 2001, EPA set standards for protecting the public from inadvertent releases of radioactive materials from wastes stored at Yucca Mountain, which are required by law to be consistent with recommendations of the National Academy of Sciences. In July 2004, the U.S. Court of Appeals for the District of Columbia Circuit ruled that EPA's standards were not consistent with the National Academy of Sciences' recommendations. In response, EPA proposed a revised rule in August 2005. The director of EPA's Office of Air and Radiation Safety told us that EPA plans to finalize its rule this year. In addition, NRC must develop exposure limits that are compatible with EPA's rule. NRC published a proposed rule that it states is compatible with EPA's rule and received public comments in 2005, but it has not yet finalized the rule. If EPA's rule does not change significantly in response to public comments, NRC's rule would not require major revisions either and could be finalized within months. However, if EPA's final rule has major changes, it could require major changes to NRC's rule, which could take more than a year to redraft, seek and incorporate public comments, and finalize, according to NRC officials. In July 2006, DOE announced its intent to file a license application with NRC no later than June 30, 2008. OCRWM's director set the June 30, 2008, goal to jump-start what he viewed as a stalled project. OCRWM's director told us that he consulted with DOE and contractor project managers to get a reasonable estimate of an achievable date for submitting the license application and asked OCRWM managers to develop a plan and schedule for meeting the June 30, 2008, goal. 
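Using only the stage durations stated above (up to 180 days for the acceptance review and 18 to 24 months for the technical review) and DOE's June 30, 2008, filing goal, a rough timeline for the front end of NRC's review can be sketched. This is an illustration, not a schedule from this report: the hearing and commission-review phases have no stated durations here, months are approximated as 30-day periods, and the stages are assumed to run back to back.

```python
# Illustrative timeline for the first two stages of NRC's license application review.
# Durations are taken from the stages described above; the back-to-back sequencing and
# 30-day months are simplifying assumptions for this sketch.
from datetime import date, timedelta

filing_date = date(2008, 6, 30)                      # DOE's stated filing goal

acceptance_end = filing_date + timedelta(days=180)   # acceptance review: up to 180 days

# Technical review: 18 to 24 months, approximated as 30-day months.
technical_earliest = acceptance_end + timedelta(days=18 * 30)
technical_latest = acceptance_end + timedelta(days=24 * 30)

print(f"Application filed:            {filing_date}")
print(f"Acceptance review ends by:    {acceptance_end}")
print(f"Technical review ends around: {technical_earliest} to {technical_latest}")
```

Even under these simplified assumptions, the detailed technical review alone would extend well into 2010, before the public hearings and commission review described above begin.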
OCRWM's director believes this schedule is achievable, noting that DOE had already performed a significant amount of work toward developing a license application. Specifically, DOE completed a draft license application in September 2005, but opted not to file it with NRC to allow more time to address the USGS e-mail issue, revise the repository's design to simplify the project and improve its safety and operation, and consider revising its technical documents in response to the possibility that EPA would revise the radiation standards for the proposed repository. Table 1 shows the project's major milestones. DOE did not consult with external stakeholders in developing this schedule because there was no legal or regulatory requirement or compelling management reason to do so, according to senior OCRWM officials. However, these officials noted that the NRC review process includes extensive public hearings on the application, which will provide stakeholders with an opportunity to comment on and challenge the substance of the application. In addition, regarding other aspects of the program, senior OCRWM officials noted that they have often consulted with external stakeholders, including city and county governments near the proposed repository site, NRC, USGS, and nuclear power companies. OCRWM has also consulted with Nevada, the U.S. Department of the Navy, and other DOE offices. For example, in developing its standards for the canisters that will be used to store, transport, and place the waste in the repository, DOE consulted with the Navy and the nuclear power plant operators that generate the nuclear waste and will use the proposed canisters. In addition, DOE has worked with the local city and county governments near the repository to develop the plans for transporting the waste to the proposed repository. OCRWM's director has made the submission of the license application by June 30, 2008, the project's top strategic objective and management priority. Accordingly, each OCRWM office has created business plans detailing how its work will support this objective. Furthermore, DOE has developed a license application management plan that incorporates the lessons learned from previous license application preparation efforts and works to ensure that the license application meets all DOE and NRC statutory, regulatory, and quality requirements. The plan establishes a process whereby teams assess the statutory and regulatory requirements for the license application, identify any gaps and inadequacies in the existing drafts of the license application, and draft or revise these sections. Since the license application is expected to be thousands of pages long, the plan divides the license application into 71 subsections, each with a team assigned specific roles and responsibilities, such as for drafting a particular subsection or approving a particular stage of the draft. Finally, the plan also creates new project management controls to provide oversight of this process and manage risks. For example, the plan details how issues that may pose risks to the schedule or quality of the license application should be noted, analyzed, and resolved, and how unresolved issues should be elevated to successively higher levels of management. 
NRC officials believe it is likely that DOE will submit a license application by June 30, 2008, but will not speculate about its quality due to a long-standing practice of maintaining an objective and neutral position toward proposed license applications until they are filed with NRC. According to NRC officials, NRC's ability to review an application in a timely manner is contingent on the application being high quality, which NRC officials define as being complete and accurate, including traceable and transparent data that adequately support the technical positions presented in the license application. NRC has expressed concern about the lack of a rigorous quality assurance program and the reliability of USGS scientific work that DOE had certified before the USGS e-mails were discovered. Based on its prelicensing review, NRC recognizes that DOE is addressing problems with its quality assurance program and, by developing a new water infiltration model, is restoring confidence in the reliability of its scientific work. When the Nuclear Waste Policy Act of 1982 gave NRC responsibility for licensing the nuclear waste repository, NRC staff began engaging in prelicensing activities aimed at gathering information from DOE and providing guidance so that DOE would be prepared to meet NRC's statutory and regulatory requirements and NRC would be prepared to review the license application. NRC issued high-level waste disposal regulations containing criteria for approving the application and publicly available internal guidance detailing the steps and activities NRC will perform to review the application. NRC also established a site office at OCRWM's Las Vegas, Nevada, offices to act as NRC's point of contact and to facilitate prompt information exchanges. NRC officials noted that they have also been working for several years to communicate NRC's expectations for a high-quality license application. Although NRC has no formal oversight role in the Yucca Mountain project until DOE files a license application, NRC staff observe DOE audits of its quality assurance activities to identify potential issues and problems that may affect licensing. The NRC staff then report their findings in quarterly reports that summarize their work and detail any problems or issues they identify. For example, after observing a DOE quality assurance audit at the Lawrence Livermore National Laboratory in August 2005, NRC staff expressed concern that humidity gauges used in scientific experiments at the project were not properly calibrated--an apparent violation of quality assurance requirements. Due in part to concerns that quality assurance requirements had not been followed, BSC issued a February 7, 2006, stop-work order affecting this scientific work. In June 2007, OCRWM project managers told us that because quality assurance rules were not followed, DOE could not use this scientific work to support the license application. To facilitate prelicensing interactions, NRC and DOE developed a formal process in 1998 for identifying and documenting technical issues and information needs. As shown in table 2, issues were grouped into nine key technical issues focused mainly on postclosure performance of the geologic repository. Within this framework, NRC and DOE defined 293 agreements in a series of technical exchange meetings. An agreement is considered closed when NRC staff determines that DOE has provided the requested information. Agreements are formally closed in public correspondence or at public technical exchanges. 
As of June 2007, DOE has responded to all 293 of the agreements. NRC considers 260 of these to be closed. NRC considers 8 of the remaining 33 agreements to be potentially affected by the USGS e-mail issue that emerged in 2005. Their resolution will be addressed after NRC examines the new water infiltration analysis. NRC considers that the remaining 25 have been addressed but still need additional information. DOE has indicated that it does not plan any further responses on these agreements, and that the information will be provided in the June 2008 license application. NRC determined that adding agreements to the original 293 was not an efficient means to continue issue resolution during prelicensing, given DOE's stated intent to submit its license application, first in 2004, and now in 2008. NRC is now using public correspondence, as well as public technical exchanges and management meetings, to communicate outstanding and emerging technical issues. For example, NRC's September 2006 correspondence provided input on DOE's proposed approach for estimating seismic events during the postclosure period and requested further interactions on the topic. Also, since May 2006, NRC and DOE have conducted a series of technical exchanges to discuss such topics as DOE's total system performance assessment model, the seismic design of buildings, and other DOE design changes. Other interactions are planned to ensure that NRC has sufficient information to conduct its prelicensing responsibilities. DOE is implementing the recommendations and addressing the challenges identified in our March 2006 report, but it is unclear whether the department's actions will prevent similar problems from recurring. Specifically, in response to our recommendations that DOE improve its management tools, DOE has eliminated the one-page summary (or panel) of performance indicators and has revised its trend evaluation reports. DOE is supplementing these changes with more rigorous senior management meetings that track program performance to better ensure that new problems are identified and resolved. DOE has also begun addressing additional management challenges by independently reworking USGS's water infiltration analysis, fixing problems with a design and engineering process known as requirements management, and reducing the high-turnover rate and large number of acting managers in key project management positions. Our March 2006 report found that two of the project's management tools-- the panel of performance indicators and the trend evaluation reports-- were ineffective in helping DOE management to monitor progress toward meeting performance goals, detecting new quality assurance problems, and directing management attention where needed. In response, DOE has stopped using its panel of performance indicators and replaced them with monthly program review meetings--chaired by OCRWM's director and attended by top-level OCRWM, BSC, Sandia, and USGS managers--that review the progress of four main OCRWM projects: (1) the drafting of the license application; (2) the effort to select and load documents and records into NRC's Licensing Support Network; (3) work supplementing DOE's environmental impact statement to reflect the October 2005 changes in repository design, which shift from direct handling of waste to the use of canisters; and (4) the development of a system to transport waste from where it is generated, mainly nuclear power plants, to the repository. 
In addition, DOE has developed the following four new, high-level performance indicators that it evaluates and discusses at its monthly program review meetings: safety, including injuries and lost workdays due to accidents at the project; quality, including efforts to improve OCRWM's corrective action program, which works to detect and resolve problems at the project and the performance of the quality assurance program; cost, including actual versus budgeted costs, staffing levels, and efforts to recruit new employees; and culture, including the project's safety conscious work environment program, which works to ensure that employees are encouraged to raise safety concerns to their managers or to NRC without fear of retaliation and that employees' concerns are resolved in a timely and appropriate manner according to their importance. Although DOE plans to develop additional performance indicators, these four simplified indicators have replaced about 250 performance indicators on the previous performance indicator panel. According to a cognizant DOE official, the previous performance indicator panel was ineffective, in part, because it focused on what could be measured, as opposed to what should be measured, resulting in DOE focusing its efforts on developing the performance indicator panel instead of determining how to use this information as a management tool. The monthly program review and the new performance indicators are designed to be more useful to OCRWM management by being simpler and more focused on the key mission activities. DOE has also revised its trend evaluation reports to create new organizational structures and procedures that detail the processes and steps for detecting and analyzing trends and preparing trend evaluation reports for senior management review. DOE has appointed a trend program manager and implemented a work group to oversee these processes. Furthermore, as we recommended, the new trend program has an increased focus on the significance of the monitored condition by synthesizing trends projectwide instead of separating OCRWM's and BSC's trend evaluation reports. To improve the utility of trend evaluation reports as a management tool, the procedures now identify the following three types of trends and criteria for evaluating them: Adverse trends are (1) repeated problems that involve similar tasks or have similar causes and are determined by management to be significant or critical to the success of the project; (2) repeated problems that are less significant but collectively indicate a failure of the quality assurance program, may be precursors to a more significant problem, or pose a safety problem; and (3) patterns of problems that management determines warrant further analysis and actions to prevent their recurrence. Emerging trends are problems that do not meet the criteria for an adverse trend, but require actions to ensure that they do not evolve into an adverse trend. Monitored trends are fluctuations in the conditions being monitored that OCRWM management determines do not warrant action, but each fluctuation needs close monitoring to ensure that it does not evolve into an emerging or adverse trend. DOE has also implemented changes to its corrective action program--the program that provides the data that are analyzed in the trend evaluation program. The corrective action program is the broader system for recognizing problems and tracking their resolution. 
It is one of the key elements of the project's quality assurance framework and has been an area of interest to NRC in its prelicensing activities. The corrective action program consists of a computer system that project employees can use to enter information about a problem they have identified and create a record, known as a condition report, and a set of procedures for evaluating the condition reports and ensuring these problems are resolved. Regarding our broader conclusions that the OCRWM quality assurance program needed more management attention, in spring 2006, DOE requested a team of external quality assurance experts to review the performance of the quality assurance program. The experts concluded that 8 of the 10 topics they studied--including the corrective action program-- had not been effectively implemented. Specifically, the team found that the corrective action program did not ensure that problems were either quickly or effectively resolved. Furthermore, a follow-up internal DOE study, called a root cause analysis report, concluded that the corrective action program was ineffective primarily because senior management had failed to recognize the significance of repeated internal and external reviews and did not aggressively act to correct identified problems and ensure program effectiveness. In response, DOE has revised the corrective action program in an effort to change organizational behaviors and provide increased management attention. For example, DOE has restructured the condition screening team, which previously had poor internal communication and adversarial relationships among its members, according to a senior project manager. Similarly, a December 2006 external review of the quality assurance program found that OCRWM staff had focused its efforts on trying to downgrade the significance of condition reports to deflect individual and departmental responsibility, rather than ensuring that the underlying causes and problems were addressed. In response, DOE (1) reorganized the condition screening team to reduce the size of the team but include more senior managers; (2) identified roles, responsibilities, and management expectations for the team, including expectations for collaborating and communicating; and (3) formalized processes and criteria for screening and reviewing condition reports. The condition screening team now assigns one of four significance levels to each new condition report and assigns a manager who is responsible for investigating the problem. In addition, DOE has restructured the management review committee, which oversees the corrective action program and the condition screening team. The management review committee is charged with, among other things, reviewing the actions of the condition screening team, particularly regarding the condition reports identified as having the highest two levels of significance. The management review committee also reviews draft root cause analysis reports, and any condition reports that could affect the license application. Whereas these functions were previously performed by BSC, the management review committee is now sponsored by OCRWM's deputy director and includes senior DOE, BSC, and Sandia managers. DOE has also created written policies to clarify the roles, responsibilities, and expectations of the management review committee. 
The goal of these changes is to refocus management attention--with OCRWM's deputy director serving as a champion for the corrective action program--and ensure that problems are resolved in a timely and efficient manner. DOE has addressed, to varying degrees, three other management challenges identified in our March 2006 report: (1) restoring confidence in USGS's scientific documents; (2) fixing problems with a design and engineering process known as requirements management; and (3) managing a changing and complex program, particularly given the high turnover in key management positions. Specifically: USGS e-mail issue. DOE has taken three actions to address concerns about the reliability of USGS's scientific work after a series of e-mails implied that some USGS employees had falsified scientific and quality assurance documents and disdained DOE's quality assurance processes. Specifically, DOE (1) evaluated USGS's scientific work; (2) directed Sandia to independently develop a new water infiltration model to compare with USGS's model and reconstruct USGS's technical documents; and (3) completed a root cause analysis, including a physical review of more than 50,000 e-mails and keyword searches of nearly 1 million other e-mails sampled from more than 14 million e-mails. DOE's evaluation of USGS's scientific work concluded that there was no evidence that the USGS employees falsified or modified information. DOE's root cause analysis team concluded that there was no apparent widespread or pervasive pattern across OCRWM of a negative attitude toward quality assurance or willful noncompliance with quality assurance requirements. However, the analysis found that OCRWM's senior management had failed to hold USGS personnel accountable for the quality of the scientific work, fully implement quality assurance requirements, and effectively implement the corrective action program. These internal studies and reports and Sandia's independent development of a new water infiltration model are intended to restore public confidence in the water infiltration modeling work in the license application. Problems with design control and the requirements management process. DOE has revised its design control and requirements management processes to address the problems that our March 2006 report identified. In addition, to gauge the effectiveness of these changes, DOE conducted an internal study called a readiness review, in which it determined that the changes in the processes were sufficient and that BSC was prepared to resume design and engineering work. Subsequently, in January 2007, DOE's independent assessment of BSC and the requirements management process concluded that the processes and controls were adequate and provided general direction for the design control process. DOE has also contracted with Longenecker and Associates to review the project's engineering processes, with the final report due in the summer of 2007. Management turnover. DOE has worked to fill and retain personnel in key management positions that had been vacant for extended periods of time, most notably the director of quality assurance and the OCRWM project director. In addition, as part of an effort to change the organizational culture, OCRWM's director has created a team to evaluate how to improve succession planning and identify gaps in the skills or staffing levels in OCRWM. However, DOE continues to lose key project managers, most recently with the departure of OCRWM's deputy director. 
Furthermore, additional turnover is possible after the 2008 presidential election, when the incoming administration is likely to replace OCRWM's director. Historically, new directors have tended to have different management priorities and have implemented changes to the organizational structure and policies. To address this concern, OCRWM's director suggested legislatively changing the director position by making it a long-term appointment to reflect the long-term nature of the Yucca Mountain project. The OCRWM director's schedule for filing a repository license application with NRC by June 30, 2008, will require a concerted effort by project personnel. However, given the waste repository's history since its inception in 1983, including two prior failed efforts to file a license application, it is unclear whether DOE's license application will be of sufficient quality to enable NRC to conduct a timely review of the supporting models and data within the statutory time frames. DOE has taken several important actions to change the organizational culture of the Yucca Mountain project since the issuance of our March 2006 report. These actions appear to be invigorating the quality assurance program, for example, by focusing management attention on resolving problems and improving quality. However, for a variety of reasons, it has yet to be seen whether DOE's actions will prevent the kinds of problems our March 2006 report identified from recurring or other challenges from developing. First, some of DOE's efforts, such as its efforts to reduce staff turnover, are in preliminary or planning stages and have not been fully implemented. Therefore, their effectiveness cannot yet be determined. Second, improving the quality assurance program will also require changes in the organizational behaviors of OCRWM's staff and contractors. OCRWM's director told us that these types of cultural changes can be particularly difficult and take a long time to implement. Consequently, it may be years before OCRWM fully realizes the benefits of these efforts. Finally, as we have previously reported, DOE has a long history of quality assurance problems and has experienced repeated difficulties in resolving these problems. We provided DOE and NRC with a draft of this report for their review and comment. In their written responses, both DOE and NRC agreed with our report. (See apps. I and II.) In addition, both DOE and NRC provided comments to improve the draft report's technical accuracy, which we have incorporated as appropriate. To examine the development of DOE's license application schedule, we reviewed DOE documents related to the announcement and creation of the license application. We also reviewed the DOE management plan for creating the license application and other internal reports on the progress in drafting the application. We interviewed OCRWM's director and other OCRWM senior management officials in DOE headquarters and its Las Vegas project office about the process for creating the schedule, including consultations with stakeholders. In addition, we observed meetings between DOE and NRC, the Advisory Committee on Nuclear Waste and Materials, and the Nuclear Waste Technical Review Board that covered topics related to the license application schedule. These meetings were held in Rockville, Maryland; Las Vegas, Nevada; and Arlington, Virginia. 
To obtain NRC's assessment of DOE's readiness to file a high-quality license application, we obtained NRC documents--such as the status of key technical issues and briefing slides on NRC's technical exchanges with DOE. We also attended NRC staff briefings for the Commission's Advisory Committee on Nuclear Waste and Materials, including a briefing on NRC's prelicensing activities; reviewed meeting transcripts; and observed an NRC-DOE quarterly meeting and recorded NRC's comments. In addition, we interviewed NRC's project manager who is responsible for reviewing the postclosure portion of a license application, NRC's on-site representative at the Las Vegas office, and other NRC regional officials. Furthermore, we interviewed the director of EPA's Office of Air and Radiation Safety regarding the status of EPA's rulemaking to set radiation exposure standards for the public outside the Yucca Mountain site. To determine DOE's progress in implementing the recommendations and resolving the additional challenges identified in our March 2006 report, we reviewed prior GAO reports that assessed DOE's quality assurance process and relevant DOE corrective action reports, root cause analyses, and other internal reviews that analyzed DOE's efforts to improve its management tools and its corrective action program in general. We also reviewed related NRC documents, such as observation audit reports. We observed NRC and DOE management meetings and technical exchanges in Rockville, Maryland, and Las Vegas, Nevada, that covered related issues. We also interviewed OCRWM's director in DOE headquarters and senior managers at the Yucca Mountain project office in Las Vegas about their efforts to address our recommendations. Regarding the quality assurance challenges noted in our prior report, we reviewed a January 2007 GAO report discussing the USGS issue and reviewed DOE documents detailing its actions to restore confidence in the scientific documents. We reviewed internal DOE documents regarding requirements management and interviewed the program's chief engineer in charge of resolving this issue. Finally, regarding staff turnover in key management positions, we reviewed OCRWM's strategic objectives, business plan, and project documents and interviewed OCRWM's director and other senior project managers about their efforts to improve succession planning. As agreed with your office, unless you publicly announce the contents of this report, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Energy, the Chairman of the Nuclear Regulatory Commission, the director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Richard Cheston, Casey Brown, Omari Norman, Alison O'Neill, and Daniel Semick.
Nuclear power reactors generate highly radioactive waste. To permanently store this waste, the Department of Energy (DOE) has been working to submit a license application to the Nuclear Regulatory Commission (NRC) for a nuclear waste repository at Yucca Mountain, about 100 miles from Las Vegas, Nevada. Although the project has been beset with delays, in part because of persistent problems with its quality assurance program, DOE stated in July 2006 that it will file a license application with NRC by June 30, 2008. NRC states that a high-quality application needs to be complete, technically adequate, transparent by clearly justifying underlying assumptions, and traceable back to original source materials. GAO examined (1) DOE's development of its schedule for submitting a license application and the stakeholders with whom it consulted, (2) NRC's assessment of DOE's readiness to submit a high-quality application, and (3) DOE's progress in addressing quality assurance recommendations and challenges identified in GAO's March 2006 report. GAO reviewed DOE's management plan for creating the license application, reviewed correspondence and attended prelicensing meetings between DOE and NRC, and interviewed DOE managers and NRC on-site representatives for the Yucca Mountain project. In commenting on a draft of the report, both DOE and NRC agreed with the report. The director of DOE's Office of Civilian Radioactive Waste Management set the June 30, 2008, date for filing the license application with NRC in consultation with the DOE and contractor managers for the Yucca Mountain project. DOE officials told us that external stakeholders were not consulted because there was neither a legal requirement nor a compelling management reason to do so. According to the director, the June 2008 schedule is achievable because DOE has already completed a large amount of work, including the completion of a draft license application in 2005 that DOE decided not to submit to NRC. NRC officials believe it is likely that DOE will submit a license application by June 30, 2008, but until NRC receives the application, officials will not speculate about whether it will be high quality. NRC has not seen a draft of the license application, and NRC's long-standing practice is to maintain an objective and neutral position toward a future application until it is filed. To help ensure that DOE understands its expectations, NRC has, among other things, held periodic prelicensing management and technical meetings with DOE. DOE has made progress in resolving the quality assurance recommendations and challenges identified in GAO's March 2006 report. For example, DOE has replaced the one-page summary of performance indicators, which GAO had determined was ineffective, with more frequent and rigorous project management meetings. DOE has addressed the management challenges GAO identified to varying degrees. For example, regarding management continuity, DOE has worked to fill and retain personnel in key management positions, such as the director of quality assurance. However, for various reasons--including the long history of recurring problems and likely project leadership changes in January 2009 when the current administration leaves office--it is unclear whether DOE's actions will prevent these problems from recurring.
To date, the Congress has designated 24 national heritage areas, primarily in the eastern half of the country (see fig. 1). Generally, national heritage areas focus on local efforts to preserve and interpret the role that certain sites, events, and resources have played in local history and their significance in the broader national context. For example, the Rivers of Steel Heritage Area commemorates the contribution of southwestern Pennsylvania to the development of the nation's steel industry by providing visitors with interpretive tours of historic sites and other activities. Heritage areas share many similarities--such as recreational resources and historic sites--with national parks and other park system units but lack the stature and national significance to qualify as such units. The process of becoming a national heritage area usually begins when local residents, businesses, and governments ask the Park Service, within the Department of the Interior, or the Congress for help in preserving their local heritage and resources. In response, although the Park Service has no program governing these activities, the agency provides technical assistance, such as conducting or reviewing studies to determine an area's eligibility for heritage area status. The Congress then may designate the site as a national heritage area and set up a management entity for it. This entity could be a state or local governmental agency, an independent federal commission, or a private nonprofit corporation. Usually within 3 years of designation, the area is required to develop a management plan, which is to detail, among other things, the area's goals and its plans for achieving those goals. The Park Service then reviews these plans, which must be approved by the Secretary of the Interior. After the Congress designates a heritage area, the Park Service enters into a cooperative agreement with the area's management entity to assist the local community in organizing and planning the area. Each area can receive funding through the Park Service's budget--generally limited to not more than $1 million a year for 10 or 15 years. The agency allocates the funds to the area through the cooperative agreement. No systematic process is in place to identify qualified candidate sites and designate them as national heritage areas. In this regard, the Park Service conducts studies--or reviews studies prepared by local communities--to evaluate the qualifications of sites proposed for national heritage designation. On the basis of these studies, the agency advises the Congress as to whether a particular location warrants designation. The agency usually provides its advice to the Congress by testifying in hearings on bills to authorize a particular heritage area. The Park Service's studies of prospective sites' suitability help the agency ensure that the basic components necessary to a successful heritage area--such as natural and cultural resources and community support--are either already in place or are planned. Park Service data show that the agency conducted or reviewed some type of study addressing the qualifications of all 24 heritage areas. However, in some cases, these studies were limited in scope so that questions concerning the merits of the location persisted after the studies were completed. As a result, the Congress designated 10 of the 24 areas with only a limited evaluation of their suitability as heritage areas. 
Of these 10 areas, the Park Service opposed or suggested that the Congress defer action on 6, primarily because of continuing questions about, among other issues, whether the areas had adequately identified goals or management entities or demonstrated community support. Furthermore, of the 14 areas that were designated after a full evaluation, the Congress designated 8 consistent with the Park Service's recommendations, 5 without the agency's advice, and 1 after the agency had recommended that action be deferred. In addition, the criteria the Park Service uses to evaluate the suitability of prospective heritage areas are not specific and, in using them, the agency has determined that a large portion of the sites studied qualify as heritage areas. According to the Heritage Area national coordinator, before the early 1990s, the Park Service used an ad hoc approach to determining sites' eligibility as heritage areas, with little in the way of objective criteria as a guide. Since then, however, the Park Service developed general guidelines to use in evaluating and advising the Congress on the suitability of sites as heritage areas. Based on these guidelines, in 1999, the agency developed a more formal approach to evaluating sites. This approach consisted of four actions that the agency believed were critical before a site could be designated as well as 10 criteria to be considered when conducting studies to assess an area's suitability. The four critical steps include the following: complete a suitability/feasibility study; involve the public in the suitability/feasibility study; demonstrate widespread public support for the proposed designation; and demonstrate commitment to the proposal from governments, industry, and private, nonprofit organizations. A suitability/feasibility study should examine a proposed area using the following criteria: The area has natural, historic, or cultural resources that represent distinctive aspects of American heritage worthy of recognition, conservation, interpretation, and continuing use, and are best managed through partnerships among public and private entities, and by combining diverse and sometimes noncontiguous resources and active communities; The area's traditions, customs, beliefs, and folk life are a valuable part of the national story; The area provides outstanding opportunities to conserve natural, cultural, historic, and/or scenic features; The area provides outstanding recreational and educational opportunities; Resources that are important to the identified themes of the area retain a degree of integrity capable of supporting interpretation; Residents, businesses, nonprofit organizations, and governments within the area that are involved in the planning have developed a conceptual financial plan that outlines the roles for all participants, including the federal government, and have demonstrated support for designation of the area; The proposed management entity and units of government supporting the designation are willing to commit to working in partnership to develop the area; The proposal is consistent with continued economic activity in the area; A conceptual boundary map is supported by the public; and The management entity proposed to plan and implement the project is described. These criteria are broad and subject to multiple interpretations, as noted by an official in the agency's Midwest region charged with applying these criteria to prospective areas. 
Similarly, officials in the agency's Northeast region believe that the criteria were developed to be inclusive and are inadequate for screening purposes. The national coordinator believes, however, that the criteria are valuable but that the regions need additional guidance to apply them more consistently. The Park Service has developed draft guidance for applying these criteria but has no plans to issue it as final guidance. Rather, the agency is incorporating this guidance into a legislative proposal for a formal heritage area program. According to the national coordinator, some regions have used this guidance despite its draft status, but it has not been widely adopted or used to date. The Park Service's application of these broad criteria has identified a large number of potential heritage areas. Since 1989, the Park Service has determined that many of the candidate sites it has evaluated would qualify as national heritage areas. According to data from 22 of the 24 heritage areas, about half of their total funding of $310 million in fiscal years 1997 through 2002 came from the federal government and the other half from state and local governments and private sources. Table 1 shows the areas' funding sources from fiscal years 1997 through 2002. As figure 2 shows, the federal government's total funding to these heritage areas increased from about $14 million in fiscal year 1997 to about $28 million in fiscal year 2002, peaking at over $34 million in fiscal year 2000. The Congress sets the overall level of funding for heritage areas, determining which areas will receive funding and specifying the amounts provided. Newly designated heritage areas usually receive limited federal funds while they develop their management plans and then receive increasing financial support through Park Service appropriations after their plans are established. The first heritage areas received pass-through grants from the Park Service and funding through the agency's Statutory and Contractual Aid appropriations. However, in 1998, the Congress began appropriating funds to support heritage areas through the Heritage Partnership Program. In addition, the Congress has placed in each area's designating legislation certain conditions on the receipt of federal funds. While the legislation designating the earliest heritage areas resulted in different funding structures, generally those created since 1996 have been authorized funding of up to $10 million over 15 years, not to exceed $1 million in any single year. In conjunction with this limit, the designating legislation attempts to identify a specific date when heritage areas no longer receive federal financial or technical assistance. Although heritage areas are ultimately expected to become self-sufficient without federal support, to date the sunset provisions have not limited federal funding. Since the first national heritage area was designated in 1984, five have reached the sunset date specified in their designating legislation. However, in each case, the sunset date was extended and the heritage area continued to receive funding from the Congress. Finally, the areas' designating legislation typically requires the heritage areas to match the amount of federal funds they receive with a specified percentage of funds from nonfederal sources. Twenty-two of the 24 heritage areas are required to match the federal funds they receive. 
Of these 22 areas, 21 have a 50-percent match requirement--they must show that at least 50 percent of the funding for their projects has come from nonfederal sources--and one has a 25-percent match requirement. In the absence of a formal program, the Park Service oversees heritage areas' activities by monitoring the implementation of the terms set forth in the cooperative agreements. According to Park Service headquarters officials, the agency's cooperative agreements with heritage areas allow the agency to effectively oversee their activities and hold them accountable. These officials maintain that they can withhold funds from heritage areas--and have, in some circumstances, done so--if the areas are not carrying out the requirements of the cooperative agreements. However, regional managers have differing views on their authority for withholding funds from areas and the conditions under which they should do so. Although the Park Service has oversight opportunities through the cooperative agreements, it has not taken advantage of these opportunities to improve oversight and ensure these areas' accountability. In this regard, the agency generally oversees heritage areas' funding through routine monitoring and oversight activities, and focuses specific attention on the areas' activities only when problems or potential concerns arise. However, the Park Service regions that manage the cooperative agreements with the heritage areas do not always review the areas' annual financial audit reports, although it is ultimately the federal agency responsible for heritage area projects that are financed with federal funds. For example, managers in two Park Service regions told us that they regularly review heritage areas' annual audit reports, but a manager in another region said that he does not. As a result, the agency cannot determine the total amount of federal funds provided or their use. According to these managers, the inconsistencies among regions in reviewing areas' financial reports primarily result from a lack of clear guidance and the collateral nature of the Park Service regions' heritage area activities--they receive no funding for oversight, and their oversight efforts divert them from other mission-critical activities. Furthermore, the Park Service has not yet developed clearly defined, consistent, and systematic standards and processes for regional staff to use in reviewing the adequacy of areas' management plans, although these reviews are one of the Park Service's primary heritage area responsibilities. Heritage areas' management plans are blueprints that discuss how the heritage area will be managed and operated and what goals it expects to achieve, among other issues. The Secretary of the Interior must approve the plans after Park Service review. According to the national coordinator, heritage area managers in the agency's Northeast region have developed a checklist of what they consider to be the necessary elements of a management plan to assist reviewers in evaluating the plans. While this checklist has not been officially adopted, managers in the Northeast and other regions consult it in reviewing plans, according to the national coordinator. Heritage area managers in the Park Service regions use different criteria for reviewing these plans, however. 
For example, managers in the regions told us that, to judge the adequacy of the plans, one region uses the specific requirements in the areas' designating legislation, another uses the designating legislation in conjunction with the Park Service's general designation criteria, and a third adapts the process used for reviewing national park management plans. While these approaches may guide the regions in determining the content of the plans, they provide little guidance in judging the adequacy of the plans for ensuring successful heritage areas. Finally, the Park Service has not yet developed results-oriented performance goals and measures--consistent with the requirements of the Government Performance and Results Act--that would help to ensure the efficiency and effectiveness of its heritage area activities. The act requires agencies to, among other actions, set strategic and annual goals and measure their performance against these goals. Effectively measuring performance requires developing measures that demonstrate results, which, in turn, requires data. According to the national coordinator, the principal obstacles to measuring performance are the difficulty of identifying meaningful indicators of success and the lack of funding to collect the needed data. With regard to indicators, the national coordinator told us that the agency has tried to establish meaningful and measurable goals both for its own activities and for the heritage areas. The agency has identified a series of "output" measures of accomplishment, such as numbers of heritage area visitors, formal and informal partners, educational programs managed, and grants awarded. However, the national coordinator acknowledged that these measures are insufficient, and the agency continues to seek alternative measures that would be more meaningful and useful. In the meantime, without clearly defined performance measures for its activities, the agency will continue to be unable to effectively gauge what it is accomplishing and whether its resources are being employed efficiently and cost-effectively. The Park Service also has not required heritage areas to adopt a results-oriented management approach--linked to the goals set out in their management plans--which would enable both the areas and the agency to determine what is being accomplished with the funds that have been provided. In this regard, the heritage areas have not yet developed an effective, outcome-oriented method for measuring their own performance and are therefore unable to determine what benefits the heritage area--and, through it, the federal funds--have provided to the local community. For example, for many heritage areas, increasing tourism is a goal, but while they may be able to measure an increase in tourism, they cannot demonstrate whether this increase is directly associated with the efforts of the heritage area. To address these issues, the Alliance of National Heritage Areas is currently working with Michigan State University to develop a way to measure various impacts associated with a national heritage area. These impacts include, among others, the effects on tourism and local economies through jobs created and increases in tax revenues. According to Park Service officials, the agency has not taken actions to improve oversight because, without a formal program, it does not have the direction or funding it needs to effectively administer its national heritage area activities. 
National heritage areas do not appear to have affected private property rights, although private property rights advocates have raised a number of concerns about the potential effects of heritage areas on property owners' rights and land use. These advocates are concerned that heritage areas may be allowed to acquire nonfederal lands or otherwise impose federal controls on them. However, the designating legislation and the management plans of some areas explicitly place limits on the areas' ability to affect private property rights and use. In this regard, eight areas' designating legislation stated that the federal government cannot impose zoning or land use controls on the heritage areas. Moreover, in some cases, the legislation included explicit assurances that the areas would not affect the rights of private property owners. For example, the legislation creating 13 of the 24 heritage areas stated that the area's managing entity cannot interfere with any person's rights with respect to private property or have authority over local zoning ordinances or land use planning. While management entities of heritage areas are allowed to receive or purchase real property from a willing seller, under their designating legislation, most areas are prohibited from using appropriated funds for this purpose. In addition, the designating legislation for five heritage areas requires them to convey the property to an appropriate public or private land managing agency. As a further protection of property rights, the management plans of some heritage areas deny the managing entity authority to influence zoning or land use. For example, at least six management plans state that the managing entities have no authority over local zoning laws or land use regulations. However, most of the management plans state that local governments' participation will be crucial to the success of the heritage area and encourage local governments to implement land use policies that are consistent with the plan. Some plans offer to aid local government planning activities through information sharing or technical or financial assistance to achieve their cooperation. Property rights advocates are concerned that such provisions give heritage areas an opportunity to indirectly influence zoning and land use planning, which could restrict owners' use of their property. Some of the management plans state the need to develop strong partnerships with private landowners or recommend that management entities enter into cooperative agreements with landowners for any actions that include private property. Despite concerns about private property rights, officials at the 24 heritage areas, Park Service headquarters and regional staff working with these areas, and representatives of six national property rights groups that we contacted were unable to provide us with a single example of a heritage area directly affecting--positively or negatively--private property values or use. National heritage areas have become an established part of the nation's efforts to preserve its history and culture in local areas. The growing interest in establishing additional areas will put increasing pressure on the Park Service's resources, especially since the agency receives limited funding for the technical and administrative assistance it provides to these areas. Under these circumstances, it is important to ensure that only those sites that are most qualified are designated as heritage areas. 
However, no systematic process for designating these areas exists, and the Park Service does not have well-defined criteria for assessing sites' qualifications or effective oversight of the areas' use of federal funds and adherence to their management plans. As a result, the Congress and the public cannot be assured that future sites will have the resources and local support needed to be viable or that federal funds supporting them will be well spent. Given the Park Service's resource constraints, it is important to ensure that the agency carries out its heritage area responsibilities as efficiently and effectively as possible. Park Service officials pointed to the absence of a formal program as a significant obstacle to effective management of the agency's heritage area efforts and oversight of the areas' activities. In this regard, without a program, the agency has not developed consistent standards and processes for reviewing areas' management plans, the areas' blueprints for becoming viable and self-sustaining. It also has not required regional heritage area managers to regularly and consistently review the areas' annual financial audit reports to ensure that the Park Service--the agency with lead responsibility for these areas--has complete information on their use of funds from all federal agencies as a basis for holding them accountable. Finally, the Park Service has not defined results-oriented performance goals and measures--both for its own heritage area efforts and those of the individual areas. As a result, it is constrained in its ability to determine both the agency's and areas' accomplishments, whether the agency's resources are being employed efficiently and effectively, and whether federal funds could be better utilized to accomplish its goals. In the absence of congressional action to establish a formal heritage area program within the National Park Service or to otherwise provide direction and funding for the agency's heritage area activities, we recommend that the Secretary of the Interior direct the Park Service to take actions within its existing authority to improve the effectiveness of its heritage area activities and increase areas' accountability. These actions should include developing well-defined, consistent standards and processes for regional staff to use in reviewing and approving heritage areas' management plans; requiring regional heritage area managers to regularly and consistently review heritage areas' annual financial audit reports to ensure that the agency has a full accounting of their use of funds from all federal sources; and developing results-oriented performance goals and measures for the agency's heritage area activities and requiring, in the cooperative agreements, that heritage areas adopt such a results-oriented management approach as well. Thank you, Mr. Chairman and Members of the Committee. This concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. For more information on this testimony, please contact Barry T. Hill at (202) 512-3841. Individuals making key contributions to this testimony included Elizabeth Curda, Preston S. Heard, Vincent P. Price, and Barbara Timmerman. 
To examine the establishment, funding, and oversight of national heritage areas and their potential effect on private property rights, we (1) evaluated the process for identifying and designating national heritage areas, (2) determined the amount of federal funding provided to support these areas, (3) evaluated the process for overseeing and holding national heritage areas accountable for their use of federal funds, and (4) determined the extent to which, if at all, these areas have affected private property rights. To address the first issue, we discussed the process for identifying and designating heritage areas with the Park Service's Heritage Area national coordinator and obtained information on how the 24 existing heritage areas were evaluated and designated. To determine the amount of federal funding provided to support these areas, we discussed funding issues and the availability of funding data with the national coordinator, the Park Service's Comptroller, and officials from the agency's Northeast, Midwest, Southeast, and Intermountain Regional Offices. We also obtained funding information from 22 of the 24 heritage areas for fiscal years 1997 through 2002, and discussed this information with the executive directors and staff of each area. As of mid-March 2004, two heritage areas had not provided us with funding data. To verify the accuracy of the data we obtained from these sources, we compared the data provided to us with data included in the heritage areas' annual audit and other reports that we obtained from the individual areas and the Park Service regions. We also discussed these data with the executive directors and other officials of the individual heritage areas and regional office officials. To evaluate the processes for holding national heritage areas accountable for their use of federal funds, we discussed these processes with the national coordinator and regional officials, and obtained information and documents supporting their statements. To determine the extent to which, if at all, private property rights have been affected by these areas, we discussed this issue with the national coordinator, regional officials, the Executive Director of the Alliance of National Heritage Areas--an organization that coordinates and supports heritage areas' efforts and is their collective interface with the Park Service--the executive directors of the 23 heritage areas that were established at the time of our work, and representatives of several private property rights advocacy groups and individuals, including the American Land Rights Association, the American Policy Center, the Center for Private Conservation, the Heritage Foundation, the National Wilderness Institute, and the Private Property Foundation of America. In each of these discussions, we asked the individuals if they were aware of any cases in which a heritage area had positively or negatively affected an individual's property rights or restricted its use. None of these individuals were able to provide such an example. In addition, we visited the Augusta Canal, Ohio and Erie Canal, Rivers of Steel, Shenandoah Valley Battlefields, South Carolina, Southwestern Pennsylvania (Path of Progress), Tennessee Civil War, and Wheeling National Heritage Areas to discuss these issues in person with the areas' officials and staff, and to view the areas' features and accomplishments first hand. We conducted our work between May 2003 and March 2004 in accordance with generally accepted government auditing standards. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Congress has established, or "designated," 24 national heritage areas to recognize the value of their local traditions, history, and resources to the nation's heritage. These areas, including public and private lands, receive funds and assistance through cooperative agreements with the National Park Service, which has no formal program for them. They also receive funds from other agencies and nonfederal sources, and are managed by local entities. Growing interest in new areas has raised concerns about rising federal costs and the risk of limits on private land use. GAO was asked to review the (1) process for designating heritage areas, (2) amount of federal funding to these areas, (3) process for overseeing areas' activities and use of federal funds, and (4) effects, if any, they have on private property rights. No systematic process currently exists for identifying qualified sites and designating them as national heritage areas. While the Congress generally has designated heritage areas with the Park Service's advice, it designated 10 of the 24 areas without a thorough agency review; in 6 of these 10 cases, the agency recommended deferring action. Even when the agency fully studied sites, it found few that were unsuitable. The agency's criteria are very general. For example, one criterion states that a proposed area should reflect "traditions, customs, beliefs, and folk life that are a valuable part of the national story." These criteria are open to interpretation and, using them, the agency has eliminated few sites as prospective heritage areas. According to data from 22 of the 24 heritage areas, in fiscal years 1997 through 2002, the areas received about $310 million in total funding. Of this total, about $154 million came from state and local governments and private sources, and another $156 million came from the federal government. Over $50 million of the federal total was dedicated heritage area funding provided through the Park Service, with another $44 million coming from other Park Service programs and about $61 million from 11 other federal sources. Generally, each area's designating legislation imposes matching requirements and sunset provisions to limit the federal funds. However, since 1984, five areas that reached their sunset dates had their funding extended. The Park Service oversees heritage areas' activities by monitoring their implementation of the terms set forth in the cooperative agreements. These terms, however, do not include several key management controls. That is, the agency has not (1) always reviewed areas' financial audit reports, (2) developed consistent standards for reviewing areas' management plans, and (3) developed results-oriented goals and measures for the agency's heritage area activities, or required the areas to adopt a similar approach. Park Service officials said that the agency has not taken these actions because, without a program, it lacks adequate direction and funding. Heritage areas do not appear to have affected property owners' rights. In fact, the designating legislation of 13 areas and the management plans of at least 6 provide assurances that such rights will be protected. However, property rights advocates fear the effects of provisions in some management plans. These provisions encourage local governments to implement land use policies that are consistent with the heritage areas' plans, which may allow the heritage areas to indirectly influence zoning and land use planning in ways that could restrict owners' use of their property. 
Nevertheless, heritage area officials, Park Service headquarters and regional staff, and representatives of national property rights groups that we contacted were unable to provide us with any examples of a heritage area directly affecting--positively or negatively--private property values or use.
Biomonitoring--one technique for assessing people's exposure to chemicals--involves measuring the concentration of chemicals or their by-products in human specimens, such as blood or urine. While biomonitoring has been used to monitor chemical exposures for decades, advances in analytic methods have more recently allowed scientists to measure more chemicals, in smaller concentrations, using smaller samples of blood or urine. As a result, biomonitoring has become more widely used for a variety of applications, including public health research and measuring the impact of certain environmental regulations, such as the decline in blood lead levels that followed the reduction of lead in gasoline. CDC conducts the most comprehensive biomonitoring program in the country under its National Biomonitoring Program and published the first, second, third, and fourth National Report on Human Exposure to Environmental Chemicals--in 2001, 2003, 2005, and 2009, respectively--which reported the concentrations of certain chemicals or their by-products in the blood or urine of a representative sample of the U.S. population. For each of these reports, the CDC has increased the number of chemicals studied--from 27 in the first report, to 116 in the second, to 148 in the third, and to 212 in the fourth. Each report is cumulative (containing all the results from previous reports). These reports provide the most comprehensive assessment to date of the exposure of the U.S. population to chemicals in our environment, including such chemicals as acrylamide, arsenic, BPA, triclosan, and perchlorate. These reports have provided a window into the U.S. population's exposure to chemicals, and the CDC continues to develop new methods for collecting data on additional chemical exposures with each report. For decades, government regulators have used risk assessment to understand the health implications of commercial chemicals. Researchers use this process to estimate how much harm, if any, can be expected from exposure to a given contaminant or mixture of contaminants and to help regulators determine whether the risk is significant enough to require banning or regulating the chemical or other corrective action. Biomonitoring research is difficult to integrate into this risk assessment process, since estimates of human exposure to chemicals have historically been based on the concentration of these chemicals in environmental media and on information about how people are exposed. Biomonitoring data, however, provide a measure of internal dose that is the result of exposure to all environmental media and depend on how the human body processes and excretes the chemical. EPA has made limited use of biomonitoring data in its assessments of risks posed by chemicals. As we previously reported, one major reason for the agency's limited use of such data is that, to date, there are no biomonitoring data for most commercial chemicals. The most comprehensive biomonitoring effort providing data relevant to the entire U.S. population includes only 212 chemicals, whereas EPA is currently focusing its chemical assessment and management efforts on the more than 6,000 chemicals that companies produce in quantities of more than 25,000 pounds per year at one site. Current biomonitoring efforts also provide little information on children. Large-scale biomonitoring studies generally omit children because it is difficult to collect biomonitoring data from them. 
For example, some parents are concerned about the invasiveness of taking blood samples from their children, and certain other fluids, such as umbilical cord blood or breast milk, are available only in small quantities and only at certain times. Thus, when samples are available from children, they may not be large enough to analyze. A second reason we reported for the agency's limited use of biomonitoring data is that EPA often lacks the additional information needed to make biomonitoring studies useful in its risk assessment process. In this regard, biomonitoring provides information only on the level of a chemical in a person's body but not the health impact. The detectable presence of a chemical in a person's blood or urine does not necessarily mean that the chemical causes harm. While exposure to larger amounts of a chemical may cause an adverse health impact, a smaller amount may be of no health consequence. In addition, biomonitoring data alone do not indicate the source, route, or timing of the exposure, making it difficult to identify the appropriate risk management strategies. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand whether the levels measured in people pose a health concern, but EPA's ability to require chemical companies to develop such data is limited. As a result, EPA has made few changes to its chemical risk assessments or safeguards in response to the recent proliferation of biomonitoring data. For most chemicals, EPA would need additional data on the following to incorporate biomonitoring into risk assessment: health effects; the sources, routes, and timing of exposure; and the fate of a chemical in the human body. However, as we have discussed in prior reports, EPA will face difficulty in using its authorities under TSCA to require chemical companies to develop health and safety information on the chemicals. In January 2009, we added transforming EPA's process for assessing and controlling toxic chemicals to our list of high-risk areas warranting attention by Congress and the executive branch. Subsequently, the EPA Administrator set forth goals for updated legislation that would give EPA the mechanisms and authorities to promptly assess and regulate chemicals. EPA has used some biomonitoring data in chemical risk assessment and management, but only when additional studies have provided insight on the health implications of the biomonitoring data. For example, EPA was able to use biomonitoring data on methylmercury--a neurotoxin that accumulates in fish--because studies have drawn a link between the level of this toxin in human blood and adverse neurological effects in children. EPA also used both biomonitoring and traditional risk assessment information to take action on certain perfluorinated chemicals. These chemicals are used in the manufacture of consumer and industrial products, including nonstick cookware coatings; waterproof clothing; and oil-, stain-, and grease-resistant surface treatments. EPA has several biomonitoring research projects under way, but the agency has no system in place to track progress or assess the resources needed specifically for biomonitoring research. For example, EPA awarded grants that are intended to advance the knowledge of children's exposure to pesticides through the use of biomonitoring and of the potential adverse effects of these exposures. 
The grants issued went to projects that, among other things, investigated the development of biomarkers less invasive than blood samples--such as analyses of saliva or hair samples--to measure early brain development. Furthermore, EPA has studied the presence of an herbicide in 135 homes with preschool-age children by analyzing soil, air, carpet, dust, food, and urine as well as samples taken from subjects' hands. The study shed important light on how best to collect urine samples that reflect the external dose of the herbicide and how to develop models that simulate how the body processes specific chemicals. Nonetheless, EPA does not separately track spending or staff time devoted to biomonitoring research. Instead, it places individual biomonitoring research projects within its larger Human Health Research Strategy. While this strategy includes some goals relevant to biomonitoring, EPA has not systematically identified and prioritized the data gaps that prevent it from using biomonitoring data. Nor has it systematically identified the resources needed to reach biomonitoring research goals or the chemicals that need the most additional biomonitoring-related research. Also, EPA has not coordinated its biomonitoring research with that of the many agencies and other groups involved in biomonitoring research, which could impair its ability to address the significant data gaps in this field of research. In addition to the CDC and EPA, several other federal agencies have been involved in biomonitoring research, including the U.S. Department of Health and Human Services' Agency for Toxic Substances and Disease Registry, entities within the U.S. Department of Health and Human Services' NIH, and the U.S. Department of Labor's Occupational Safety and Health Administration. Several states have also initiated biomonitoring programs to examine state and local health concerns, such as arsenic in local water supplies or populations with high fish consumption that may increase mercury exposure. Furthermore, some chemical companies have for decades monitored their workforce for chemical exposure, and chemical industry associations have funded biomonitoring research. Finally, some environmental organizations have conducted biomonitoring studies of small groups of adults and children, including one study on infants. As we previously reported, a national biomonitoring research plan could help better coordinate research and link data needs with collection efforts. EPA has suggested chemicals for future inclusion in the CDC's National Biomonitoring Program but has not gone any further toward formulating an overall strategy to address data gaps and ensure the progress of biomonitoring research. We have previously noted that to begin addressing the need for biomonitoring research, federal agencies will need to strategically coordinate their efforts and leverage their limited resources. Similarly, the National Academy of Sciences found that the lack of a coordinated research strategy allowed widespread exposures to go undetected, including exposure to flame retardants known as polybrominated diphenyl ethers--chemicals that may cause liver damage, among other things, according to some toxicological studies. The academy noted that a coordinated research strategy would require input from various agencies involved in biomonitoring and supporting disciplines. In addition to EPA, these agencies include the CDC, NIH, the Food and Drug Administration, and the U.S. Department of Agriculture. 
Such coordination could strengthen efforts to identify and possibly regulate the sources of the exposure detected by biomonitoring, since the most common sources--that is, food, environmental contamination, and consumer products--are under the jurisdiction of different agencies. We have recommended that EPA develop a comprehensive research strategy to improve its ability to use biomonitoring in its risk assessments. However, though EPA agreed with our recommendation, the agency still lacks such a comprehensive strategy to guide its own research efforts. In addition, we recommended that EPA establish an interagency task force that would coordinate federal biomonitoring research efforts across agencies and leverage available resources. If EPA determines that further authority is necessary, we stated that it should request that the Executive Office of the President establish an interagency task force to coordinate such efforts. Nonetheless, EPA has not established such an interagency task force to coordinate federal biomonitoring research, nor has it informed us that it has requested that the Executive Office of the President do so. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is generally untested and may be limited. Several provisions of TSCA are potentially relevant. For example, under section 4 of TSCA, EPA can require chemical companies to test chemicals for their effects on health or the environment. However, biomonitoring data indicate only the presence of a chemical in a person's body and not its impact on the person's health. EPA told us that biomonitoring data may demonstrate chemical characteristics that would be relevant to a chemical's effects on health or the environment and that the agency could theoretically require that biomonitoring be used as a methodology for developing such data. EPA's specific authority to obtain biomonitoring data in this way is untested, however, and EPA is only generally authorized to require the development of such data after meeting certain threshold risk requirements that are difficult, expensive, and time-consuming. EPA may also be able to indirectly require the development of biomonitoring data using the leverage it has under section 5(e) of TSCA, though it has not yet attempted to do so. Under certain circumstances, EPA can use this section to seek an injunction to limit or prohibit the manufacture of a chemical. As an alternative, EPA sometimes issues a consent order that subjects manufacture to certain conditions, including testing, which could include biomonitoring. While EPA may not be explicitly authorized to require the development of such test data under this section, chemical companies have an incentive to provide the requested test data to avoid a more sweeping ban on a chemical's manufacture. EPA has not indicated whether it will use section 5(e) consent orders to require companies to submit biomonitoring data. Other TSCA provisions allow EPA to collect existing information on chemicals that a company already has, knows about, or could reasonably ascertain. For example, section 8(e) requires chemical companies to report to EPA any information they have obtained that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. 
EPA asserts that biomonitoring data are reportable as demonstrating a substantial risk if the chemical in question is known to have serious toxic effects and the biomonitoring data indicate a level of exposure previously unknown to EPA. Industry has asked for more guidance on this point, but EPA has not yet revised its guidance. Confusion over the scope of EPA's authority to collect biomonitoring data under section 8(e) is highlighted by the history leading up to an EPA action against the chemical company E. I. du Pont de Nemours and Company (DuPont). Until 2000, DuPont used the chemical PFOA to make Teflon®. In 1981, DuPont took blood from several female workers and two of their babies. The levels of PFOA in the babies' blood showed that PFOA had crossed the placental barrier. DuPont also tested the blood of 12 community members, 11 of whom had elevated levels of PFOA in their blood. DuPont did not report either set of results to EPA. After EPA received the results from a third party, DuPont argued that the information was not reportable under TSCA because the mere presence of PFOA in blood did not itself support the conclusion that exposure to PFOA posed any health risks. EPA subsequently filed two actions against DuPont for violating section 8(e) of TSCA by failing to report the biomonitoring data, among other claims. DuPont settled the claims but did not admit that it should have reported the data. However, based on the data it had received, EPA conducted a subsequent risk assessment, which contributed to a finding that PFOA was "likely to be carcinogenic to humans." In turn, this finding contributed to an agreement by DuPont and others to phase out the use of PFOA by 2015. However, EPA's authority to obtain biomonitoring data under section 8(e) of TSCA remains untested in court. Given the uncertainties regarding TSCA authorities, we have recommended that EPA determine the extent of its legal authority to require companies to develop and submit biomonitoring data under TSCA. We also recommended that EPA request additional authority from Congress if it determines that such authority is necessary. If EPA determines that no further authority is necessary, we recommended that it develop formal written policies explaining the circumstances under which companies are required to submit biomonitoring data. However, EPA has not yet attempted a comprehensive review of its authority to require companies to develop and submit biomonitoring data. The agency did not disagree with our recommendation, but commented that a case-by-case explanation of its authority might be more useful than a global assessment. However, we continue to believe that an analysis of EPA's legal authority to obtain biomonitoring data is critical. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of this Subcommittee may have. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include David Bennett, Antoinette Capaccio, Ed Kratzer, and Ben Shouse. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Biomonitoring, which measures chemicals in people's tissues or body fluids, has shown that the U.S. population is widely exposed to chemicals used in everyday products. Some of these have the potential to cause cancer or birth defects. Moreover, children may be more vulnerable to harm from these chemicals than adults. The Environmental Protection Agency (EPA) is authorized under the Toxic Substances Control Act (TSCA) to control chemicals that pose unreasonable health risks. One crucial tool in this process is chemical risk assessment, which involves determining the extent to which populations will be exposed to a chemical and assessing how this exposure affects human health. This testimony, based on GAO's prior work, reviews the (1) extent to which EPA incorporates information from biomonitoring studies into its assessments of chemicals, (2) steps that EPA has taken to improve the usefulness of biomonitoring data, and (3) extent to which EPA has the authority under TSCA to require chemical companies to develop and submit biomonitoring data to EPA. EPA has made limited use of biomonitoring data in its assessments of risks posed by commercial chemicals. One reason is that biomonitoring data relevant to the entire U.S. population exist for only 212 chemicals. In addition, biomonitoring data alone indicate only that a person was somehow exposed to a chemical, not the source of the exposure or its effect on the person's health. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand if the levels measured in people pose a health concern, but EPA's authority to require chemical companies to develop such data is limited. However, in September 2009, the EPA Administrator set forth goals for updated legislation to give EPA additional authorities to obtain data on chemicals. While EPA has initiated several research programs to make biomonitoring more useful to its risk assessment process, it has not developed a comprehensive strategy for this research that takes into account its own research efforts and those of the multiple federal agencies and other organizations involved in biomonitoring research. EPA does have several important biomonitoring research efforts, including research into the relationships between exposure to harmful chemicals, the resulting concentration of those chemicals in human tissue, and the corresponding health effects. However, without a plan to coordinate its research efforts, EPA has no means to track progress or assess the resources needed specifically for biomonitoring research. Furthermore, according to the National Academy of Sciences, the lack of a coordinated national research strategy has allowed widespread chemical exposures to go undetected, such as exposures to flame retardants. While EPA agreed with GAO's recommendation that EPA develop a comprehensive research strategy, the agency has not yet done so. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is untested and may be limited. The TSCA section that authorizes EPA to require companies to develop data focuses on health and environmental effects of chemicals. However, biomonitoring data indicate only the presence of a chemical in the body, not its impact on health. It may be easier for EPA to obtain biomonitoring data under other TSCA sections, which allow EPA to collect existing information on chemicals. 
For example, TSCA obligates chemical companies to report information that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. EPA asserts that biomonitoring data are reportable if a chemical is known to have serious toxic effects and the biomonitoring data indicate a level of exposure previously unknown to EPA. EPA took action against a chemical company under this authority in 2004. However, the action was settled without an admission of liability by the company, so EPA's authority to obtain biomonitoring data remains untested. GAO's 2009 report recommended that EPA clarify this authority, but it has not yet done so. The agency did not disagree, but commented that a case-by-case explanation of its authority might be more useful than a global assessment.
FAA faces significant demands that will challenge its ability to operate both in the current environment and in what it expects to encounter in the coming decade. With the industry still attempting to recover from the most tumultuous period in its history, FAA's funding is constrained by lowered Airports and Airways Trust Fund receipts and increased pressure on the contribution from the General Fund. To meet its current and future operational challenges, FAA is facing demands for greater efficiency and accountability. And it goes without saying that FAA must continue to meet demands for maintaining safety standards. Since 2001, the U.S. airline industry has confronted financial losses of previously unseen proportions. Between 2001 and 2003, the airline industry reported losses in excess of $20 billion. A number of factors--including the economic slowdown, a shift in business travel buying behavior, and the aftermath of the September 11, 2001 terrorist attacks--contributed to these losses by reducing passenger and cargo volumes and depressing fares. The industry has reported smaller losses since 2001, but still may not generate net profits for 2004. To improve their financial position, many airlines cut costs by various means, notably by reducing labor expenditures and by decreasing capacity through cutting flight frequencies, using smaller aircraft, or eliminating service to some communities. According to data from the Bureau of Transportation Statistics, large U.S. air carriers cut their operating expenses by $7.8 billion from 2000 through 2002. The drop in total large air carrier operating expenses stands in sharp contrast to increases in FAA's budget. (See Figure 1.) FAA's budget--which has increased from $9 billion in 1998 to $14 billion in 2004--will be under pressure for the foreseeable future. Over the past 10 years, FAA has received on average approximately 80 percent of its annual funding from the Airports and Airways Trust Fund (Trust Fund), which derives its receipts from taxes and fees levied on airlines and passengers. The downturn in passenger travel, accompanied by decreases in average yields, has resulted in lowered receipts into the Trust Fund. On average, domestic yields have fallen since 2000, and are at their lowest levels since 1987. As a result, the total amount of transportation taxes that were remitted to the Trust Fund declined by $2.0 billion (19.6 percent) between fiscal years 1999 and 2003 (in 2002 dollars). Contributions from the General Fund have averaged about 20 percent of FAA's budget since 1994, but total Federal spending is under increasing stress because of growing budget deficits. According to the March 2004 analysis from the Congressional Budget Office, the Federal deficit under the President's fiscal 2005 budget will be $358 billion. Clearly, a major challenge for FAA both now and into the future will be cost-cutting and cost control. Operating costs represent over half of FAA's budget. For 2005, the Administration has requested $7.8 billion for Operations. Because salaries and benefits make up 73 percent of that total, restraining the growth in operations spending will be extremely difficult, even with improvements in workforce productivity. Capital expenses (i.e., the Facilities and Equipment account) represent less than 20 percent of FAA's budget, but virtually none of the projects requested for funding for 2005 is expected to generate any savings in the Operations account. 
Funds for airports' capital development have more than doubled since 1998, rising from $1.6 billion (18.3 percent of the total) to a requested $3.5 billion (25.1 percent of the total) in 2005. Current funding levels are sufficient to cover much of the estimated cost of planned capital development. However, building new runways is not always a practicable way to increase capacity. FAA must decide how to increase capacity and service, as well as improve system efficiency and safety. FAA's ability to operate efficiently and effectively--particularly regarding its air traffic control modernization projects--has been hampered over time by inadequate management of information technology and financial management controls. FAA's ATC modernization projects have consistently experienced cost, schedule, and performance problems that we and others have attributed to systemic management issues. The effect has been extraordinary cost growth and a persistent failure to deploy systems. FAA initially estimated that its ATC modernization efforts could be completed over 10 years at a cost of $12 billion. Two decades and $35 billion later, FAA still has not completed key projects, and expects to need another $16 billion through 2007, for a total cost of $51 billion. GAO has kept major FAA modernization systems on the watch list of high-risk federal programs since 1995. We believe that, in the current budget environment, cost growth and schedule problems with ongoing modernization efforts can have serious negative consequences: postponed benefits, costly interim systems, other systems not being funded, or a reduction in the number of units purchased. FAA recognizes that future U.S. air transport activity will likely place significant demands on its ability to keep the system operating. FAA's most recent forecasts project significant increases in overall system activity by 2015. Along with increased movements of aircraft and passengers comes an increased workload for FAA, as well as demands for more efficient operations and/or an expansion of capacity. (See Table 1.) Evidence of FAA's inability to meet system capacity demands already exists from the experience at Chicago O'Hare earlier this year. To reduce flight delays, FAA asked American Airlines and United Airlines to reduce their peak scheduled operations by 7.5 percent by June 10. As Secretary Mineta has already recognized, unless system capacity expands, the nation will face "...more and more O'Hares as the economy continues to grow, and as new technology and competition bring even greater demand." It seems clear, however, that FAA's Operational Evolution Plan, a few additional runways, and updating more controller workstations with the Standard Terminal Automation Replacement System (STARS) are not the answer to the system's need for capacity. We cannot pave our way to the year 2025. Over the years, systemic management issues, including inadequate management controls and human capital issues, have contributed to the cost overruns, schedule delays, and performance shortfalls that FAA has consistently experienced in acquiring its major ATC modernization systems. Historically, some of the major factors impeding ATC acquisitions included an ineffective budget process and an inability to provide good cost and schedule estimates. A number of cultural problems, including widely diffused responsibility and accountability, inadequate coordination, and poor contract management/oversight, also slowed the progress of individual projects. 
Problems within FAA's acquisition and procurement processes included an inability to obligate and spend appropriated funds in a timely manner, a complicated procurement and acquisition cycle, failure to field systems in a timely fashion, and an inability to field current technology systems. FAA lacked a means to strategically analyze and control requirements, and good cost and schedule estimates were often not effectively developed and integrated into acquisition plans. To address many of these issues, Congress passed legislation in 1995 exempting FAA from many of the existing Federal personnel and procurement laws and regulations and directed the agency to develop and implement new acquisition and personnel systems. More recently, in 2000, the Congress and the administration together provided for a new oversight and management structure and a new air traffic organization to bring the benefits of performance management to ATC modernization. According to FAA, burdensome government-wide human capital rules impeded its ability to hire, train, and deploy personnel and thereby hampered its capacity to manage ATC modernization projects efficiently. In response to these concerns, Congress granted FAA broad exemptions from federal personnel laws and directed the agency to develop and implement a new personnel management system. Human capital reforms: Following the human capital exemptions granted by Congress in 1995, FAA initiated reforms in three primary areas: compensation and performance management, workforce management, and labor and employee relations. In the area of compensation and performance management, FAA introduced two initiatives--a new, more flexible pay system in which compensation levels are set within broad ranges, called pay bands, and a new performance management system intended to improve employees' performance through more frequent feedback with no summary rating. Both new systems required an exemption from laws governing federal civilian personnel management found in title 5 of the United States Code. In the area of workforce management, FAA implemented a number of initiatives in 1996 through the establishment of agency-wide flexibilities for hiring and training employees. In the area of labor and employee relations, FAA established partnership forums for union and nonunion employees and a new model work environment program. Other human capital initiatives have included restructuring FAA's organizational culture and implementing means to provide sustained leadership. Organizational culture: FAA issued an organizational culture framework in 1997 that attempted to address some of the vertical "stovepipes" that conflicted with the horizontal structure of ATC acquisition team operations. A key piece of this framework included the establishment of integrated product teams in an attempt to improve collaboration among technical experts and users. However, integrated teams have not worked as intended. For example, competing priorities between two key organizations that were part of the Wide Area Augmentation System's integrated team ultimately negated its effectiveness and undermined its ability to meet the agency's goals for the system. Sustained leadership: Until former Administrator Garvey completed her 5-year term in 2002, FAA had been hampered by a lack of sustained leadership. During the first 10 years of the ATC modernization effort, the agency had seven different Administrators and Acting Administrators, whose average tenure was less than 2 years. 
Such frequent turnover at the top contributed to an agency culture that focused on short-term initiatives, avoided accountability, and resisted fundamental improvements to the acquisition process. Nine years have passed since the agency received broad exemptions from laws governing federal civilian personnel management. While FAA has taken a number of steps since personnel reforms were implemented, it is not clear whether and to what extent these flexibilities have helped FAA to more effectively manage its workforce and achieve its mission. The agency did not initially define clear links between reform goals and program goals, making it difficult to fully assess the impacts of personnel reform. FAA has not yet fully implemented all of its human capital initiatives and continues to face a number of key challenges with regard to personnel issues. In our February 2003 report, we found that the agency had not fully incorporated elements that are important to effective human capital management into its overall reform effort, including data collection and analysis and the establishment of concrete performance goals and measures. Currently, the agency is still working to implement tools to keep accurate cost and workforce data. The new Air Traffic Organization has announced plans for establishing cost accounting and labor distribution systems, but they are not yet in place. More comprehensive cost accounting systems and improved labor distribution systems are necessary to maximize workforce productivity and to plan for anticipated controller retirements. More broadly, taking a more strategic approach to reform will allow the agency to better evaluate the effects of human capital initiatives, which it sees as essential to its ATC modernization effort. FAA established its current acquisition management system (AMS) in 1996 following acquisition reform. The agency has reported taking steps to oversee investment risk and to capture key information from the investment selection process in a management information system. It has also implemented guidance for validating costs, benefits, and risks. FAA has also taken steps to improve the management of its ATC modernization efforts. For example, it implemented an incremental, "build a little, test a little" approach that improved its management by providing for mid-course corrections and thus helping FAA to avoid costly late-stage changes. In the area of management controls, FAA has (1) developed a blueprint for modernization (systems architecture) to manage the development of ATC systems, (2) established processes for selecting and controlling information technology investments, (3) introduced an integrated framework for improving software and system acquisition processes, and (4) improved its cost-estimating and cost-accounting practices. Nonetheless, ATC modernization efforts continue to experience cost, schedule, and performance problems. FAA is not yet incorporating actual costs from related system development efforts in its processes for estimating the costs of new projects. Further, the agency has not yet fully implemented processes for evaluating projects after implementation in order to identify lessons learned and improve the investment management process. Reliable cost and schedule estimates are essential to addressing some of the ongoing problems with ATC acquisitions. In addition to controlling cost and schedule overruns, FAA needs to take concrete steps to identify and eliminate redundancies in the National Airspace System (NAS). 
FAA must review its long-term ATC modernization priorities to assess their relative importance and feasibility in light of current economic constraints, security requirements, and other issues. The ongoing challenges facing air traffic control modernization efforts led Congress and the administration to create a new oversight and management structure through the new Air Traffic Organization (ATO) in order to bring the benefits of performance management to ATC modernization. The ATO was created by an executive order in 2000 to operate the air traffic control system. In the same year, Congress enacted legislation establishing the Air Traffic Services Subcommittee, a five-member board to oversee the ATO, and a chief operating officer to manage the organization. The ATO was designed to bring a performance management approach to ATC modernization efforts. The Air Traffic Services Subcommittee has made some initial efforts with regard to the establishment of the ATO. It has taken steps to focus on the structure of the ATC system, including reviewing and approving performance metrics for the ATO, establishing a budget, and approving three large procurements that FAA initiated. However, progress in establishing the organization has been slow, given that FAA received the mandate to establish the ATO nearly four years ago. FAA encountered difficulties finding a qualified candidate to take the position of chief operating officer, and did not fill the vacancy until June 2003. The final executive positions for the organization, including the Vice Presidents of Safety and Communications, were filled just last month. Key tasks for the ATO will include organizational restructuring, implementing effective financial management and cost-accounting systems, evaluating day-to-day business practices, and fostering growth with efficiency. Rapidly changing technology, limited financial resources, and the critical importance of meeting client needs will present significant challenges as the ATO strives to evolve into a high-performing organization. To successfully meet the challenges of the 21st century, FAA must fundamentally transform its people, processes, technology, and environment to build a high-performing organization. Our work has shown that high-performing organizations have adopted management controls, processes, practices, and systems that are consistent with prevailing best practices and contribute to concrete organizational results. Specifically, the key characteristics and capabilities of high-performing organizations fall into four themes as follows: A clear, well-articulated, and compelling mission. High-performing organizations have a clear, well-articulated, and compelling mission, strategic goals to achieve it, and a performance management system that aligns with these goals to show employees how their performance can contribute to overall organizational results. FAA has taken its first steps toward creating a performance management system by aligning its goals and budgetary resources through its Flight Plan--a blueprint for action for fiscal years 2004 through 2008--and its fiscal year 2005 budget submission. In addition, the new ATO has published both its vision and mission statement. Our past work has found that FAA's ability to acquire new ATC modernization systems has been hampered by its organizational culture, including employee behaviors that did not reflect a strong commitment to mission focus. 
Given the central role that FAA's employees will play in achieving these performance goals and overall agency results, it is critical for them to both embrace and implement the agency's mission in the course of their daily work. In addition, our work has found that regularly communicating a clear and consistent message about the importance of fulfilling the organization's mission helps engage employees, clients, customers, partners, and other stakeholders in achieving higher performance. Strategic use of partnerships. Since the federal government is increasingly reliant on partners to achieve its outcomes, becoming a high-performing organization requires that federal agencies effectively manage relationships with other organizations outside of their direct control. FAA is currently working to forge strategic partnerships with its external customers in a number of ways. For example, the agency recently announced a program to create "express lanes in the sky" to reduce air traffic delays this spring and summer and is in the early stages of working with selected federal partners to develop a long-term plan for the national aerospace system (2025) and to leverage federal research funds to conduct mutually beneficial research. In addition, FAA has ongoing partnerships with the aviation community to assess and address flight safety issues (e.g., development of technology to prevent fuel tank explosions and to reduce the potential for aircraft wiring problems through development of a "smart circuit breaker"). However, our past work has shown that forging strategic partnerships with organizations outside of FAA can be difficult and time-consuming. For example, FAA's efforts to establish voluntary data sharing agreements with airlines--the Flight Operational Quality Assurance Program (FOQA)--spanned more than a decade, due in part to tremendous resistance from aviation community stakeholders who formed a rare alliance to oppose several of FAA's proposals. In addition, when attempting to increase airport capacity (e.g., new runways), FAA and airport operators have frequently faced opposition from the residents of surrounding communities and environmental groups. Residents are often concerned about the potential for increases in airport noise, air pollutant emissions, and traffic congestion. Focus on needs of clients and customers. Serving the needs of clients and customers involves identifying their needs, striving to meet them, measuring performance, and publicly reporting on progress to help assure appropriate transparency and accountability. To better serve the needs of its clients and customers, FAA published Flight Plan, which provides a vehicle for identifying needs, measuring performance, and publicly reporting progress. Flight Plan includes performance goals in the areas of safety, greater capacity, international leadership, and organizational excellence, which are linked to the agency's budget and progress monitored through a Web-based tracking system. However, over the years, FAA's efforts to meet client and customer needs have not always been successful, and some have had a long-lasting negative impact. FAA has had particular difficulty fielding new ATC modernization systems within cost, schedule, and performance goals to meet the needs of the aviation community. 
Agency promises to deliver new capabilities to airlines via improvements to the ATC system led some airlines to install expensive equipment in their aircraft to position themselves to benefit from expected FAA services; however, when the agency failed to deliver on those promises, participating air carriers were left with equipment that they could not use--no return on their investment. In addition, shifting agency priorities have made it difficult for the aviation industry to anticipate future requirements and plan for them in a cost-effective manner (e.g., providing air carriers with adequate lead time to purchase new equipment and airframe manufacturers with lead time to incorporate changes into new commercial airplane designs). Furthermore, the absence of a fully functioning cost-accounting system makes it difficult for FAA to assess the actual cost of providing services to users of the National Airspace System. Strategic management of people. Most high-performing organizations have strong, charismatic, visionary, and sustained leadership, the capability to identify what skills and competencies the employees and the organization need, and other key characteristics including effective recruiting, comprehensive training and development, retention of high-performing employees, and a streamlined hiring process. Toward this end, FAA has hired a Chief Operating Officer (COO) to stand up its new ATO. Our work on high-performing organizations has recommended use of the COO concept to facilitate transformational change in federal agencies and to provide long-term attention and focus on management issues. Furthermore, FAA has placed 78 percent of its workforce under a pay-for-performance system and implemented a training approach for its acquisition workforce that reflects four of the six elements used by leading organizations to deliver training effectively. However, it is too soon to know the extent to which these elements of effective training will be incorporated into the new ATO. Finally, FAA is currently conducting an Activity Value Analysis, a bottom-up effort to establish a baseline of ATO headquarters activities and their value to stakeholders. The results of this analysis are intended to help FAA's leadership target cost-cutting and cost savings efforts. Despite FAA's efforts to date, our past work has found the agency's strategic management of human capital lacking. For example, organizational culture issues at FAA (e.g., its vertical, stovepiped structure) have discouraged collaboration among technical experts and users of the ATC system and contributed to the agency's inability to deliver new ATC systems within cost, schedule, and performance goals. One of the most significant early challenges facing the ATO will be negotiating a new contract with air traffic controllers; the current contract expires in September 2005. The DOT IG has repeatedly noted that, despite the importance of controllers' jobs, FAA simply cannot sustain the continued salary cost growth for this workforce, whose average salary rose from $72,000 in 1998 to $106,000 in 2003. Given the inextricable link between FAA's operating costs and its controller workforce, striking an acceptable balance between controllers' contract demands and controlling spiraling operating costs will be a strong determinant of the ATO's credibility both within FAA and across the aviation industry. 
While FAA has taken some promising steps through its new ATO to restructure itself in a manner consistent with high-performing organizations, the agency still faces significant and longstanding systemic management challenges. These challenges must be overcome if FAA is to keep pace with ongoing changes in the aviation industry and transform itself into a world-class organization. Our work for more than two decades has shown that even modest organizational, operational, and technological changes at FAA can be difficult and time consuming, all of which underscores the difficult road ahead for FAA and its new ATO. This concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact JayEtta Hecker at (202) 512-2834 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Samantha Goodman, Steven Martin, Beverly Norwood, and Alwynne Wilbur. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the last two decades, FAA has experienced difficulties meeting the demands of the aviation industry while also attempting to operate efficiently and effectively. Now, as air traffic returns to pre-9/11 levels, concerns have again arisen as to how prepared FAA may be to meet increasing demands for capacity, safety, and efficiency. FAA's air traffic control (ATC) modernization efforts are designed to enhance the national airspace system through the acquisition of a vast network of radar, navigation, and communication systems. Nine years have passed since Congress provided FAA with personnel and acquisition reforms. However, projects continue to experience cost, schedule, and performance problems. FAA's Air Traffic Organization (ATO) is its most current reform effort. Expectations are that the ATO will bring a performance management approach to ATC modernization. This statement focuses on three main questions: (1) What are some of the major challenges and demands that confront FAA? (2) What is the status of FAA's implementation of reforms and/or procedural relief that Congress provided? and (3) What are some of the critical success factors that will enable FAA to become a high-performing organization? A forecasted increase in air traffic coupled with budgetary constraints will challenge FAA's ability to meet current and evolving operational needs. The commercial aviation industry is still recovering from financial losses exceeding $20 billion over the past 3 years. Many airlines cut their operating expenses, but FAA's budget continued to rise. However, transportation tax receipts into the Airport and Airways Trust Fund, from which FAA draws the majority of its budget, have fallen by $2.0 billion (nearly 20 percent) since 1999 (in constant 2002 dollars). Cost-cutting and cost control will need to be watchwords for FAA from this point forward. FAA has implemented many of the reforms authorized by Congress 9 years ago, but achieved mixed results. Despite personnel and acquisition reforms that the agency contended were critical to modernizing the nation's air traffic control (ATC) system, systemic management issues continue to contribute to the cost overruns, schedule delays, and performance shortfalls. FAA's most current reform effort, the Air Traffic Organization (ATO)--a new performance-based organization, mandated by AIR-21, that operates the ATC system--is just now being put in place. To meet its new challenges, FAA must fundamentally transform itself into a high-performing organization. The key characteristics and capabilities of high-performing organizations fall into four themes: (1) a clear, well-articulated, and compelling mission; (2) strategic use of partnerships; (3) focus on the needs of clients and customers; and (4) strategic management of people. FAA has taken some promising steps through its new ATO to restructure itself like high-performing organizations, but still faces significant and longstanding systemic management challenges. Even modest organizational and operational changes at FAA can be difficult and time consuming.
Outsourcing for commercial services is a growing practice within the government in an attempt to achieve cost savings, management efficiencies, and operating flexibility. Various studies in recent years have highlighted the potential for DOD to achieve significant savings from outsourcing competitions, especially those that involve commercial activities that are currently being performed by military personnel. Most of DOD's outsourcing competitions, like those of other government agencies, are to be conducted in accordance with policy guidance and implementation procedures provided in the Office of Management and Budget's (OMB) Circular A-76 and its supplemental handbook. In August 1995, the Deputy Secretary of Defense directed the services to make outsourcing of support activities a priority. The Navy's initial outsourcing plans for fiscal years 1997 and 1998 indicated that it would conduct A-76 outsourcing competitions involving about 25,500 positions, including about 3,400 military billets. As of February 1998, however, the actual number of military billets announced for A-76 competitions in fiscal years 1997 and 1998 was changed to 2,100. Navy officials told us that when the Navy announces its intention to begin an A-76 study that includes military billets, the funding for those billets is eliminated from the military personnel budget beginning with the year the study is expected to be completed. The Navy's rationale for eliminating these billets from the budget is that it expects the functions to be either outsourced to the private sector or retained in-house and performed by government civilians. Either way, the functions will be funded through the service's operations and maintenance budget and not the military personnel budget. According to OMB's Circular A-76, certain functions should not be outsourced to the private sector. These functions include activities that are closely related to the exercise of national defense and DOD's war-fighting capability and must be performed by government personnel. DOD guidance designates that one such protected area is billets that are required to support rotational requirements for active duty enlisted military personnel returning from overseas assignments or sea duty. Rotational billets are generally defined as those positions that must remain available to military service members to (1) ensure that those returning from overseas assignments or sea duty have adequate rotation opportunities and (2) provide opportunities for the service members to continue to function within their areas of specialty for purposes of maintaining readiness, training, and required skills. The Navy has identified the minimum number of such rotational billets required for enlisted personnel for each specific skill specialty and grade. Its sea-shore rotation goal is that sufficient shore billets be available for each skill specialty and grade level to provide an equal mix of sea duty and shore duty, that is, 3 years at sea for every 3 years on shore, for its enlisted personnel in grades E-5 through E-9. Because sea billets exceed shore billets, the Vice Chief of Naval Operations established a sea-shore rotation policy in December 1997 directing that the aggregate sea-shore rotation for enlisted personnel in grades E-5 through E-9 be no more than 4 years at sea for every 3 years on shore. Actual sea-shore rotations, however, depending on the skill specialty and grade level, have ranged from 3 to 5 years at sea for every 3 years on shore. 
As of February 1998, the total number of sea billets exceeded shore billets by more than 40,000. Consequently, with fewer shore billets available for rotation purposes, less time is being spent ashore than at sea. For years, the Navy has been unable to attain its sea-shore rotation goal because of shortages of shore billets for some skills and the difficulty of duplicating some of the specific skill specialties on shore. Moreover, about 66 percent of the total enlisted billets for specific skill specialties (called ratings) for grades E-5 through E-9 required at sea aboard ships do not easily lend themselves to comparable shore duty, according to Navy officials. These types of billets, called sea ratings, include ratings such as electronic technicians, machinist mates, and various aviation-related specialties. To overcome the difficulty of providing comparable shore billets for all sea ratings, the Navy has had to use general duty shore billets for enlisted personnel that cannot be assigned to their specific rating on shore. General duty billets include such functions as security positions, recruiters, and other duties. Navy officials believe that using personnel in these billets is productive, but such positions should be limited because they can impact training, skill retention, and morale. According to these officials, personnel assigned to general duty billets are not receiving specific training and experience related to their sea-duty rating. Historically, the Navy has attempted to minimize the number of sailors in general duty billets. As of January 1998, the Navy had about 12,500 enlisted personnel working in general duty shore positions. As of August 1997, outsourcing studies announced by the Navy in fiscal years 1997 and 1998 included some military positions for which rotational shortages existed based on the sea-shore rotation policy effective at that time. Of the total 740 Navy-wide military billets announced for study in 1998, 306 billets are for tug operations and maintenance functions that include ratings that have rotational shortages. These included 201 military billets in Norfolk, Virginia, 51 military billets in Pearl Harbor, Hawaii, and 54 military billets in Guam for tug operations and maintenance functions. (See table 1.) We also identified other shore functions that have been announced for potential outsourcing that some Navy officials expect will create or contribute to existing shortages of rotational billets. In fiscal year 1997, the Navy announced plans to study 216 military billets for base operations support functions in Guam for ratings that have rotational shortages. In January 1998, the Navy announced plans to perform A-76 studies for bachelor officer quarters (BOQ) and bachelor enlisted quarters (BEQ) functions of military billets that have rotational shortages. (See table 2.) As of August 1997, data showed that these outsourcing initiatives would further reduce the rotation base for specific ratings and would add to existing rotational shortages. Navy officials told us that the decisions to study these functions for potential outsourcing were made before the Navy had developed servicewide and regional data needed to identify the impact on sea-shore rotations and, as a result, they were unaware of the potential impact. 
In commenting on a draft of this report, DOD added that, even though the Navy's decision to study these functions was made before today's stringent procedures were in place, the Navy concluded after the decision was made that the impact on sea-shore rotation and career progression would be acceptable. Navy officials at the affected installations stated that the Navy's decision to study these functions for potential outsourcing will seriously affect sea-shore rotation, resulting in the elimination of military billets and fewer opportunities available on shore for enlisted personnel in grades E-5 through E-9. Other Navy officials expressed similar concerns and the view that these outsourcing initiatives could result in less flexibility for the Navy and impair career progression and morale for its enlisted servicemembers. In fiscal year 1997, the Commander in Chief, Atlantic Fleet, canceled outsourcing study plans for about 240 military billets in the BOQ and BEQ functions because of Navy-wide rotational shortages for mess specialists and the related impact on sea-shore rotation. Similarly, in fiscal year 1998, the Commander in Chief, Pacific Fleet, canceled plans to begin A-76 studies involving about 63 military billets in these functions for the same reason. Although the funding for these billets had been deleted from the 1999 budget, both commands are planning to reinstate funding authorization by reprogramming existing resources. The Navy also canceled an A-76 study of the BOQ and BEQ functions at the Naval Security Station, Washington, because Navy officials had determined that outsourcing these functions would have further degraded sea-shore rotation. The Navy does not intend to cancel its plans to begin the A-76 studies announced for tug operations and maintenance involving 306 military billets or the base operations support at Guam even though the shore billets that will be eliminated will further impact the sea-shore rotation base. According to Navy officials, the decision to study the tug operations and maintenance function for outsourcing was initially based on the fact that the Navy's tug boats were old and costly to maintain and would eventually have to be replaced if the tug operations and maintenance were not outsourced. Navy officials stated that several options will be considered to accommodate the impact on sea-shore rotation, such as reclassifying the shore billets as related or general duty billets, or increasing the number of shore billets for those ratings in other locations. Until May 1997, the Navy did not have procedures in place to ensure that rotational requirements were adequately considered when it determined potential functions for outsourcing study. At that time, the Navy adopted policies and procedures to examine Navy-wide and regional effects of its outsourcing plans on the sea-shore rotation base. Specifically, a memorandum of agreement was established specifying the coordination process between the Navy's headquarters infrastructure officials and the military personnel officials regarding the procedures for studying military functions for potential outsourcing. This memorandum of agreement was further strengthened in September 1997 by a more detailed Navy-wide memorandum of agreement that applied to all major commands for all infrastructure reductions, including outsourcing.
Also, in August 1997, the Navy's Bureau of Personnel provided major commands and other officials with Navy-wide and regional manpower data tools specifying the rotational requirements for each specific rating. Outsourcing officials are expected to use this information to assess rotational requirements of specific ratings for grades E-5 through E-9 when identifying potential candidates for outsourcing. If a rotational shortage is identified, the specific rating is not recommended for outsourcing to avoid further degradation of the sea-shore rotation base. In December 1997, the Vice Chief of Naval Operations approved a set of business rules to further strengthen the policies and procedures for protecting military billets with rotational shortages from potential outsourcing. These business rules require that the overall sea-shore rotation for sea ratings will not exceed 4 years at sea for every 3 years on shore and that the sea-shore rotation for individual ratings will not exceed 5 years at sea for every 3 years on shore. The Vice Chief of Naval Operations directed that these business rules be followed for all infrastructure reductions, including outsourcing. Moreover, Navy infrastructure officials and military manpower officials told us that they are continuing to work closely regarding outsourcing goals and sea-shore rotation requirements as the Navy moves to identify potential outsourcing candidates and meet its outsourcing study and savings goals. Between fiscal years 1997 and 2002, the Navy plans to study 80,500 civilian and military positions for potential outsourcing at an estimated savings of $2.5 billion. The Navy estimates that about 10,000 of these positions will be military billets and the remaining 70,500 will be positions currently occupied by civilians. (See table 3.) Because the funding for the Navy's military billets is eliminated from the personnel budget when the billets are announced for study, the funding for all military billets approved for competition will be deleted from the Navy's personnel budget by the year 2003. To eliminate the 10,000 military billets from the military personnel budget by the year 2003, the Navy's objective has been to announce about 2,000 military billets for study each year for 5 years beginning in fiscal year 1997. In fiscal year 1997, the Navy announced plans to study about 1,400 military billets for potential outsourcing. It appears likely, however, that the Navy will fall short of its goal for fiscal year 1998. As of January 1998, the Navy had announced plans to study 740 military billets and 6,678 civilian positions for fiscal year 1998. Navy officials told us in February 1998 they will announce additional A-76 studies in fiscal year 1998, but did not know the specific activities that would be studied or the number of billets that would be affected. As of February 1998, the Navy was attempting to identify potential functions and billets for outsourcing in subsequent fiscal years, but it had not determined the specific number of military billets or civilian positions that will be announced for study in those years. Some Navy officials have expressed concern over whether they will be able to attain the overall goal of studying 10,000 military billets by the year 2003. In addition, some Navy base commanders are concerned that outsourcing decisions affecting their installations may be made without their input. 
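To make the arithmetic behind these business rules concrete, the sketch below shows one way the 4-years-at-sea-per-3-ashore aggregate cap and the 5-per-3 individual-rating cap could be checked against a proposed reduction in shore billets. It is purely illustrative: the function names, the input format, and the simplifying assumption that the implied rotation is proportional to the sea-to-shore billet ratio are ours, not the Navy's.

```python
SEA_YEARS_CAP_PER_RATING = 5.0   # no individual rating may exceed 5 years at sea per 3 ashore
SEA_YEARS_CAP_AGGREGATE = 4.0    # the overall rotation may not exceed 4 years at sea per 3 ashore

def sea_years_per_three_ashore(sea_billets: int, shore_billets: int) -> float:
    """Years at sea for every 3 years ashore implied by a billet mix."""
    if shore_billets <= 0:
        return float("inf")  # no shore billets means rotation is impossible
    return 3.0 * sea_billets / shore_billets

def flag_rotation_violations(ratings, proposed_shore_cuts):
    """Return the ratings (plus 'AGGREGATE' if applicable) whose implied
    rotation would exceed the caps after the proposed shore-billet cuts.
    `ratings` maps rating name -> (sea_billets, shore_billets)."""
    flagged = []
    total_sea = total_shore = 0
    for rating, (sea, shore) in ratings.items():
        shore_after = shore - proposed_shore_cuts.get(rating, 0)
        total_sea += sea
        total_shore += shore_after
        if sea_years_per_three_ashore(sea, shore_after) > SEA_YEARS_CAP_PER_RATING:
            flagged.append(rating)
    if sea_years_per_three_ashore(total_sea, total_shore) > SEA_YEARS_CAP_AGGREGATE:
        flagged.append("AGGREGATE")
    return flagged
```

For example, a hypothetical rating with 1,200 sea billets and 900 shore billets implies a 4-to-3 rotation; cutting 200 of those shore billets pushes the figure past 5 years at sea per 3 ashore, and the rating would be flagged.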
Despite these concerns, the Navy has programmed estimated savings of $2.5 billion from outsourcing into its future years defense plan, increasing the pressure to identify candidates for outsourcing studies. The Navy has established an ambitious goal for itself in terms of the number of positions it plans to study for potential outsourcing under A-76. At the same time, the Navy is relying on its major commands to identify the functions to study to meet these programmed budget savings. Navy officials stated that they began a series of planning conferences in September 1997 involving appropriate officials from Navy headquarters and major commands. According to these officials, one of the primary objectives of the planning conferences is to begin discussing a strategic plan for accomplishing the outsourcing goals for fiscal years 1999 through 2001. While we believe that a strategic plan is necessary to achieve the Navy's outsourcing goals, ongoing coordination and improved planning between headquarters and the major commands will be required to reach agreement on realistic goals and time frames and to identify areas most conducive to outsourcing and likely to yield the greatest savings. In addition, improved planning and coordination could minimize the elimination of required military shore billets, as well as avoid prematurely programming savings into future years' budgets. Navy officials stated that, in addition to the recent planning conferences, the Navy plans to address the larger issue of how it conducts its business and possible alternatives for meeting Navy-wide personnel levels and requirements. The Navy established ambitious goals for studying military and civilian personnel positions for potential outsourcing under A-76 competitions. Only as it began initiating the plans for some of these studies involving military personnel positions did it find that outsourcing some of these positions could affect positions reserved for sea-shore rotational requirements--a situation that caused the Navy to withdraw some of its planned outsourcing initiatives. The Navy has recently established policies and procedures to ensure that sea-shore rotation requirements are reviewed and considered when identifying potential functions for outsourcing. While the Navy has recently begun to focus on strategies for attaining its outsourcing goals for future years, improved planning and coordination between headquarters and major commands are needed to reach agreement on realistic goals and time frames. Improved planning and coordination could also identify areas most conducive to outsourcing, least likely to eliminate needed shore billets, and likely to yield the greatest savings, and could help avoid prematurely programming savings into future years' budgets. To enhance the likelihood that plans for outsourcing are reasonable and achievable, we recommend that the Secretary of Defense take steps to ensure that the Secretary of the Navy, as the Navy develops its strategic plan, involves the major commands to reach agreement on realistic goals and time frames and to identify areas most conducive to outsourcing. Likewise, we recommend that the Secretary of Defense periodically reassess whether outsourcing savings targets used in planning for future years' budgets are achievable in the time frames planned. In commenting on a draft of this report, DOD concurred with our conclusions and recommendation (see app.
II). DOD provided a number of comments addressing how the Navy has taken significant steps to implement policies and coordination procedures to protect rotational billets from outsourcing considerations and to involve the major commands in the strategic planning process for attaining its future years' outsourcing goals. DOD noted, and we concur, that a number of studies included in the Navy's initial outsourcing announcement were canceled because a subsequent review revealed that they were not appropriate competition candidates. Since then, DOD notes, the Navy has made progress in widening the scope of its outsourcing program and in involving all claimants (major commands) in the planning process. DOD indicated that the canceled outsourcing studies cited in our report were not representative of the Navy's competitive outsourcing program as it exists today. The Navy is currently implementing a number of initiatives to improve strategic planning that should enable it to identify areas most conducive to outsourcing without exacerbating shortages of rotational billets. DOD also stated that the Navy's current outsourcing policies and procedures require that no function employing military personnel be announced for potential outsourcing until the Navy's Manpower Office determines that outsourcing the function will not have an adverse effect. DOD stated that these activities demonstrate the Navy's commitment to work with its major commands and that, therefore, additional direction from the Secretary of Defense is unnecessary. We agree that the Navy has begun some important actions toward developing a strategic plan and including its major commands in that process. However, the Navy had not completed its plan as of April 1998. At the same time, our report points out that it appears likely the Navy will fall short of its goal for new outsourcing studies in fiscal year 1998, and some Navy officials expressed concern to us over whether they will be able to attain the optimistic goal of studying 10,000 military billets by the year 2003 and save $2.5 billion from outsourcing in the future years defense plan. This goal adds pressure on the claimants to emphasize outsourcing, and accordingly, we believe it will remain critical for the Navy to continue to work with its major commands to complete the development of its plans for accomplishing these objectives. Likewise, we believe it is important to periodically reassess the extent to which savings goals and objectives are achievable and whether savings targets established for out-year budget purposes might need to be revised. In view of this, we have revised our recommendation to recommend that the Secretary of Defense ensure that the Secretary of the Navy, as the Navy develops its strategic plan, involves the major commands to reach agreement on realistic goals and time frames and to identify areas most conducive to outsourcing. We have also added a recommendation that the Secretary of Defense periodically reassess whether outsourcing savings targets used in planning for future years' budgets are achievable in the time frames planned. Our scope and methodology are discussed in appendix I. DOD's comments are reprinted in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committees on Armed Services and on Appropriations and the House Committees on National Security and on Appropriations; the Director, Office of Management and Budget; and the Secretaries of the Army, the Navy, and the Air Force.
Copies will also be made available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Neither the Army nor the Air Force has experienced problems similar to the Navy's in making outsourcing decisions, primarily because of mission differences. The Army's and Air Force's policies for protecting rotational billets are designed to ensure a proper balance between the numbers and types of billets located overseas and in the continental United States. The types of rotational billets that the Army and Air Force need to protect from outsourcing are generally in highly technical areas that would not normally be appropriate for outsourcing. Moreover, both services rely, to varying degrees, on contractor personnel to perform base support-type functions. The Navy, on the other hand, operates forward-deployed forces from its ships and requires military personnel to perform virtually all of the support services that might be done by civilians were the Navy operating from land bases. Therefore, the focus of the review was on the Navy. We met with officials from the Office of the Secretary of Defense, the Army, the Air Force, and the Navy regarding their policies for considering rotational and career development requirements in outsourcing decisions. We obtained policies related to outsourcing and rotational billets, memorandums of agreement, and procedures for identifying A-76 study candidates. We also met with officials from the Army Training and Doctrine Command at Fort Monroe, Virginia; the Air Force Air Combat Command at Langley Air Force Base, Virginia; and the Navy Commander in Chief, Atlantic Fleet, in Norfolk, Virginia, to discuss their policies and procedures for identifying and protecting rotational billets from outsourcing considerations. We obtained documentation regarding current and planned A-76 studies, and A-76 study plans that were eliminated because of the impact on rotational requirements. We obtained information pertaining to outsourcing and rotational billets from the Navy Commander in Chief, Pacific Fleet, at Pearl Harbor, Hawaii, and met with various Navy base commanders in the Norfolk, Virginia, area to obtain their perspective on contracting out of functions historically performed by enlisted personnel. We reviewed the outsourcing initiatives and the impact of these initiatives on rotational billets in the Army, Air Force, and Navy. However, we focused the majority of our work on the Navy's outsourcing initiatives and the potential impact of those initiatives on sea-shore rotation. We compared the database of Navy-wide and regional data on the rotational requirements for each specific rating for grades E-5 through E-9 to the Navy's outsourcing initiatives. We did not independently validate the mathematical models the services used to identify rotational requirements or the criteria they used in building these models. We conducted our review from September 1997 to April 1998 in accordance with generally accepted government auditing standards. Major contributors: David A. Schmitt, Evaluator-in-Charge; Sandra D. Epps, Site Senior; Tracy Whitaker Banks, Evaluator. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed whether the Department of Defense's (DOD) outsourcing of commercial activities reduces the availability of rotational billets for active duty military personnel, focusing on: (1) how the Navy's current outsourcing efforts have affected rotational billets; and (2) whether the Navy has policies and procedures in place to minimize the impact of outsourcing on rotational billets in the future. GAO noted that: (1) several Navy competitions under Office of Management and Budget Circular A-76 announced for study in fiscal years (FY) 1997 and 1998 have the potential to eliminate military billets in areas where rotational shortages exist for personnel returning from sea duty; (2) as a result, the Navy has decided not to begin some of these A-76 studies and plans to reinstate funding authorization for the military positions eliminated when the studies were announced; (3) until recently, the Navy had not developed specific policies and coordination procedures to protect rotational billets from outsourcing considerations; (4) according to Navy officials, such policies and procedures were not needed prior to FY 1997 because the Navy's outsourcing initiatives were limited and not centrally managed; (5) in May 1997, Navy officials signed a memorandum of agreement specifying a coordination process between the Navy's headquarters infrastructure officials and military personnel representatives to ensure that consideration is given to rotation requirements when determining potential functions for outsourcing; (6) this memorandum of agreement was further strengthened in September 1997 by a more detailed Navy-wide memorandum of agreement that applied to all major commands, which the Navy refers to as major claimants, for all infrastructure reductions, including outsourcing; (7) this coordination policy should prove important since the Navy's goal is to have completed A-76 competitions for 80,500 positions by the year 2002, including about 10,000 military billets; (8) the Navy expects that its outsourcing efforts will produce savings and accordingly has programmed expected savings of $2.5 billion into its future years defense plan for FY 2000 through 2003; (9) the Navy has not identified the specific activities and locations that will be studied to achieve projected savings, but has tasked its major commands with recommending specific activities and locations for A-76 competitions to meet this savings goal; (10) the Navy recently began a series of planning conferences involving appropriate officials from headquarters and major commands focusing on strategies for attaining its future years' outsourcing goals; (11) however, given the Navy's plans for outsourcing competitions, ongoing coordination and improved planning between headquarters and major commands are needed to reach agreement on realistic goals and time frames; and (12) in addition, improved planning and coordination could minimize the elimination of required military shore billets, as well as avoid prematurely programming savings into future years' budgets.
Administered by SBA's Office of Disaster Assistance (ODA), the Disaster Loan Program is the primary federal program for funding long-range recovery for nonfarm businesses that are victims of disasters. It is also the only form of SBA assistance not limited to small businesses. Small Business Development Centers (SBDCs) are SBA's resource partners that provide disaster assistance to businesses. SBA officials said that SBDCs help SBA by doing the following: conducting local outreach to disaster victims; assisting declined business loan applicants, or applicants who have withdrawn their loan applications, with applications for reconsideration or re-acceptance; assisting declined applicants in remedying issues that initially precluded loan approvals; and providing business loan applicants with technical assistance, including helping businesses reconstruct business records, helping applicants better understand what is required to complete a loan application, compiling financial statements, and collecting required documents. SBA can make available several types of disaster loans, including two types of direct loans: physical disaster loans and economic injury disaster loans. Physical disaster loans are for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. They are available to homeowners, renters, businesses of all sizes, and nonprofit organizations. Economic injury disaster loans provide small businesses that are not able to obtain credit elsewhere with necessary working capital until normal operations resume after a disaster declaration. Businesses of all sizes may apply for physical disaster loans, but only small businesses are eligible for economic injury loans. SBA has divided the disaster loan process into three steps: application, verification and loan processing, and closing. Applicants for physical disaster loans have 60 days from the date of the disaster declaration to apply, and applicants for economic injury disaster loans have 9 months. Disaster victims may apply for a disaster business loan through the disaster loan assistance web portal or by paper submission. The information from online and paper applications is fed into SBA's Disaster Credit Management System, which SBA uses to process loan applications and make determinations for its disaster loan program.
Furthermore, the most recent Disaster Preparedness and Recovery Plan from 2016 outlines specific responsibilities for conducting region-specific marketing and outreach through SBA resource partners and others before a disaster as well as plans for scaling communications based on the severity of the disaster. (See below for more information about SBA's Disaster Preparedness and Recovery Plan.) Section 12073 states that SBA must assign an individual with significant knowledge of, and substantial experience in, disaster readiness and planning, emergency response, maintaining a disaster response plan, and coordinating training exercises. In June 2008, SBA appointed an official to head the agency's newly created Executive Office of Disaster Strategic Planning and Operations. SBA officials recently told us that the planning office, now named the Office of Disaster Planning and Risk Management, is under the office of the Chief Operating Officer. Although the organizational structure changed, the role of the director remains the same: to coordinate the efforts of other offices within SBA to execute disaster recovery as directed by the Administrator. Among the director's responsibilities are to create, maintain, and implement the comprehensive disaster preparedness and recovery plan, and coordinate and direct SBA training exercises relating to disasters, including simulations and exercises coordinated with other government departments and agencies. Section 12075 states that SBA must develop, implement, or maintain a comprehensive written disaster response plan and update the plan annually. SBA issued a disaster response plan in November 2009 and the agency has continued to develop, implement, and revise the written disaster plan every year since then. The plan, now titled the Disaster Preparedness and Recovery Plan, outlines issues such as disaster responsibilities of SBA offices, SBA's disaster staffing strategy, and plans to scale disaster loan-making operations. The plan is made available to all SBA staff as well as to the public through SBA's website. SBA has taken actions to fully address other provisions, such as those relating to augmenting infrastructure, information technology, and staff, as well as improving disaster lending. For example, to improve its infrastructure, information technology, and staff, SBA put in place a secondary facility in Sacramento, California, to process loans during times when the main facility in Fort Worth, Texas, is unavailable. SBA also improved its Disaster Credit Management System, which the agency uses to process loan applications and make determinations for its disaster loan program, by increasing the number of concurrent users that can access it. Furthermore, SBA increased access to funds by making nonprofits eligible for economic injury disaster loans. SBA has not piloted or implemented three guaranteed disaster loan programs. The 2008 Act included three provisions requiring SBA to issue regulations to establish new guaranteed disaster programs using private-sector lenders: Expedited Disaster Assistance Loan Program (EDALP) would provide small businesses with expedited access to short-term guaranteed loans of up to $150,000. Immediate Disaster Assistance Program (IDAP) would provide small businesses with guaranteed bridge loans of up to $25,000 from private-sector lenders, with an SBA decision within 36 hours of a lender's application on behalf of a borrower.
Private Disaster Assistance Program (PDAP) would make guaranteed loans available to homeowners and small businesses in an amount up to $2 million. In 2009, we reported that SBA was planning to implement requirements of the 2008 Act, including pilot programs for IDAP and EDALP. SBA requested funding for the two programs in the President's budget for fiscal year 2010 and received subsidy and administrative cost funding of $3 million in the 2010 appropriation, which would have allowed the agency to pilot about 600 loans under IDAP. SBA officials also told us that they performed initial outreach to lenders to obtain reactions to and interest in the programs. They believed such outreach would help SBA identify and address issues and determine the viability of the programs. In May 2010, SBA told us its goal was to have the pilot for IDAP in place by September 2010. Furthermore, the agency issued regulations for IDAP in October 2010. In 2014, we reported on the Disaster Loan Program (following Hurricane Sandy) and found that SBA had yet to pilot or implement the three programs for guaranteed disaster loans. In July 2014, SBA officials told us that the agency still was planning to conduct the IDAP pilot. However, based on lender feedback, SBA officials said that the statutory requirements, such as the 10-year loan, made a product like IDAP undesirable and lenders were unwilling to participate unless the loan term was decreased to 5 or 7 years. Congressional action would be required to revise statutory requirements, but SBA officials said they had not discussed the lender feedback with Congress. SBA officials also told us the agency planned to use IDAP as a guide to develop EDALP and PDAP, and until challenges with IDAP were resolved, it did not plan to implement these two programs. As a result of not documenting, analyzing, or communicating lender feedback, SBA risked not having reliable information--both to guide its own actions and to share with Congress--on what requirements should be revised to encourage lender participation. Federal internal control standards state that significant events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. We concluded that not sharing information with Congress on challenges to implementing IDAP might perpetuate the difficulties SBA faced in implementing these programs, which were intended to provide assistance to disaster victims. Therefore, we recommended that SBA conduct a formal documented evaluation of lenders' feedback on implementation challenges and statutory changes that might be necessary to encourage lenders' participation in IDAP, and then report to Congress on these topics. In response to our recommendations, SBA issued an Advance Notice of Proposed Rulemaking in October 2015 to seek comments on the three guaranteed loan programs. In July 2016, SBA sent a letter to the Ranking Member of the House Committee on Small Business that discussed how the agency evaluated feedback on the three programs and explained the remaining challenges to address the statutory provisions for the three programs. Based on this action, we closed the recommendations for SBA to develop an implementation plan, formally evaluate lender feedback, and report to Congress on implementation challenges. SBA has yet to announce how it will proceed with the statutory requirements to establish these loan programs. 
SBA made several changes to its planning documents in response to recommendations in our 2014 report about the agency's response to Hurricane Sandy. In 2014, we found that after Hurricane Sandy, SBA did not meet its goal to process business loan applications (21 days from receipt to loan decision). SBA took an average of 45 days for physical disaster loan applications and 38 days for economic injury applications. According to SBA, the agency received a large volume of electronic applications within a few days of the disaster. While SBA created web-based loan applications to expedite the process and encouraged their use, the agency noted that it did not expect to receive such a high volume of loan applications so early in its response and delayed increasing staffing. At the time of our 2014 report, SBA also had not updated its key disaster planning documents--the Disaster Preparedness and Recovery Plan and the Disaster Playbook--to adjust for the effects a large-volume, early surge in applications could have on staffing, resources, and forecasting models for future disasters. According to SBA's Disaster Preparedness and Recovery Plan, the primary goals of forecasting and modeling are to predict application volume and application receipt as accurately as possible. Federal internal control standards state that management should identify risk (with methods that can include forecasting and strategic planning) and then analyze the risks for their possible effect. Without taking its experience with early application submissions after Hurricane Sandy into account, SBA risked being unprepared for such a situation in future disaster responses, potentially resulting in delays in disbursing loan funds to disaster victims. We therefore recommended that SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing, resources, and timely disaster response. In response to our recommendation, SBA updated its key disaster planning documents, including the Disaster Preparedness and Recovery Plan and Disaster Playbook, to reflect the impact of early application submissions on staffing for future disasters. For example, the documents note that the introduction of the electronic loan application increased the intake of applications soon after disasters. SBA received 83 percent of applications electronically in fiscal year 2015 and 90 percent in 2016. The documents also note that the electronic loan application has reduced the time available to achieve maximum required staffing and that SBA has revised its internal resource requirements model for future disasters to activate staff earlier based on the receipt of applications earlier in the process. Furthermore, our review of the most recent Disaster Preparedness and Recovery Plan from 2016 shows that SBA continues to factor in the effect of electronic loan application submissions on staffing requirements. In our November 2016 report, we reviewed the actions SBA took or planned to take to improve the disaster loan program, as discussed in its Fiscal Year 2015 Annual Performance Report. SBA focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities (see table 1).
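The report does not describe the mechanics of SBA's revised resource requirements model, but the general idea of activating staff earlier based on early application receipts can be sketched as a simple threshold rule. Everything in the sketch below, including the threshold, the productivity figure, and the function itself, is a hypothetical illustration rather than SBA's actual model.

```python
def loan_processing_staff_needed(daily_application_counts,
                                 apps_per_processor_per_day=6,
                                 surge_threshold=500):
    """Size the loan-processing staff from observed intake rather than a
    pre-disaster forecast: once cumulative receipts cross the surge
    threshold, staff to the average daily intake seen so far."""
    cumulative = 0
    for day, count in enumerate(daily_application_counts, start=1):
        cumulative += count
        if cumulative >= surge_threshold:
            observed_daily_rate = cumulative / day
            return max(1, round(observed_daily_rate / apps_per_processor_per_day))
    # Intake never crossed the threshold: staff to the overall average intake.
    days = max(1, len(daily_application_counts))
    return max(1, round((cumulative / days) / apps_per_processor_per_day))
```

Under a rule like this, a Sandy-like surge of electronic applications in the first few days would trigger full staffing almost immediately instead of waiting for a slower, forecast-driven ramp-up.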
We also reported in November 2016 that, according to SBA officials, the agency made recent enhancements to the disaster loan assistance web portal, such as a feature that allows a loan applicant to check the status of an application and the application's relative place in the queue for loan processing. The web portal also includes a frequently asked questions page, telephone, and e-mail contacts to SBA customer service, and links to other SBA information resources. These enhancements may have had a positive impact on the agency's loan processing. For example, we reported that an SBA official explained that information from online applications is imported directly into the Disaster Credit Management System, reducing the likelihood of errors in loan applications, reducing follow-up contacts with loan applicants, and expediting loan processing. As we found in our November 2016 report, SBA published information (print and electronic) about the disaster loan process, but much of this information is not easily accessible from the disaster loan assistance web portal. SBA's available information resources include the following: Disaster business loan application form (Form 5) lists required documents and additional information that may be necessary for a decision on the application. Fact Sheet for Businesses of All Sizes provides information about disaster business loans, including estimated time frames, in a question-and-answer format. 2015 Reference Guide to the SBA Disaster Loan Program and Three-Step Process Flier describe the three steps of the loan process, required documents, and estimated time frames. Partner Training Portal provides disaster-loan-related information and resources for SBDCs (at https://www.sba.gov/ptp/disaster). However, we found SBA had not effectively integrated these information resources into its online portals; much of the information was not easily accessible from the loan portal's launch page or available on the training portal. For example, when a user clicks on the "General Loan Information" link in the loan portal, the site routes the user to SBA's main website, where the user would encounter another menu of links. To access the fact sheet, the reference guide, and the three-step process flier, a site user would click on three successive links and then select from a menu of 15 additional links. Among the group of 15 links, the link for Disaster Loan Fact Sheets contains additional links to five separate fact sheets for various types of loans. According to SBA officials, SBA plans to incorporate information from the three-step loan process flier in the online application, but does not have a time frame for specific improvements. SBA officials also said that disaster-loan information is not prominently located on SBA's website because of layout and space constraints arising from the agency's other programs and priorities. We concluded that absent better integration of, and streamlined access to, disaster loan-related information on SBA's web portals, loan applicants--and SBDCs assisting disaster victims-- may not be aware of key information for completing applications. Thus, we recommended that SBA better integrate information (such as its reference guide and three-step process flier) into its portals. 
In response to our report, SBA stated in a January 2017 letter that the disaster loan assistance portal includes links to various loan-related resources and a link to SBA.gov, where users can access the SBA Disaster Loan Program Reference Guide and online learning center. However, SBA did not indicate what actions it would take in response to our recommendation. We plan to follow up with SBA on whether the agency plans to centrally integrate links to loan-related resources into its disaster loan assistance web portal and Partner Training Portal. We also found in our November 2016 report that SBA has not consistently described key features of the loan process in its information resources, such as the application form, fact sheet, and reference guide, and none of these resources include explanations for required documents (see table 2). The Paperwork Reduction Act has a broad requirement that an agency explain the reasons for collecting information and the use of the collected information. According to SBDCs we interviewed and responses from SBA and American Customer Satisfaction Index surveys, some business loan applicants found the process confusing due to inconsistent information about the application process, unexpected requests for additional documentation, and lack of information about the reasons for required documents. We concluded that absent more consistent information in print and online resources, loan applicants and SBDCs might not understand the disaster loan process. As a result, we recommended SBA ensure consistency of content about its disaster loan process by including information, as appropriate, on the (1) three-step process; (2) types of documentation SBA may request and reasons for the requests; and (3) estimates of loan processing time frames and information on factors that may affect processing time. In response to our report, SBA stated in a January 2017 letter that the agency provides consistent messaging about the time frame for making approval decisions on disaster loan applications: SBA's goal is to make a decision on all home and business disaster loan applications within 2-3 weeks. However, SBA did not indicate what actions it would take in response to our recommendation. We plan to follow up with SBA on whether the agency will take any action to ensure content is consistent across print and online resources, among other things. In our November 2016 report, we further found that some business loan applicants were confused about the financial terminology and financial forms required in the application. Three SBDCs we interviewed mentioned instances in which applicants had difficulty understanding the parts of the loan application dealing with financial statements and financial terminology. For example, applicants were not familiar with financial statements, did not know how to access information in a financial statement, and did not know how to create a financial statement. Although the loan forms include instructions, the instructions do not define the financial terminology. According to SBA officials, the agency's customer service representatives can direct applicants to SBDCs for help. Two of the three SBDCs said these difficulties arose among business owners who did not have formal education or training in finance or related disciplines--and were attempting applications during high-stress periods following disasters. The Plain Writing Act of 2010 requires that federal agencies use plain writing in every document they issue.
According to SBA officials, although the agency does not provide a glossary for finance terminology in loan application forms, the disaster loan assistance web portal has a "contextual help" feature that incorporates information from form instructions. SBA customer service representatives and local SBDCs also can help explain forms and key terms. SBA has taken other actions to inform potential applicants about its loan program, including holding webinars and conducting outreach. However, these efforts may not offer sufficient assistance or reach all applicants. We concluded that without explanations of financial terminology, loan applicants may not fully understand application requirements, which may contribute to confusion in completing the financial forms. Therefore, we recommended that SBA define financial terminology on loan application forms (for example, by adding a glossary to the "help" feature on the web portal). In response to our report, SBA stated in a January 2017 letter that the agency has been developing a glossary of financial terms used in SBA home and business disaster loan applications and in required supporting financial documents. SBA stated that, once the glossary is completed, it will be made available through the agency's disaster loan assistance portal and the SBA.gov website. We plan to follow up with SBA once the agency completes the glossary. Chairman Chabot, Ranking Member Velazquez, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Marshall Hamlett (Assistant Director), Christine Ramos (Analyst in Charge), John McGrail, and Barbara Roesmann.
Appendix I: Summary of Provisions in the Small Business Disaster Response and Loan Improvements Act of 2008
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
While SBA is known primarily for its financial support of small businesses, the agency also assists businesses of all sizes and homeowners affected by natural and other declared disasters through its Disaster Loan Program. Disaster loans can be used to help rebuild or replace damaged property or continue business operations. After SBA was criticized for its performance following the 2005 Gulf Coast hurricanes, the agency took steps to reform the program and Congress also passed the 2008 Act. After Hurricane Sandy (2012), questions arose on the extent to which the program had improved since the 2005 Gulf Coast Hurricanes and whether previously identified deficiencies had been addressed. This statement discusses (1) SBA implementation of provisions from the 2008 Act; (2) additional improvements to agency planning following Hurricane Sandy; and (3) SBA's recent and planned actions to improve information resources for business loan applicants. This statement is based on GAO products issued between July 2009 and November 2016. GAO also met with SBA officials in April 2017 to discuss the status of open recommendations and other aspects of the program. The Small Business Administration (SBA) implemented most requirements of the Small Business Disaster Response and Loan Improvements Act of 2008 (2008 Act). For example, in response to the 2008 Act, SBA appointed an official to head the disaster planning office and annually updates its disaster response plan. SBA also implemented provisions relating to marketing and outreach; augmenting infrastructure, information technology, and staff; and increasing access to funds for nonprofits, among other areas. However, SBA has not yet implemented provisions to establish three guaranteed loan programs. In 2010, SBA received an appropriation to pilot one program and performed initial outreach to lenders. However, in 2014, GAO found that SBA had not implemented the programs or conducted a pilot because of concerns from lenders about loan features. GAO recommended that SBA evaluate lender feedback and report to Congress about implementation challenges. In response, SBA sought comments from lenders and sent a letter to Congress that explained remaining implementation challenges. After Hurricane Sandy, SBA further enhanced its planning for disaster response, including processing of loan applications. In a 2014 report on the Disaster Loan Program, GAO found that while SBA encouraged electronic submissions of loan applications, SBA did not expect early receipt of a high volume of applications after Sandy and delayed increasing staffing. SBA also did not update key disaster planning documents to adjust for the effects of such a surge in future disasters. GAO recommended SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing and resources. In response, SBA updated its planning documents to account for such impacts. SBA has taken some actions to enhance information resources for business loan applicants but could do more to improve its presentation of online disaster loan-related information. In 2016, GAO found that SBA took or planned to take various actions to improve the disaster loan program and focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities. 
However, GAO found that SBA had not effectively presented information on disaster loans (in a way that would help users efficiently find it), had not consistently described key features and requirements of the loan process in print and online resources, or clearly defined financial terminology used in loan applications. Absent better integration of, and streamlined access to, disaster loan-related information, loan applicants may not be aware of key information and requirements for completing the applications. Therefore, GAO recommended that SBA (1) integrate disaster loan-related information into its web portals to be more accessible to users, (2) ensure consistency of content about the disaster loan process across information resources, and (3) better define financial terminology used in the loan application forms. In January 2017, SBA indicated it was working on a glossary for the application. GAO plans to follow up with SBA about the other two open recommendations.
NTSB was initially established within the newly formed Department of Transportation (DOT) in 1966, but was made independent from DOT in 1974. NTSB is charged by Congress with investigating every civil aviation accident in the United States and significant accidents in other modes of transportation--railroad, highway, marine and pipeline. NTSB determines the probable cause of the accidents and issues safety recommendations aimed at preventing future accidents. In addition, NTSB carries out special studies concerning transportation safety and coordinates the resources of the federal government and other organizations to provide assistance to victims and their family members impacted by major transportation disasters. Unlike regulatory transportation agencies, such as the Federal Aviation Administration, NTSB does not have the authority to promulgate regulations to promote safety, but instead makes recommendations in its accident reports and safety studies to agencies that have such regulatory authority. NTSB is comprised of a five-person board--a chairman, vice chairman, and three members--appointed by the President with the advice and consent of the Senate. The chairman is the NTSB's chief executive and administrative officer. The agency is headquartered in Washington, D.C., and maintains 7 regional offices and a training center located in Ashburn, Virginia. In fiscal year 2013, the board was supported by a staff of about 400, which includes nearly 140 investigators assigned to its modal offices--aviation; highway; marine; and rail, pipeline, and hazardous materials--as well as 73 investigation-related employees, such as engineers and meteorologists. NTSB's modal offices vary in size in relation to the number of investigators, with the Aviation Safety office being the largest. In addition, the Office of Research and Engineering provides technical, laboratory, analytical, and engineering support for the modal offices. Staff from this office interpret information from flight data recorders, create accident computer simulations, and publish general safety studies. This review focuses on the extent to which NTSB has achieved measurable improvements from actions the agency has taken in five management and operational areas based on prior GAO recommendations. Training Center utilization. Making efficient and effective use of resources provided by Congress is a key responsibility of federal agencies. NTSB's Training Center, which opened in August 2003 in Ashburn, Virginia, consists of classrooms, offices, and laboratory facilities used for instructional purposes and active investigations. NTSB uses this center to train its own staff and others from the transportation community to improve accident investigation techniques. NTSB charges tuition for those outside NTSB to take its courses, and generates additional revenue from space rentals to other organizations for events such as conferences on a cost reimbursable basis. Although there is no statutory requirement that NTSB cover the cost of its Training Center through the revenues generated from the facility, a 2007 review we conducted found that NTSB was not capitalizing on its lease flexibility to generate additional revenues and classrooms were significantly underutilized. For example, we found that less than 10 percent of the available classroom capacity was used in fiscal years 2005 and 2006. 
Furthermore, NTSB was encouraged in a Senate report accompanying the 2006 appropriations bill for DOT to be more aggressive in imposing and collecting fees to cover the costs of the Training Center. Since then, NTSB leased a large portion of the Training Center's non-classroom space to the Federal Air Marshal Service and provided short-term leases of classroom space to other organizations. In addition, NTSB increased the amount of training it delivered at the Training Center. Recommendation close-out process. Efficiently managing the recommendation tracking process is a key function, according to NTSB officials. The recommendation close-out process is managed by the Safety Recommendations and Quality Assurance Division, which has responsibility for tracking the status of NTSB's recommendations. When NTSB receives correspondence from an agency about an NTSB recommendation, this division ensures it is properly routed and reviewed and contacts the agency about whether the response is acceptable. If NTSB is delayed in communicating with agencies about whether NTSB considers actions to address recommendations acceptable, an agency could delay implementing a course of action pending approval. In fiscal year 2010, NTSB replaced a lengthy, paper-based process with an automated system--the Correspondence, Notation, and Safety Recommendation system (CNS)--intended to facilitate the recommendation close-out process by electronically storing and automatically routing agencies' proposals to the appropriate NTSB reviewers, allowing for concurrent reviews by multiple parties within NTSB, and more accurately tracking responses. It is important to note that an agency is not necessarily restricted from implementing action prior to formal NTSB approval of that action. Depending on the complexity of the issue, agencies may begin to address issues prior to NTSB's providing formal approval. In other circumstances, NTSB addresses safety deficiencies immediately, before the completion of an investigation. For example, during the course of the TWA flight 800 investigation, NTSB issued an urgent safety recommendation once it was determined that an explosion in a fuel tank caused the breakup of the aircraft. Communication. Useful management practices include seeking and monitoring employee attitudes, encouraging two-way communication between employees and management, and incorporating employee feedback into new policies and procedures. This type of communication and collaboration across offices at all levels can improve an agency's ability to carry out its mission by providing opportunities to share best practices and helping to ensure that any needed input is provided in a timely manner. To this end, NTSB managers and board members began holding periodic meetings with staff, conducting outreach to regional offices, and surveying staff about the effectiveness of communication techniques. Diversity management. Implementing a diversity management strategy and a more diverse workforce helps foster a work environment that not only empowers and motivates people to contribute to the mission but also provides accountability and fairness for all employees. Diversity management helps an organization create and maintain a positive work environment where the similarities and differences of individuals are valued, so that all can reach their potential and maximize their contributions to the organization's strategic goals.
NTSB has developed diversity training courses and held events to educate staff on diversity and inclusiveness issues, created career development and mentoring programs to support upward mobility, targeted its recruitment program to reach a more diverse pool of applicants, and surveyed staff to assess the effectiveness of its efforts. Financial management. Sound financial management is crucial for responsible stewardship of federal resources. Traditionally, government financial systems and government managers have focused on tracking how agencies spend their budgets rather than on assessing the costs of activities to achieve efficiencies. More recently, however, some government agencies have adopted cost accounting systems that track the cost of providing a service--in NTSB's case, an accident investigation. In 2006, GAO recommended that NTSB develop a cost accounting system to track the amount of time employees spend on each investigation and other activities. This approach allows management to link the cost of providing a service directly with the budget and allocate resources based on those costs. To determine the costs associated with conducting accident investigations, NTSB launched a time and attendance program tied to its cost accounting platform that allows the agency to collect and analyze labor and certain other costs associated with individual investigations. Investigators account for their time on investigations through the time and attendance system using specific codes that identify different investigations. Our analysis found varying degrees of improvement associated with NTSB's actions in each of the management and operational areas we selected for review. Our analysis showed that NTSB improved the utilization of the Training Center, which allowed it to recover a larger portion of its operating costs. NTSB has increased utilization of both classroom and non-classroom space at the Training Center since we conducted our work in 2006. NTSB subleased all available office space at the Training Center to the Federal Air Marshal Service in 2007, and utilization of non-classroom spaces has been at 95 percent since then. At the same time, NTSB increased utilization of the classroom space, increasing its own use of classrooms, subleasing approximately one-third of the classroom space to the Department of Homeland Security in 2008, and providing short-term leases to other outside parties for classroom use. NTSB reported that classroom utilization rose from less than 10 percent in 2005 to 18 percent in fiscal year 2007. By fiscal year 2009, it had increased to over 60 percent--the target we identified in our 2008 report as the appropriate minimum level. Classroom utilization has remained above 60 percent through fiscal year 2012. We also found that improved Training Center utilization generated additional revenue over time, which allowed NTSB to recover a larger portion of the facility's operating costs. When the Training Center first opened in fiscal year 2004, NTSB recovered about 4 percent of its operating costs, resulting in a deficit of nearly $6.3 million. Portions of the Training Center's costs that are not covered by revenues from tuition and other sources such as facility rentals are offset by general appropriations to the agency; therefore, generating additional revenue makes those appropriated funds available for other uses. In 2011, NTSB indicated that it was committed to improving cost recovery at the Training Center.
That year, the agency set a goal of recovering Training Center costs within 10 percent of the amount it had recovered the previous fiscal year. For example, in fiscal year 2010, NTSB recovered $2 million in operating costs, so the fiscal year 2011 goal was to recover at least $1.8 million of the Training Center's costs. NTSB achieved its goal in fiscal years 2011 and 2012, by which time the agency was recovering about half of the Training Center's operating costs, reducing the operating deficit at the Training Center to $2.1 million, one-third of what it was in 2004. (See fig. 1 for changes in the Training Center's expenses and revenues.) The automation of NTSB's recommendation follow-up process has reduced the amount of time it takes to formally respond to agencies about whether planned actions to implement an NTSB recommendation are acceptable. In fiscal year 2010, NTSB deployed the previously described CNS to manage the Board's correspondence, including accident reports, safety studies, recommendation transmittals, and public notice responses. CNS allows the relevant modal offices and the Research and Engineering Office to simultaneously review and assess planned actions to address NTSB recommendations. According to NTSB officials, prior to the implementation of CNS, the average time NTSB took to respond to an agency's proposals to address an NTSB recommendation was 216 days. After CNS was implemented, that figure dropped to 115 days--a reduction of 47 percent. At the same time, the number of responses the agency issued each quarter also increased. (See fig. 2.) NTSB officials have indicated that they have an internal goal to further reduce the response time to 90 days on average. Our analysis of NTSB's efforts to improve employee and management communication indicated uneven results; specifically, we observed improvements in some but not all measures. We reviewed NTSB employees' responses to the three federal survey questions we determined related to employees' perceptions about managers' communication, as described below: Managers communicate the goals and priorities of the organization. NTSB respondents increased their positive responses to this question, from about 49 percent in 2004 to 57 percent in 2012. (See fig. 3.) We compared NTSB employees' responses with those of employees in a group of small federal and independent agencies and found that NTSB employees' satisfaction level increased, while the proportion of employees from small agencies responding positively to this question during the same period was relatively unchanged, from 57 percent in 2004 to 59 percent in 2012. How satisfied are you with the information you receive from management on what's going on in your organization? Responses to this question indicated an increase in the level of satisfaction, from 44 percent in 2004 to 49 percent in 2012. NTSB employees' responses were similar to the positive responses by employees from small agencies, which showed an increase from 44 percent to 50 percent. Managers promote communication among different work units (for example, about projects, goals, needed resources). The proportion of respondents reporting positive responses on this survey question from 2004 to 2012 was relatively unchanged, from 48 percent to 50 percent. Similarly, there was little change in the proportion of positive responses reported by federal employees from small agencies, from 50 percent in 2004 to 49 percent in 2012.
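The Training Center cost-recovery goal and the CNS response-time improvement described earlier in this section come down to simple percentage arithmetic. The following minimal Python sketch works through both using the figures reported above; the goal formula (recovering at least 90 percent of the prior fiscal year's recovered amount) is our reading of the "within 10 percent" target, and the function names are illustrative rather than part of any NTSB system.

```python
# Minimal sketch of the arithmetic described above. Figures come from the
# passage; the goal formula (recover at least 90 percent of the prior fiscal
# year's recovered costs) is our reading of "within 10 percent of the
# previous fiscal year," and the function names are illustrative.

def cost_recovery_goal(prior_year_recovered: float, tolerance: float = 0.10) -> float:
    """Minimum amount to recover this year under the 10-percent goal."""
    return prior_year_recovered * (1.0 - tolerance)

def percent_reduction(before: float, after: float) -> float:
    """Percentage reduction from 'before' to 'after'."""
    return (before - after) / before * 100.0

# Training Center: FY 2010 recovered $2.0 million, so the FY 2011 goal is $1.8 million.
fy2011_goal = cost_recovery_goal(2_000_000)

# Recommendation close-out: average response time fell from 216 to 115 days.
reduction = percent_reduction(216, 115)

print(f"FY 2011 recovery goal: ${fy2011_goal:,.0f}")        # $1,800,000
print(f"Response-time reduction: {reduction:.0f} percent")  # about 47 percent
```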
NTSB officials stated that their internal communication surveys, which the agency administered 2009 through 2011, provided information that helped them identify continuing barriers to employee and management communication. In 2012, NTSB developed an action plan in this area that included detailed activities, target dates, and regular status reports. Furthermore, because of lingering concerns, NTSB continues to monitor employees' views about employee and management communication to address any remaining weaknesses. Our analysis of outcomes associated with NTSB's efforts to improve its diversity management program indicated uneven results, with indications of improvements in some measures but not all. We reviewed NTSB employees' responses to the three federal survey questions that we determined related to employees' perceptions about managers' diversity and inclusiveness efforts, as described below: My supervisor/team leader is committed to a workforce representative of all segments of society. The federal survey of NTSB employees indicated an increase in the positive responses, from 54 percent to about 71 percent. (See fig. 4.) We compared NTSB employees' responses to those of employees in a group of small federal and independent agencies and found that NTSB employees were more positive in 2012 than the employees from small agencies, whose positive responses rose from 57 percent in 2004 to 69 percent in 2012. Policies and programs promote diversity in the workplace. Again, NTSB employees' responses to this question indicated an increase from 2004 to 2012, from 55 percent to 73 percent. NTSB employees' satisfaction level exceeded that of employees from small agencies, whose positive responses to this question stayed about the same, from 56 percent in 2004 to 57 percent in 2012. Managers/supervisors/team leaders work well with employees of different backgrounds. The proportion of NTSB employees reporting positive responses on this survey question from 2004 to 2012 declined from 66 percent to 60 percent. The decline in NTSB employees' level of satisfaction was greater than that shown by employees from small agencies, whose positive responses also declined from 66 percent in 2004 to 63 percent in 2012. One of the potential outcomes of a robust diversity management program is an increase in the diversity of the workforce. Based on our analysis of NTSB's workforce diversity data, we found that the proportion of white employees in NTSB's workforce declined from 77 percent in 2008 (289 employees) to 73 percent (293 employees) in 2012. (See table 1.) NTSB's total workforce increased 6 percent over the same period, from 378 in 2008 to 402 in fiscal year 2012. The proportion of women remained roughly the same at about 40 percent, as did the proportion of African American employees at 17 percent, and Hispanic employees at 2 percent of the total NTSB workforce. Conversely, NTSB increased the number of employees who are Native American although these employees represent only 2 percent of the overall workforce. We compared these figures to those representing comparative groups in the civilian labor force and found that NTSB's labor force had a larger proportion of some minority groups (e.g., African American) and smaller proportion of other groups (e.g., Hispanic) than the civilian labor force. Roughly half of NTSB's workforce performs investigations and investigation-related work--work directly related to NTSB's core mission. 
In 2008, there were 190 investigators and related staff; in 2010, there were 198; and in 2012, there were 206. Based on our analysis, we found that the proportion of investigator and investigation-related staff who were white was about 90 percent over the period of 2008 to 2012, and the proportion of women was about 19 percent. We compared these figures to those representing comparative groups in the civilian labor force and found that NTSB's investigator and investigation-related workforce had a smaller proportion of minority group members (including African American, Asian, and Hispanic employees) and of women than the civilian labor force. In addition, NTSB reported that from fiscal years 2008 to 2012, it had no minority group members among its 15 senior executives, although the number of women increased from 3 to 4. Despite its efforts, NTSB has not been able to appreciably change its diversity profile for minority group members and women. However, as mentioned previously, NTSB has taken steps to implement initiatives as a result of its diversity management strategy, including its recently completed diversity and inclusiveness survey, which the agency plans to use to identify gaps in its diversity and inclusiveness efforts and to benchmark future progress. It is too soon to tell whether initiatives, such as its recruitment strategies, will lead to additional changes in its workforce diversity profile. We were unable to determine whether NTSB's cost accounting system had improved the agency's ability to make operational decisions because it has not yet fully utilized the system for its intended purposes. For the implementation of a cost accounting system to be effective, it must be tailored to the needs of the organization, be a tool managers can use to make everyday decisions, and be based on sound data that capture time spent on all activities, such as investigations and training. In 2011, NTSB implemented a cost accounting system that includes a time and attendance program in response to a GAO recommendation. This program allows an investigator to assign his or her time to specific investigations through a series of codes, allowing NTSB to assess the cost of conducting investigations rather than simply tracking and managing a budget. NTSB officials stated that the time and attendance system has allowed the agency to obtain information about the cost of investigations more efficiently than its prior method. In a May 2011 advisory, NTSB management envisioned that the cost accounting system would enable NTSB to measure and compare performance with other organizations, and that the data from the system would help the agency monitor and improve productivity and mission effectiveness by better utilizing personnel resources. However, officials provided no time frame for when the data might be used by management for making resource and operational decisions. NTSB officials stated that they are currently focused on ensuring the quality of the time and attendance data before developing goals, targets, and management tools or using such information to make resource or operational decisions. While ensuring the quality of data is a necessary step in fully implementing a cost accounting system, it has been over 2 years since NTSB first began collecting time and attendance data to establish the cost of conducting investigations, and NTSB is still not using the system's full capabilities. Thus, NTSB has not yet fully achieved its vision of using the data to improve labor productivity and mission effectiveness.
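The time-and-attendance approach described above boils down to tagging each labor hour with a charge code and rolling the hours up into a cost per investigation. The sketch below is a hypothetical Python illustration of that aggregation step; the record layout, charge codes, and hourly rates are assumptions made for the example, not NTSB's actual system or data.

```python
# Hypothetical sketch of rolling time-and-attendance records up into a labor
# cost per charge code (e.g., per investigation). The record layout, codes,
# and rates below are illustrative assumptions, not NTSB's actual system.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TimeEntry:
    employee_id: str
    charge_code: str    # an investigation identifier, "TRAINING", etc.
    hours: float
    hourly_rate: float  # loaded labor rate for the employee

def labor_cost_by_code(entries: list[TimeEntry]) -> dict[str, float]:
    """Sum labor cost (hours x rate) for each charge code."""
    totals: dict[str, float] = defaultdict(float)
    for entry in entries:
        totals[entry.charge_code] += entry.hours * entry.hourly_rate
    return dict(totals)

entries = [
    TimeEntry("E-101", "INVESTIGATION-001", 40.0, 85.0),
    TimeEntry("E-102", "INVESTIGATION-001", 24.0, 92.0),
    TimeEntry("E-101", "TRAINING", 8.0, 85.0),
]
print(labor_cost_by_code(entries))
# {'INVESTIGATION-001': 5608.0, 'TRAINING': 680.0}
```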
NTSB has implemented a cost accounting system, but effective utilization is required to achieve the long-term rewards of those efforts. Although NTSB has been collecting data from this system to account for costs of investigations, it has not yet developed a management strategy that would allow the agency to maximize the utility of the cost accounting system. This has prevented NTSB from using that information to make the decisions necessary to better manage its labor resources. Without fully utilizing the cost accounting system, NTSB will not achieve the intended benefits of improving labor productivity and mission effectiveness. NTSB needs to continue its improvement efforts in each of the five areas discussed in this report. Further, to improve financial management and provide information to managers for operational decisions, we recommend that the Chairman of the NTSB direct senior management to develop a strategy for maximizing the utility of NTSB's cost accounting system. We provided a draft of this report to NTSB for its review and comment. The agency provided written comments (see app. II). NTSB agreed with our recommendation and provided technical clarifications that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Chairman of the National Transportation Safety Board. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834. Contact points for Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objective in this review was to assess whether there have been management and operational improvements associated with the National Transportation Safety Board's (NTSB) actions in areas where GAO had conducted previous work and made recommendations. In that previous work, GAO made 21 recommendations within 12 management and operational areas. Some areas have more than one recommendation. For example, the communication area has two related recommendations; the first called for NTSB to develop mechanisms to facilitate communication from staff to management and the other called for NTSB to report to Congress on the status of the GAO recommendations. To determine whether there were any associated improvements based on NTSB's actions, we first identified a subset of the 12 original management and operational areas that were included in NTSB's 2013-2016 Strategic Plan. Being highlighted in NTSB's current strategic plan indicates that these areas were relevant and important areas for the agency. We identified 7 of the 12 original management and operational areas in NTSB's current strategic plan. These 7 areas contained 13 of the 21 original recommendations. Many of the 13 recommendations within these areas required only a single action by NTSB to implement (e.g. 
develop a strategic plan that follows performance-based practices, articulate risk-based criteria for selecting which accidents to investigate, or obtain authority to use appropriations to make lease payments in order to correct its violation of the Anti-Deficiency Act), while others included strategies involving continual action and monitoring over time in order to achieve desired improvements (e.g., increasing utilization of the Training Center and improving the recommendation close-out process). It is these latter types of recommendations that we focused on in this review, and based on these criteria, we identified 5 for review: increase the utilization of the Training Center; improve the process for changing the status of recommendations through computerization and concurrent review; develop mechanisms to facilitate communication from staff to management; develop strategies for diversity management as part of the human capital plan; and develop a full cost accounting system to track time employees spend on each investigation and in training. We identified a sixth recommendation--maximize the delivery of core curriculum for each mode at the Training Center--based on our selection criteria. However, we did not include this recommendation in our review because NTSB lacked information about employee training and we could not identify an outcome measure without such data. To identify the outcome measures to assess changes associated with NTSB's actions, we used information from prior GAO reports; information from NTSB, including strategic plans, workforce reports, and financial reports; and information gathered from interviews with NTSB officials. The measures we identified for each recommendation are: (1) Training Center utilization--utilization of classroom and non-classroom space and operating deficit; (2) recommendation close-out process--average time to respond to agency proposals; (3) employee and management communication--employee responses to the Office of Personnel Management's (OPM) federal employee surveys; (4) diversity management--employee responses to OPM's federal employee surveys and NTSB employment levels of women and members of racial and ethnic groups; and (5) financial management--cost accounting reports used to measure performance. To measure whether there was improvement in the outcomes and results associated with NTSB's actions, we compared current conditions with the conditions at the time we performed our previous work (or, in some instances, earlier) in order to establish a baseline before actions occurred. We used NTSB's financial and program data, employee survey data from OPM, and workforce data from NTSB and the Bureau of Labor Statistics (BLS). To ensure the data used were of sufficient reliability for our analysis, we examined program reporting procedures and quality assurance controls, and discussed various data elements with knowledgeable agency officials. We also spoke with NTSB officials who were knowledgeable about operations and management in these five selected areas. In addition to the contact named above, H. Brandon Haller (Assistant Director), Christopher Jones, Gail Marnik, Josh Ormond, Amy Rosewarne, and Jack Warner made key contributions to this report. National Transportation Safety Board: Implementation of GAO Recommendations. GAO-12-306R. Washington, D.C.: January 6, 2012. National Transportation Safety Board: Issues Related to the 2010 Reauthorization. GAO-10-366T.
Washington, D.C.: January 27, 2010. National Transportation Safety Board: Reauthorization Provides an Opportunity to Focus on Implementing Leading Management Practices and Addressing Human Capital and Training Center Issues. GAO-10-183T. Washington, D.C.: October 29, 2009. National Transportation Safety Board: Progress Made in Management Practices, Investigation Priorities, Training Center Use, and Information Security, But These Areas Continue to Need Improvement. GAO-08-652T. Washington, D.C.: April 23, 2008. National Transportation Safety Board: Observations on the Draft Business Plan for NTSB's Training Center. GAO-07-886R. Washington, D.C.: June 14, 2007. National Transportation Safety Board: Progress Made, Yet Management Practices, Investigation Priorities, and Training Center Use Should Be Improved. GAO-07-118. Washington, D.C.: November 22, 2006.
The NTSB plays a vital role in transportation safety. It is charged with investigating all civil aviation accidents in the United States and selected accidents in other transportation modes, determining the probable cause of these accidents, and making appropriate recommendations, as well as performing safety studies. In 2006, NTSB's reauthorization legislation mandated that GAO annually evaluate its programs. From 2006 to 2008, GAO made 21 recommendations to NTSB aimed at improving management and operations across several areas. Since that time, NTSB has taken action to address all 21 recommendations. Some of these required only a single action to complete, whereas others required continuing effort to achieve operational improvement. For this review, GAO examined the extent to which desired outcomes are being achieved in five areas where continuing effort was necessary. GAO analyzed workforce, financial, and program data, and interviewed agency officials about actions NTSB has taken. GAO's analysis found varying degrees of improvement associated with the National Transportation Safety Board's (NTSB) actions in areas selected for review. * Training Center utilization. NTSB increased utilization of its Training Center--both non-classroom and classroom space--since 2006. NTSB has also set and achieved its cost recovery goal at the Training Center in the last 2 fiscal years, allowing NTSB to recover about half of the center's operating costs. * Recommendation close-out process. By automating the recommendation follow-up process, NTSB has reduced by about 3 months the amount of time it takes to respond to agencies on whether planned actions to implement NTSB recommendations are acceptable; this allows agencies to move forward with approved actions sooner than under NTSB's former paper-driven process. * Communication. NTSB employees' responses on federal employee surveys from 2004 to 2012 indicated an increase from 49 to 57 percent in employees' positive responses regarding managers' communication about agency goals, and from 44 to 49 percent regarding the amount of information received. GAO compared NTSB employees' responses to those of employees from a group of small agencies and found that NTSB employees' satisfaction level was about the same or more positive depending on the question. NTSB officials continue to monitor employees' views about communication to address any remaining concerns. * Diversity management. NTSB employees' positive responses to the federal employee survey questions about managers' commitment to diversity and NTSB's diversity policies and programs increased from about 54 percent to over 70 percent from 2004 to 2012. However, employees' positive responses to the question about managers' ability to work well with employees with different backgrounds declined 6 percentage points over the same period. In addition, the proportion of minority and women employees in NTSB's workforce, including in its investigator staff, showed little appreciable change over the period 2008 to 2012. NTSB's workforce had a smaller proportion of some minority groups than the civilian labor force. NTSB officials are using results from their recent diversity survey to identify gaps in their diversity management efforts and to benchmark future progress. It is too soon to tell whether NTSB's actions will lead to additional changes in its workforce diversity profile. * Financial management.
To improve operational effectiveness, NTSB has implemented a cost accounting system that includes a time and attendance program to track staff hours and costs related to accident investigations. NTSB is currently focused on ensuring the quality of the time and attendance data, but has not yet developed a strategy to maximize the utility of its cost accounting system for making resource and operational decisions. Thus, NTSB has not yet fully achieved its vision of using the data to improve labor productivity and mission effectiveness. In each of the five areas, NTSB needs to continue its improvement efforts. Further, GAO recommends that NTSB senior managers develop a strategy for maximizing the utility of NTSB's cost accounting system. GAO provided a draft of this report to officials at NTSB. NTSB officials concurred with the recommendation and provided technical comments, which GAO incorporated as appropriate.
5,480
855
DOE has broadly indicated the direction of the LGP but has not developed all the tools necessary to evaluate progress. DOE officials have identified a number of broad policy goals that the LGP is intended to support, including helping to ensure energy security, mitigate climate change, jumpstart the alternative energy sector, and create jobs. Additionally, through DOE's fiscal year 2011 budget request and a mission statement for the LGP, the department has explained that the program is intended to support the "early commercial production and use of new or significantly improved technologies in energy projects" that "avoid, reduce, or sequester air pollutants or anthropogenic emissions of greenhouse gases, and have a reasonable prospect of repaying the principal and interest on their debt obligations." To help operationalize such policy goals efficiently and effectively, principles of good governance identified in our prior work on GPRA indicate that agencies should develop associated performance goals and measures that are objective and quantifiable. These performance goals and measures are intended to allow comparison of programs' actual results with the desired results. Each program activity should be linked to a performance goal and measure unless such a linkage would be infeasible or impractical. DOE has linked the LGP to two departmentwide performance goals: "Double renewable energy generating capacity (excluding conventional hydropower) by 2012." "Commit (conditionally) to loan guarantees for two nuclear power facilities to add new low-carbon emission capacity of at least 3,800 megawatts in 2010." DOE has also established nine performance measures for the LGP (see app. II). However, the departmentwide performance goals are too few to reflect the full range of policy goals for the LGP. For example, there is no measurable performance goal for job creation. The performance goals also do not reflect the full scope of the program's authorized activities. For example, as of April 2010, DOE had issued two conditional commitments for energy efficiency projects--as authorized in legislation--but the energy efficiency projects do not address either of the performance goals because the projects are expected to generate little or no renewable energy and are not associated with nuclear power facilities. Given the lack of sufficient performance goals, DOE cannot be sure that the LGP's performance measures are appropriate. Thus, DOE lacks the foundation to assess the program's progress, and more specifically, to determine whether the projects it supports with loan guarantees contribute to achieving the desired results. As the LGP's scope and authority have increased, the department has taken a number of steps to implement the program for applicants. For example, DOE has substantially increased the LGP's staff and in-house expertise, and applicants we interviewed have commended the LGP staff's professionalism. DOE officials indicated that, prior to 2008, staffing was inadequate to review applications, but since June 2008, the LGP's staff has increased from 12 federal employees to more than 50, supported by over 40 full-time contractor staff. Also, the LGP now has in-house legal counsel and project finance expertise, which have increased the program's capacity to evaluate proposed projects. In addition, in November 2009, the Secretary named an Executive Director, reporting directly to the Secretary, to oversee the LGP and to accelerate the application review process. 
Other key steps that DOE has taken include the following: DOE has identified a list of external reviewers qualified to perform legal, engineering, financial, and marketing analyses of proposed projects. Identifying these external reviewers beforehand helps to ensure that DOE will have the necessary expertise readily available during the review process. DOE officials said that the department has also expedited the procurement process for hiring these external reviewers. DOE developed a credit policies and procedures manual for the LGP. Among other things, the manual contains detailed internal policies and procedures that lay out requirements, criteria, and staff responsibilities for determining which proposed projects should receive loan guarantees. DOE revised the LGP's regulations after receiving information from industry concerning the wide variety of ownership and financing structures that applicants or potential applicants would like to employ in projects seeking loan guarantees. Among other things, the modifications allow for ownership structures that DOE found are typically employed in utility-grade power plants and are commonly proposed for the next generation of nuclear power generation facilities. DOE obtained OMB approval for its model to estimate credit subsidy costs. The model is a critical tool needed for the LGP to proceed with issuing loan guarantees because it will be used to calculate each loan guarantee's credit subsidy cost and the associated fee, if any, that must be collected from borrowers. (We are evaluating DOE's process and key inputs for estimating credit subsidy costs in other ongoing work.) Notwithstanding these actions, the department is implementing the program in a way that treats applicants inconsistently, lacks systematic mechanisms for applicants to appeal its decisions or for applicants to provide feedback to DOE, and risks excluding some potential applicants unnecessarily. Specifically, we found the following: DOE has treated applicants inconsistently. Although our past work has shown that agencies should process applications with the goals of treating applicants fairly and minimizing applicant confusion, DOE's implementation of the program has favored some applicants and disadvantaged others in a number of ways. First, we found that, in at least five of the ten cases in which DOE made conditional commitments, it did so before obtaining all of the final reports from external reviewers, allowing these applicants to receive conditional commitments before incurring expenses that other applicants were required to pay. Before DOE makes a conditional commitment, LGP procedures call for engineering, financial, legal, and marketing reviews of proposed projects as part of the due diligence process for identifying and mitigating risk. If DOE lacks the in-house capability to conduct the reviews, external reviews are performed by contractors paid for by applicants. In one of the cases we identified in which DOE deviated from its procedures, it made a conditional commitment before obtaining any of the external reports. DOE officials told us this project was fast-tracked because of its "strong business fundamentals" and because DOE determined that it had sufficient information to proceed. 
However, it is unclear how DOE could have had sufficient information to negotiate the terms of a conditional commitment without completing the types of reviews generally performed during due diligence, and proceeding without this information is contrary to the department's procedures for the LGP. Second, DOE treats applicants with nuclear projects differently from applicants proposing projects that employ other types of technologies. For example, DOE allows applicants with nuclear projects that have not been selected to begin the due diligence process to remain in a queue in case the LGP receives additional loan guarantee authority, while applicants with projects involving other types of technologies that have not been selected to begin due diligence are rejected (see app. III). In order for applicants whose applications were rejected to receive further consideration, they must reapply and again pay application fees, which range from $75,000 to $800,000 (see app. IV). DOE also provided applicants with nuclear generation projects information on how their projects ranked in comparison with others before they submitted part II of the application and 75 percent of the application fees. DOE did not provide rankings to applicants with any other types of projects. DOE officials said that applicants with nuclear projects were allowed to remain in a queue because of the expectation that requests would substantially exceed available loan guarantee authority and that the applications would be of high quality. According to DOE officials, they based this expectation on information available about projects that are seeking licenses from the Nuclear Regulatory Commission. DOE officials also explained that they ranked nuclear generation projects for similar reasons--and also to give applicants with less competitive projects the chance to drop out of the process early, allowing them to avoid the expense involved in applying for a loan guarantee. However, all of the solicitations issued through 2008 initially received requests that exceeded the available loan guarantee authority (see app. V), so nuclear projects were not unique in that respect. In addition, applicants with coal-based power generation and industrial gasification facility projects paid application fees equivalent to those paid by applicants with nuclear generation projects but were not given rankings prior to paying the second application fee (see app. IV). To provide EERE applicants with earlier feedback on the competitiveness of their projects, DOE instituted a two-part application for the 2009 EERE solicitation--a change from the 2008 EERE solicitation. DOE officials stated that they made this change based on lessons learned from the 2008 EERE solicitation. While this change appears to reduce the disparity in treatment among applicants, it remains to be seen whether DOE will make similar changes for projects that employ other types of technologies. Third, DOE has allowed one of the front-end nuclear facility applicants that we contacted additional time to meet technical and financial requirements, including requirements for evidence that the technology is ready to move to commercial-scale operations, but DOE has rejected applicants with other types of technologies for not meeting similar technical and financial criteria. DOE has not provided analysis or documentation explaining why additional time was appropriate for one project but not for others. 
DOE lacks systematic mechanisms for applicants to appeal its decisions or provide feedback to DOE. In its solicitations, DOE states that a rejection is "final and non-appealable." Once a project has been rejected, the only administrative option left to an applicant under DOE's documented procedures is to reapply and incur all of the associated costs. Nevertheless, DOE said that, as a courtesy, it had rereviewed certain rejected applications. Some applicants did not know that DOE would provide such rereviews, which appear contrary to DOE's stated policy and have been conducted on an ad hoc basis. DOE also lacks a systematic mechanism for soliciting, evaluating, and incorporating feedback from applicants about its implementation of the program. Our past work has shown that agencies should solicit, evaluate, and incorporate feedback from program users to improve programs. Unless they do so, agencies may not attain the levels of user satisfaction that they otherwise could. For example, during our interviews with applicants, more than half said they received little information about the timing or status of application reviews. Applicants expressed a desire for more information about the status of DOE's reviews and said that not knowing when a loan guarantee might be issued created difficulties in managing their projects--for example, in planning construction dates, knowing how much capital they would need to sustain operations, and maintaining support for their projects from internal stakeholders. According to DOE officials, the department has reached out to stakeholders through its Web site, presentations to industry groups and policymakers, and other means. DOE has also indicated that it has changed the program to make it more user-friendly, based on lessons learned and applicant feedback. For example, unlike the 2008 EERE solicitation, the 2009 EERE solicitation includes rolling deadlines that give applicants greater latitude in when to submit their applications; a simplified part I application that provides a mechanism for DOE to give applicants early feedback on whether their projects are competitive; and delayed payment of the bulk of the "facility fee" that DOE charges applicants to cover certain program costs. While DOE said that these changes were based, in part, on feedback from applicants, because DOE has no systematic way of soliciting applicant feedback, the department has no assurance that the views obtained through its outreach efforts are representative, particularly since the means that DOE uses to obtain feedback do not guarantee anonymity. The department also has no assurance that the changes made in response to feedback are effectively addressing applicant concerns. DOE risks excluding some potential applicants. Even though the Recovery Act requires that applicants begin construction by the end of fiscal year 2011 to qualify for Recovery Act funding, DOE has not yet issued solicitations for the full range of projects eligible for Recovery Act funding under section 1705. DOE has issued two solicitations specific to the Recovery Act for the LGP, but neither invites applications for commercial manufacturing projects, which are eligible under the act. While DOE has announced that it will issue an LGP solicitation for commercial manufacturing projects, it has given no date for doing so. 
The 2009 EERE solicitation provided an opportunity for some manufacturing applicants to receive Recovery Act funding, but because DOE combined the Recovery Act's requirements with the original section 1703 requirements, applicants with commercial manufacturing projects were excluded. DOE officials told us that they combined the requirements to ensure that projects that are initially eligible under section 1705 but that fail to start construction by the deadline can remain in the LGP under section 1703. DOE has made substantial progress in building a functional program for issuing loan guarantees under Title XVII of EPAct; however, it may not fully realize the benefits envisioned for the LGP until it further improves its ability to evaluate and implement the program. Since 2007, we have been reporting on DOE's lack of tools necessary to evaluate the program and process applications and recommending that the department take steps to address these areas. While DOE has identified broad policy goals and developed a mission statement for the program, it will lack the ability to implement the program efficiently and effectively and to evaluate progress in achieving these goals and mission until it develops corresponding performance goals. As a practical matter, without such goals, DOE will also lack a clear basis for determining whether the projects it decides to support with loan guarantees are helping achieve the desired results, potentially undermining applicants' and the public's confidence in the legitimacy of those decisions. Such confidence could also be undermined by implementation processes that do not treat applicants consistently--unless DOE has clear and compelling grounds for disparate treatment--particularly if DOE skips steps in its review process prior to issuing conditional commitments or rereviews rejected applications for some applicants without having an administrative appeal process. Furthermore, while DOE has taken steps to increase applicants' satisfaction with the program, it cannot determine the effectiveness of those efforts without systematic feedback from applicants that preserves their anonymity. To improve DOE's ability to evaluate and implement the LGP, we recommend that the Secretary of Energy take the following four actions: Direct the program management to develop relevant performance goals that reflect the full range of policy goals and activities for the program, and to the extent necessary, revise the performance measures to align with these goals. Direct the program management to revise the process for issuing loan guarantees to clearly establish what circumstances warrant disparate treatment of applicants so that DOE's implementation of the program treats applicants consistently unless there are clear and compelling grounds for doing otherwise. Direct the program management to develop an administrative appeal process for applicants who believe their applications were rejected in error and document the basis for conclusions regarding appeals. Direct the program management to develop a mechanism to systematically obtain and address feedback from program applicants, and, in so doing, ensure that applicants' anonymity can be maintained, for example, by using an independent service to obtain the feedback. We provided a draft of this report to DOE for review and comment. 
In its written comments, DOE stated that it recognizes the need for continuous improvement to its Loan Guarantee Programs as those programs mature but neither explicitly agreed nor disagreed with our recommendations. In one instance, DOE specifically disagreed with our findings: the department maintained that applicants are treated consistently within solicitations. Nevertheless, the department stated that it is taking steps to address concerns identified in our report. Specifically, DOE pointed to the following recent or planned actions: Performance goals and measures. DOE stated that, in the context of revisions to its strategic plan, the department is revisiting the performance goals and measures for the LGP to better align them with the department's policy goals of growing the green economy and reducing greenhouse gases from power generation. Consistent treatment of applicants. DOE recognized the need for greater transparency to avoid the perception of inconsistent treatment and stated that it will ensure that future solicitations explicitly describe circumstances that would allow streamlined consideration of loan guarantee applications. Appeals. DOE indicated that its process for rejected applications should be made more transparent and stated that the LGP continues to implement new strategies intended to reduce the need for any kind of appeals, such as enhanced communication with applicants including more frequent contact, and allowing applicants an opportunity to provide additional data at DOE's request to address deficiencies DOE has identified in applications. While these actions are encouraging, they do not fully address our findings, especially in the areas of appeals and applicant feedback. We continue to believe that DOE needs systematic mechanisms for applicants to appeal its decisions and to provide anonymous feedback. DOE's written comments on our findings and recommendations, along with our detailed responses, are contained in appendix VI. In addition to the written comments reproduced in that appendix, DOE provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To assess the extent to which the Department of Energy (DOE) has identified what it intends to achieve through the Loan Guarantee Program (LGP) and is positioned to evaluate progress, we reviewed and analyzed relevant provisions of Title XVII of the Energy Policy Act of 2005 (EPAct), the American Recovery and Reinvestment Act of 2009 (Recovery Act); DOE's budget request documents; and Recovery Act planning information, as well as other documentation provided by DOE. We discussed strategic planning and program evaluation with cognizant DOE officials from the LGP office, the Office of the Secretary of Energy, the Office of the Chief Financial Officer, and the Credit Review Board (CRB) that is charged with coordinating credit management and debt collection activities as well as overall policies and procedures for the LGP. 
As criteria, we used the Government Performance and Results Act (GPRA), along with our prior work on GPRA. To evaluate DOE's implementation of the LGP for applicants, we reviewed relevant legislation, such as EPAct and the Recovery Act; DOE's final regulations and concept of operations for the LGP; solicitations issued by DOE inviting applications for loan guarantees; DOE's internal project tracking reports; technical and financial review criteria for the application review process; minutes from CRB meetings held between February 2008 and November 2009; applications for loan guarantees; application rejection letters issued by DOE; and various other DOE guidance and procurement documents related to the process for issuing loan guarantees. We interviewed cognizant DOE officials from the LGP office, the Office of the Secretary of Energy, the Office of the Chief Financial Officer, the Office of Headquarters Procurement Services, and program offices that participated in the technical reviews of projects, including the Office of Electricity Delivery and Energy Reliability, the Office of Energy Efficiency and Renewable Energy, the Office of Nuclear Energy, and the National Energy Technology Laboratory (NETL). In addition, we interviewed 31 LGP applicants and 4 trade association representatives, using a standard list of questions for each group, to obtain a broad representation of views that we believe can provide insights to bolster other evidence supporting our findings. We selected the applicants and trade associations using a mix of criteria, including the amount of the loan guarantee requested and the relevant technology. As criteria, we used our prior work on customer service. We did not evaluate the financial or technical soundness of the projects for which applications were submitted. We conducted this performance audit from January 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOE has developed the following nine performance measures for the LGP: percentage of projects receiving DOE loan guarantees that have achieved and maintained commercial operations; contain the loss rate of guaranteed loans to less than 4 percent; contain the loss rate of guaranteed loans to less than 11.81 percent in fiscal year 2009 (11.85 percent for fiscal years 2010 and 2011) on a long-term portfolio basis; newly installed generation capacity from power generation projects receiving DOE loan guarantees; average cost per megawatt-hour for projects receiving DOE loan guarantees; forecasted greenhouse gas emissions reductions from projects receiving loan guarantees compared to 'business as usual' energy generation; forecasted air pollutant emissions (nitrogen oxides, sulfur oxides, and particulates) reductions from projects receiving loan guarantees compared to 'business as usual' energy generation; average review time of applications for Section 1705 guarantees; and percentage of conditional commitments issued to qualified applicants relative to plan. Appendix III: Application Review Process. [Table summarizing the application review process; the external reviews shown are required for projects with estimated total costs exceeding $25 million.]
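Two of the performance measures listed above are loss-rate containment thresholds. As a rough, hypothetical illustration of how such a measure could be checked, the sketch below divides cumulative losses on guaranteed loans by the total guaranteed amount and compares the result with the 4 percent threshold; the portfolio figures are invented, and DOE's actual calculation methodology (including how the "long-term portfolio basis" is defined) is not described in this report.

```python
# Hypothetical illustration of a loss-rate containment check. The portfolio
# figures and the threshold comparison are assumptions for illustration;
# DOE's actual calculation methodology is not described in this report.

def portfolio_loss_rate(losses: float, total_guaranteed: float) -> float:
    """Cumulative losses as a share of the total guaranteed amount."""
    return losses / total_guaranteed

# Invented example portfolio: $10 billion guaranteed, $250 million in losses.
rate = portfolio_loss_rate(250e6, 10e9)
threshold = 0.04  # "contain the loss rate of guaranteed loans to less than 4 percent"

print(f"Loss rate: {rate:.2%}")                 # 2.50%
print(f"Within threshold: {rate < threshold}")  # True
```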
Appendix IV: Standardized Fees Associated with Obtaining a Loan Guarantee, by Solicitation. [Table: application and facility fees by solicitation; the visible entries include fees of $600,000, 1/2 of 1 percent of the guaranteed amount, 1 percent of the guaranteed amount, $375,000 plus 0.75 percent of the guaranteed amount, and $1,625,000 plus 0.50 percent of the guaranteed amount for the 2008 and 2009 solicitations, which covered energy efficiency, renewable energy, and advanced transmission and distribution technologies (EERE); coal-based power generation and industrial gasification facilities; electric power transmission infrastructure projects; and commercial technology renewable energy generation projects under the Financial Institution Partnership Program (FIPP).] The 2006 mixed solicitation invited applications for all technologies eligible to receive loan guarantees under the Energy Policy Act of 2005 except for nuclear facilities and oil refineries. The following are GAO's comments on the Department of Energy's (DOE) letter dated June 17, 2010. 1. DOE appears to concur with the spirit of our recommendation. Best practices for program management indicate that DOE should have objective, quantifiable performance goals and targets for evaluating its progress in meeting the policy goals DOE has identified for the LGP. Such goals and targets are important tools for ensuring public accountability and effective program management. 2. Our finding about inconsistent treatment of LGP applicants is based on information obtained from applicants and corroborated by documents from DOE. In the instance we identified in which DOE made a conditional commitment before obtaining any of the required external reports, the external reviewers were not fully engaged until after DOE had negotiated the terms of the conditional commitment, which is contrary to DOE's stated procedures and provided an advantage to the applicant. Other applicants who received conditional commitments before completion of one or more of the reports called for by DOE's due diligence procedures also had a comparative advantage in that they were able to defer some review expenses until after DOE had publicly committed to their projects. We continue to believe that DOE should revise the process for issuing loan guarantees to treat applicants consistently unless there are clearly established and compelling grounds for making an exception. 3. We agree that there may be grounds for treating applicants differently depending on the type of technology they employ but do not believe that DOE has adequately explained the basis for the differences among the solicitations. For example, DOE's response does not address the possibility that the lack of ranking information for fossil energy projects, combined with the knowledge that the solicitation was significantly oversubscribed, could have factored into applicants' decisions to drop out of the process, especially given the relatively high fees associated with submitting part II of the application. 4. We disagree that DOE's current process for rereviewing rejected applications is working.
As we state in our report, some applicants did not know that DOE would provide rereviews. While we are encouraged by DOE's efforts to reduce the need for appeals, we believe that an administrative appeal process would allow DOE to better plan and manage its use of resources on rejected applications. 5. We applaud DOE's efforts to reach out to stakeholders and to use lessons learned to improve procedures and increase efficiencies and effectiveness. However, we continue to believe that DOE needs a systematic mechanism for applicants to provide anonymous feedback, whether through use of a third party or other means that preserves confidentiality. Several applicants we interviewed expressed concern that commenting on aspects of DOE's implementation of the LGP could adversely affect their current or future prospects for receiving a loan guarantee. Systematically obtaining and addressing anonymous feedback could enhance DOE's efforts to improve procedures and increase efficiencies and effectiveness. In addition to the individual named above, Karla Springer, Assistant Director; Marcia Carlsen; Nancy Crothers; Marissa Dondoe; Brandon Haller; Whitney Jennings; Cynthia Norris; Daniel Paepke; Madhav Panwar; Barbara Timmerman; and Jeremy Williams made key contributions to this report.
Since the Department of Energy's (DOE) loan guarantee program (LGP) for innovative energy projects was established in Title XVII of the Energy Policy Act of 2005, its scope has expanded both in the types of projects it can support and in the amount of loan guarantee authority available. DOE currently has loan guarantee authority estimated at about $77 billion and is seeking additional authority. As of April 2010, it had issued one loan guarantee for $535 million and made nine conditional commitments. In response to Congress' mandate to review DOE's execution of the LGP, GAO assessed (1) the extent to which DOE has identified what it intends to achieve through the LGP and is positioned to evaluate progress and (2) how DOE has implemented the program for applicants. GAO analyzed relevant legislation, prior GAO work, and DOE guidance and regulations. GAO also interviewed DOE officials, LGP applicants, and trade association representatives. DOE has broadly indicated the program's direction but has not developed all the tools necessary to assess progress. DOE officials have identified a number of broad policy goals that the LGP is intended to support, including helping to mitigate climate change and create jobs. DOE has also explained, through agency documents, that the program is intended to support early commercial production and use of new or significantly improved technologies in energy projects that abate emissions of air pollutants or of greenhouse gases and have a reasonable prospect of repaying the loans. GAO has found that to help operationalize such policy goals efficiently and effectively, agencies should develop associated performance goals that are objective and quantifiable and cover all program activities. DOE has linked the LGP to two departmentwide performance goals, namely to (1) double renewable energy generating capacity by 2012 and (2) commit conditionally to loan guarantees for two nuclear power facilities to add a specified minimum amount of capacity in 2010. However, the two performance goals are too few to reflect the full range of policy goals for the LGP. For example, there is no performance goal for the number of jobs that should be created. The performance goals also do not reflect the full scope of program activities; in particular, although the program has made conditional commitments to issue loan guarantees for energy efficiency projects, there is no performance goal that relates to such projects. Without comprehensive performance goals, DOE lacks the foundation to assess the program's progress and, more specifically, to determine whether the projects selected for loan guarantees help achieve the desired results. DOE has taken steps to implement the LGP for applicants but has treated applicants inconsistently and lacks mechanisms to identify and address their concerns. Among other things, DOE increased the LGP's staff, expedited procurement of external reviews, and developed procedures for deciding which projects should receive loan guarantees. However, GAO found: (1) DOE's implementation of the LGP has treated applicants inconsistently, favoring some and disadvantaging others. For example, DOE conditionally committed to issuing loan guarantees for some projects prior to completion of external reviews required under DOE procedures. Because applicants must pay for such reviews, this procedural deviation has allowed some applicants to receive conditional commitments before incurring expenses that other applicants had to pay. 
It is unclear how DOE could have sufficient information to negotiate conditional commitments without such reviews. (2) DOE lacks systematic mechanisms for LGP applicants to administratively appeal its decisions or to provide feedback to DOE on its process for issuing loan guarantees. Instead, DOE rereviews rejected applications on an ad hoc basis and gathers feedback through public forums and other outreach efforts that do not ensure the views obtained are representative. Until DOE develops implementation processes it can adhere to consistently, along with systematic approaches for rereviewing applications and obtaining and addressing applicant feedback, it may not fully realize the benefits envisioned for the LGP. GAO recommends that DOE develop performance goals reflecting the LGP's policy goals and activities; revise the loan guarantee process to treat applicants consistently unless there are clear, compelling grounds not to do so; and develop mechanisms for administrative appeals and for systematically obtaining and addressing applicant feedback. DOE said it is taking steps to address GAO's concerns but did not explicitly agree or disagree with the recommendations.
The existing NGA West campus consists of 15 facilities on 27 acres. Some of the buildings' original construction dates back to the early 1800s, and 22 acres of the site are on the National Register of Historic Places, according to NGA documents. From 2009 through 2010, NGA contracted with an independent firm to assess the condition of the existing NGA West. This assessment gave the facilities an overall condition rating of "poor," generally because of the insufficiency of the anti-terrorism and force protection measures, the average age of the structures, numerous code and accessibility shortfalls, and lack of seismic protection. As it neared completion of the NGA East headquarters consolidation in 2011, NGA focused its attention on the need to improve the operational capacity, security requirements, and modernization of its NGA West facilities. From approximately 2009 through 2012, NGA conducted a series of evaluations to inform its efforts to modernize NGA West. These analyses included a condition assessment of the existing facilities; an economic analysis of alternatives to evaluate the options of building a new facility ("build new"), fully renovating the existing facilities ("modernize"), or remaining in the current facilities with minimum essential repairs ("status quo"); and a qualitative analysis of non-cost considerations for the build new, modernize, and status quo options identified in the economic analysis. In 2012 NGA determined that a new NGA West would best meet the agency's mission and resource needs. After examining the options of renovating its current facility, leasing, or building a new government-owned facility, NGA determined that building a new, government-owned facility was the preferred option. NGA officials stated that they are in the process of soliciting design-build proposals and that the final design-build contract is planned for award near the end of fiscal year 2018. Construction is expected to begin around the summer of 2019.

We identified 22 best practices for an AOA process in October 2015, based on government and private-sector guidance and input from subject-matter experts. Many federal and industry guides have described approaches to analyses of alternatives; however, there was no single set of practices for the AOA process that was broadly recognized by both government and private-sector entities. We developed these best practices by (1) compiling and reviewing commonly mentioned AOA policies and guidance used by different government and private-sector entities and (2) incorporating experts' comments on a draft set of practices to develop a final set of practices. The 22 best practices are grouped into four characteristics that describe a high-quality, reliable AOA process and can be used to evaluate whether an AOA process meets the characteristics of being well-documented, comprehensive, unbiased, and credible. These practices can be applied to AOA processes for a broad range of capability areas, projects, and programs, including military construction projects and decision-making processes, in which an alternative must be selected from a set of possible options. In September 2016, we recommended that DOD develop guidance that requires the use of AOA best practices, including those practices we identified, when conducting AOA processes for certain types of military construction decisions. DOD did not concur with this recommendation and disagreed that these best practices apply to military construction decision-making processes.
We continue to believe that this recommendation is valid and that the principles demonstrated by the best practices--and the practices themselves--draw from related DOD and other practices. Our best practices also parallel those found in DOD and Air Force guidance on military construction and analysis for decision making. For example, according to an Air Force instruction governing the planning and programming of military construction projects, one of the required planning actions is to evaluate alternative solutions. According to a DOD directive pertaining to military construction, DOD must monitor the execution of its military construction program to ensure--among other things--that the program is accomplished in the most cost-effective way. This guidance for cost effectiveness aligns with our AOA best practice, Develop Life-cycle Cost Estimates, which focuses on providing decision makers with the information they need to assess the cost-effectiveness of alternatives. Further, DOD Instruction 7041.03, on economic analysis for decision making, contains numerous cost estimating principles and procedures that align with those called for in our AOA best practices. As we reported in 2016, these policy documents and instructions align with the general intent of our best practices, and there are many similarities between our best practices and the department's guidance. Additionally, in our previous work reviewing AOA processes for other national security facilities, agencies generally concurred with our recommendations to consider including our best practices in future guidance. For example, in 2014 we assessed three National Nuclear Security Administration construction projects and found each project's AOA partially met our best practices for conducting an AOA process. The Department of Energy agreed with our recommendation and has begun implementation.

NGA launched its search for a new NGA West site in 2012 with a site location study conducted by an outside real estate firm, and it concluded the search with the issuance of a record of decision in June 2016. The site location study included a check for existing federal sites that could accommodate NGA West's workforce and mission. This search resulted in a total of 186 sites being identified initially as possible options; the list was narrowed to 6 sites in the St. Louis metropolitan area for further study. During preliminary master planning, 4 of the 6 sites identified by the site location studies were determined to be suitable for further analysis to select the agency's preferred alternative. Three of these sites are in the Missouri cities of Fenton, Mehlville, and St. Louis, and one is in St. Clair County, Illinois, near Scott Air Force Base. See figure 1 for the geographic distribution of the 4 sites. The subsequent site selection process included an environmental impact statement as required by the National Environmental Policy Act of 1969, analysis of NGA and the Corps of Engineers' compliance with related DOD policies and other federal laws and requirements, preliminary master planning conducted by the Corps of Engineers, and a site evaluation process initiated by the NGA West Program Management Office (PMO), which is responsible for managing the NGA West project. To select the final site from the four alternatives, NGA initiated a site evaluation process in August 2015 that was led by the NGA West PMO.
This process involved various teams of experts analyzing the sites and evaluating them against defined criteria to identify the advantages and disadvantages of each site. Figure 2 provides an overview of the key elements and milestones of NGA's site selection process, beginning with its earlier decision to build and concluding with its 2016 selection of the new site and issuance of its record of decision. According to NGA officials, there was no NGA or DOD policy or set of practices to comprehensively guide NGA's site selection and AOA process. As a result, NGA relied on various DOD policies and instructions, other federal guidance, and industry standards. It incorporated these practices into the site selection process to ensure that its AOA process complied with federal requirements and industry practice, according to NGA and Corps of Engineers officials. Additionally, NGA officials stated that our AOA best practices would have been helpful in planning the site selection process for NGA West, but the process began in 2012, and our 22 best practices were not published until October 2015.

At the outset of the site evaluation process in August 2015, the PMO set forth broad sets of criteria to use in analyzing the four alternatives. These broad sets of criteria, referred to as "evaluation factors," were mission, security, development and sustainability, schedule, cost, and environment. In addition, each site was assessed to ensure that it complied with key laws, regulations, and directives. The PMO divided the analysis of the evaluation factors among NGA and Corps of Engineers teams. The mission, security, and development and sustainability factors were assigned to two NGA evaluation teams of subject-matter experts--the "mission evaluation team" and "security, infrastructure and schedule evaluation team" (referred to here as security evaluation team). Each of these teams used its expertise to develop "sub-factors" to assess the advantages and disadvantages of each site. For example:

The mission evaluation team developed 10 mission-related sub-factors based on the PMO guidance, NGA's mission, and the strategic goals outlined in the 2015 NGA Strategy. The mission-related sub-factors focused largely on elements pertaining to NGA's workforce and partnerships, such as the sites' proximity to the existing workforce, their distance from NGA's Arnold facility, and the likelihood that the sites would attract mission partners to create a "GEOINT Valley."

The security evaluation team developed 13 sub-factors related to security and infrastructure based on PMO guidance, DOD and other federal security and energy requirements, threat analysis, and other subject-matter expertise. Examples of the sub-factors include a 500-foot setback, perimeter security elements, sustainable characteristics, and infrastructure resilience.

Separate evaluations of cost, schedule, and environmental considerations were conducted by the Corps of Engineers in its role as construction agent as part of the environmental impact analysis. In addition, NGA and the Corps of Engineers conducted an assessment of relevant laws and regulations. The PMO integrated these analyses and provided an additional layer of review to each of the evaluation factors, in some cases adjusting them. For instance, the PMO reorganized the 10 mission-related sub-factors for its review.
Specifically, while the mission evaluation team focused the sub-factors largely on NGA's strategic goals related to workforce and partnerships, the PMO's analysis reorganized those same mission-related sub-factors by how they supported all four of the 2015 strategic goals. The PMO listed under three "strategic effects"--"Create GEOINT Valley," "Enhance Operations," and "Attract and Sustain the Workforce"--all of the sub-factors related to that strategic effect. The PMO re-analyzed the sites by weighting those strategic effects and sub-factors that were linked to multiple strategic goals higher than those that were linked to fewer such goals. The PMO also adjusted some of the sub-factors used in the evaluation for security and for development and sustainability. The PMO's additional analysis did not change the overall outcome of the evaluation of the sites; rather, it validated the mission evaluation team's conclusion and generally supported all but one of the overall findings of the other analyses. At the conclusion of the PMO's analysis in December 2015, the PMO's conclusion was that no one site had emerged as a clear preferred alternative. Because the master planning and site evaluation process concluded that all four sites--Fenton, Mehlville, St. Louis City, and St. Clair--could meet the overall requirements and that no single site held substantial advantage over another, the NGA Director requested additional analysis with refined criteria to more clearly differentiate among the final four sites. Consequently, in January 2016 NGA initiated a new site selection team-- consisting of NGA and Corps of Engineers personnel who had previously been involved in various stages of the process--to reassess the sites against refined criteria and perspectives in order to determine the agency- preferred alternative. The site selection team carried forward five of the six original evaluation criteria from the start of the site evaluation process, as well as compliance with federal law, policy, and other regulations, to develop its six "refined criteria." In reviewing these refined criteria, the site selection team determined that cost and schedule accounted for the greatest differences among the sites. The team therefore used the cost and schedule assessments completed as part of the PMO process to narrow the sites, concluding that because the Mehlville and Fenton sites were the most expensive and posed the greatest schedule risk they should be eliminated from final consideration. The site selection team then focused its analysis on the final two sites-- St. Clair and St. Louis City--to inform the Director's selection. The team used the following six refined criteria to evaluate the sites: (1) cost, (2) schedule, (3) security, (4) mission efficiency and expansion, (5) applicability of and compliance with federal policies, executive orders, and federal initiatives; and (6) environmental considerations. The team proposed narrowing the relevant sub-criteria to those that provided the greatest differentiation among the sites, according to officials on the team. For example, the security criterion was narrowed to include 3 of the original 13 security and infrastructure evaluation sub-factors, and the adjusted "mission efficiency and expansion" criterion included one of the mission evaluation team's 10 original mission sub-factors. 
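Evaluations of this kind are, in effect, weighted multi-criteria comparisons: each site is scored against each criterion, the scores are multiplied by the weights assigned to the criteria, and the weighted scores are summed to compare alternatives. The sketch below illustrates the arithmetic; the criteria loosely echo NGA's refined criteria, but the weights and scores are hypothetical illustrations, not NGA's actual values.

# Minimal sketch of a weighted multi-criteria comparison of two candidate
# sites. Criteria, weights, and scores are hypothetical, not NGA's values.

criteria_weights = {
    "cost": 0.20,
    "schedule": 0.15,
    "security": 0.25,
    "mission efficiency and expansion": 0.25,
    "policy compliance": 0.05,
    "environmental considerations": 0.10,
}

# Scores on a 1-5 scale for each criterion (hypothetical).
site_scores = {
    "Site A": {"cost": 4, "schedule": 3, "security": 5,
               "mission efficiency and expansion": 4,
               "policy compliance": 5, "environmental considerations": 3},
    "Site B": {"cost": 3, "schedule": 4, "security": 4,
               "mission efficiency and expansion": 5,
               "policy compliance": 5, "environmental considerations": 4},
}

def weighted_total(scores, weights):
    """Sum of criterion scores multiplied by the criterion weights."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

for site, scores in site_scores.items():
    print(f"{site}: {weighted_total(scores, criteria_weights):.2f}")

In NGA's process, the weighting of the final criteria was determined by the NGA Director, and the site selection team provided input on which site was more advantageous with respect to each criterion.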
Subsequently, the NGA Director provided additional direction, including adding a review of potential support from Scott Air Force Base, based on the support NGA East receives from being located at Ft. Belvoir, as well as ensuring that the security-related sub-factors carried over from prior analyses were consistently defined. Additionally, the director added 2 sub-criteria to the mission-related criterion to ensure that the site evaluation continued in terms of NGA's strategic goals of partnership and people: 1. "Team GEOINT," which refers to NGA's current and future partnerships with academic, public, and private sector partners, and which parallels the "GEOINT Valley" element evaluated by the mission evaluation team and PMO. 2. "Team NGA," which refers to the potential effects of workforce recruitment and retention that were also analyzed in the mission evaluation team and PMO analyses. According to NGA officials, while certain sub-factors or criteria were adjusted to provide further layers of analysis, the most important factors were always seen as mission and security. Additionally, NGA and Corps of Engineers officials said that adding these two sub-criteria expanded the analysis of the mission-related criteria to resemble the scope of the PMO's analysis and incorporated the NGA Director's mission and vision perspective. Finally, the NGA Director determined the weighting of the final criteria to evaluate the last two sites, the site selection team provided input on which of the sites was more advantageous with respect to each criterion, and in March 2016 this information was used to inform the NGA Director's selection of the agency-preferred alternative. The weighting and final decisions are shown in table 1. The NGA Director selected the St. Louis City site as the agency-preferred alternative. It was identified in the publication of the final environmental impact statement and finalized with the issuance of the record of decision in June 2016. We compared NGA's AOA process for selecting a site for the new NGA West campus to our AOA best practices and determined that NGA's process substantially met three and partially met one characteristic of a high-quality, reliable AOA process. Although NGA's AOA process substantially met most of the characteristics, we did find areas where the process could have been strengthened if NGA had more fully incorporated the AOA best practices. See table 2 for a summary of our assessment and appendix I for additional details on our scoring of NGA's alignment with each of the 22 best practices. NGA's AOA process for selecting a site for the new NGA West substantially met the well-documented characteristic of a high-quality, reliable AOA process, although we did find areas for improvement. For example, NGA's AOA body of work demonstrated that the assumptions and constraints for each alternative for the site selection process were documented. NGA West's Prospective Sites Master Plan included a set of overall assumptions that guided the preliminary planning process and provided specific assumptions and constraints for each alternative. Specifically, the plan identified various assumptions and constraints for the four final sites, such as calculations of the site boundaries, the estimated number of parking spaces, the square footage of the buildings and estimates of the building's height, site utilities, and environmental constraints, among other things. 
In one instance, the plan documented the assumption that if the Mehlville site were to be used, all utilities would need to be removed from within the property line and existing buildings, parking lots, and roads would have to be demolished. In another example, the Corps of Engineers conducted a schedule and negotiation risk assessment and recorded scores for each site and some mitigation strategies for specific issues. The assessment documented risks to meeting the site acquisition schedule with the St. Louis site because, among other reasons, the site needed environmental cleanup that was expected to take six months. The Fenton site had high negotiation risks, in part because the asking price of the site was significantly higher than the appraised value. However, NGA did not provide information on other risks, such as technical feasibility and resource risks, and did not rank the risks or provide over-arching mitigation strategies for each alternative. According to the best practice, not documenting the risks and related mitigation strategies for each alternative prevents decision makers from performing a meaningful trade- off analysis, which is necessary to select an alternative to be recommended. NGA's AOA process for selecting a site for the new NGA West substantially met the comprehensiveness characteristic of a high-quality AOA process, but although it had strengths, we identified some limitations. NGA's AOA process considered a diverse range of alternatives to meet the mission need and conducted market surveillance and market research to develop as many alternative solutions as possible. According to our best practices, an AOA process that encompasses numerous and diverse alternatives ensures that the study provides a broad view of the issue and guards against biases to the AOA process. Specifically, NGA's AOA process included a site location study that provided a summary of the thorough analysis that NGA conducted to identify potential site locations for the new NGA West campus. The study relied on information from local real estate market databases and input from the local real estate community, multiple municipal officials and organizations, and the public to identify an original set of 186 possible sites and narrow that list to a final 6 for further analysis. However, although the NGA body of work provides evidence that the Corps of Engineers developed initial cost estimates that compared each alternative using different cost categories, NGA's AOA process did not include life-cycle cost estimates for the final 4 sites. NGA officials chose not to analyze total construction and other facility sustainment costs, because they assumed that since the sites were in the same geographic area, construction and operating costs would be similar. However, the estimates did not include sufficient details regarding all of the costs examined--specifically, how the cost estimates were developed for information technology trunk line costs. NGA stated that this best practice had limited application to its AOA process because it had determined that variation in the life-cycle cost estimates based on the location of the four sites--all in the St. Louis metropolitan area--was negligible. NGA officials also stated that the lack of final project design details constrained their ability to develop full life-cycle cost estimates. 
However, without estimates for full life-cycle costs, decision makers may not have a complete picture of the costs for each alternative and may have difficulty comparing the alternatives, because comparisons may not be based on accurate and complete information. NGA and Corps of Engineers officials said that they are in the process of developing full life-cycle cost estimates for the construction and design of the new NGA West campus, for the agency- preferred alternative. NGA's AOA process for selecting a site for the new NGA West substantially met the characteristic of an unbiased AOA process, although we did identify some limitations. NGA's AOA body of work demonstrated that NGA had developed functional requirements based on the mission need without a predetermined solution and that the requirements were realistic, organized, and clear. For example, NGA's AOA body of work provided facilities requirements and specifically listed 11 site location and campus requirements that were tied to mission needs, including requiring a facility that will support future changes to mission requirements and allow for continuity of NGA operations. NGA's AOA body of work also identified physical requirements for the new NGA West campus, for example, that the new facility must have at least 800,000 gross square feet and a 500- foot security buffer, and it must allow for a possible expansion in the future. However, although the NGA AOA body of work demonstrated a thorough comparison of the alternatives throughout the site evaluation process, it did not provide evidence that net present value was used to compare or differentiate among the alternatives, nor did it provide a rationale for why net present value could not be used. NGA officials acknowledged that they did not compare the alternatives using net present value. They stated that they had normalized some of the costs but that it was not necessary to normalize all costs, because the estimates were all done during the same time period. According to our best practice, if net present value is not used to compare the alternatives, then the AOA team should document the reason why and explain and describe the other methods applied. Additionally, comparing items that have been discounted or normalized with net present value allows for time series comparisons, since alternatives may have different life cycles or different costs and benefits. NGA's AOA process for selecting the site for the new NGA West campus partially met the credible characteristic for an agency's AOA process. Although NGA's AOA process had strengths, it also had limitations, such as lacking important information related to cost risks and sensitivity analyses for both cost and benefits identified. NGA's AOA body of work described the alternatives in sufficient detail to allow for robust analysis. Specifically, it provided descriptions of each of the alternatives at varying levels of detail. For example, the first site location study provided descriptions of the top 6 potential sites, including information on size, the sites' strengths and weaknesses, and any acquisition or development issues. The NGA AOA body of work also provided evidence that site master planning was conducted to provide additional details on the physical and environmental attributes of each site, as well as constraints and benefits. 
For example, the NGA West Prospective Sites Master Plan described the Mehlville site as having landscape features such as mature trees, waterways, areas of steep topography, options for public transportation, bike-friendly streets, and existing utility infrastructure. However, NGA did not fully include key information on either the risk or the uncertainty related to cost estimates or the sensitivity to the costs and benefits identified as part of its AOA process. For example, the NGA body of work did not include a confidence interval or range for the cost estimates for each viable alternative in order to document the level of risk associated with the estimate. NGA's AOA body of work documented the estimated alternatives' initial costs and included contingency costs across all four alternatives. Corps of Engineers officials told us that they had developed a 30 percent design and 5 percent construction contingency cost factor across the four alternatives to account for cost risks in those areas. However, the NGA AOA body of work did not provide evidence of a confidence interval or range for the costs provided. NGA acknowledged that while its AOA body of work did not identify the risk associated with specific cost elements for each alternative, it did provide a "level of confidence," because the methodology behind the cost components in the estimate implied a high level of confidence. Although we agree that NGA did provide a contingency factor for the site development costs and provided cost estimates for all four viable alternatives, NGA did not develop a confidence interval or risk range for those estimates. NGA's cost estimates were used as a determining factor in the final decision among the four alternatives. However, without understanding the cost risk and the uncertainty of those costs as outlined in the best practice, a decision maker might be unable to make an informed decision. Additionally, the NGA AOA body of work did not demonstrate that NGA had conducted a sensitivity analysis for the cost and benefit and effectiveness estimates for each alternative in order to examine how changes in key assumptions would affect the cost and benefit estimates. The NGA AOA body of work documented that some sensitivity analysis or level of risk was analyzed as part of the schedule analysis, and NGA officials stated that the project considered how different values and variables affect each other during the criteria and evaluation analysis. However, the NGA AOA body of work did not document the sensitivity of cost and benefit estimates to changes in key assumptions for each alternative, and a sensitivity analysis was not applied to the initial cost estimates or benefit assumptions that were used to make the final site selection. NGA officials stated that this best practice has limited application to its AOA process, because the lack of variables between sites constrained their ability to develop full life-cycle cost estimates and complete a sensitivity analysis. NGA officials stated that their sensitivity analysis was limited to those considerations that were measurable and sensitive to change--predominantly schedule risk associated with land acquisition activities. Further, NGA officials explained that because all the site alternatives were located within the St. Louis metropolitan area, any variations in conditions would have equal effect. 
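Both the net present value comparison and the cost risk range called for by the best practices can be generated from a small set of inputs. The sketch below discounts a stream of annual life-cycle costs to a net present value and then draws the uncertain cost assumptions from ranges to produce a rough confidence interval; the dollar figures, spreads, and discount rate are hypothetical illustrations, not NGA or Corps of Engineers estimates.

import random

def npv(annual_costs, discount_rate):
    """Net present value of a stream of annual costs (year 0 is undiscounted)."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

def simulate_cost_range(base_construction, base_annual_om, years, discount_rate,
                        trials=10_000, seed=0):
    """Monte Carlo range for life-cycle cost NPV under uncertain assumptions.

    Construction and annual operations-and-maintenance costs are drawn from
    triangular distributions around the base assumptions (hypothetical
    -10%/+30% and -5%/+15% spreads).
    """
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        construction = rng.triangular(0.9 * base_construction,
                                      1.3 * base_construction,
                                      base_construction)
        annual_om = rng.triangular(0.95 * base_annual_om,
                                   1.15 * base_annual_om,
                                   base_annual_om)
        results.append(npv([construction] + [annual_om] * years, discount_rate))
    results.sort()
    return results[int(0.10 * trials)], results[int(0.90 * trials)]

# Hypothetical alternative: $900 million construction, $40 million per year of
# operations and maintenance over 30 years, 3 percent discount rate.
low, high = simulate_cost_range(900e6, 40e6, years=30, discount_rate=0.03)
print(f"80% range for life-cycle cost NPV: ${low/1e9:.2f}B - ${high/1e9:.2f}B")

The same inputs support a one-at-a-time sensitivity analysis: holding all but one assumption at its base value and recomputing the net present value shows how much the estimate depends on that assumption.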
Although we agree that NGA did conduct a sensitivity analysis for schedule risks, NGA neither documented how the schedule sensitivity affected its cost or benefit estimates nor performed a sensitivity analysis for the various assumptions used to develop the cost or benefit for each alternative. According to the DOD instruction on economic analysis, a sensitivity analysis is a "what-if" exercise that should be performed to test the conclusions and assumptions against changes in cost and benefit variables and should always be performed when the results of the economic analysis do not clearly favor any one alternative. According to our best practice, not conducting a sensitivity analysis to identify the uncertainties associated with different assumptions increases the chances that an AOA team will recommend an alternative without understanding the full effects of costs, which could lead to cost and schedule overruns. Although NGA's AOA process did not reflect all of the characteristics of a high-quality process, we are not making recommendations in this report, in part because NGA plans to conduct additional cost analysis and in part because we made an applicable recommendation to DOD in 2016. Specifically, although NGA's AOA process is complete, NGA and Corps of Engineers officials said that they are developing full life-cycle cost estimates for the construction and design of the new NGA West campus and that these estimates will include many elements from our best practices. Further, we continue to believe that our September 2016 recommendation that DOD develop guidance requiring the use of AOA best practices for certain military construction decisions could help ensure that future AOA processes conducted by DOD agencies like NGA are reliable and that agencies identify a preferred alternative that best meets mission needs. While DOD did not concur with our recommendation, as we reported in 2016, our best practices are based on longstanding, fundamental tenets of sound decision making and economic analysis. Additionally, our best practices align with many DOD and military policies, directives, and other guidance pertaining to military construction. Further, during this review NGA officials stated that DOD did not have a set of best practices for conducting an AOA to help NGA make decisions regarding its military construction project, and that our AOA best practices would have been helpful had they been published prior to the start of NGA's site selection process in 2012. Accordingly, we continue to believe our prior recommendation is relevant and that unless DOD has guidance directing that certain military construction AOA processes be conducted in accordance with identified best practices, it may not be providing Congress with complete information to inform its oversight of DOD's future military construction decisions.

We provided a draft of this report to NGA for review and comment. NGA's comments are reprinted in their entirety in appendix II. In comments on our report, NGA stated that it valued our assessment of its AOA process, which we judged to have substantially met the characteristics of a well-documented, comprehensive, and unbiased process, and would use our findings to continue to refine and improve its corporate decision making and processes. NGA raised a concern about our assessment that its AOA process used to select the site for its new NGA West project partially met the best practices that demonstrate a credible process.
NGA's specific concern was that we concluded that the AOA process did not fully include information on risks and sensitivities to cost estimates. In its letter, NGA stated that its analysis demonstrated that cost was a factor but not the most important factor. Moreover, NGA stated that cost elements and details ranged from well-defined costs, such as real estate costs, to estimates based on analogy, such as an information technology trunk line. NGA additionally stated that, due to the conceptual nature of the design of the facility at that time, more detailed cost analysis was judged to provide no discrimination among alternatives and was thus purposely excluded from the initial cost estimates that were used in the AOA process. While NGA may have concluded that the project's cost was not the most important factor, the agency estimates that construction of the campus will cost about $945 million, and NGA used the cost estimate as a determining factor to select from the four final alternatives. Moreover, our assessment of the credibility characteristic is based only in part on NGA's initial cost estimates and did not penalize NGA for excluding additional cost estimates. Rather, we assessed that NGA's AOA body of work did not provide evidence of documenting the sensitivity of the cost-benefit or effectiveness estimates to changes in key assumptions for alternatives, nor was a sensitivity or risk analysis applied to the initial cost estimates used to make the final site selection. NGA also stated in its letter that our AOA best practices are not applicable in all circumstances, and pointed out that DOD did not concur with a recommendation in a prior report to develop AOA guidance requiring departmental components to use AOA practices, including the best practices we identified, for certain future military construction projects. Our prior report suggested that such guidance might only apply to certain military construction projects as determined by DOD. In addition, while DOD's existing relevant guidance does not require use of our AOA best practices, the guidance does not prohibit it either. Further, as discussed in our report, NGA officials told us the AOA best practices are helpful to such processes, and lacking such DOD guidance NGA had to draw on expertise, practices, and procedures from a variety of sources to conduct its AOA for the new NGA West site. Finally, in its letter NGA proposed that two documents--the environmental impact statement and record of decision--fulfill the best practice to document the AOA process in a single document. Specifically, NGA stated that within the context of the environmental impact statement process, the record of decision is the authoritative capstone document of the process, and that together the two documents include discussions of the decision-making and factors considered by the director in selecting the agency-preferred alternative. These two documents were prepared to fulfill requirements of the National Environmental Policy Act of 1969 in order to determine the environmental impacts of the project, as discussed earlier in our report. While we recognize that the record of decision and the environmental impact statement are significant documents that include summaries of aspects of NGA's AOA process, as NGA indicated, these are two documents within an expansive AOA body of work.
Further, many of the elements of NGA's AOA process are diffused throughout these and several other reports and documents--that were specifically identified by NGA as the key documentation of its AOA process--rather than clearly delineated in a single document as prescribed by the best practice (see appendix I).

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; the Secretary of the Army; the Under Secretary of Defense for Acquisitions, Technology and Logistics; the Under Secretary of Defense for Intelligence; and the Director, National Geospatial-Intelligence Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

In our earlier discussion of the extent to which NGA's AOA process met best practices for such processes, we presented our analysis for specific best practices. These 22 best practices and their definitions were originally published and are listed in GAO-16-22. Table 3 summarizes our analysis of NGA's AOA process for selecting the site for the new NGA West and our ratings of that process against all 22 best practices.

In addition to the contact named above, Brian Mazanec, Assistant Director; Jim Ashley; Tracy Barnes; Chris Businsky; George Depaoli; Richard Johnson; Joanne Landesman; Jennifer Leotta; Jamilah Moon; Joseph Thompson; and Sally Williamson made key contributions to this report.

Amphibious Combat Vehicle Acquisition: Cost Estimate Meets Best Practices, but Concurrency between Testing and Production Increases Risk. GAO-17-402. Washington, D.C.: Apr. 18, 2017.
Joint Intelligence Analysis Complex: DOD Needs to Fully Incorporate Best Practices into Future Cost Estimates. GAO-17-29. Washington, D.C.: Nov. 3, 2016.
Joint Intelligence Analysis Complex: DOD Partially Used Best Practices for Analyzing Alternatives and Should Do So Fully for Future Military Construction Decisions. GAO-16-853. Washington, D.C.: Sept. 30, 2016.
Patriot Modernization: Oversight Mechanism Needed to Track Progress and Provide Accountability. GAO-16-488. Washington, D.C.: Aug. 25, 2016.
Amphibious Combat Vehicle: Some Acquisition Activities Demonstrate Best Practices; Attainment of Amphibious Capability to Be Determined. GAO-16-22. Washington, D.C.: Oct. 28, 2015.
DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: Dec. 11, 2014.
Military Bases: DOD Has Processes to Comply with Statutory Requirements for Closing or Realigning Installations. GAO-13-645. Washington, D.C.: June 27, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: Mar. 7, 2013.
Military Base Realignments and Closures: The National Geospatial-Intelligence Agency's Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but It Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: June 24, 2010.
GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: Mar. 2, 2009.
NGA, a defense agency and element of the Intelligence Community, provides geospatial intelligence to military and intelligence operations to support national security priorities. It currently operates out of two primary facilities--its headquarters in Springfield, Virginia, and its NGA West campus in St. Louis, Missouri. In 2012, NGA determined that a new location for its NGA West facilities was necessary to meet security standards and better support its national security mission. NGA estimates that the construction of the new campus will cost about $945 million. GAO was asked to evaluate the AOA process that NGA used to select the site for its new campus. This report (1) describes the process NGA used, including the key factors it considered and (2) evaluates the extent to which the AOA process met best practices for such analyses. GAO visited the existing NGA West campus and the final four alternative sites that were considered, analyzed and assessed reports and information that document NGA's AOA process for selecting the site, and interviewed relevant officials about the process. GAO evaluated NGA's process against best practices identified by GAO as characteristics of a high-quality, reliable AOA process. In 2012, the National Geospatial-Intelligence Agency (NGA) began an analysis of alternatives (AOA) process to evaluate potential sites for its new NGA Campus West (NGA West) using key evaluation factors related to mission, security, development and sustainability, schedule, cost, and environment. NGA's process included levels of analysis and considerations to select the agency-preferred alternative from an original list of 186 potential sites, subsequently narrowed to the final four alternative sites (see figure). The process culminated in the June 2016 selection of the agency-preferred alternative, the St. Louis City site. NGA's process for selecting a site for the new NGA West campus substantially met three of the four characteristics of a high-quality, reliable AOA process. Specifically, NGA's process substantially met the characteristics that demonstrate a well-documented, comprehensive, and unbiased AOA process. It partially met the credibility characteristic, in part because it did not fully include information on the risks and sensitivities to cost estimates. NGA officials stated that there was no comprehensive DOD guidance to inform its AOA process, and although NGA's AOA process is complete, NGA plans to develop full cost estimates as part of construction, planning, and design. In September 2016, GAO recommended that DOD develop guidance for the use of AOA best practices for certain types of military construction decisions. While DOD did not concur and the recommendation remains open, GAO continues to believe such guidance would help ensure that future AOA processes are reliable and would result in decisions that best meet mission needs. GAO is not making recommendations to NGA. In commenting on a draft of this report, NGA expressed concerns about GAO's assessment of NGA's estimates of cost risks and sensitivities. GAO continues to believe its assessment accurately reflects NGA's process.
Despite some similarities, each of the recent attacks is very different in its makeup, method of attack, and potential damage. Generally, Code Red and Code Red II are both "worms," which are attacks that propagate themselves through networks without any user intervention or interaction. They both take advantage of a flaw in a component of versions 4.0 and 5.0 of Microsoft's Internet Information Services (IIS) Web server software. Code Red originally sought to do damage by defacing Web pages and by denying access to a specific Web site by sending it massive amounts of data, which essentially would shut it down. This is known as a denial-of-service (DoS) attack. Code Red II is much more discreet and potentially more damaging. Other than sharing the name of the original worm, the only similarity Code Red II has with Code Red is that it exploits the same IIS vulnerability to propagate itself. Code Red II installs "backdoors" on infected Web servers, making them vulnerable to hijacking by any attacker who knows how to exploit the backdoor. It also spreads faster than Code Red. Both attacks have the potential to decrease the speed of the Internet and cause service disruptions. More importantly, these worms broadcast to the Internet the servers that are vulnerable to this flaw, which allows others to attack the servers and perform other actions that are not related to Code Red.

SirCam is a malicious computer virus that spreads primarily through E-mail. Once activated on an infected computer, the virus searches through a select folder and mails user files acting as a "Trojan horse" to E-mail addresses in the user's address book. A Trojan horse, or Trojan, is a program containing hidden code allowing the unauthorized collection, falsification, or destruction of information. If the user's files are sensitive in nature, then SirCam not only succeeds in compromising the user's computer, but also succeeds in breaching the data's confidentiality. In addition to spreading, the virus can attempt to delete a victim's hard drive or fill the remaining free space on the hard drive, making it impossible to perform common tasks such as saving files or printing. This form of attack is extremely serious since it is one from which it is very difficult to recover. SirCam is much more stealthy than the Melissa and ILOVEYOU viruses because it does not need to use the victim's E-mail program to replicate. It has its own internal capabilities to mail itself to other computers. SirCam also can spread through another method. It can copy itself to other unsuspecting computers connected through a Windows network (commonly referred to as Windows network computers) that have granted read/write access to the infected computer. Like Code Red and Code Red II, SirCam can slow the Internet. However, SirCam poses a greater threat to the home PC user than the Code Red worms do. Table 1 provides a high-level comparison of the attacks. The attachment to this testimony answers the questions in the table in greater detail. Systems infected by Code Red and SirCam can be fixed relatively easily. A patch made available by Microsoft can remove the vulnerability exploited by Code Red, and rebooting the infected computer removes the worm itself. Updating and using antivirus software can help detect and partially recover from SirCam. Patching and rebooting an infected server is not enough when a system is hit by Code Red II.
Instead, the system's hard drive should be reformatted, and all software should be reinstalled to ensure that the system is free of other backdoor vulnerabilities. Of course, there are a number of other immediate actions organizations can take to ward off attacks. These include using strong passwords; verifying software security settings; backing up files early and often; ensuring that known software vulnerabilities are reduced by promptly implementing software patches available from vendors; ensuring that policies and controls already implemented are operating as intended; using scanners that automatically search for system vulnerabilities; using password-cracking tools to assess password strength; using network monitoring tools to identify suspicious network activity; and developing and distributing lists of the most common types of vulnerabilities and suggested corrective actions.

Reports from various media and computer security experts indicate that the impact of these viruses has been extensive. On July 19, the Code Red worm infected more than 250,000 systems in just 9 hours, according to the National Infrastructure Protection Center (NIPC). An estimated 975,000 servers have been infected in total, according to Computer Economics, Inc. Code Red and Code Red II have also reportedly disrupted both government and business operations, principally by slowing Internet service and forcing some organizations to disconnect themselves from the Internet. For example, reports have noted that (1) the White House had to change the numerical Internet address that identifies its Web site to the public, and (2) the Department of Defense was forced to briefly shut down its public Web sites. Treasury's Financial Management Service was infected and also had to disconnect itself from the Internet. Code Red worms also reportedly hit Microsoft's popular free E-mail service, Hotmail; caused outages for users of Qwest's high-speed Internet service nationwide; and caused delays in package deliveries by infecting systems belonging to FedEx Corp. There are also numerous reports of infections in other countries. The economic costs resulting from Code Red attacks are already estimated to be over $2.4 billion. These involve costs associated with cleaning infected systems and returning them to normal service, inspecting servers to determine the need for software patches, and patching and testing services, as well as the negative impact on the productivity of system users and technical staff. Although Code Red's reported costs have not yet surpassed damages estimated for last year's ILOVEYOU virus, which is now estimated to be more than $8 billion, the Code Red attacks are reportedly more costly than 1988's Morris worm. This particular worm exploited a flaw in the Unix operating system and affected VAX computers from Digital Equipment Corp. and Sun 3 computers from Sun Microsystems, Inc. It was intended to only infect each computer once, but a bug allowed it to replicate hundreds of times, crashing computers in the process. Approximately 10 percent of the U.S. computers connected to the Internet effectively stopped at the same time. At that time, the network had grown to more than 88,000 computers and was a primary means of communication among computer security experts. SirCam has also reportedly caused some havoc. It is allegedly responsible for the leaking of secret documents from the government of Ukraine.
And it reportedly infected a computer at the Federal Bureau of Investigation (FBI) late last month and sent some private, but not sensitive or classified, documents out in an E-mail. There are reports that SirCam has surfaced in more than 100 countries. GAO has identified information security as a governmentwide high risk issue since 1997. As these incidents continue, the federal government continues to face formidable challenges in protecting its information systems assets and sensitive data. These include not only an ever changing and growing sophistication in the nature of attacks but also an urgent need to strengthen agency security controls as well as a need for a more concerted and effective governmentwide coordination, guidance, and oversight. Today, I would like to briefly discuss these challenges. I would also like to discuss progress that has been made in addressing them, including improvements in agency controls, actions to strengthen warning and crisis management capabilities, and new legislation to provide a comprehensive framework for establishing and ensuring effectiveness of information security controls over information resources that support federal government operations and assets. These are positive steps toward taking a proactive stand in protecting sensitive data and assets. First, these latest incidents again show that computer attack tools and techniques are becoming increasingly sophisticated. The Code Red attack was more sophisticated than those experienced in the past because the attack combined a worm with a denial-of-service attack. Further, with some reprogramming, each variant of Code Red got smarter in terms of identifying vulnerable systems. Code Red II exploited the same vulnerability to spread itself as the original Code Red. However instead of launching a DoS attack against a specific victim, it gives an attacker complete control over the infected system, thereby letting the attacker perform any number of undesirable actions. SirCam was a more sophisticated version of the ILOVEYOU virus, no longer needing the victim's E-mail program to spread. In the long run, it is likely that hackers will find ways to attack more critical components of the Internet, such as routers and network equipment, rather than just Web site servers or individual computers. Further, it is likely that viruses will continue to spread faster as a result of the increasing connectivity of today's networks and the growing use of commercial-off-the-shelf (COTS) products, which, once a vulnerability is discovered, can be easily exploited for attack by all their users because of the widespread use of the products. Second, the recent attacks foreshadow much more devastating Internet threats to come. According to official estimates, over 100 countries already have or are developing computer attack capabilities. Further, the National Security Agency has determined that potential adversaries are developing a body of knowledge about U.S. systems and methods to attack them. Meanwhile, our government and our nation have become increasingly reliant on interconnected computer systems to support critical operations and infrastructures, including telecommunications, finance, power distribution, emergency services, law enforcement, national defense, and other government services. 
As a result, there is a growing risk that terrorists or hostile foreign states could severely damage or disrupt national defense or vital public operations through computer-based attacks on the nation's critical infrastructures.

Third, agencies do not have an effective information security program to prevent and respond to attacks--both external attacks, like Code Red, Code Red II, and SirCam, and internal attempts to manipulate or damage systems and data. More specifically, we continue to find that poor security planning and management are the rule rather than the exception. Most agencies do not develop security plans for major systems based on risk, have not formally documented security policies, and have not implemented programs for testing and evaluating the effectiveness of the controls they rely on. Agencies also often lack effective access controls to their computer resources and consequently cannot protect these assets against unauthorized modification, loss, and disclosure. Moreover, application software development and change controls are weak; policies and procedures governing segregation of duties are ineffective; and access to the powerful programs and sensitive files associated with a computer system's operation is not well-protected. In fact, over the past several years, our analyses as well as those of the Inspectors General have found that virtually all of the largest federal agencies have significant computer security weaknesses that place critical federal operations and assets at risk to computer-based attacks. In recognition of these serious security weaknesses, we and the Inspectors General have made recommendations to agencies regarding specific steps they should take to make their security programs effective. Also, in 2001, we again reported information security as a high-risk area across government, as we did in our 1997 and 1999 high-risk series.

Fourth, the government still lacks robust analysis, warning, and response capabilities. Often, for instance, reporting on incidents has been ineffective--with information coming too late for agencies to take proactive measures to mitigate damage. This was especially evident in the Melissa and ILOVEYOU attacks. There is also a lack of strategic analysis to determine the potential broader implications of individual incidents. Such analysis looks beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of national importance. Further, as we recently reported, the ability to issue prompt warnings about attacks is impeded because of (1) a lack of a comprehensive governmentwide or nationwide framework for promptly obtaining and analyzing information on imminent attacks, (2) a shortage of skilled staff, (3) the need to ensure that undue alarm is not raised for insignificant incidents, and (4) the need to ensure that sensitive information is protected, especially when such information pertains to law enforcement investigations underway. Lastly, government entities have not developed fully productive information-sharing and cooperative relationships. We recently made a variety of recommendations to the Assistant to the President for National Security Affairs and the Attorney General regarding the need to more fully define the role and responsibilities of the NIPC, develop plans for establishing analysis and warning capabilities, and formalize information-sharing relationships with the private sector and federal entities.
Fifth, most of the nation's critical infrastructure is owned by the private sector. Solutions, therefore, need to be developed and implemented in concert with the private sector, and they must be tailored sector by sector, through consultation about vulnerabilities, threats, and possible response strategies. Putting together effective partnerships with the private sector is difficult, however. Disparate interests between the private sector and the government can lead to profoundly different views and perceptions about threats, vulnerabilities, and risks, and they can affect the level of risk each party is willing to accept and the costs each is willing to bear. Moreover, industry has raised concerns that it could potentially face antitrust violations for sharing information. Lastly, there is a concern that an inadvertent release of confidential business material, such as trade secrets or proprietary information, could damage reputations, lower consumer confidence, hurt competitiveness, and decrease market shares of firms. Fortunately, we are beginning to see improvements that should help agencies ward off attacks. We reported earlier this year that several agencies have taken significant steps to redesign and strengthen their information security programs. For example, the Internal Revenue Service (IRS) has made notable progress in improving computer security at its facilities, corrected a significant number of identified weaknesses, and established a service-wide computer security management program. Similarly, the Environmental Protection Agency has moved aggressively to reduce the exposure of its systems and data and to correct weaknesses we identified in February 2000. Moreover, the Federal Computer Incident Response Center (FedCIRC) and the NIPC have both expanded their efforts to issue warnings of potential computer intrusions and to assist in responding to computer security incidents. In responding to the Code Red and Code Red II attacks, FedCIRC and NIPC worked together with Carnegie Mellon's CERT Coordination Center, the Internet Security Alliance, the National Coordinating Center for Telecommunications, the Systems Administrators and Network Security (SANS) Institute, and other private companies and security organizations to warn the public and encourage system administrators and home users to voluntarily update their software. We also recently reported on a number of other positive actions taken by NIPC to develop analysis, warning, and response capabilities. For example, since its establishment, the NIPC has issued a variety of analytical products to support computer security investigations. It has established a Watch and Warning Unit that monitors the Internet and other media 24 hours a day to identify reports of computer-based attacks. It has developed crisis management capabilities to support a multi-agency response to the most serious incidents from the FBI's Washington, D.C., Strategic Information Operations Center. The administration is currently reviewing the federal strategy for critical infrastructure protection that was originally outlined in Presidential Decision Directive (PDD) 63, including provisions related to developing analytical and warning capabilities that are currently assigned to the NIPC.
On May 9, 2001, the White House issued a statement saying that it was working with federal agencies and private industry to prepare a new version of the "national plan for cyberspace security and critical infrastructure protection" and reviewing how the government is organized to deal with information security issues. Lastly, the Congress recently enacted legislation to provide a comprehensive framework for establishing and ensuring the effectiveness of information security controls over information resources that support federal government operations and assets. This legislation--known as Government Information Security Reform (GISR)--requires agencies to implement an agencywide information security program that is founded on a continuing risk management cycle. GISR also added an important new requirement by calling for an independent evaluation of the information security program and practices of an agency. These evaluations are to be used by OMB as the primary basis for its summary report to the Congress on governmentwide information security. In conclusion, the attacks we are dealing with now are smarter and more threatening than the ones we were dealing with last year and the year before. But I believe we are still just witnessing warning shots of potentially much more damaging and devastating attacks on the nation's critical infrastructures. To that end, it's vital that federal agencies and the government as a whole become proactive rather than reactive in their efforts to protect sensitive data and assets. In particular, as we have recommended in many reports and testimonies, agencies need more robust security planning, training, and oversight. The government as a whole needs to fully develop the capability to strategically analyze cyber threats and warn agencies in time for them to avert damage. It also needs to continue building on private-public partnerships--not just to detect and warn about attacks--but to prevent them in the first place. Most of all, trust needs to be established among a broad range of stakeholders, roles and responsibilities need to be clarified, and technical expertise needs to be developed. Lastly, becoming truly proactive will require stronger leadership by the federal government to develop a comprehensive strategy for critical infrastructure protection, work through concerns and barriers to sharing information, and institute the basic management framework needed to make the federal government a model of critical infrastructure protection. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or Members of the Subcommittee may have. For further information, please contact Keith Rhodes at (202) 512-6412. Individuals making key contributions to this testimony included Cristina Chaplain, Edward Alexander, Jr., Tracy Pierson, Penny Pickett, and Chris Martin.
Answer: Code Red is a worm, which is a computer attack that propagates through networks without user intervention. This particular worm makes use of a vulnerability in Microsoft's Internet Information Services (IIS) Web server software--specifically, a buffer overflow. The worm looks for systems running IIS (versions 4.0 and 5.0) that have not patched the unchecked buffer vulnerability and exploits it to infect those systems. Code Red was initially written to deface the infected computer's Web site and to perform a distributed denial-of-service (DDoS) attack against the numerical Internet address used by www.whitehouse.gov.
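For illustration only, the following is a minimal sketch of how an administrator might check Web server logs for the probe requests associated with Code Red. It is not part of this testimony: the log file name, the log format (source address as the first field of each line), and the reliance on the widely reported "default.ida" request string are assumptions made for the example.

```python
# Illustrative log-scanning sketch, not a definitive detection tool.
# Assumes a plain-text access log where each line contains the source
# address as the first field and the requested URL later in the line.
import re
from collections import Counter

# Publicly reported Code Red probes request /default.ida with a long
# filler string (runs of "N" or "X") that triggers the IIS buffer overflow.
PROBE_PATTERN = re.compile(r"GET\s+/default\.ida\?[NX]{20,}", re.IGNORECASE)

def scan_log(path: str) -> Counter:
    """Count suspected Code Red probe requests per source address."""
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if PROBE_PATTERN.search(line):
                fields = line.split()
                source = fields[0] if fields else "unknown"
                hits[source] += 1
    return hits

if __name__ == "__main__":
    # "access.log" is a placeholder path chosen for the example.
    for source, count in scan_log("access.log").most_common(10):
        print(f"{source}: {count} suspected Code Red probe(s)")
```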
Two subsequent versions of Code Red do not deface Web pages but still launch the DDoS attack. Code Red was first reported on July 17, 2001. The worm is believed to have started at a university in Guangdong, China. The worm scans the Internet, identifies vulnerable systems, and infects these systems by installing itself. Each newly installed worm joins all the others, causing the rate of scanning to grow rapidly. The first version of Code Red created a randomly generated list of Internet addresses to infect. However, the algorithm used to generate the list was flawed, and infected systems ended up reinfecting each other. The subsequent versions target victims a bit differently, increasing the rate of infection. Those at risk are users with a Microsoft IIS server installed with Windows NT version 4.0 or Windows 2000. The original variant of Code Red (CRv1) defaced the infected computer's Web site and used the infected computer to perform a DDoS attack against the Internet address of the www.whitehouse.gov Web site. Subsequent variants of Code Red (CRv2a and CRv2b) no longer defaced the infected computer's Web site, making detection of the worm harder. These subsequent variants continued to target the www.whitehouse.gov Web site and used smarter methods to target new computers for infection. The uncontrolled growth in scanning can also decrease the speed of the Internet and cause sporadic but widespread outages among all types of systems. Although the initial version, CRv1, defaces the Web site, the primary impact to the server is performance degradation as a result of the scanning activity of this worm. This degradation can become quite severe since it is possible for a worm to infect the same machine multiple times. Other entities, even those that are not vulnerable to Code Red, are impacted because servers infected by Code Red scan their systems and networks. Depending on the number of servers performing this scan, these entities may experience network denial of service. This was especially true with the implementation of CRv1 since a "flaw" in the random number generator essentially targeted the same servers. As noted above, this behavior is not found in the later variants. However, the end result may be the same since CRv2a and CRv2b use improved randomization techniques that facilitate more prolific scanning. The fix is to install a patch made available by Microsoft and reboot the system. (The patch should also be installed as a preventive measure.)
Question: Technical Details on How the Code Red Worm Operates
The Code Red worm has three phases--discovery and propagation, attack, and dormancy. Execution of these phases is based upon the day of the month.
Phase 1: Discovery and Propagation
Between day 1 and day 19 of any month, Code Red performs its discovery and propagation function. It does this by generating 100 subprograms on an infected server. All but one of these subprograms have the task of identifying and infecting other vulnerable Web servers by scanning a generated list of Internet addresses. Once a target system is identified, Code Red uses standard Web server communication to exploit the flaw and send itself to the vulnerable server. Once a new server is infected, the process continues. CRv1 created a randomly generated list of Internet addresses to infect. However, the algorithm used to generate the random number list was "flawed," and infected systems ended up reinfecting each other because the random list that each computer generated was the same.
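The consequence of this flaw can be illustrated with a small, hypothetical simulation; this is not worm code and is not drawn from the testimony, and the address-space size, list length, and seeds are invented for the demonstration. Seeding every copy of the generator identically, as CRv1 effectively did, means every infected server probes the same short list of addresses, while per-host seeding, as in the later variants, spreads the probing across far more of the address space.

```python
# Illustrative simulation only. It models why identically seeded target
# lists cause hosts to re-probe (and reinfect) one another, while per-host
# randomization reaches many more unique addresses.
import random

ADDRESS_SPACE = 100_000   # stand-in for the addresses a worm might probe
LIST_LENGTH = 100         # targets generated per infected host per cycle

def crv1_targets(host_id: int):
    """CRv1-style list: a constant seed, so every host generates the
    exact same sequence of targets."""
    rng = random.Random(12345)            # fixed seed == the "flaw"
    return [rng.randrange(ADDRESS_SPACE) for _ in range(LIST_LENGTH)]

def crv2_targets(host_id: int):
    """CRv2-style list: each host seeds from something unique to itself,
    so different hosts probe different parts of the address space."""
    rng = random.Random(host_id)          # per-host seed
    return [rng.randrange(ADDRESS_SPACE) for _ in range(LIST_LENGTH)]

def unique_targets(target_fn, infected_hosts):
    """Count distinct addresses probed across all infected hosts."""
    probed = set()
    for host in infected_hosts:
        probed.update(target_fn(host))
    return len(probed)

if __name__ == "__main__":
    hosts = range(1, 501)  # pretend 500 servers are already infected
    print("Fixed-seed lists, distinct targets:", unique_targets(crv1_targets, hosts))
    print("Per-host lists, distinct targets:  ", unique_targets(crv2_targets, hosts))
    # The fixed-seed case yields only about 100 distinct targets (every host
    # probes the same list), so infected systems keep hitting each other;
    # per-host seeding yields tens of thousands, which is one reason the
    # later variants spread faster.
```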
CRv2a and CRv2b were modified to generate actual random lists of Internet addresses that were more effective at identifying potential servers that had not already been attacked. Therefore, these versions can ultimately infect greater numbers of unprotected servers. CRv1 also defaced the target system's Web site. This was done by replacing the site's actual Web page with the message, "HELLO! Welcome to http://www.worm.com! Hacked by Chinese!" This message enabled system administrators to easily identify when their servers had been infected. CRv2a and CRv2b modified the functionality so it would no longer deface Web pages, forcing system administrators to be proactive in determining infection. Descriptions of the variants are listed below.
CRv1: Web site defacement and "random" target selection for additional attacks.
CRv2a: No Web defacement and modified random target selection.
CRv2b: No Web defacement and better target selection by optimizing the random number generation process, i.e., better target addresses are generated.
Due to the target optimization, systems infected with version 2b are able to infect new systems at a faster rate than version 2a. Between day 20 and day 27 of any month is Code Red's attack phase. Once Code Red determines the date to be within this designated attack date range, each infected server participates in a DDoS attack by sending massive amounts of data to its intended target, the numeric Internet address of the White House Web site. Since all infected servers are set to attack the same target on the same set of dates, the large amount of Internet traffic is expected to flood the Internet with data and bombard a numeric address used by www.whitehouse.gov with more data than it can handle. This flooding of data would cause the Web server to stop responding to all Web server requests, including legitimate users surfing the White House Web site. From day 28 to the end of the month, the Code Red worm lies dormant, going into an infinite sleep phase. Although the worm remains in the computer's memory until the system is rebooted, Code Red will not propagate or initiate any attacks once it enters dormancy. According to testing performed by Internet Security Systems, Carnegie Mellon's CERT Coordination Center (CERT/CC), and the Federal Bureau of Investigation's (FBI) National Infrastructure Protection Center (NIPC), the dormant worm cannot be awakened to restart the process.
Answer: Code Red II is also a worm that makes use of a buffer overflow vulnerability in Microsoft's IIS Web server software. Except for using the buffer overflow injection mechanism, the worm is very different from the original Code Red and its variants. In fact, it is more dangerous because it opens backdoors on infected servers that allow any follow-on remote attackers to execute arbitrary commands. There is no DDoS attack function in Code Red II. Code Red II was reported on August 4, 2001, by industry analysts. Like Code Red, the worm scans the Internet, identifies vulnerable systems, and infects these systems by installing itself. Each newly installed worm joins all the others, causing the rate of scanning to grow. Code Red II, however, mostly selects Internet addresses in the same range as the infected computer to increase the likelihood of finding susceptible victims. Those at risk are users with Microsoft IIS Web server software (versions 4.0 and 5.0) installed with Windows 2000. Like Code Red, Code Red II can decrease the speed of the Internet and cause service disruptions.
Unlike Code Red, it also leaves the infected system open to any attacker who can alter or destroy files and create a denial-of-service attack. Specifically, because of the worm's preference to target its closest neighbors, combined with the enormous amount of scanning traffic generated by the numerous subprograms running in parallel, a large amount of broadcast request traffic is generated on the infected system's network. If several machines on a local network segment are infected, then the resulting attempt to propagate the infection to their neighbors simultaneously can generate broadcast requests at "flooding" rates. Systems on the receiving end of an effective "broadcast flood" may experience the effects of a DoS attack. Code Red II allows remote attackers and intruders to execute arbitrary commands on infected Windows 2000 systems. Compromised systems are then subject to files being altered or destroyed. This adversely affects entities that may be relying on the altered or destroyed files. Furthermore, compromised systems are also at high risk for being exploited to generate other types of attacks against other servers. Several anti-virus software vendors have created tools that remove the harmful effects of the worm and reverse the changes made by the worm. This fix, however, is useless if the infected computer had been accessed by an attacker who installed other backdoors on the system that would be unaffected by the Code Red II patch tool. According to FedCIRC, due to the malicious actions of this worm, patching and rebooting an infected server will not solve the problem. The system's hard drive should be reformatted and all software should be reinstalled.
Technical Details of the Code Red II Worm
The Code Red II worm also has three phases--preparation, propagation, and Trojan insertion. Based upon current analysis, Code Red II only affects Web servers running on the Microsoft Windows 2000 operating system platform.
Phase 1: Preparation
During the preparation phase, the worm checks the current date to determine whether it will run at all. If the date is later than October 1, 2001, then the worm will cease to function and will remain infinitely dormant. If the date is before October 1, 2001, then all functions will be performed. Although this discovery may bring hope that after October 1, 2001, this worm will no longer be a threat, this date constraint can be easily changed in a variant. The other activities conducted during the preparation phase include the following. The functionality of Code Red II is dependent on both the system's environment and the current date. Code Red II checks the default system's language, e.g., English, Chinese, etc., and stores that information. The worm also checks if the system has been previously infected, by searching for the existence of a specific file. If the file exists, then Code Red II becomes dormant and does not re-infect the system. If the file does not exist, Code Red II creates the file and continues the process. Preparation is finalized when the worm disables the capability of the Windows 2000 operating system to repair itself if it discovers that one of its required system files has been modified in any way. This becomes important during the Trojan Insertion function. Once the worm has completed the preparation phase, it immediately starts the propagation and Trojan insertion phases to complete infection. Code Red II creates hundreds of subprograms to propagate itself.
The number of subprograms created depends upon the default language that the worm identified in the preparation phase. If the system's default language is Chinese, then 600 subprograms are created. If the default language is not Chinese, then 300 subprograms are generated. The propagation phase is unique because Code Red II seeks to copy itself to computers that are mostly near the infected system. The algorithm uses the infected system's own Internet address to generate a list of random Internet addresses. The generated list is composed of Internet addresses that are closely related to the infected system. The rationale is that similar systems should reside in the "neighborhood" of the infected system, resulting in an increased chance of infection. Each of the subprograms is tasked with scanning one of the randomly generated Internet addresses to identify and infect another vulnerable system. Like Code Red, this worm uses the buffer overflow vulnerability to infect its target. Once a new target is infected, the process continues. Code Red II is more malicious than the Code Red worm discussed earlier, due to the existence of the Trojan horse backdoor programs that Code Red II leaves behind on the infected computer. The basic process follows: Initially, executable files are copied to specific locations on the Web server, which, by necessity, are accessible by any remote user. These executable files can run commands sent by a remote attacker to the server through the use of well-crafted Web commands. A Trojan horse program is planted on the server that allows further exploit of the infected computer. The Trojan horse program is named after a required system program that executes when the next user logs into the system. It is also placed in a location that ensures that the Trojan horse program will be run instead of the required system program. Upon execution, the Trojan horse changes certain system settings that grant remote attackers read, write, and execute privileges on the Web server. Twenty-four to forty-eight hours after the preparation function is initiated, Code Red II forces the infected system to reboot itself. Although the reboot eliminates the memory-resident worm, the backdoor and the Trojan horse programs are left in place since they are stored on the system's disks. The reboot also restarts the IIS software, which, in turn, ensures that the Web server uses the newly compromised system settings. Since the Trojan horse will always be executed each time a user logs on, Code Red II guarantees that remote attackers will always have access to the infected system. This is important, since even if the executable files copied at the beginning of the Trojan Insertion phase are deleted, the excessive privileges the Trojan sets at reboot are still in place. Therefore, the Trojan enables a remote attacker to perform similar exploits using these excessive privileges.
Answer: SirCam is a malicious computer virus that spreads through E-mail and potentially through unprotected Windows network connections. What makes SirCam stealthy is that it does not rely on the E-mail capabilities of the infected system to replicate. Other viruses, such as Melissa and ILOVEYOU, used the host machine's E-mail program, while SirCam contains its own mailing capability. Once the malicious code has been executed on a system, it may reveal or delete sensitive information. SirCam was first detected on July 17, 2001.
This mass-mailing virus attempts to send itself to E-mail addresses found in the Windows Address Book and addresses found in cached files. It may be received in an E-mail message saying "Hi! How are you?" and requesting help with an attached file. The same message could be received in Spanish. Since the file is sent from a computer whose owner is familiar enough with the recipient to have their E-mail address in their address book, there is a high probability that the recipient will trust the attachment as coming from a known sender. This helps ensure the virus's success in the wild and is similar to the social engineering approach used by Melissa and ILOVEYOU. The E-mail message will contain an attachment that will launch the code when opened. When run on a victim machine, SirCam installs a copy of itself in two files. It then "steals" one of the target system's files and attempts to mail that file, with itself attached as a Trojan horse (that is, a file with apparently desirable features), to every recipient in the affected system's address book. It can also get E-mail addresses from the Web browser. SirCam can also spread to other computers on the same Windows network without the use of E-mail. If the infected computer has read/write access to specific Windows network computers, SirCam copies itself to those computers, infecting them as well. Those at risk include any E-mail user and any user of a PC with unprotected Windows network connections that is on the same Windows network as an infected computer. SirCam can publicly release sensitive information and delete files and folders. It can also completely fill the hard drive of the infected computer. Furthermore, it can lead to a decrease in the speed of the Internet.
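As a purely illustrative sketch of how a mail administrator might screen for SirCam-style messages, the following heuristic flags messages that combine the greeting described above with an attachment whose name hides an executable extension behind a document extension (for example, "report.doc.pif"). The specific phrases, extensions, and use of Python's standard email library are assumptions made for the example, not a signature published in this testimony.

```python
# Illustrative screening heuristic for SirCam-style messages.
# Assumes inbound mail is available as email.message.EmailMessage objects.
from email.message import EmailMessage

SUSPECT_PHRASES = (
    "hi! how are you?",
    "i send you this file in order to have your advice",  # commonly reported wording; an assumption here
)
EXECUTABLE_EXTENSIONS = {".pif", ".lnk", ".exe", ".com", ".bat"}

def has_double_extension(name: str) -> bool:
    """True for names like 'report.doc.pif' that hide an executable suffix."""
    pieces = name.lower().split(".")
    return len(pieces) >= 3 and "." + pieces[-1] in EXECUTABLE_EXTENSIONS

def looks_like_sircam(msg: EmailMessage) -> bool:
    """Flag messages whose plain-text body matches the SirCam greeting and
    that carry an attachment with a disguised executable extension."""
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().lower() if body is not None else ""
    phrase_hit = any(phrase in text for phrase in SUSPECT_PHRASES)
    attachment_hit = any(
        has_double_extension(part.get_filename() or "")
        for part in msg.iter_attachments()
    )
    return phrase_hit and attachment_hit

if __name__ == "__main__":
    # Build a small example message to exercise the heuristic.
    sample = EmailMessage()
    sample["Subject"] = "report"
    sample.set_content("Hi! How are you? I send you this file in order to have your advice")
    sample.add_attachment(b"dummy bytes", maintype="application",
                          subtype="octet-stream", filename="report.doc.pif")
    print(looks_like_sircam(sample))  # expected: True
```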
Organizations and individuals have recently had to contend with particularly vexing computer attacks. The most notable is Code Red, but potentially more damaging are Code Red II and SirCam. Together, these attacks have infected millions of computer users, shut down Web sites, slowed Internet service, and disrupted businesses and government operations. They have already caused billions of dollars of damage, and their full effects have yet to be completely assessed. Code Red and Code Red II are both "worms," which are attacks that propagate themselves through networks without any user intervention or interaction. Both take advantage of a flaw in a component of versions 4.0 and 5.0 of Microsoft's Internet Information Services Web server software. SirCam is a malicious computer virus that spreads primarily through E-mail. Once activated on an infected computer, the virus searches through a select folder and mails user files acting as a "Trojan horse" to E-mail addresses in the user's address book. In addition to spreading, the virus can delete files on a victim's hard drive or fill the remaining free space on the hard drive, making it impossible to save files or print. On July 19, 2001, the Code Red worm infected more than 250,000 systems in just nine hours, causing more than $2.4 billion in economic losses. SirCam is allegedly responsible for the leaking of secret documents from the Ukrainian government. U.S. government agencies do not have an effective information security program to prevent and respond to these attacks and often lack effective access controls to their computer resources and consequently cannot protect these assets against unauthorized modification, loss, and disclosure. However, several agencies have taken significant steps to redesign and strengthen their information security programs. Also, Congress recently enacted legislation to provide a comprehensive framework for establishing and ensuring the effectiveness of information security controls over information resources that support federal operations and assets.
The Navy ordnance business area, which consists of the Naval Ordnance Center (NOC) headquarters and subordinate activities, such as Naval weapons stations, operates under the revolving fund concept as part of the Navy Working Capital Fund. It provides various services, including ammunition storage and distribution, ordnance engineering, and missile maintenance, to customers who consist primarily of Defense organizations, but also include foreign governments. Revolving fund activities rely on sales revenue rather than direct congressional appropriations to finance their operations and are expected to operate on a break-even basis over time--that is, to neither make a profit nor incur a loss, but to recover all costs. During fiscal year 1996, the Navy ordnance business area reported revenue of about $563 million and costs of about $600 million, for a net operating loss of about $37 million. In accordance with current Department of Defense (DOD) policy, this loss and the $175 million the business area lost during fiscal years 1994 and 1995 will be recouped by adding surcharges to subsequent years' prices. As discussed in our March 1997 report, higher-than-expected overhead costs were the primary cause of the losses that the business area incurred during fiscal years 1994 through 1996. We also testified on this problem in May 1997, and recommended that the Secretary of the Navy develop a plan to streamline the Naval ordnance business area's operations and reduce its overhead costs. The Navy has initiated a restructuring of the business area that, according to the Secretary of the Navy, is "akin to placing it in receivership." The objective of our audit of the Navy ordnance business area was to assess the Navy's efforts to reduce costs and streamline its operations. Our current audit of the restructuring of the Navy ordnance business area is a continuation of our work on the business area's price increases and financial losses (GAO/AIMD/NSIAD-97-74, March 14, 1997). In that report we recommended that the Secretary of Defense direct the Secretary of the Navy to develop a plan to streamline the Navy ordnance business operations and reduce its infrastructure costs, including overhead. This plan should (1) concentrate on eliminating unnecessary infrastructure, including overhead, (2) identify specific actions that need to be accomplished, (3) include realistic assumptions about the savings that can be achieved, (4) establish milestones, and (5) clearly delineate responsibilities for performing the tasks in the plan. To evaluate the actions being taken or considered by the NOC to streamline its operations and reduce costs, we (1) used the work that we performed in analyzing the business area's price increases and financial losses and (2) analyzed budget reports to identify planned actions and discussed the advantages and disadvantages of the planned actions with Navy, OSD, U.S. Transportation Command, and Joint Staff officials. In analyzing the actions, we determined (1) if specific steps and milestones were developed by the NOC to accomplish the actions, (2) whether the initiatives appeared reasonable and could result in improved operations, (3) what dollar savings were estimated to result from the implementation of the actions, (4) whether the actions went far enough in reducing costs and improving operations, and (5) what other actions not being considered by the NOC could result in further cost reductions or streamlined operations. 
We did not independently verify the financial information provided by the Navy ordnance business area. We performed our work at the Office of the DOD Comptroller and Joint Staff, Washington, D.C.; Offices of the Assistant Secretary of Navy (Financial Management and Comptroller), Naval Sea Systems Command, Naval Air Systems Command, and Headquarters, Defense Finance and Accounting Service, all located in Arlington, Virginia; Headquarters, U.S. Atlantic Fleet, Norfolk, Virginia; Naval Ordnance Center Headquarters, Indian Head, Maryland; Naval Ordnance Center Atlantic Division, Yorktown, Virginia; Naval Ordnance Center Pacific Division, Seal Beach, California; Naval Weapons Station, Yorktown, Virginia; Naval Weapons Station, Charleston, South Carolina; Naval Weapons Station, Earle, New Jersey; Naval Weapons Station, Seal Beach, California; Naval Weapons Station, Concord, California; Naval Weapons Station Detachment, Fallbrook, California; Naval Warfare Assessment Division, Corona, California; and U.S. Transportation Command, Scott Air Force Base, Illinois. Our work was performed from June 1996 through September 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report. The Under Secretary of Defense (Comptroller) provided us with written comments, which we incorporated where appropriate. These comments are reprinted in appendix I. The Navy has incorporated a goal to reduce annual costs by $151 million into its ordnance business area's budget estimate and has identified the major actions that will be taken to achieve this goal. Our analysis of available data indicates that the planned actions should result in substantial cost reductions and more streamlined operations. However, we cannot fully evaluate the reasonableness of the cost reduction goal at this time because the Navy does not expect to finalize the cost reduction plan until October 1997. During the fiscal year 1998 budget review process, OSD officials worked with the Navy to formulate a restructuring of the Navy ordnance business area. According to the budget estimate the Navy submitted to the Congress in February 1997, this restructuring will allow the ordnance business area to achieve substantial cost and personnel reductions without adversely affecting ordnance activities' ability to satisfy their customers' peacetime and contingency requirements. Specifically, the budget estimate indicated that between fiscal years 1996 and 1999, the business area's civilian and military fiscal year end strengths will decline by 18 percent and 23 percent, respectively, and its annual costs will decline by $151 million, or 25 percent. The budget also indicated that the business area will increase its fiscal year 1998 prices in order to recover $224 million of prior year losses and achieve a zero accumulated operating result by the end of fiscal year 1998. The Navy's fiscal year 1998 budget submission also indicated that the planned restructuring of the business area (1) is based on an assessment of whether current missions should be retained in the business area, outsourced to the private sector, or transferred to other organizations and (2) will make fundamental changes in how the business area is organized and conducts its business. 
Our assessment of the individual actions--most of which are expected to be initiated by October 1997 and completed during fiscal year 1998--shows that the Navy is planning to reduce costs by eliminating or consolidating redundant operations and reducing the number of positions in the business area. These actions, which are listed below, should help to streamline the Navy ordnance operations and reduce costs.
Properly sizing the business area's workforce to accomplish the projected workload by eliminating about 800 positions, or about 18 percent of the total, before the end of October 1997.
Enhancing the business area's ability to respond to unanticipated workload changes by increasing the percentage of temporary workers in the work force from 8 percent to 20 percent.
Enhancing the business area's ability to identify redundant ordnance engineering capability and to streamline its information resource functions by consolidating management responsibility for these areas by October 1, 1997.
Reducing overall operating costs by significantly cutting back on operations at the Charleston and Concord Weapons Stations, beginning in October 1997.
Eliminating redundant capability and reducing costs by consolidating (1) some weapons station functions, such as safety and workload planning, at fewer locations, (2) inventory management functions at the Inventory Management and Systems Division, and (3) maintenance work on the Standard Missile at the Seal Beach Naval Weapons Station.
Reducing overhead contract costs, such as utilities and real property maintenance, during fiscal year 1998.
Enhancing business area managers' ability to focus on their core ordnance missions of explosive safety, ordnance distribution, and inventory management by transferring east coast base support missions to the Atlantic Fleet on October 1, 1997.
The Navy's planned restructuring of its ordnance business area will reduce overhead costs and is an important first step toward the elimination of the redundant capability both within the business area and between the business area and other organizations. However, as discussed in the following sections, our analysis indicates that there are opportunities for additional cost reductions by (1) developing and implementing a detailed plan to eliminate redundant ordnance engineering capability, (2) converting military guard positions to civilian status, and (3) implementing two actions that Navy ordnance officials are currently considering. Navy ordnance officials plan to consolidate management responsibility for the business area's nine separate ordnance engineering activities under a single manager on October 1, 1997. This will allow the manager to have visibility over all of the business area's engineering resources and should facilitate more effective management of these engineering resources. However, it will not result in any savings unless action is also taken to eliminate the redundant ordnance engineering capability that previous studies have identified both within the ordnance business area and between the business area and other Navy organizations. For example, a 1993 Navy study estimated that 435 work years, or $22 million, could be saved annually by reducing Navy-wide in-service ordnance engineering functions from 20 separate activities to 8 consolidated activities. However, Navy ordnance officials stated that these consolidations were never implemented.
They also stated that although they did not know why the consolidations were not implemented, they believe it was because (1) the Navy's ordnance engineering personnel are managed by the NOC and three different major research and development organizations and (2) the Navy did not require these four organizations to consolidate their ordnance in-service engineering functions. Since 1954, DOD Directive 1100.4 has required the military services to staff positions with civilian personnel unless the services deem a position military essential for reasons such as combat readiness or training. This is primarily because, as we have previously reported, on average, a civilian employee in a support position costs the government about $15,000 per year less than a military person of comparable pay grade. Our analysis showed that the percentage of military personnel in the NOC workforce is about six times greater than in other Navy Working Capital Fund activities, with most of these positions being military guards such as personnel who guard access to the weapons station at the main entrance. Further, Navy ordnance officials indicated that they know of no reason why the guard positions should not be converted to civilian status. In fact, these officials said that they would prefer to have civilian guards since they are cheaper than military guards, and they noted that all of their activities already have some civilian security positions. Consequently, the Navy can save about $6.8 million annually by converting the NOC's guard positions to civilian status (based on the $15,000 per position savings estimate). NOC officials told us that they reviewed the need for all of their military positions, and indicated that they plan to eliminate some of these positions. However, they stated that they do not plan to convert any military guard positions to civilian status. A Navy Comptroller official told us that (1) all of the NOC's guard functions will probably be transferred to the Atlantic and Pacific fleets as part of the ordnance business area restructuring and (2) the fleet commanders, not the NOC, should, therefore, decide whether the military guard positions should be converted to civilian status. Navy ordnance officials are currently considering two additional actions--further consolidating the business area's missile maintenance work and charging individual customers for the storage of ammunition--that would result in additional cost reductions and a more efficient operation, if implemented. As discussed below, consolidating missile maintenance work would allow the business area to reduce the fixed overhead cost that is associated with this mission, and charging customers for ammunition storage services would give customers an incentive to either relocate or dispose of unneeded ammunition and, in turn, could result in lower storage costs. The Navy ordnance business area, which has had a substantial amount of excess missile maintenance repair capacity for several years, is being forced to spread fixed missile maintenance overhead costs over a declining workload base that is expected to account for only 3 percent of the business area's total revenue in fiscal year 1998. This problem, which is caused by factors such as force structure downsizing, continues even though the business area recently achieved estimated annual savings of $2.3 million by consolidating all maintenance work on the Standard Missile at one location. 
The following table shows the substantial decline in work related to four specific types of missiles. NOC officials are currently evaluating several alternatives for consolidating missile maintenance work, including (1) consolidating all work on air-launched missiles at one Naval weapons station, (2) transferring all or part of the business area's missile maintenance work to the Letterkenny Army Depot, the Ogden Air Logistics Center, and/or a private contractor, and (3) accomplishing all or part of the work in Navy regional maintenance centers. According to DOD, the evaluation of these alternatives should be completed in the spring of 1998. Based on our discussions with Navy ordnance and maintenance officials, the NOC's evaluations of maintenance consolidation alternatives should identify the total cost of the various alternatives, including onetime implementation costs and costs that are not included in depot maintenance sales prices, such as the cost of shipping items from coastal locations to inland depots and/or contractor plants, and assess each alternative's potential impact on readiness. The Navy ordnance business area incurs costs to store ammunition for customers that are not required to pay for this storage service. Instead, this storage cost is added to the price charged to load ammunition on and off Naval ships and commercial vessels. As shown in the following figure, the business area's inventory records indicate that 51,231 tons, or about 43 percent, of ammunition stored at the weapons stations was not needed as of May 1, 1997, because (1) there is no requirement for it or (2) the quantity on hand exceeds the required level. If the business area charged customers for ammunition storage, the costs of the storage service would (1) be charged to the customers that benefit from this service and (2) provide a financial incentive for customers to either relocate or dispose of unneeded ammunition. This, in turn, could allow the business area to reduce the number of locations where ammunition is stored and thereby reduce operating costs. This approach has been adopted by the Defense Logistics Agency (DLA), which also performs receipt, storage, and issue functions, and the agency stated that instituting such user charges has helped to reduce infrastructure costs by allowing it to eliminate unneeded storage space. In addition, we recently recommended such an approach in our report, Defense Ammunition: Significant Problems Left Unattended Will Get Worse (GAO/NSIAD-96-129, June 21, 1996). Navy ordnance officials told us that they are currently considering charging customers for the storage of ammunition and are taking steps to do so. These officials informed us that they (1) have discussed DLA's experience in charging a storage cost with DLA officials, (2) have discussed this matter with the torpedo program manager and sent a letter addressing the cost to move the torpedoes off the weapons stations, (3) are drafting similar letters to the other ordnance program managers, and (4) are in the process of determining ammunition storage costs for use in developing storage fees.
These documents also indicate that the Navy was proposing to reduce the operating status of some weapons stations, including Concord. However, OSD officials were concerned with the Navy's proposal because these weapons stations would handle a majority of all DOD-wide, Army, Air Force, and U.S. Transportation Command explosive cargo in the event of a major contingency; have 10 times the explosive cargo capacity of the other ports considered; are having their facilities expanded by the Army to accomplish additional U.S. Transportation Command work; and have specialized explosive storage areas that must be retained to support current inventories of Navy missiles. OSD officials concluded that no alternative to these ports exists and that DOD must, therefore, keep these ports operational. The Deputy Secretary of Defense agreed with this assessment and, in December 1996, directed the Navy not to place any port in a functional caretaker status or reduce its ordnance handling capability until a detailed plan is (1) coordinated within OSD, the Joint Staff, and the other Military Departments and (2) approved by the Secretary of Defense. According to U.S. Transportation Command and Navy ordnance officials, a May 1997 DOD-wide paper mobilization exercise validated the OSD officials' concerns about Concord Naval Weapons Station performing its mobilization mission. Specifically, the exercise demonstrated that, among other things, (1) the Concord Naval Weapons Station is one of three ports that are essential to DOD for getting ordnance items to its warfighters during mobilization and (2) if Concord is not sufficiently staffed or equipped, there could be a delay in getting ordnance to the warfighter during mobilization. According to Navy ordnance, OSD, the Joint Staff, and U.S. Transportation Command officials, although there is widespread agreement that Concord is needed by all of the military services to meet ammunition out-loading requirements during mobilization, there is no agreement on how to finance the personnel that will be needed in order to accomplish this mission. The Army and Air Force do not believe they should subsidize the operations of a Navy base. At the same time, Navy officials do not believe they should finance the entire DOD mobilization requirement at Concord because (1) most of their facilities in the San Francisco Bay area have been closed and Concord is, therefore, no longer needed by the Navy during peacetime, (2) the Army and Air Force need Concord more than the Navy does, and (3) Concord does not receive enough ship loading and unloading work during peacetime to keep the current work force fully employed. Accordingly, the Navy plans to retain some personnel at Concord, but has shifted all of its peacetime ship loading and unloading operations out of Concord and plans to gradually transfer ammunition currently stored at Concord to other locations.
Navy, OSD, and Joint Staff officials informed us that several actions are needed to ensure that Concord has sufficient, qualified personnel to load ammunition onto ships: (1) revalidate the ammunition out-loading mobilization requirements for Concord, (2) determine the minimum number of full-time permanent personnel that Concord needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission (the core workforce), (3) ensure that Concord's core workforce is sufficiently trained to accomplish its mobilization mission, and (4) determine a method, either through a direct appropriation or the Working Capital Funds, to finance Concord's mobilization requirements. To the Navy's credit, it has acted to reduce its ordnance business area's annual cost by $151 million and has incorporated this cost reduction goal into the business area's budget estimate. Our analysis of available data indicates that, in general, the planned actions should result in substantial cost reductions and more streamlined Navy ordnance operations. The Navy could reduce its cost further and prevent a possible degradation of military readiness by taking the additional actions recommended in this report. Further, the Navy still needs to ensure that a final restructuring plan is completed so that it can tie together all of its planned actions and establish specific accountability, schedules, and milestones as needed to gauge progress. In order for the Concord Weapons Station to accomplish its mobilization mission, we recommend that the Secretary of Defense revalidate the amount of ammunition Concord Weapons Station needs to load onto ships during mobilization, direct the Secretary of the Navy to determine the minimum number of personnel Concord Weapons Station needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission, and ensure that Concord's core workforce is sufficiently trained to accomplish its mobilization mission. We recommend that the Secretary of the Navy incorporate into the NOC's detailed cost reduction plan (1) specific actions that need to be accomplished, (2) realistic assumptions about the savings that can be achieved, (3) milestones, and (4) clearly delineated responsibilities for performing the tasks in the plan; evaluate the cost-effectiveness of (1) consolidating all or most of the business area's missile maintenance workload at one location and/or (2) transferring all or some of this work to public depots or the private sector; develop and implement policies and procedures for charging customers for ammunition storage services; evaluate the appropriateness of converting military guard positions to civilian status; direct the NOC Commander to determine if it would be cost-beneficial to convert non-guard military positions to civilian status; and eliminate the excess ordnance engineering capability that previous studies have identified both within the NOC and between the NOC and other Navy organizations. In its written comments on this report, which identifies the actions the Navy ordnance business area is taking to reduce costs and streamline its operations, DOD agreed fully with five of our eight recommendations. It partially concurred with the remaining three recommendations, as discussed below.
In our draft report, we recommended that the Secretary of Defense direct the Secretary of the Navy to (1) determine the minimum number of personnel Concord Weapons Station needs during peacetime in order to ensure that it can quickly and effectively expand its operation to accomplish its mobilization mission and (2) ensure that this core workforce is sufficiently trained to accomplish its mobilization mission. In partially concurring with this recommendation, DOD agreed that both of these tasks should be accomplished and that the Navy should be responsible for identifying the peacetime manning requirement. However, it indicated that this core workforce cannot be adequately trained for its mobilization mission unless it is given the appropriate amount and type of work during peacetime. DOD further stated it will take steps during the fiscal year 1999 budget process to ensure that adequate and funded workload is provided to Concord. We agree with DOD's comment and revised our final report to recommend that DOD act to ensure that the core workforce is sufficiently trained. Concerning our recommendation to charge customers for ammunition storage services, the Navy agreed that action should be taken to (1) store only necessary ammunition at its weapons stations and (2) transfer excess ammunition to inland storage sites or disposal. The Navy believes that this can be accomplished without imposing a separate fee for storing ammunition. However, Navy records show that 51,231 tons, or about 43 percent, of ammunition stored at weapons stations was not needed as of May 1997. As stated in this report, because of the persistent nature of this problem, we continue to believe that charging customers for ammunition storage will provide the financial incentive for customers to relocate or dispose of unneeded ammunition. Finally, concerning our recommendation to convert military guard positions to civilian positions, the Navy stated that it is in the process of transferring the Navy ordnance east coast security positions to the Atlantic Fleet and that it plans to transfer the west coast security positions to the Pacific Fleet. It believes that the two Fleet Commanders need time to evaluate the appropriateness of converting the military guard positions to civilian positions. We agree with DOD's comment that this decision should be made by the Fleet Commanders and have revised our recommendation accordingly. As part of this evaluation, the Navy needs to consider the cost of the guard positions since a civilian employee in a support position costs the government about $15,000 per year less than a military person of comparable pay grade. We are sending copies of this report to the Ranking Minority Member of your Subcommittee; the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Senate Committee on Appropriations, Subcommittee on Defense; the House Committee on Appropriations, Subcommittee on National Security; the Senate and House Committees on the Budget; the Secretary of Defense; and the Secretary of the Navy. Copies will also be made available to others upon request. If you have any questions about this report, please call Greg Pugnetti at (202) 512-6240. Other major contributors to this report are listed in appendix II.
Karl J. Gustafson, Evaluator-in-Charge
Eddie W. Uyekawa, Senior Evaluator
Pursuant to a congressional request, GAO reviewed financial and management issues related to the ordnance business area of the Navy Working Capital Fund, focusing on: (1) the Navy's proposed and ongoing actions to reduce the business area's costs; and (2) additional cost reduction opportunities. GAO noted that: (1) the Navy is in the process of developing the cost reduction plan GAO recommended in its March 1997 report and has proposed and begun implementing a number of actions to reduce its ordnance business area's annual operating costs by $151 million, or 25 percent, between fiscal year 1996 and 1999; (2) this is a significant step in the right direction and should result in substantial cost reductions and more streamlined operations; (3) GAO's review of the business area's operations and discussions with the Office of the Secretary of Defense (OSD) and Navy ordnance officials indicate that the Navy has both an opportunity and the authority to further reduce Navy ordnance costs; (4) specifically: (a) redundant ordnance engineering capability exists within the business area and other Navy organizations; (b) military personnel are performing work that could be performed by less expensive civilian employees; (c) redundant missile maintenance capability exists; and (d) no financial incentive exists for customers to store only needed ammunition (the business area's inventory records show that 43 percent of the ammunition stored was unneeded as of May 1, 1997) since they do not directly pay for storage costs; (5) while most of the planned cost reduction actions appear to be appropriate, it remains to be seen whether the business area will reduce costs by $151 million; (6) in addition, GAO's review of available data indicates that one of the cost reduction actions--the planned personnel reductions--may adversely affect the Concord Naval Weapons Station's ability to load ships during mobilization, thus creating potential readiness problems; and (7) these personnel reductions are likely to have little impact on the Navy, but could have a significant impact on the Army and Air Force, which would rely heavily on Concord during a major contingency operation.
Through its disability compensation program, VBA pays monthly benefits to veterans for injuries or diseases incurred or aggravated while on active military duty. VBA rates such disabilities by using its Schedule for Rating Disabilities. For each type of disability, the Schedule assigns a percentage rating that is intended to represent the average earning reduction a veteran with that condition would experience in civilian occupations. Veterans are assigned a single or combined (in cases of multiple disabilities) rating ranging from 0 to 100 percent, in increments of 10 percent. Basic monthly payments range from $115 for a 10 percent disability to $2,471 for a 100 percent disability. About 58 percent of veterans receiving disability compensation have disabilities rated at 30 percent and lower; about 9 percent have disabilities rated at 100 percent. The most common impairments for veterans who began receiving compensation in fiscal year 2005 were, in order, hearing impairments, diabetes, post-traumatic stress disorder, back-related injuries, and other musculoskeletal conditions. VA performs disability reevaluations for disabilities required by regulation and whenever it determines that it is likely that a disability has improved, or if evidence indicates there has been a material change in a disability or that the current rating may be incorrect. Federal regulations generally instruct VA to conduct reevaluations between 2 and 5 years after any initial or subsequent VA examination, except for disabilities where another time period is specifically mentioned in the regulations. The latter generally require a reexamination 6 or 12 months after the discontinuance of treatment or hospitalization. The reevaluation process starts when a VBA Rating Veterans Service Representative (RVSR) completes a disability compensation claim and determines whether the veteran should be reevaluated at some time in the future. RVSRs base this decision on a number of factors. The disability reevaluation may be mandated by the Schedule for Rating Disabilities. For example, a veteran with a 100 percent disability rating due to a heart valve replacement is required to be reevaluated 6 months after discharge from the hospital. Alternatively, the RVSR may determine that the severity of the disability may change. For instance, medical evidence may suggest that a veteran with limited range of motion will be continuing physical rehabilitation and is expected to improve. To ensure that the disability is reviewed in the future, the RVSR enters a diary date into VBA's claims processing system, which later generates a reminder that the disability needs to be reviewed. When this reminder is generated, the veteran's file is retrieved and an RVSR performs a preliminary assessment of whether a reevaluation should be conducted. If the RVSR determines that a reevaluation is no longer needed, the reevaluation is cancelled. For example, staff may cancel a reevaluation when a veteran dies or if the file is already being reviewed by VBA following the veteran's claim that his disability has worsened. If the RVSR determines that a reevaluation of the disability should be conducted, the RVSR can simply review the information in the file or, if needed, collect supplemental medical information which can include the results of a physical examination. Once all of the information has been analyzed, an RVSR can make a decision to increase, decrease, or continue the current rating. Figure 1 summarizes the disability reevaluation process. 
VBA maintains a quality assurance review program known as the Systematic Technical Accuracy Review (STAR) program. VBA selects random samples of each regional office's disability compensation decisions and assesses the regional office's accuracy in processing and deciding such cases. For each decision, the STAR quality review unit reviews the documentation contained in the regional office's claim file to determine, among other things, whether the regional office complied with the Veterans Claims Assistance Act duty-to-assist requirements for obtaining relevant records, made correct service connection determinations for each claimed condition, and made correct disability rating evaluations for each condition. VBA has a fiscal year 2008 performance goal that 90 percent of compensation decisions contain no errors that could affect decision outcomes; its long-term strategic goal is 98 percent. In addition to STAR, regional offices conduct their own local quality assurance reviews. The guidance for these local quality assurance reviews calls for reviewing a random sample of an average of five claims for each RVSR per month. VA is currently projecting that it will fully implement a new processing and benefits payment system--VETSNET--for its disability compensation process in May 2008. VA anticipates that VETSNET will be faster, more flexible, and have a higher capacity than VBA's aging Benefits Delivery Network (BDN). For the past 40 years, BDN has been used to process compensation and pension benefits payments to veterans and their dependents each month. However, this system is based on antiquated software programs that have become increasingly difficult and costly to maintain. VBA's operational controls do not adequately ensure that staff schedule or conduct disability reevaluations as necessary. VBA's claims processing software does not ensure that diary dates are established. To the extent that staff do not enter diary dates, some cases that need reevaluations may never be brought to the attention of claims processing staff. As a result, some reevaluations may not be conducted. Staff can also cancel disability reevaluations, and VBA does not track or review cancelled reevaluations. Thus, VBA does not have assurances that reevaluations are being cancelled appropriately. Also, completed reevaluations are not likely to receive quality assurance reviews. VBA plans on improving some of its control mechanisms through its new claims management system, VETSNET. However, VETSNET will not address all of the issues we found regarding VBA's operational controls. VBA operational controls do not ensure that cases that should be reevaluated are scheduled for disability reevaluations. VA's regulations require VBA to schedule disability reevaluations either when VBA determines that a veteran's disability is likely to change or when mandated by the Schedule for Rating Disabilities. For cases where VA determines that a disability is likely to change, VBA staff must manually enter diary dates into VBA's claims processing system in order to ensure that a reminder is generated. The diary date is the only VBA procedural trigger that alerts regional offices that a claim needs to be reviewed. However, claims processing staff can complete a rating decision on a disability claim without entering a reevaluation diary date. To the extent that staff do not enter a diary date, a case that needs to be reevaluated may never be brought to the attention of claims processing staff.
As a result, the case will likely not be reevaluated. The VA Office of Inspector General has found some instances where this has occurred. For example, during a review at the Little Rock, Arkansas, regional office, the VA IG found that staff failed to enter required dates for 10 of 41 cases sampled at that office. VBA's electronic claims processing system also does not automatically set up diary dates for all disabilities where a reevaluation is mandated by VA's Schedule for Rating Disabilities. According to VA, there are 31 disabilities where reevaluations are required by the Schedule. VBA has automated diary dates for 14 of these disabilities. As a result, staff must manually enter diary dates into the system for the remaining 17 disabilities. VBA does not currently have a plan for expanding its automated diary date protocol to include all disabilities where reevaluations are mandatory. VBA officials said that their first priority is to ensure VETSNET is operational and their conversion plan is completed. Once diary dates have been entered by RVSRs into the claims processing system, the dates are transferred to VBA's centralized data processing center in Hines, Illinois. When the diary dates mature, the data processing center prints and mails out paper notices to VBA's regional offices alerting them that reevaluations are needed. However, once the centralized data processing center prints out these notifications, the diary dates are erased from the centralized computer system. In addition, VBA does not track which disability cases were identified for reevaluation. Since the notices are single pieces of paper, they could be lost or misplaced. If this occurs, these disability reevaluations would likely be delayed or not performed at all. VBA is planning on improving its ability to track reevaluations. According to VBA officials, VETSNET will eliminate the paper notification of a matured diary date. Instead, once a disability reevaluation diary date matures, VETSNET will automatically create an electronic record, which can be tracked. Although VA plans on processing all disability compensation claims using VETSNET by May 2008, VBA officials told us that the automatically created electronic record would not be included. These officials were unable to provide us with a timetable for when such a control system would be rolled out. Once the regional office receives the paper notice that a reevaluation is due, staff perform a preliminary assessment of the veteran's claim file to determine if a more comprehensive reevaluation should be conducted. If staff determine during this preliminary assessment that a reevaluation is no longer needed, they can cancel the reevaluation. Regional office staff noted several reasons for canceling reevaluations, such as when a veteran dies. Additionally, a reevaluation would be cancelled if the veteran reopens their claim because the disability has worsened. However, VBA does not track the number or reasons for cancellations. Also, cancelled reevaluations are not subject to quality assurance reviews. VBA plans on improving its ability to track cancellations using VETSNET. According to VBA officials, when VETSNET is fully implemented for disability compensation claims in May 2008, VBA will be able to track the number and reasons for cancelled disability reevaluations. While completed disability reevaluations are subject to quality assurance review, very few are likely to be reviewed.
Disability reevaluations represent a small portion of the total disability claims workload that VBA reviews for quality. For example, reevaluations represented about 2 percent of the total number of disability claims decisions completed in fiscal year 2005. Since VBA randomly selects claims for review from the total number of disability decisions, it is not likely that VBA will review many reevaluations. Similarly, each regional office's quality assurance review would not likely select many reevaluation claims. Specifically, the local quality assurance guidance calls for reviewing a random selection of an average of five claims for each RVSR per month. Disability reevaluations are part of the sample, but since they are a small portion of the total caseload, they have a low likelihood of being selected. Some of the regional office quality assurance review staff we spoke with reported that in the course of a month, they may only see a handful of disability reevaluation claims. Thus, VBA may not have sufficient insight into the accuracy and consistency of these reevaluations agencywide. VBA cannot effectively manage the disability reevaluation process because some of the data it collects are not consistent and it does not systematically collect and analyze key management data. While VBA collects data on the amount of time regional offices take to conduct disability reevaluations, these data are not reliable. Also, VBA does not know the number of reevaluation diary dates that mature in a year or the types of disabilities being reevaluated, the length of time before reevaluations are conducted, or if the reevaluation decisions result in an increase, decrease, or no change in the severity of veterans' disabilities. VBA's electronic system is unable to capture the entire amount of time it takes to complete a disability reevaluation and VBA does not currently collect and analyze outcome data. VBA's disability reevaluation timeliness data are inconsistent because regional offices use different starting points for measuring how long it takes to complete reevaluations. For example, staff at one regional office told us they start measuring the length of time to complete disability reevaluations from the date that VBA's centralized data processing center in Hines, Illinois, prints the paper notifications. Since the paper notifications are mailed from Hines to the regional office, several days can pass before the regional office receives the paper notifications. As a result, the actual time it takes this office to complete disability reevaluations would be overcounted. Other regional offices we visited indicated that measuring timeliness is not started until the date that staff review the claims file and determine that a reevaluation should proceed. Staff at one regional office we visited stated that it takes about 10 days for the claim to reach the desk of staff who perform the review. Since this review may not always take place as soon as the office receives the notification, the actual time it takes to complete disability reevaluations for these offices would be undercounted. VBA does not collect and analyze key management data on disability reevaluations. Thus, VBA does not have a firm grasp on its performance in handling claims that are due for a reevaluation. That is, while VA collects data on the number of reevaluations that it completes, it does not compare this information to the number of claims that were initially scheduled for a reevaluation.
Therefore, VA does not know if it is performing well in completing the claims scheduled for review. By not tracking this information, VA does not have a clear sense of the extent to which reevaluations are being cancelled (as noted) or whether some reevaluations are simply never started. According to VBA officials, VBA also does not collect data on the types of disabilities being reevaluated and how far in the future reevaluations are scheduled. Also according to VBA officials, VBA does not collect data on the outcomes of reevaluations and, as a result, does not have the benefit of historical results data that could be used to calibrate its decisions on which disabilities are likely to change and thus should be a higher priority for reevaluation. Regional office staff stated that such information on the disability reevaluation process could be useful in aiding their daily decision making on which disabilities to reevaluate and when to schedule them. Having such historical data could also aid VBA in workload management decisions. For example, in January 2002, as a temporary effort to free up staff for processing its backlog of disability compensation and pension claims, VBA postponed most of its currently scheduled reevaluations for 2 years. VBA made this decision without historical data on the extent to which reevaluations affect the benefit levels of disabilities and lost an opportunity to target only those cases likely to result in a change in status. As such, VBA did not know the potential number of veterans it could be over- or under-compensating for the 2 years the reevaluations were postponed. If VBA had a better data-driven feedback component, it could have avoided wholesale postponement of reviews for 2 years. Figure 2 summarizes the disability reevaluation process with an added data-driven feedback loop. It is important that veterans have confidence in the system designed to compensate them for their service-connected disabilities and that taxpayers have faith in VBA's stewardship of the disability compensation program. Inadequate management controls could result in some veterans being under-compensated for conditions that have worsened or over-compensated for conditions that have improved. VBA is improving some of its operational controls over reevaluations. For example, through its VETSNET system VBA plans to track the number and reasons for cancellations. However, without a system to remind staff to schedule disability reevaluation diary dates or a system that automatically schedules diary dates for all claims that require reevaluation, staff could inadvertently fail to enter diary dates, and reevaluations may not be scheduled and performed as needed. Meanwhile, measuring regional office performance requires reliable performance data. VBA cannot adequately measure how long it actually takes regional offices to complete disability reevaluations since offices use different starting points for measuring timeliness. For offices that start measuring their timeliness after the claim review has been started, the measurement can result in undercounting the total amount of time to complete a disability reevaluation. Also, without reliable performance data, VBA cannot accurately evaluate regional office timeliness or compare regional offices' performance. Therefore, VBA cannot reward good performance and take actions to improve lagging performance.
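The effect of inconsistent starting points on measured timeliness can be illustrated with a short sketch. The milestone dates below are hypothetical; the point is only that the same reevaluation yields a different days-to-complete figure depending on which starting point an office uses.

from datetime import date

# Hypothetical milestone dates for a single reevaluation (illustration only).
printed_at_hines   = date(2006, 3, 1)    # notice printed at the centralized data processing center
received_by_office = date(2006, 3, 6)    # notice arrives at the regional office after mailing
review_started     = date(2006, 3, 16)   # claim reaches the reviewing staff member's desk
completed          = date(2006, 5, 15)   # rating decision made

def elapsed_days(start, end):
    return (end - start).days

# Measuring from the print date counts mailing time the office cannot control (overcounts),
# while measuring from the start of review omits time the claim sat waiting (undercounts).
print(elapsed_days(printed_at_hines, completed))     # 75 days
print(elapsed_days(received_by_office, completed))   # 70 days (notification-based measure)
print(elapsed_days(review_started, completed))       # 60 days

Using the date the office is notified, as recommended below, would put all regional offices on the middle measure.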
In addition, without data on the results of reevaluations, VBA cannot ensure that it is prioritizing its resources to reevaluate those veterans whose disabilities are likely to change, and that it is reevaluating those disabilities at the appropriate point in time. Moving in this direction becomes increasingly important given Operation Enduring Freedom and Operation Iraqi Freedom. Outcome data on the reevaluation process could be used to target certain disabilities in the future. For example, if VBA found that reevaluating a certain disability never resulted in a change in the rating level, then it could consider not reevaluating that disability in the future. In addition, data on the timing of reevaluations could also be used strategically to refine when disabilities are reevaluated. For example, some regional offices may be scheduling reevaluations for 2 years into the future for a particular disability, whereas other regional offices may be using a 3-year time period. This information could be combined with the outcomes of such reevaluations to refine guidance and training on scheduling reevaluations. We recommend that the Secretary of the Department of Veterans Affairs direct the Under Secretary for Benefits to take the following five actions to enhance VBA's disability reevaluation process: (1) modify VBA's electronic claims processing system so that a rating decision cannot be completed without staff completing the diary date field; (2) modify the electronic claims processing system to ensure that a diary date is automatically generated by the system for all disabilities where a reevaluation is required by VA's Schedule for Rating Disabilities; (3) include cancelled reevaluations in VBA's quality assurance reviews and evaluate the feasibility of periodically sampling a larger number of completed disability reevaluations for quality assurance review; (4) clarify guidance so that all regional offices consistently use the date they are notified of a matured diary date as the starting point for measuring timeliness; and (5) develop a plan to collect and analyze data on the results of disability reevaluations. To the extent necessary, this information could be used to refine guidance on the selection and timing of future disability reevaluations. In its written comments on a draft of this report (see app. II), VA generally agreed with our conclusions and concurred with our recommendations. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until two weeks after the date of this report. At that time, we will send copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgements are listed in appendix III. To develop the information for this report, we analyzed Veterans Benefits Administration (VBA) workload and timeliness data on disability reevaluations.
We found that VBA workload reports, which detail the length of time it takes regional offices to complete disability reevaluations, are not reliable because VBA guidance allows regional offices to begin measuring the time to complete disability reevaluations at different points. Because VBA does not routinely collect and analyze data on the time allowed prior to reevaluating disabilities or the results of reevaluations, we requested a VBA analysis of claims-level data. In November 2006, VBA agreed to develop a one-time analysis of reevaluations completed in 2006. However, because of difficulties in developing the data, VBA was unable to provide the analysis in time for us to incorporate the results into this report. We also reviewed federal regulations on disability reevaluations, VBA's written guidance and training materials on reevaluations, and VBA's procedures for conducting reevaluations. We discussed the procedures for ensuring that reevaluations are conducted and the information used to manage the reevaluation program with VBA headquarters and regional office officials and observed control procedures at 5 of VBA's 57 regional offices. Specifically, we visited VA's regional offices in Chicago, Illinois; Columbia, South Carolina; Muskogee, Oklahoma; Nashville, Tennessee; and Seattle, Washington. We selected the Columbia, Muskogee, and Nashville regional offices based on fiscal year 2005 VBA data that showed they completed reevaluations faster than the national average. Chicago and Seattle took longer than the national average. All five offices also completed a greater than average number of reevaluations. We also selected these five offices based on their geographic dispersion. During our site visits, we toured the regional offices' facilities and interviewed regional office management, 30 staff involved in regional office claims processing, 6 staff tasked with quality assurance, and other staff. We did not perform a case file review during our visits. The VA Office of Inspector General had performed a limited case file review and found that in some instances reevaluations were not scheduled where required. We built on the Inspector General's work by looking at VBA's processes for ensuring that reevaluations are scheduled when required. The following individuals made important contributions to the report: Brett Fallavollita, Assistant Director; Martin Scire; David Forgosh; as well as Susannah Compton; James Rebbe; Christine San; and Walter Vance.
To help ensure that veterans are properly compensated for disabilities, VA is required to perform disability reevaluations for specific disabilities. VA also performs reevaluations whenever it determines there is a need to verify either the continued existence or current severity of veterans' disabilities. VBA completed about 17,700 reevaluations in fiscal year 2005. GAO was asked to review the Veterans Benefits Administration's (VBA) disability reevaluation program. This report assesses (1) the operational controls VA uses to ensure the effectiveness of the disability reevaluation process and (2) the management information VA collects and uses to manage the disability reevaluation process. To conduct this study, GAO analyzed VBA data, reviewed federal regulations and VBA procedures, conducted site visits, and interviewed VBA officials. VBA's operational controls do not adequately ensure that staff schedule or conduct disability reevaluations as necessary; however, VBA is planning to improve some of the controls. VBA claims processing software does not automatically establish or prompt regional office staff to schedule a time--known as a diary date--to determine whether a disability reevaluation should proceed. Consequently, some cases that require a reevaluation may never receive it. After the diary date matures, staff perform a preliminary review of a veteran's claim file to determine if a more comprehensive reevaluation should be conducted. If staff determine during this review that a reevaluation is no longer needed, the reevaluation is cancelled. However, cancellations are not tracked or subject to quality assurance reviews to ensure adherence to program policies and procedures. VBA plans on improving some of its control mechanisms through its new claims management system, the Veterans Service Network (VETSNET), including developing the ability to track cancellations. However, VBA has no plans to include a prompt for scheduling reevaluation diary dates in VETSNET. VBA cannot effectively manage the disability reevaluation process because some of the data it collects are inconsistent and it does not systematically collect and analyze key management data. While VBA collects data on the amount of time regional offices take to conduct disability reevaluations, these data are not consistent because regional offices use different starting points for measuring timeliness. Also, VBA does not know the types of disabilities being reevaluated, the length of time before reevaluations are conducted, or the results of the reevaluations. As a result, VBA cannot ensure that it is effectively and appropriately using its resources.
VA provides tax-free compensation to veterans who have service-connected disabilities. The payment amount is based on a disability rating scale that begins at 0 for the lowest severity and increases in 10-percent increments to 100 percent for the highest severity. More than half of initial applicants claim multiple disabilities, and veterans who believe their disabilities have worsened can reapply for higher ratings and more compensation. For veterans who claim more than one disability, VA rates each claim separately and then combines them into a single rating. About two-thirds of compensated veterans receive payments based on a rating of 30 percent or less. At the base compensation level, these payments range from $98 per month at 10-percent disability to $288 per month at 30-percent disability. Base compensation for veterans with a 100-percent disability rating is significantly higher--$2,036 per month in 2000. Disability ratings are also used to determine eligibility for certain other VA benefits. For example, veterans with a 30-percent disability rating are entitled to an additional allowance for dependents, and those with higher ratings can become eligible for free VA nursing home care and grants to adapt housing for their needs. In addition, priority for VA health care is partly tied to disability ratings. VA has had long-standing difficulties in keeping up with its claims processing workload, resulting in increasing backlogs of pending claims. In fiscal year 1999, VA received approximately 468,000 compensation claims--about 345,000 of which were repeat claims. More than 207,000 claims were still pending at the end of fiscal year 1999--an increase of nearly 50 percent from the end of fiscal year 1996--and the average processing time was 205 days. Of the 207,000 pending claims, about 69,000 were initial claims, and about 138,000 were repeat claims. In its 1996 report, the Veterans' Claims Adjudication Commission observed that 56 percent of veterans with pending repeat claims were rated as 30-percent or less disabled. Questioning whether VA should expend a significant share of its resources processing claims for veterans who are already compensated and have relatively minor disabilities, the Commission raised the possibility of offering lump sum payments to veterans with minimal disabilities. Other federal agencies have established this type of payment program. For example, under DOD's disability program, mandatory lump sum payments are given to separating military personnel with less than 20 years of service and a disability rating of less than 30 percent, and the Department of Labor allows injured civilian federal employees to request lump sum payments for bodily loss or impairment instead of the scheduled duration of weekly payments. Six countries--Australia, Canada, Germany, Great Britain, Israel, and Japan--provide lump sum payments to at least some of their disabled veterans. Britain, Canada, Israel, and Japan make these payments to veterans with minor disabilities, while Germany supplements veterans' pensions with a lump sum payment for those whose ability to work has been severely restricted. For peacetime service, Australia pays lump sum compensation for noneconomic losses from permanent impairments; it also provides a lump sum payment for a reduced capacity to work, if the incapacity is likely to be stable and would otherwise entitle the veteran to only a relatively small weekly pension.
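As noted above, when a veteran has more than one disability, VA rates each condition separately and combines the results into a single rating expressed in 10-percent increments. In general terms, VA's combined ratings method applies each successive rating to the capacity not already rated, rather than simply adding percentages, and rounds the result to the nearest 10 percent. The sketch below illustrates that arithmetic; it is a simplified illustration for this report, not VA's official rating tool, and it ignores special rules such as the bilateral factor.

def combine_ratings(ratings):
    # Apply each successive rating to the remaining (unrated) capacity,
    # largest rating first, then round the total to the nearest 10 percent.
    remaining_capacity = 100.0
    for r in sorted(ratings, reverse=True):
        remaining_capacity -= remaining_capacity * r / 100.0
    combined = 100.0 - remaining_capacity
    return int((combined + 5) // 10 * 10)   # ratings are assigned in 10-percent steps

# Example: ratings of 50 percent and 30 percent combine to 65 percent,
# which rounds to a single 70 percent rating rather than 80 percent.
print(combine_ratings([50, 30]))   # 70
print(combine_ratings([10, 10]))   # 20

Because the combined value grows more slowly than a simple sum, veterans with several modest ratings often remain in the lower rating bands discussed above.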
Veterans' views captured through our survey and focus groups were based on the following features of both the lump sum and monthly payment options: Both types of payment--monthly and lump sum--would be tax-free. Under both types of payment, veterans would continue to be entitled to VA medical and other current benefits. Under the monthly payment system, veterans could reapply for increased payments for a worsening disability; under the lump sum system, veterans could not reapply for additional payments for a worsening disability for which a lump sum had been received. When the lump sum recipient dies, the surviving family would not have to repay any portion of the lump sum. Reactions to this hypothetical framework yielded no clear consensus among compensated veterans about whether a choice between monthly payments and a lump sum should be offered to newly compensated veterans. Among compensated veterans, 49 percent said they would definitely or probably support a lump sum option for newly compensated veterans, 43 percent said they would definitely or probably not support it, and 8 percent were unsure. Respondents whose views were "definite" were also about equally split--about 24 percent definitely supported offering a choice, and about 28 percent definitely opposed it (see fig. 1). Veterans' responses indicate that experience could influence interest in taking a lump sum payment. Among all veterans, 32 percent reported they would have been interested in taking a lump sum payment when first compensated had such an option been available. Half as many--16 percent--reported that, knowing what they know today, it would have been a good choice for them. This ratio was borne out among supporters of offering a lump sum choice--56 percent indicated they would have been interested in a lump sum payment, and 28 percent said it would have been a good choice for them. Age and severity of disability also seemed to influence the degree of interest in taking a lump sum payment. For example, among veterans aged 43 or younger, 46 percent reported they definitely or probably would have been interested in taking a lump sum payment, compared to 21 percent of veterans aged 61 or older. Similarly, among veterans whose current disability rating is 10 percent or less, 39 percent reported definite or probable interest in a lump sum, compared to 22 percent with disability ratings of 40 percent or more (see fig. 2). Younger, more recently rated, and less severely disabled veterans--groups that expressed greater interest--could be a better gauge of newly compensated veterans' interest in taking a lump sum payment because they may be more similar to potential recipients than are other veterans. Thus, if future newly compensated veterans are offered a lump sum option, the actual percentage of those interested in it could exceed the 32 percent found among current veterans. Although our results indicate some receptivity to a lump sum option, interest and support would likely depend on the specific design of the payment program. For example, among military retirees with 20 years of service--whose compensation is now a tax-free portion of their retirement pay--interest in a lump sum payment increased from 29 percent to 66 percent after they learned in the survey that the lump sum might be offered in addition to their full retirement pay.
Veterans and military personnel in our focus groups expressed considerable interest in knowing additional details about the proposed lump sum option--particularly about the lump sum payment amount. Others asked for clarifications about the program, such as whether there could be circumstances under which lump sum recipients could reapply for additional compensation. In reacting to the option, some indicated that they had made assumptions about the amount. Others felt they could not give an informed opinion or make a decision without more information--or the "fine print," as one individual put it. Some were skeptical and suggested that the lump sum option was a way for the government to cut VA benefits and reduce its obligations to those whose disabilities may get worse. Through our focus group sessions and discussions with veteran and military organizations, we found that veterans and military personnel perceive advantages and disadvantages of offering a lump sum option. However, information on the actual effects of lump sum payments on veterans' financial well-being is limited. While some studies have examined how recipients use lump sum payments, they do not address how likely lump sum recipients are to be financially advantaged or disadvantaged as a result of receiving a lump sum payment rather than monthly payments. Veterans and military personnel identified several advantages and disadvantages associated with a lump sum payment option (see table 1). These advantages and disadvantages generally weigh the benefit of financial flexibility against the risk of financial loss. Veterans and military personnel who said the lump sum payment would put recipients at risk of being less well off or unable to pay for basic necessities such as food and housing provided several reasons to support their perception. Some reported that most lump sum recipients--particularly younger veterans and those already in financial need--would not have adequate money management skills. For example, some said that recipients may squander the one-time payment before reaching old age. They also said that more lump sum recipients would spend rather than invest the money, and those who did invest would be at risk of making poor investments. These veterans and military personnel also expressed concern that the lump sum amounts would be inadequate to protect recipients from financial setbacks that could result from a progressive disability and the inability to reapply for a higher disability rating. Some were similarly concerned that the initial rating could be inaccurate or unfairly low or that the average life span on which the lump sum was calculated would be insufficient to support recipients who outlived this average. Finally, veterans and military personnel said that choice creates risk because information may be incomplete or biased, individual judgment may be poor, or both. Some said a lump sum option would actually lead to more poor judgments because people would find a large sum of money so immediately attractive that they would not adequately consider the long-term financial consequences of taking it. On the other hand, others said that there would be benefits to a lump sum payment option. For example, some said lump sums could be used to make investments or large purchases, such as a house or an education; settle debts; or start a business. In addition, veterans and military personnel said that the benefit of providing a choice outweighed any risks.
This high value placed on choice seems to underlie much of the option's support, since our survey indicated that, among veterans who supported the lump sum option, 28 percent thought in hindsight that a lump sum would have been the better choice for them. As one veteran said, "I don't believe that the lump sum option is a good idea, but it's America and veterans should have a choice." Another supporter of choice argued that, while a lump sum payment invested in stocks could be substantially reduced if the market falls, monthly payments could be routinely squandered. It was also pointed out that while a veteran who opted for a lump sum could outlive the average age used to calculate the payment, a veteran who chose monthly payments could die relatively young and therefore receive less total compensation. Moreover, focus group participants said that veterans who were fully informed about their options would have to take responsibility for the consequences of their choice. Little definitive information is available to validate perceptions about the potential financial effects on veterans taking a lump sum payment. Our review of the literature and inquiries about lump sum provisions for disabled veterans in several countries yielded very few studies on veterans receiving lump sum payments, and none addressing the long-term financial effects of such payments. We did find two qualitative accounts, provided to us by British and Australian officials, which told of financial difficulties among foreign disabled veterans who received lump sum compensation before World War II. In 1939, the British Ministry of Pensions stopped allowing veterans to convert their disability pensions into lump sum payments because it found that some recipients had sustained serious financial losses, particularly through business ventures. Allowing conversions of pensions to a lump sum has never been reinstated under the British War Pensions Scheme, but lump sums are paid for lower-rated disabilities. In Australia, a lump sum provision was discontinued when some impoverished World War I veterans returned for pension benefits after exhausting their lump sum payments. While Australia's act covering service during armed conflicts still does not provide for lump sum disability compensation, a separate act directs lump sum compensation for certain disabilities incurred during peacetime service, on essentially the same basis as for other government employees. Although not addressing long-term financial effects or disabled veterans, certain studies examine recipients' use of lump sum payments from other sources, indicating different ways recipients would typically manage a lump sum. In general, studies of retirement distributions suggest that many factors affect how individuals use lump sum payments. For example, one recent study of lump sum retirement distributions reported that recipients under age 25 spent almost half of their money on everyday expenses and consumer items, compared to older age groups who spent 22 percent or less. Another study reported that the recipient's age, education and income level, and the payment amount are influential factors, but together these factors explain less than 20 percent of the variation in saving behavior among lump sum recipients. However, findings from these studies depend on the definitions of savings, investment, and spending used, and may have less relevance for different populations and lump sum programs.
Some veterans and active duty personnel we spoke with suggested certain strategies--some of which have been used in other lump sum payment programs--to minimize the potential risks associated with receiving a one-time payment. However, others had concerns about whether they would be effective, feasible, or fair. To help ensure that beneficiaries make a wise choice, some veterans and military personnel suggested that VA develop an information and education plan--one that would fully inform beneficiaries of the benefits and risks of the two payment types and project for individual beneficiaries the likely effects of each. They further suggested that such information and education be provided well before the time the choice would be made to allow beneficiaries sufficient time to consider their options. To ensure unbiased information, it was also suggested that independent counseling on the payment choices be encouraged, as well as a second medical opinion on the disability. However, some expressed concern about VA's ability to develop an effective information and education strategy. This skepticism was based on their perceptions that the government's past efforts to inform and educate veterans about benefits were inadequate and a lump sum decision would involve complex assessments of future disability, individual financial situations, and investment risks. Veterans and military personnel also suggested strategies that they believe would limit the risk of forgone compensation or other benefits if a veteran's disability were to progress. For example, one strategy would be to delay veterans' choice of a lump sum until they are comfortable with the stability of their condition. Others said that the progression--or stability--of an individual's disability could not be predicted accurately enough to allow fully informed choice. According to VA and medical experts in disability evaluation, definitive medical knowledge is often insufficient to fully inform veterans of whether their disabling condition would progress or remain the same. The course of disability is highly individualized and can be complicated by multiple impairments. The limited historical data from our survey suggest that while some veterans get higher ratings over time for worsening disabilities, others get lower ratings for improved disabilities. For example, among veterans who received their first ratings before 1970, about 21 percent reported higher current ratings than initial ratings, and almost 17 percent had lower current ratings. Another strategy veterans and military personnel suggested would be to estimate veterans' lifetime disabilities and use these estimates in calculating their lump sum payments. The VA Inspector General has similarly proposed that VA revise its disability rating criteria to reflect expected lifetime impairment. While projecting the progression of an individual's disability over his or her lifetime would prove difficult, determining average progression factors using VA historical data may be possible. Another suggested strategy would be to allow reevaluations of lump sum recipients' disability ratings--not for the purpose of providing additional payment but to determine their eligibility, and that of their dependents, for other VA benefits that are tied to disability ratings, such as medical care or survivor benefits. Some participants suggested, however, that disabled veterans should also be able to seek reevaluation for additional compensation payments if their disability progresses.
Other strategies for reducing the financial risk associated with a lump sum payment were aimed at encouraging responsible financial management. For example, focus group respondents recommended financial counseling and education; investment options, such as in the federal government's Thrift Savings Plan; or payment allocations, such as paying lump sums in allotments or initially putting the money into a trustee account. It was also suggested that returns on investments could be tax-free. However, concerns were also raised about these strategies, including perceptions that the government would not be able to successfully instruct people on how to manage their money and that these strategies would increase government bureaucracy. Some of the suggested strategies were aimed at protecting vulnerable populations from financial risk. Specifically, some suggested that a lump sum payment option not be offered to those who would be least able to manage the money well--such as those who have been declared incompetent or have a history of significant psychological disabilities--or that the lump sum payment be assigned to someone who could manage the money for the payee. One concern that was raised with this type of strategy was that there would not be enough time to declare a newly compensated veteran incompetent or in need of a representative before the veteran was offered a choice. A similar strategy suggested by veterans and military personnel was to limit the lump sum option to the least financially vulnerable--that is, veterans who would not be likely to suffer great economic hardship if they were to lose the lump sum payment. These veterans would include those who would receive small monthly compensation payments, have stable or less severe disabilities, or have alternative income sources. However, respondents raised concerns that any safeguard restricting who would be offered the lump sum option could be viewed as unfair. In its written comments, VA highlighted our points that veterans had mixed views about offering this hypothetical lump sum program and that further development of program details could affect veterans' views. (The full text of VA's comments is presented in app. II.) We are sending copies of this report to the Honorable Hershel W. Gober, Acting Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-7101 or one of the GAO contacts listed in appendix III. Other key contributors to this report are also listed in this appendix. To gain an understanding of support for and interest in lump sum disability payments as a potential option for veterans, we surveyed and met with a variety of interested parties, including veterans currently receiving VA disability payments, active-duty service members, and military and veteran service organizations. For our survey, we mailed a questionnaire asking for views about a possible lump sum option to a representative sample of 2,481 veterans who currently receive disability compensation and reside at a domestic address. During pretests of the survey questionnaire with over 30 veterans, we discussed the perceived advantages and disadvantages of a lump sum option and what might be done to mitigate the disadvantages.
We also discussed reactions to the option in focus groups of veterans and active-duty service members in the Air Force, Army, Marines, and Navy. To determine what is known about the impact on recipients of receiving a lump sum, we reviewed relevant literature on lump sum payments and communicated with representatives from other federal agencies and foreign countries that provide some form of lump sum payment to civilian and military beneficiaries. We performed our evaluation from August 1999 through November 2000 in accordance with generally accepted government auditing standards. The objective of our survey was to learn about views veterans with service-connected disabilities have about lump sum payments as a compensation option. To elicit their views, we asked veterans to react to a hypothetical program--offering newly compensated veterans a choice between monthly payments and a lump sum payment--with the following features: (1) Monthly payments and the lump sum payment would both be tax-free. (2) Regardless of the type of payment chosen, veterans would continue to be entitled to VA medical and other current benefits. (3) Under the monthly payment system, veterans could reapply for increased payments for a worsening disability, but under the lump sum system, veterans could not reapply for additional payments for a worsening disability for which a lump sum had been received. (4) When the lump sum recipient dies, the surviving family would not have to repay any portion of the lump sum. The survey questions asked veterans whether VA should offer veterans a choice between monthly payments and a lump sum when they are first granted compensation and whether they would have been personally interested in a lump sum had it been available at that time. They were also asked, with the advantage of hindsight, which option would have been better for them. We pretested questions in group and individual discussions with veterans in Denver and Littleton, Colorado; Baltimore, Maryland; Washington, D.C.; and Fairfax and Fredericksburg, Virginia. These sites were chosen because of their proximity to our staff. A sample of 2,484 veterans was drawn from VA's Compensation and Pension file as of October 23, 1999. Our population of interest was veterans currently receiving compensation for a service-connected disability who resided at domestic addresses. To minimize the probability of sending the questionnaire to veterans unable or incompetent to participate in the survey, we excluded veterans from the population with two or more psychological disabilities or a single psychological disability rated 60 percent or more, those whose records indicated incompetence, and those residing in nursing homes. After these exclusions, and also excluding those with nondomestic addresses, our sampled population covered about 94 percent of all compensated veterans in VA's file. In addition to determining the level of support among compensated veterans for a lump sum option, we also wanted to learn from our survey something about what the interest in a lump sum might be if such a choice were offered. We wanted to be able to estimate the level of interest for specific categories of veterans. Therefore, we oversampled various groups to ensure that we could construct these estimates of interest within an acceptable margin of error. The population was stratified by the characteristics in table 2.
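The oversampling, weighting, and margin-of-error approach described in this methodology can be illustrated with a short sketch. The strata, population counts, and support rates below are hypothetical placeholders (the actual stratification characteristics appear in table 2, which is not reproduced here); only the overall response count of 1,921 and the 49 percent support figure used in the margin-of-error line come from the survey results reported above. The formula shown ignores finite-population and design effects for simplicity.

import math

# Hypothetical strata: population size, completed responses, and the share of
# respondents in that stratum who supported offering a lump sum choice.
strata = [
    {"population": 1_500_000, "responses": 600, "support": 0.45},
    {"population":   600_000, "responses": 700, "support": 0.52},   # oversampled group
    {"population":   200_000, "responses": 620, "support": 0.58},   # oversampled group
]

# Weight each stratum by its share of the population so that oversampled
# groups do not dominate the overall estimate.
total_pop = sum(s["population"] for s in strata)
weighted_support = sum(s["support"] * s["population"] / total_pop for s in strata)

def margin_of_error(p, n, z=1.96):
    # Approximate 95-percent margin of error for an estimated proportion.
    return z * math.sqrt(p * (1 - p) / n)

print(round(weighted_support, 3))              # population-weighted estimate of support
print(round(margin_of_error(0.49, 1921), 3))   # roughly +/- 0.022 for the full sample

The roughly 2-percentage-point result for the full sample is consistent with the report's statement that most sampling errors are well under 5 percentage points; estimates for individual strata, with smaller sample sizes, carry wider margins.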
Before surveying, we checked our sample file against VA records to identify veterans who had been terminated from the compensation rolls subsequent to our sample draw. We also visually inspected the addresses and dropped from the sample three veterans whose mailing address indicated that they should have been excluded--that is, the address suggested the likelihood that the veteran was incapable of responding. A total of 2,481 questionnaires were mailed for our survey. Our survey response rate is based on the proportion of questionnaires that were returned with usable information. We mailed our questionnaire in January 2000. A second mailing to nonrespondents occurred approximately a month later. We accepted returned questionnaires through April 26, 2000. Of the sample, 1,921 usable questionnaires were returned, for an overall response rate of 78 percent. For 16 of the sampled veterans, we received notification that the veteran had died or was ineligible for the survey. These cases were removed from the sample. Table 3 details the final disposition of the questionnaires mailed. Response rates were above 65 percent for each stratification level in the sample. To produce our estimates of responses in the population from which we sampled, we weighted each respondent's answers based on our stratification scheme. All sample surveys are subject to sampling error, that is, the extent to which the survey results differ from what would have been obtained if the whole population had received and returned the questionnaire. Measures of sampling error are defined by two elements, the width of the confidence interval around the estimate (sometimes called precision of the estimate) and the confidence level at which the interval is computed. The confidence interval refers to the fact that estimates actually encompass a range of possible values, not just a single point. This interval is often expressed as a point, plus or minus some value (the precision level). For example, an estimate of 75 percent plus or minus 2 percentage points means that the true population value is estimated to lie between 73 percent and 77 percent, at some specified level of confidence. The confidence level of the estimate is a measure of the certainty that the true value lies within the range of the confidence interval. We calculated the sampling error for each statistical estimate in this report at the 95-percent confidence level. This means, for example, that if we repeatedly sampled veterans from the same population and performed the analyses again, 95 percent of the samples would yield values that fall within the confidence intervals of our estimates. Sampling errors in this report range from 1 to 7 (plus or minus) percentage points, with most being less than 5 percentage points. In addition to sampling errors, surveys can also be subject to other types of nonsystematic (noise) or systematic (bias) error that can affect results, such as differences in interpretation of the question or respondents' inability or unwillingness to provide correct information. Unlike sampling errors, the magnitude of the effect of nonsampling errors is not normally known; however, steps can be taken to minimize their impact. One potential source of nonsampling error that may be especially important in this survey is questionnaire construction. Our early pretests revealed that compensation benefits can be an emotion-laden subject for veterans.
Some veterans had strong, unanticipated reactions to language used to phrase the question about offering a choice of payments. To ensure that the question was as clear and unbiased as possible, we did extensive pretesting of the questionnaire, making modifications based on veterans' comments. We also consulted with an outside expert in questionnaire design, who reviewed our survey instrument and provided recommendations. In addition, veterans found it difficult to respond to questions about a lump sum choice without details about what that choice might entail, especially the amount of the lump sum payment. It may be that given a more detailed and specific lump sum option, a larger or smaller proportion of veterans would support VA's offering veterans a choice. The magnitude of the effect of these potential biases, if any, on survey results is unknown. To more fully understand why surveyed veterans supported or opposed a lump sum option, we conducted focus groups with veterans receiving disability compensation at VA Medical Centers in Cheyenne, Wyoming, and Grand Junction, Colorado. To gauge the opinions of people who could be affected by such a change in policy, we also conducted focus groups with active-duty military members in all four services. We spoke with members of the Air Force at Buckley Air National Guard Base, Colorado; Army at Fort Carson, Colorado; Marine Corps at Quantico, Virginia; and Navy at Norfolk, Virginia. Since compensated veterans in our survey pretests had also discussed their reasons for support or opposition, we considered their input in our analysis. The sites for veterans' focus groups were in smaller cities, in part because we had already gathered reactions of veterans in some large metropolitan areas during our pretests. Regardless, findings from focus groups and pretest respondents cannot be generalized to larger populations. To obtain information on the experiences of other government agencies offering lump sum payments to the disabled, we contacted officials administering the Department of Labor's Federal Employees Compensation, State Employment Compensation, Black Lung, and Longshore and Harbor Workers' programs. We also obtained information from the Social Security Administration about its Disability Insurance program and from DOD about its disability separation and retirement benefits. To capture the experiences of foreign governments with this type of payment, we reviewed the compensation programs for disabled veterans in Australia, Canada, Israel, Germany, Great Britain, and Japan. We selected these countries because they were the focus of lump sum discussions in the 1999 Report of the Congressional Commission on Servicemembers and Veterans Transition Assistance. We contacted officials from these countries directly or through the Department of State. Both Germany and Japan provided information about their programs in their native languages. We used translators from the Department of State to translate their responses into English. In addition to those named above, the following staff made key contributions to this report: Sandra Davis, Linda Diggs, Deborah Edwards, Susan Lawes, Karen Sloan, Vanessa Taylor, Joan Vogel, and Greg Whitney.
Currently, veterans who are disabled while serving their country are compensated for the average reduction in earning capacity. Monthly compensation is based on the severity of a veteran's disability. After an initial rating for compensation has been determined, veterans who believe their condition has worsened may file a claim with the Department of Veterans Affairs (VA) to reevaluate their disability rating. These repeat claims outnumbered initial disability applications by nearly three to one in fiscal year 1999, dominating VA's workload. To help reduce the volume of repeat claims, the Veterans' Claims Adjudication Commission asked Congress to consider paying less severely disabled veterans compensation in a lump sum. GAO surveyed veterans who are now being compensated on their reaction to a lump sum option. Veterans had mixed views. Many veterans and military personnel could see advantages and disadvantages to this new option. They also suggested some strategies that they believed could minimize the financial risks a lump sum payment option might introduce.
Strategic planning provides an important mechanism for HHS to establish long-term goals and strategies to improve the performance of its many health care workforce programs. HHS, as with all executive branch agencies, is required by GPRAMA to engage in performance management tasks, such as setting goals, measuring results, and reporting progress toward these goals. As one of its GPRAMA responsibilities, HHS issues a strategic plan at least every 4 years. HHS's most recent plan covers fiscal year 2014 through fiscal year 2018 and describes four broad strategic goals: 1. Strengthen health care. 2. Advance scientific knowledge and innovation. 3. Advance the health, safety, and well-being of the American people. 4. Ensure efficiency, transparency, accountability, and effectiveness of HHS programs. Within each strategic goal, the plan presents specific objectives that are linked to a set of strategies for accomplishing that objective, as well as a representative set of performance goals or measures for assessing progress. (See fig. 1.) Annually, HHS releases a performance report that presents progress on the performance measures that contribute to its strategic plan. According to HHS officials, the performance measures included in this performance report are a small but representative subset of HHS's work as a whole and of the performance measures that HHS tracks annually and publishes through other means. HHS's strategic planning efforts are led by staff offices that report directly to the Secretary of Health and Human Services. Specifically, the office of the Assistant Secretary for Planning and Evaluation (ASPE) is responsible for the strategic plan, and the office of the Assistant Secretary for Financial Resources (ASFR) is responsible for monitoring performance of HHS's various efforts and programs. ASPE and ASFR are to coordinate with HHS agencies to facilitate department-wide strategic planning and performance measurement efforts looking across all of HHS's programs and initiatives, including health-workforce related efforts. These agencies include HRSA, CMS, ACF, the Indian Health Service (IHS), and the Substance Abuse and Mental Health Services Administration (SAMHSA). (See fig. 2.) In addition, HHS has several advisory bodies that make recommendations to the Secretary about several topics. For example, the Council on Graduate Medical Education (COGME), supported by HRSA, provides an ongoing assessment of physician workforce needs, training issues, and financing policies and recommends appropriate federal and private sector efforts to address identified needs. In fiscal year 2014, HHS obligated about $14 billion to 72 health care workforce education, training, and payment programs administered primarily through five of its agencies. HRSA manages the most programs related to health care workforce development, while CMS, ACF, IHS, and SAMHSA also oversee other such programs. HRSA managed 49 of the 72 HHS workforce programs in fiscal year 2014. These programs generally provide financial assistance to students and institutions--in the form of scholarships, loan repayments, or grants--to encourage students to train and work in needed professions and regions. These programs accounted for about 8 percent of HHS's $14 billion in obligations for workforce development programs. In contrast, CMS managed 3 GME payment programs that together accounted for about 89 percent of this funding. 
These payments reimburse hospitals for the cost of training medical residents and are calculated, in part, based on the number of residents at the hospital. Some of these payments are included as part of each payment that CMS makes to hospitals that train residents for the care they provide to eligible patients. CMS also manages 1 additional payment program that supports the training of nurses and allied health professionals, which accounts for about 2 percent of total HHS workforce funding. The remaining agencies collectively managed 19 programs, accounting for about 1 percent of total HHS workforce funding. (See fig. 3.)

In addition to funding health care workforce programs, HHS examines information about the future demand for health care services, as well as the related supply and distribution of health professionals. Specifically, HRSA periodically publishes health care workforce projections regarding the supply of and demand for the health care workforce. We previously found that the agency's projections had not been updated and recommended that the agency develop a strategy and time frames to update national health care workforce projections regularly. Following that recommendation, HRSA awarded a contract to develop a new model, which, according to officials, has enabled more accurate estimates as well as projections for a wider array of health professions. The agency subsequently issued many of those projections in 2014 and 2015.

HHS's current strategic plan lacks specificity regarding how health care workforce programs contribute to its strategic plan goals. Additionally, the performance targets that HHS publicly reports do not comprehensively assess the department's progress toward achieving its broader strategic plan goals regarding the health care workforce. Finally, the department engages in some coordinated planning, but lacks comprehensive planning and oversight to ensure that its many workforce efforts address identified national needs.

HHS's strategic plan includes broad strategies to which the department's health care workforce efforts relate, but these strategies do not focus specifically on workforce issues. For example, the current 2014-2018 HHS strategic plan does not have a goal or objective specifically dedicated to the health care workforce. Instead, HHS officials stated that workforce development efforts are distributed across various broad access and quality objectives within the plan's goal of strengthening health care. Specifically, the plan includes seven strategies that contain a health care workforce component and that are distributed among three broad objectives. These strategies generally do not explicitly reference health care workforce training or education, but instead use broad statements that concurrently encompass numerous different components and methods for improving access to and quality of health care. For example, as part of one strategy, HHS seeks to improve access to comprehensive primary and preventive medical services in historically underserved areas and to support federally funded health centers. While not explicit in the plan, HHS officials indicated that developing the health care workforce is one element that contributes to these strategies. (See app. II for more details about the strategic plan.)

Past HHS strategic plans included more specific strategies and objectives related to the department's workforce programs.
However, HHS officials told us that the department decided to remove most of the workforce-specific language when developing the current plan. The prior HHS strategic plan (2010-2015) included a dedicated goal related to workforce development, with one objective and six corresponding strategies that were specific to health care workforce planning. Further, while an earlier HHS strategic plan (2007-2012) did not have a workforce-specific strategic goal, it nonetheless had one objective and corresponding strategies on health care workforce planning, similar to the detailed strategies in the 2010-2015 plan. HHS officials told us that in developing the current strategic plan, the department removed most of the workforce-specific language because it determined that workforce development was an intermediate step in achieving the department's broader strategic goals of improving access to and quality of health care. They also noted that, because of the size of HHS--for fiscal year 2015, HHS spent over $1 trillion through its operating divisions--its strategic plan has to be high level and not too specific about any one topic.

Rather than include such specificity in HHS's strategic plan, the department expects that the strategic plans of its agencies will contain additional detail about their health care workforce efforts, consistent with their missions, budgets, and authorities. However, we found that these agency plans did not always contain this additional detail and were not always updated in a timely manner. For example, while HRSA's 2016-2018 strategic plan contains a dedicated health care workforce goal with three related objectives and 11 strategies, CMS's current 2013-2017 plan did not elaborate on any health care workforce issues presented in the HHS strategic plan, nor did it contain other related strategies. Moreover, IHS's strategic plan was last updated in 2006, and agency officials stated that the agency was not working to update the plan. (See app. III for more details about agency strategic plans.) Consistent with GPRAMA leading practices, for strategic planning to achieve meaningful results on a crosscutting topic like workforce development, it is important that planning efforts across these agencies be coordinated with those of the department.

In addition to developing its quadrennial strategic plans, HHS has previously developed other planning documents and workforce development efforts. For example, HHS has published a series of strategic initiatives outlining the Secretary's priorities. In 2014, one of these priorities was developing the health care workforce. This document identified specific strategies for developing the health care workforce, such as targeting resources to areas of high need and strengthening the primary care workforce. The paper also discussed the need to reform payment policy to better support emerging health care delivery models. However, as of November 2015, HHS had not released an update to this series of initiatives since the new Secretary assumed office in June 2014. In addition, previous presidential annual budgets have included proposals for improving HHS's health care workforce programs. Moreover, in conjunction with the President's fiscal year 2015 budget, HHS released a report in February 2014 providing additional information about the fiscal year 2015 proposals.
The report described the challenges to ensuring more diversity among and a better distribution of health care professionals and explained the costs and benefits associated with each of the proposals.

HHS has identified a subset of performance measures that are intended to represent the effect that all existing health care workforce programs have on the department's broader goal of strengthening health care by improving access and quality. Among other places, HHS reports these workforce performance measures in its annual performance plan and report, which indicate progress toward achieving HHS strategic planning goals. From fiscal year 2014 to 2016, the number of health care workforce performance measures tracked by HHS within its annual performance report dropped from five to two, and they are focused on a small percentage of programs. Specifically, for fiscal year 2016, the two measures assess progress related to several HRSA programs--4 National Health Service Corps (NHSC) programs and 14 Bureau of Health Workforce primary care training programs--that combined represent about 3 percent of the overall funding for all HHS health care workforce programs in fiscal year 2014. According to HHS officials, the department chooses not to include in the annual performance report all of the performance measures tracked by HHS agencies. Officials said that these two measures were chosen to be tracked in the strategic plan and annual report because HHS identified them as part of a subset of measures that are representative of the department's overall programming efforts. However, these measures are specific to the 18 programs and do not fully assess the adequacy of the department's broader workforce efforts.

Moreover, HHS has no stated targets to assess the effectiveness of existing health care workforce programs in achieving the department's broader goal of strengthening health care by improving access and quality. As part of a larger measurement project, HHS tracks workforce data that it indicates are closely related to the supply of trained health care providers. Specifically, HHS tracks (1) the percentage of individuals that have a usual source of medical care and (2) the number of primary care practitioners (such as physicians, nurse practitioners, and physician assistants). However, neither of these broad workforce indicators has a stated target related to necessary provider levels. GPRAMA leading practices indicate that for performance measures to be effective, agencies need to set specific targets. While it is important to have specific performance measures that assess individual programs, the health care workforce is an issue that straddles many different programs and HHS agencies, and therefore it is also important to have broader measures that assess how these individual programs contribute to the department's overall effectiveness in developing the health care workforce to improve access to care. In the absence of these broader measures, HHS lacks information to help it comprehensively determine the extent to which programs are meeting the department's goal of strengthening health care or to identify gaps between current health care workforce programs and unmet health care needs.

According to HHS, over three-quarters of the 72 health care workforce programs had performance measures tracked by the relevant HHS agency. HHS indicated that its smaller health care workforce programs generally had performance measures and targets.
For example, HRSA's 7 largest health care workforce programs, which accounted for about 5 percent of HHS's health care workforce obligations in fiscal year 2014, each had related performance measures. For most of these 7 programs, HRSA reported meeting or exceeding its performance targets in fiscal year 2014. For example, according to HRSA, the NHSC and Advanced Nurse Education programs exceeded their performance targets for fiscal year 2014, and the Children's Hospital GME program met or exceeded its fiscal year 2014 performance targets. (See app. I for more information about the performance measures of the 12 largest health care workforce programs.)

However, HHS lacks performance measures related to workforce development for its largest programs. Specifically, HHS's 2 largest health care workforce programs--the Medicare GME programs run by CMS, accounting for 77 percent of obligations in fiscal year 2014--did not have performance measures directly aligned with areas of health care workforce need identified in HRSA's workforce projection reports. According to HHS officials, the GME programs are not aligned with the workforce objectives in HHS's strategic plan. Leading practices identified in prior GAO work show that for individual programs to address strategic goals and objectives, it is important that the programs be aligned with these goals and that these goals influence the daily operation of the programs.

HHS does not have a consistent and ongoing effort to coordinate all of the workforce planning efforts and resources that are distributed across the department's various offices and agencies. According to HHS officials, HHS delegates responsibility for many of its health care workforce planning efforts to its component agencies, based on each agency's mission and expertise. These agencies also collaborate with each other occasionally on various health care workforce development efforts, such as projection reports and workforce programs. Officials said that HHS's coordination of these efforts generally occurs during the department's larger planning and budget process. Within the HHS Office of the Secretary, ASFR coordinates the annual budget development process and ASPE coordinates the quadrennial strategic planning process; in doing so, both lead activities to include workforce development in these efforts. In addition, ASPE occasionally collaborates with HHS advisory bodies, such as COGME, supports research on the health care workforce, and periodically creates interagency workgroups to discuss specific priorities that support the development of its budget proposals. However, outside of these strategic planning and budgetary processes, ASPE does not have an ongoing formal effort to coordinate the workforce planning efforts of HHS agencies. Additionally, it does not regularly monitor or facilitate HHS interagency collaborations on workforce efforts or fully communicate identified gaps to stakeholders. A separate entity within the Office of the Secretary, ASFR, coordinates the monitoring of those performance measures for health care workforce programs that the department tracks in its annual performance plan and report. Similar to ASPE, according to officials, ASFR engages in these coordinating and monitoring efforts primarily within the context of developing the annual performance report. To achieve meaningful results, GPRAMA leading practices emphasize the importance of coordinating planning efforts across agencies and departments.
Leading practices state that when multiple federal programs that address a similar issue are spread over many agencies and departments, a coordinated planning approach is important to ensuring that efforts across those agencies are aligned. Specifically, a coordinated planning approach is essential to (1) setting targets for these workforce programs and other efforts; (2) identifying and communicating whether there are gaps between existing workforce programs and future needs; and (3) determining whether agencies have the necessary information to assess the reach and effectiveness of their programs. While coordination at the program level is important, it does not take the place of, or achieve the level of, overall department coordination that we have previously found to be key to success.

Recently, multiple stakeholders reported that a more coordinated federal effort--possibly managed at the department level--could help to ensure a more adequate supply and distribution of the health care workforce, especially given changes in the delivery of care. For example, in examining federal GME funding, the Institute of Medicine (IOM) and COGME each stated that the GME program lacks the oversight and infrastructure to track outcomes, reward performance, and respond to emerging workforce challenges. IOM's recommendations for reforming GME include developing a strategic plan for oversight of GME funding, as well as taking steps to modernize GME payment methods based on performance to ensure program oversight and accountability. Both entities suggested that GME payments are neither sustainable in the long run nor the most effective method for developing the health care workforce to meet future projected health care needs. In each case, the organization recommended the creation of a new entity to provide oversight of national workforce efforts. Congress also recognized the need for greater coordinated attention to workforce planning when it authorized the creation of the National Health Care Workforce Commission. Although the Commission has not received an appropriation, it was authorized to provide advice to HHS and Congress about workforce supply and demand and to study various mechanisms to finance education and training. According to HHS officials, to the extent possible, the department has been able to complete work related to a number of the Commission's planned functions, but certain critical functions remain unaddressed. For example, the Commission would have been required to submit an annual report to Congress and the administration that would include, among other things, information on the implications of current federal policies affecting the health care workforce and on the workforce needs of underserved populations. Without a comprehensive and consistently coordinated approach, it will be difficult for HHS to ensure that workforce funding and other resources are aligned with future health needs and to provide effective oversight of this funding.

Some of HHS's largest health care workforce programs do not target areas of identified workforce need. While HHS's ability to adjust existing programs to target areas of emerging need is subject to certain statutory limitations, the department has taken some steps and proposed new authorities that would allow it to better align certain programs to areas of national need. However, the proposed authorities may not fully address the alignment of HHS's largest workforce programs with national needs.
Our review showed that although HHS's health care workforce programs support education and training for multiple professions, the largest programs do not specifically target areas of workforce need. The two CMS Medicare GME programs, which accounted for 77 percent of HHS's fiscal year 2014 obligations for health care workforce development, support hospital-based training of many different types of physician specialties. However, HHS cannot specifically target existing Medicare GME program funds to projected workforce needs--such as primary care and rural areas--because the disbursement of these funds is governed by requirements unrelated to workforce shortages. As a result, the majority of Medicare GME funding is disbursed based on historical patterns, and the residency slots this funding supports are most highly concentrated in northeastern states. However, the areas of emerging health workforce need identified by HRSA in its health care workforce projection reports include the supply of primary care physicians, as well as various physician and non-physician providers in rural communities and in ambulatory settings across the country. According to the RAND Corporation, between 1996 and 2011, the number of primary care residents increased 8.4 percent, while there was a 10.3 percent increase in other specialties and a 61.1 percent increase in subspecialty residents, such as cardiology.

In contrast, HHS's smaller health care workforce programs typically target emerging health care workforce needs. For example, HRSA's five largest health care workforce education and training programs, which accounted for about 4 percent of HHS's workforce obligations in fiscal year 2014, targeted the health professions identified as areas of need, such as primary care physicians and nursing professions, including registered nurses.

According to HHS officials, the department generally has limited authority to better target workforce programs to address projected health care workforce needs. Our review showed that funding for many of the largest HHS health care workforce programs--for example, CMS's Medicare GME programs--is governed by statutory requirements unrelated to workforce needs. Specifically, the funding formulas for 5 of the 12 largest programs that we reviewed do not provide HHS with authority to realign funding for these specific programs to address projected workforce needs, while the remaining 7 programs provide the department with some flexibility to realign funding. For all 7 of these programs, HRSA officials reported that the agency has been able to exercise some flexibility in reallocating resources within a program or between similar programs, subject to existing law. HHS officials stated that although they have no authority to realign funding for programs governed by formula, the department has utilized other authorities to better align resources with health care needs. According to these officials, the agency has used demonstration authorities to test new payment models for the Medicaid GME program. HHS officials identified several demonstration projects approved by CMS for states and residency programs using their GME funds.
For example, the State Innovation Model demonstration at the Center for Medicare & Medicaid Innovation is providing financial and technical support to states for the development and testing of state-led, multi-payer health care payment and service delivery models that are intended to improve health system performance, increase quality of care, and decrease costs for Medicare, Medicaid, and Children's Health Insurance Program beneficiaries--and for residents of participating states. According to HHS, a number of states are using their participation in the demonstration as an opportunity to reform GME and make investments in physician training more accountable. Both Vermont and Connecticut have been awarded funding as "model testing states" based on their state innovation proposals. HHS indicated that these states proposed innovative GME mechanisms, outside of those traditionally supported by CMS, to help better meet their health workforce needs. HHS officials also reported that the Center for Medicare & Medicaid Innovation conducted demonstrations to test new approaches to paying providers. For example, in 2012, the center initiated the graduate nurse education demonstration to increase the number of graduate nursing students enrolled in advanced practice registered nurse training programs. The demonstration increased reimbursement for their clinical training by $200 million over 4 years.

HHS has proposed additional authorities intended to help address changing health care workforce needs, although they may not fully align the department's programs with national needs. In both fiscal years 2015 and 2016, the President's budget proposed to reduce a portion of Medicare's GME payments made to hospitals by 10 percent. It also proposed investing in a new program to provide additional GME funding for primary care and rural communities. The 10-year, $5.3 billion "targeted support" grant to be run by HRSA would build upon the work of HRSA's Teaching Health Centers Graduate Medical Education Payment Program and would train 13,000 physicians in primary care or other high-need specialties in teaching hospitals and other community-based health care facilities, with a focus on ambulatory and preventive care. According to HHS officials, these proposals are intended to serve as a first step to improve the alignment of GME funding with health care needs. The fiscal year 2016 budget also proposed the continuation of programs, authorized under the Patient Protection and Affordable Care Act (PPACA) and the Health Care and Education Reconciliation Act of 2010 (HCERA), to provide primary care providers with higher Medicare and Medicaid payments as a way to incentivize health care providers to offer primary care services to Medicare and Medicaid beneficiaries. HHS officials indicated that implementing these proposals would help officials identify any successes and gaps and, if necessary, develop additional proposals to supplement the programs. However, while implementing these proposed programs would provide greater funding to some areas of need, HHS officials stated that they do not know the full extent to which these proposals are sufficient to address identified health care workforce needs. While HHS officials indicated that the department planned to determine their sufficiency and identify remaining gaps after these proposals were enacted, the department does not have a comprehensive plan from which to evaluate the impact of the new programs or make a complete assessment of any gaps.
External stakeholders--such as IOM, the Medicare Payment Advisory Commission (MedPAC), and COGME--have identified various reforms to HHS's largest health care workforce programs to better target these programs to emerging areas of health care workforce need. HHS officials told us that some of the reforms proposed by these stakeholders were the basis for some of the department's past budget and legislative proposals. For example, IOM convened a panel of experts that recommended restructuring Medicare GME payments to help align the programs with the health needs of the nation. In its 2014 report, IOM proposed, among other things, (a) developing a new center at CMS to administer GME program payment reform and manage demonstrations of new GME payment models and (b) creating a transformation fund within the GME programs to finance payment demonstrations. In a report released in 2014, COGME concurred with IOM that there is a need to reform GME payments and, among other things, recommended expanding the GME program's clinical training environment into ambulatory and community settings. While an advocate for teaching hospitals--the Association of American Medical Colleges (AAMC)--has cautioned against reductions in GME funding, it has also proposed reforms to the GME program and Medicare payment policy to bolster primary care training and reduce geographic disparities.

While HHS maintains that developing an adequate supply and distribution of the health care workforce is a priority for the department, it has removed explicit language about goals and objectives related to workforce issues from its current strategic plan, which is the primary planning mechanism for addressing this issue. HHS's lack of specific planning goals for the health workforce in its current strategic plan makes it challenging for the department to plan and to maintain accountability. Moreover, the department does not currently have a comprehensive set of performance measures and targets to assess whether its workforce efforts and the specific individual workforce programs managed by its agencies are collectively meeting the department's broader strategic goal of strengthening health care by improving access and quality. Because the responsibilities for HHS's workforce efforts, programs, and resources are dispersed among many agencies, it is important that HHS have a department-wide approach regarding its strategies and the actions needed to ensure an adequate supply and distribution of the nation's health care workforce. For example, while HRSA manages the largest number of workforce programs and the development of workforce projections, the vast majority of workforce development funds are administered by CMS--for which workforce planning is not a key mission. It is also important for HHS to comprehensively assess the extent to which its many workforce programs, collectively, are adequate to address changing health care workforce needs. Multiple stakeholders have made recommendations to improve these programs. However, because the majority of workforce funds must be disbursed based on statutory requirements unrelated to projected workforce needs, HHS has limited options to retarget them. HHS has proposed additional authorities in the past, but these have not been enacted, and HHS officials acknowledge that these additional authorities may not be sufficient to fully address the existing program limitations identified by stakeholders.
Without a comprehensive and coordinated approach to program planning, HHS cannot fully identify the gaps between existing programs and national needs, identify actions needed to address these gaps, or determine whether additional legislative proposals are needed to ensure that its programs fully meet workforce needs. To ensure that HHS workforce efforts meet national needs, we recommend that the Secretary of Health and Human Services develop a comprehensive and coordinated planning approach to guide HHS's health care workforce development programs--including education, training, and payment programs--that includes performance measures to more clearly determine the extent to which these programs are meeting the department's strategic goal of strengthening health care; identifies and communicates to stakeholders any gaps between existing programs and the future health care workforce needs identified in HRSA's workforce projection reports; identifies actions needed to address identified gaps; and identifies and communicates to Congress the legislative authority, if any, that the department needs to implement the identified actions.

We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix IV. In commenting on this draft, HHS concurred with our recommendation that it is important for the department to have a comprehensive and coordinated approach to guide its health care workforce development programs. HHS identified areas where comprehensive and coordinated planning efforts are already underway and where additional efforts are needed. HHS identified several health care workforce planning efforts related to the elements of our recommendation, many of which we described in the draft report. For example, HHS noted that it coordinates workforce planning efforts through the HHS department-level and agency-specific budget, legislative development, health policy research and innovation work, performance management, and strategic planning. HHS also indicated that the National Health Care Workforce Commission, created under PPACA, has the potential to enhance HHS's ability to implement a more comprehensive and coordinated planning approach, but that the commission has not received federal appropriations. In the absence of appropriations for this commission, HHS stated that it has undertaken some of the commission's health care workforce planning and coordination activities, to the extent possible. In response to our recommendation, HHS indicated that it could convene an interagency group to assess (a) existing workforce programs, (b) performance measurement, (c) budgetary and other proposals, (d) gaps in workforce programs, and (e) potential requests to the Congress for modified or expanded legislative authority. We agree that a regular and ongoing initiative focused on the coordination of health care workforce programs could provide an important first step toward ensuring a more comprehensive and coordinated planning approach. HHS also reiterated that its health care workforce programs contribute to the broad access and quality goals in its strategic plan, as described in our draft report. However, it indicated that, in response to our recommendation, the department plans to add two new workforce-specific strategies to its strategic plan when it next updates the plan. HHS also provided technical comments, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V.

Related Objectives

Objective E: Ensure access to quality, culturally competent care, including long-term services and support, for vulnerable populations (1 of 6 Objectives in Goal 1)

Related metrics (2 of 13 Metrics in Objective E):
Field strength of the National Health Service Corps through scholarship and loan repayment agreements.
Percentage of individuals supported by Bureau of Health Workforce programs who completed a primary care training program and are currently employed in underserved areas.

Related strategies (5 of 18 Strategies in Objective E):
across population groups, and work with federal, state, local, tribal, urban Indian, and nongovernmental actors to address observed disparities and to encourage and facilitate consultation and collaboration among them.
Evaluate the impact of the Affordable Care Act provisions on access to and quality of care for vulnerable populations, as well as on disparities in access and quality.
Promote access to primary oral health care services and oral disease preventive services in settings including federally funded health centers, school-based health centers, and Indian Health Service-funded programs that have comprehensive primary oral health care services, and state and community-based programs that improve oral health, especially for children, pregnant women, older adults, and people with disabilities.
Help eliminate disparities in health care by educating and training physicians, nurses, and allied health professionals on disparities and cultural competency, while increasing workforce diversity in medical and allied health care professions.
Improve access to comprehensive primary and preventive medical services in historically underserved areas and support federally funded health centers, the range of services offered by these centers, and increased coordination with partners including the Aging Services Network.

Objective F: Improve health care and population health through meaningful use of health information technology (1 of 6 Objectives in Goal 1)

Related strategy (1 of 18 Strategies in Objective F):
Expand the adoption of telemedicine technologies, including more remote patient monitoring, electronic intensive care units, home health, and telemedicine networks, to increase access to health care services for people living in tribal, rural, and other underserved communities, and other vulnerable and hard-to-reach populations.

Related Objectives

Objective C: Invest in the HHS workforce to help meet America's health and human services needs (1 of 4 Objectives in Goal 4)

Related strategy (1 of 13 Strategies in Objective C):
Promote the Commissioned Corps as a health resource to provide public health services in hard-to-fill assignments as well as to respond to public health emergencies.

Related Strategies
We will promote efforts to increase family economic security and stability by supporting our state, tribal, and community grantee partners in designing and implementing programs that focus simultaneously on parental employment and child and family well-being, including drawing from promising models in health and career pathways demonstrations.
Support curriculum development and the training of health professionals to ensure the learning, enhancement, and updating of essential knowledge and skills.
Support training and other activities that enhance the health workforce's competency in providing culturally and linguistically appropriate care.
Expand the number and type of training and technical assistance opportunities that educate students and providers to work in interprofessional teams and participate in practice transformations.
Support technical assistance, training, and other opportunities to help safety-net providers expand, coordinate, and effectively use health information technology to support service delivery and quality improvement.
Provide information and technical assistance to ensure that HRSA-supported safety-net providers know and use current treatment guidelines, appropriate promising practices, and evidence-based models of care.
Facilitate and support the recruitment, placement, and retention of primary care and other providers in underserved communities in order to address shortages and improve the distribution of the health workforce.
Support outreach and other activities to increase the recruitment, training, placement, and retention of under-represented groups in the health workforce.
Field strength of the National Health Service Corps through scholarship and loan repayment agreements.
Percentage of individuals supported by the Bureau of Health Workforce who completed a primary care training program and are currently employed in underserved areas.
Percentage of trainees in Bureau of Health Workforce-supported health professions training programs who receive training in medically underserved communities.
Percentage of trainees in Bureau of Health Workforce programs who are underrepresented minorities and/or from disadvantaged backgrounds.
Support pre-entry academic advising, mentoring, and enrichment activities for underrepresented groups in order to promote successful health professions training and career development.
Promote training opportunities within community-based settings for health professions students and residents by enhancing partnerships with organizations serving the underserved.
Develop and employ approaches to monitoring, forecasting, and meeting long-term health workforce needs.
Provide policy makers, researchers, and the public with information on health workforce trends, supply, demand, and policy issues.

2015 Goal Six: Workforce
Develop and disseminate workforce training and education tools and core competencies to address behavioral health issues.
Develop and support deployment of peer providers in all public health and health care delivery settings.
Develop consistent data collection methods to identify and track behavioral health care workforce needs.
Increase the number of behavioral health providers (professional, paraprofessional, and peers) addressing children, adolescents, and transitional-age youth.
Increase the number of individuals trained as behavioral health peer providers.

In addition to the contact named above, William Hadley, Assistant Director; N. Rotimi Adebonojo; Arushi Kumar; Jennifer Whitworth; and Beth Morrison made key contributions to this report.
An adequate, well-trained, and diverse health care workforce is essential for providing access to quality health care services. The federal government--largely through the Department of Health and Human Services (HHS)--funds programs to help ensure a sufficient supply and distribution of health care professionals. Some experts suggest that maintaining access to care could require an increase in the supply of providers, while others suggest access can be maintained by, among other things, greater use of technology. GAO was asked to review HHS's workforce efforts. In this report, GAO examines (1) HHS's planning efforts for ensuring an adequate supply and distribution of the nation's health care workforce and (2) the extent to which individual HHS health care workforce programs contribute to meeting national needs. GAO reviewed strategic planning documents, workforce projection reports, and other related documents obtained from HHS agencies; interviewed HHS officials; and analyzed performance measures for the largest health care workforce programs operated by HHS.

HHS engages in some planning for the 72 health care workforce programs administered by its agencies, but lacks comprehensive planning and oversight to ensure that these efforts meet national health care workforce needs. HHS's current strategic plan includes broad strategies--such as improving access to comprehensive primary and preventive medical services in historically underserved areas and supporting federally funded health centers--to which department officials said the health care workforce programs relate. However, these strategies do not explicitly reference workforce issues or specify how these programs contribute toward HHS's current strategic goals and performance targets. The health care workforce performance measures tracked by HHS and its agencies are specific to individual workforce programs and do not fully assess the overall adequacy of the department's workforce efforts. The Office of the Secretary leads workforce planning efforts, but it does not have an ongoing formal effort to ensure that the workforce programs distributed across its different agencies are aligned with national needs. Multiple external stakeholders, such as the Institute of Medicine and the Council on Graduate Medical Education, have reported that graduate medical education (GME) funding lacks the oversight and infrastructure to track outcomes, reward performance, and respond to emerging workforce challenges and that a more coordinated effort could help to ensure an adequate supply and distribution of the health care workforce. Consistent with leading practices, a coordinated department-wide planning effort is important to ensure that these efforts are aligned and managed effectively to meet workforce needs. While HHS's workforce programs support education and training for multiple health professions, its largest programs do not specifically target areas of workforce need, such as for primary care and rural providers. For example, its two Medicare GME programs accounted for about three-quarters of HHS's fiscal year 2014 obligations for health care workforce development. However, HHS cannot target existing Medicare GME program funds to projected workforce shortage areas because the programs were established by statute and funds are disbursed based on a statutory formula that is unrelated to projected workforce needs.
HHS has limited legal authority to target certain existing programs to areas of emerging need and has taken steps to do so within its existing authorities, such as the approval of certain demonstration projects to test new payment models for Medicaid GME funds. Further, the President's budget has proposed additional authorities that would allow HHS to implement new education and training programs and payment reforms intended to support primary care providers, but these authorities have not been enacted and officials did not know the extent to which they would be sufficient to address identified needs. External stakeholders have recommended additional reforms that would allow these programs to better target areas of need. Without a comprehensive and coordinated planning approach, HHS cannot fully identify gaps and actions to address those gaps, including determining whether additional legislative proposals are needed to ensure that its programs fully meet workforce needs. GAO recommends that HHS develop a comprehensive and coordinated planning approach that includes performance measures, identifies any gaps between its workforce programs and national needs, and identifies actions to close these gaps. HHS concurred with GAO's recommendation and provided technical comments, which GAO incorporated as appropriate.
The President's fiscal year 2014 budget request included plans for the federal government to spend over $82 billion on information technology (IT). The stated goal of the President's IT budget request is to support making federal agencies more efficient and effective for the American people; it also states that the strategic use of IT is critical to success in achieving this goal. Of the $82 billion budgeted for IT, 26 key agencies plan to spend the bulk of it--approximately $76 billion. Further, of that $76 billion, over $59 billion is to be spent on operations and maintenance (O&M) investments, with the remainder ($17 billion) budgeted for the development of new capabilities. As shown in figure 1, the $59 billion represents a significant majority (i.e., 77 percent) of total budgeted spending for these agencies ($76 billion). Although O&M spending by these agencies is about 77 percent of total IT spending, the share spent by each agency varies from a high of 98 percent to a low of 46 percent (as shown in the following table). Development spending, which is intended for the inclusion of new capabilities, accounts for approximately 23 percent of the total amount to be spent on IT in fiscal year 2014 by these agencies. However, the investments in development vary greatly, from a high of 54 percent at the Department of Transportation to a low of 2 percent at the National Aeronautics and Space Administration.

In addition to the amounts to be spent on IT development and O&M, the budget also specifies how the total $76 billion budgeted for IT is to be spent on agency IT investments across three categories: those solely under development ($6 billion); those involving activities and systems that are in both development and O&M--known as mixed life cycle ($40 billion); and those existing operational systems--commonly referred to by the Office of Management and Budget (OMB) as steady state investments--that are solely in O&M ($30 billion).

To assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and to report to Congress on the net program performance benefits achieved as a result of these investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes chief information officers to advise and assist agency heads in carrying out this responsibility. In carrying out its responsibilities, OMB uses several data collection mechanisms to oversee federal IT spending during the annual budget formulation process. Specifically, OMB requires federal departments and agencies to provide information related to their IT investments (called exhibit 53s) and capital asset plans and business cases (called exhibit 300s).

Exhibit 53. The purpose of the exhibit 53 is to identify all IT investments--both major and nonmajor--within a federal organization. Information included on agency exhibit 53s is designed, in part, to help OMB better understand what agencies are spending on IT investments. The information also supports cost analyses prescribed by the Clinger-Cohen Act. As part of the annual budget, OMB publishes a report on IT spending for the federal government that represents a compilation of the exhibit 53 data submitted by agencies.
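The spending shares cited above follow directly from these totals. The short Python sketch below is a rough illustration of that arithmetic; it uses the rounded, approximate figures cited in this section rather than the underlying exhibit 53 data, so its results differ slightly from the published percentages.

# Rough illustration: derive the spending shares cited above from the budgeted totals.
# Figures are the approximate amounts cited in the text, in billions of dollars;
# the published percentages are based on unrounded data and differ slightly.
total_it = 76            # total budgeted IT spending by the 26 key agencies
om = 59                  # operations and maintenance (O&M)
development = 17         # development of new capabilities

print(f"O&M share: {om / total_it:.0%}")                   # about 77-78 percent
print(f"Development share: {development / total_it:.0%}")  # about 22-23 percent

# Breakdown by investment category (billions of dollars)
categories = {"development only": 6, "mixed life cycle": 40, "steady state": 30}
for name, amount in categories.items():
    print(f"{name}: ${amount} billion ({amount / total_it:.0%} of the $76 billion total)")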
According to OMB guidance, a major IT investment requires special management attention because of its importance to the mission or function of the government; significant program or policy implications; high executive visibility; high development, operating, or maintenance costs; an unusual funding mechanism; or definition as major by the agency's capital planning and investment control process.

Exhibit 300. The purpose of the exhibit 300 is to provide a business case for each major IT investment and to allow OMB to monitor IT investments once they are funded. Agencies are required to provide information on each major investment's cost, schedule, and performance.

In addition, in June 2009, to further improve the transparency into and oversight of agencies' IT investments, OMB publicly deployed a website, known as the Federal IT Dashboard (Dashboard), which replaced its Management Watch List and High-Risk List. As part of this effort, OMB issued guidance directing federal agencies to report, via the Dashboard, the performance of their IT investments. Currently, the Dashboard publicly displays information on the cost, schedule, and performance of major federal IT investments at key federal agencies. In addition, the Dashboard allows users to download exhibit 53 data, which include information on both major and nonmajor investments. According to OMB, these data are intended to provide a near real-time perspective of the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB, other oversight bodies, and the general public to hold government agencies accountable for results and progress. Since the Dashboard was implemented, we have reported and made recommendations to improve its data accuracy and reliability. In 2010, 2011, and 2012, we reported on the progress of the Dashboard and made recommendations to further improve how it rates investments relative to current performance. OMB concurred with our recommendations and has actions planned and underway to address them.

Further, OMB has developed guidance that calls for agencies to develop an operational analysis (OA) policy for examining the ongoing performance of existing legacy IT investments to ensure, among other things, that each investment is continuing to meet business and customer needs and is contributing to meeting the agency's strategic goals. This guidance calls for the policy to provide for an annual OA of each investment that addresses the following areas: cost, schedule, customer satisfaction, strategic and business results, financial goals, and innovation.
To address these areas, the guidance specifies the following 17 key factors that are to be addressed:

an assessment of current costs against life-cycle costs;
a structured schedule assessment (i.e., measuring the performance of the investment against its established schedule);
a structured assessment of performance goals (i.e., measuring the performance of the investment against established goals);
identification of whether the investment supports customer processes as designed and is delivering the goods and services it was designed to deliver;
a measure of the effect the investment has on the performing organization;
a measure of how well the investment contributes to achieving the organization's business needs and strategic goals;
a comparison of current performance with a pre-established cost baseline;
areas for innovation related to customer satisfaction, strategic and business results, and financial performance;
indication of whether the agency revisited alternative methods for achieving the same mission needs and strategic goals;
consideration of issues, such as greater utilization of technology or consolidation of investments to better meet organizational goals;
an ongoing review of the status of the risks identified in the investment's planning and acquisition phases;
identification of whether there is a need to redesign, modify, or terminate the investment;
an analysis of the need for improved methodology (i.e., better ways for the investment to meet cost and performance goals);
lessons learned;
cost or schedule variances;
recommendations to redesign or modify an asset in advance of potential problems; and
overlap with other investments.

With regard to overseeing agencies' development of OA policies and their annual performance of these analyses, OMB officials responsible for governmentwide OA policy stated that they expect agencies to perform all the steps specified in the guidance and to be prepared to show documentation as evidence of compliance with the guidance.

In October 2012, we reported that five agencies' use of OAs during fiscal year 2011 varied significantly. Specifically, of the five agencies, we found that three--namely, the Department of Defense (DOD), the Department of the Treasury (Treasury), and the Department of Veterans Affairs (VA)--did not perform analyses on their 23 major steady state investments, which had annual budgets totaling $2.1 billion. The other two agencies--the Department of Homeland Security (DHS) and the Department of Health and Human Services (HHS)--performed analyses but did not do so for all investments. For example, DHS analyzed 16 of its 44 steady state investments, meaning 28 investments with annual budgets totaling $1 billion were not analyzed; HHS analyzed 7 of its 8 steady state investments, thus omitting a single investment totaling $77 million from being assessed. We also found that of those OAs performed by these two agencies, none fully addressed all the key factors. Specifically, our analysis showed that only about half of the key factors were addressed in these assessments. Consequently, we recommended, among other things, that the agencies conduct annual OAs and, in doing so, ensure that they are performed for all investments and that all factors are fully assessed. To ensure this is done and to provide transparency into the results of these analyses, we also recommended that OMB revise its guidance to direct agencies to post the results on the Dashboard. OMB and the five agencies agreed with our recommendations and have efforts planned and underway to address them.
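Conceptually, an OA amounts to checking an investment against this 17-factor checklist. The Python sketch below is a hypothetical illustration of how such coverage might be tallied; the factor keys are paraphrased from the list above, the sample assessment is illustrative rather than an actual OMB or agency artifact, and the result mirrors the kind of "14 of 17 factors" tally discussed for DHS later in this report.

# Hypothetical sketch: tally how many of the 17 OMB OA factors an assessment addresses.
# Factor names are paraphrased from the guidance; the sample data are illustrative only.
OA_FACTORS = [
    "current costs vs. life-cycle costs",
    "structured schedule assessment",
    "structured assessment of performance goals",
    "supports customer processes / delivers intended goods and services",
    "effect on the performing organization",
    "contribution to business needs and strategic goals",
    "current performance vs. pre-established cost baseline",
    "areas for innovation",
    "alternative methods revisited",
    "technology utilization or investment consolidation",
    "ongoing review of planning and acquisition risks",
    "need to redesign, modify, or terminate",
    "need for improved methodology",
    "lessons learned",
    "cost or schedule variances",
    "recommendations to redesign or modify in advance of problems",
    "overlap with other investments",
]

def oa_coverage(addressed):
    """Return (number of factors addressed, list of missing factors) for one assessment."""
    missing = [factor for factor in OA_FACTORS if factor not in addressed]
    return len(OA_FACTORS) - len(missing), missing

# Example: an assessment that skips three of the factors.
sample_assessment = set(OA_FACTORS) - {
    "current costs vs. life-cycle costs",
    "structured schedule assessment",
    "current performance vs. pre-established cost baseline",
}
count, missing = oa_coverage(sample_assessment)
print(f"{count} of {len(OA_FACTORS)} factors addressed; missing: {missing}")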
In particular, OMB issued guidance (dated August 2012) directing the agencies to report OA results to OMB along with their fiscal year 2014 budget submission documentation (e.g., exhibit 300). According to OMB officials, they are currently establishing a process for how agencies are to provide the information to OMB, which they plan to have in place over the next 6 months. As part of this, OMB is defining a process for what it plans to do with the information once it is received.

The 10 federal IT O&M investments with the largest budgets, identified during our review, support agencies in a variety of ways, such as providing worldwide telecommunications infrastructure and information transport for DOD operations; enabling HHS to conduct research, award grants, and disseminate biomedical research and health information to the public and National Institutes of Health stakeholders; and providing the Social Security Administration (SSA) the capability to maintain demographic, wage, and benefit information on all American citizens, including ensuring the availability, changeability, stability, and security of SSA's IT operations for the entire agency. These investments are operated by eight agencies, such as the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and SSA. In total, the investments accounted for about $7.9 billion in O&M spending for fiscal year 2012, which was approximately 14 percent of all such spending for federal IT O&M. The following table identifies the 10 investments and describes the agency responsible for each investment, the amount budgeted for O&M and development for fiscal year 2012, the investment type, and how each investment supports the organization's mission.

Although required to do so, seven of the eight agencies did not conduct OAs on their largest O&M investments. Specifically, of the 10 O&M IT investments (with the largest budgets) we reviewed, only one agency--DHS--conducted an analysis on its investment. In doing so, the department addressed most of the required OMB factors. However, the other seven agencies--DOD, DOE, HHS, Treasury, VA, NASA, and SSA--did not conduct OAs on their O&M investments, which have combined annual O&M budgets of $7.4 billion. The following table lists the 10 investments and whether an analysis was completed for fiscal year 2012. Further, it provides the total O&M amount for the investment that had an OA and for the investments that did not have one--$529 million and $7.4 billion, respectively.

With regard to the OA DHS performed on its investment (the Customs and Border Protection Infrastructure), the department addressed 14 of the 17 OMB factors. For example, in addressing the factor on assessing performance goals, DHS made efforts to consolidate software licenses and maintenance in order to eliminate redundancy and reduce the associated costs. Although DHS addressed these 14 factors, it excluded 3 factors. Specifically, the department did not (1) assess current costs against life-cycle costs, (2) perform a structured schedule assessment, or (3) compare current performance against the cost baseline and estimates developed when the investment was being planned. These factors are important because, among other things, they provide information to agency decision makers on whether an investment's actual annual O&M costs are as they were planned to be and whether there is a need to examine more cost-effective approaches to meeting agency mission objectives.
Table 5 shows our analysis of DHS's assessment of its Customs and Border Protection Infrastructure investment. With regard to why DHS's analysis did not address all OMB factors, officials from the DHS Office of the Chief Information Officer (who are responsible for overseeing the performance of OAs departmentwide) attributed this to the department still being in the process of updating its Management Directive 102-01 and related guidance, which will provide additional instructions for completing OAs. As part of this update, department officials told us they plan to provide additional guidance on conducting OAs for programs once they have achieved full operational capability. The department expects the guidance to be completed in calendar year 2014. Further, according to DHS, once completed, this guidance will complement existing program review processes--referred to by DHS as program health assessments--that require all major IT investments, in support or mixed life-cycle phases, to complete an OA every 12 months.

The other seven agencies attributed not performing OAs on these investments to several factors, including reliance on other management and performance reviews--such as those used as part of developing their annual exhibit 300 submissions to OMB--although OMB has stated that these reviews are not to be a substitute for conducting annual analyses. The specific reasons cited by each agency are as follows:

DOD: Officials from DOD's Defense Information Systems Agency stated that they did not conduct an OA for the Defense Information Systems Network because the investment undergoes constant oversight through weekly meetings to review issues such as project status and accomplishments. Further, they said that the program manager exercises cost, schedule, and performance oversight using earned value management techniques. In addition, they stated that monthly reviews of actual versus planned spending are collected to flag any discrepancies from expected cost and schedule objectives. While these reviews are important steps in monitoring performance, OMB states that such ongoing efforts to manage investment performance are not a substitute for conducting an annual OA. According to the OMB guidance, OAs are to be conducted for all existing IT investments to ensure that, among other things, an investment is continuing to meet business and customer needs and is contributing to meeting the agency's strategic goals. With regard to the Next Generation Enterprise Network, officials from the Navy who manage and oversee this investment stated that an OA was not performed because the investment is going through a transition from a mature fielded system to a new service delivery model, which will become operational in 2014. Nonetheless, OMB guidance calls for agencies to conduct annual analyses on all existing IT investments as part of ensuring that such investments continue to deliver value and support mission needs.

DOE: Officials from the Office of the Chief Information Officer stated that an OA was not conducted on its Consolidated Infrastructure, Office Automation, and Telecommunications Program investment because in the summer of 2012 they began to separate it into smaller, more manageable pieces--referred to by these agency officials as deconsolidation--to provide better insight into the departmentwide infrastructure.
In addition, to gain further insight into the infrastructure spending, the DOE Chief Information Officer led an in-depth analysis in collaboration with senior IT executives, which included a commodity IT TechStat review in the fall of 2011 and a commodity IT PortfolioStat review in the fall of 2012. While these latter reviews are helpful in monitoring performance, our analysis shows that they do not fully address all 17 OMB factors. Specifically, the reviews do not address, among other things, factors in the areas of customer satisfaction, strategic and business results, and financial performance. Addressing these factors is important because it provides information to agency decision makers on whether the investment supports customer processes and is delivering the goods and services it was designed to deliver.

HHS: According to officials from the department's National Institutes of Health, the National Institutes of Health IT Infrastructure investment, which had an annual budget of $371 million for fiscal year 2012, did not undergo an OA because this investment is an aggregation of all the components' infrastructure and not a particular system or set of systems suited for this kind of macro analysis. In addition, they noted that the National Institutes of Health does monitor the operational performance of its IT infrastructure and conducts a more strategic analysis of services within its IT infrastructure to evaluate operational effectiveness at a strategic level. While these types of performance monitoring efforts are important, OMB guidance nonetheless calls for agencies to also conduct annual analyses on all existing IT investments as part of ensuring that such investments continue to deliver value and support mission needs.

Treasury: Officials from the department's IT Capital Planning and Investment Control branch (within the office of Treasury's Chief Information Officer) noted that its Internal Revenue Service Main Frames and Servers Services and Support investment, which had a budget of $482 million for fiscal year 2012, was deconsolidated in fiscal year 2011 to allow for greater visibility into the infrastructure, and that it is currently undergoing an OA; however, they were not able to provide documentation at the time of our review.

VA: Officials from VA's Office of Information and Technology said an OA was not conducted on its Medical IT Support or Enterprise IT Support investments because performance is currently being reported monthly via the Federal IT Dashboard and internally through monthly performance reviews. The officials added that the department plans to develop a policy and begin conducting OAs on investments. However, VA has not yet determined when these analyses will be completed.

NASA: Officials from NASA's Office of the Chief Information Officer stated that while they did not conduct a formal OA on the NASA IT Infrastructure investment, they did review the performance of the investment using monthly performance status reviews and bimonthly service delivery transition status updates. The officials noted that these reviews address financial performance, schedule, transformation initiatives, risks, customer satisfaction, performance metrics, and business results. According to officials, the investment underwent a service delivery transition status update and a performance status review in May 2012. While these NASA reviews are essential IT management tools, they do not incorporate all 17 OMB factors.
For example, the reviews do not address, among other things, innovation and whether the investment overlaps with other systems. Fully addressing the OMB factors is essential to ensuring investments continue to deliver value and do not unnecessarily duplicate or overlap with other investments.

SSA: According to officials from SSA's Office of the Chief Information Officer, SSA's Infrastructure Data Center investment did not undergo an analysis because it has significant development content and therefore an earned value analysis was conducted, which is called for by SSA guidance for mixed life-cycle investments. Officials stated they generally perform either an earned value analysis or an OA, as applicable to the investment. While earned value management analyses are important to evaluating investment performance, OMB guidance nonetheless calls for agencies to also conduct annual OAs on all existing IT investments as part of ensuring that such investments continue to deliver value and support mission needs.

Until the agencies address these shortcomings and ensure all their O&M investments are fully assessed, there is increased risk that these agencies will not know whether these multibillion dollar investments fully meet intended objectives, including whether there are more efficient ways to deliver their intended purpose, thereby increasing the potential for waste and duplication.

For the eight selected agencies, the majority of their 401 major IT investments--totaling $29 billion--were in the mixed life-cycle phase in terms of both spending and number of investments. Specifically, of the $29 billion, our analysis, as shown in figure 2, found that mixed life-cycle investments accounted for approximately $18 billion, or 61 percent; steady state investments accounted for approximately $8 billion, or 27 percent; and development investments accounted for approximately $3 billion, or 12 percent. With regard to the number of investments by phase, our analysis, as shown in figure 3, found that of the total 401 investments, 193, or 48 percent, were in the mixed life-cycle phase; 139, or 35 percent, were in the steady state phase; and 69, or 17 percent, were in the development phase. On an individual agency basis, table 6 provides the total amount each agency reportedly spent on IT. It also shows how each agency allocates this total among development, mixed life-cycle, and steady state investments and, for the mixed investments, the amounts for O&M and development. The following table then provides, for each of the eight agencies, the total number of investments and, of that total, the number in development, mixed life cycle, and steady state.

The implications of the above analyses--especially the results in table 6 that show mixed investments having significant amounts of funding for both development and O&M activities--are noteworthy, particularly as they relate to the oversight of such investments. More specifically, overseeing these investments will involve one set of IT management capabilities for those portions of the investment that are operational and a different set of IT management capabilities for those portions that are still under development. In the case of those portions that are operational, this will include agencies having the capability to perform thorough OAs, the importance of which is discussed earlier in this report.
For those portions still under development, OMB guidance and our best practices research and experience at federal agencies show that such effective oversight will involve agencies having structures and processes--commonly referred to as IT governance and program management disciplines--that include instituting an investment review board to define and establish the management structure and processes for selecting, controlling, and evaluating IT investments; ensuring that a well-defined and disciplined process is used to select new IT proposals; and overseeing the progress of IT investments--using predefined criteria and checkpoints--in meeting cost, schedule, risk, and benefit expectations, and taking corrective action when these expectations are not being met. Having these disciplines is important because they help agencies, among other things, ensure such investments are supporting strategic mission needs and meeting cost, schedule, and performance expectations.

However, our experience at federal agencies has shown that agencies have not yet fully established the effective governance and program management capabilities essential to managing IT investments. For example, we reported in April 2011 that many agencies did not have the mechanisms in place for investment review boards to effectively control their investments. More specifically, we reported that while these agencies largely had established IT investment management boards, these boards did not have key policies and procedures in place for ensuring that projects are meeting cost, schedule, and performance expectations. In addition, our experience at federal agencies, along with the results from this audit, has found that agencies do not consistently conduct OAs. Specifically, as noted in the background, we reported in 2012 (GAO-13-87) on five agencies' use of OAs and how that use varied significantly. Of the five agencies, we found that three--namely, DOD, Treasury, and VA--did not perform analyses on 23 major steady state investments with annual budgets totaling $2.1 billion. The other two agencies--DHS and HHS--performed them but did not do so for all investments. Accordingly, we have made recommendations to these agencies to improve their use of OAs and fully implement effective governance and program management capabilities. They have in large part agreed to our recommendations and have efforts underway and planned to implement them.

Most of the eight agencies reported that they did not reprogram IT O&M funds to be used on development activities, and we identified no evidence to the contrary; two agencies--Treasury and VA--reported they did so in two instances. With regard to Treasury, the department--on its CADE 2 investment, which has a total O&M budget of $40 million--reallocated a total of $10,000 to fund development activities planned for the investment. According to Treasury documentation, the cost of the investment's operations and maintenance came in under budget by $10,000, so the department reallocated the funds to be used on new CADE 2 development efforts. Treasury reported this reallocation was discussed and approved by the Internal Revenue Service's investment review board (the Internal Revenue Service is responsible for overseeing CADE 2) during its monthly executive steering meetings held during fiscal year 2012. With regard to VA, it reprogrammed a total of $13.3 million from O&M to development on investments within an investment category that VA referred to as a portfolio.
Specifically, during fiscal year 2012, the department reprogrammed $13.3 million from an O&M investment within its Medical Portfolio to investments under development within the portfolio that required additional funding. This reprogramming of funds was approved by the Secretary of Veterans Affairs in June 2012.

The 10 largest federal O&M IT investments represent a significant part of the federal government's multibillion dollar commitment to operating and maintaining its IT investments. Although OMB has established that agencies are to use OAs to evaluate the performance of such investments, their use by the agencies on these investments was very limited. DHS was the only agency to perform such an assessment, and in doing so it largely addressed the required OMB factors. While Treasury and VA had planned to perform analyses, they had not done so. Further, DOD, DOE, HHS, NASA, and SSA had not intended to perform these analyses on their large O&M investments. This limited use of OAs is due in part to a number of factors, including agencies relying on other types of performance oversight reviews that can be helpful but are not intended to be a substitute for these assessments. Until these agencies address these shortcomings and ensure all their large O&M investments are fully assessed, there is increased risk that these agencies will not know whether these multibillion dollar investments fully meet intended objectives, including whether there are more efficient ways to deliver their intended purpose.

To ensure that the largest IT O&M investments are being adequately analyzed, we recommend that the Secretary of Defense direct appropriate officials to perform OAs on the two investments identified in this report, including ensuring the analyses include all OMB factors; the Secretary of Energy direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; the Secretary of Health and Human Services direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; the Secretary of the Treasury direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; the Secretary of Veterans Affairs direct appropriate officials to perform OAs on the two investments identified in this report, including ensuring the analyses include all OMB factors; the NASA Administrator direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; and the Commissioner of Social Security direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors. In addition, we recommend that the Secretary of Homeland Security direct appropriate officials to ensure the department's OA for the Customs and Border Protection Infrastructure is complete and assesses the missing OMB factors identified in this report.

In commenting on a draft of this report, four agencies--DHS, NASA, SSA, and VA--agreed with our recommendations; two agencies--DOD and DOE--partially agreed; and two agencies--HHS and Treasury--had no comments. The specific comments from the four agencies that agreed are as follows: DHS, in its written comments, which are reprinted in appendix II, stated that it concurred with our findings and recommendation.
It also commented that DHS's Office of the Chief Information Officer and the Office of Information Technology within Customs and Border Protection (the DHS component agency responsible for the Customs and Border Protection Infrastructure investment) are to work closely to ensure future OAs conducted on the investment fully address the OMB assessment factors. NASA, in its written comments--which are reprinted in appendix III--stated it concurred with our recommendation. NASA also stated that it planned to conduct an OA on its NASA IT Infrastructure investment in April 2014 that is to include all OMB factors. In its written comments, SSA stated it agreed with our recommendation. It also stated that since 2008, SSA has had a process to perform OAs on investments that were solely in O&M and that it recently expanded the process to include mixed life-cycle IT investments that have significant systems in O&M. SSA further commented that it was in the process of performing OAs on the SSA mixed life-cycle investment identified in our report and other similar agency investments, with the goal of completing these analyses by September 30, 2013. SSA's comments are reprinted in appendix IV. VA, in its written comments, stated it agreed with our conclusions and concurred with our recommendation. It also said that it had scheduled OAs for the two investments identified in our report to begin in the second half of fiscal year 2014. VA's comments are reprinted in appendix V.

The specific comments of the two agencies that partially agreed are as follows: DOD, in its written comments, stated that it partially concurred with our recommendation. Specifically, DOD said it agreed with our recommendation that its OAs should address all OMB assessment factors and said it is establishing an OA policy in coordination with OMB. The department further agreed with our recommendation that it perform an OA on its Defense Information System Network investment. The department disagreed with our recommendation to perform an OA on its Next Generation Enterprise Network investment, stating that the investment is no longer in O&M and that such investments, per OMB policy, do not require an OA. More specifically, as noted earlier in this report, DOD is transitioning the investment from a mature fielded system to a new service delivery model, which will become operational in 2014, and has moved the entire investment back into planning and acquisition. Nonetheless, consistent with our recommendation and as required by OMB policy, DOD plans to conduct an OA on this investment once the department begins to make it operational in 2014. DOD's comments are reprinted in appendix VI. In its written comments--which are reprinted in appendix VII--DOE commented that it partially concurred with our recommendation. DOE stated it was not required to perform an OA on the Consolidated Infrastructure, Office Automation, and Telecommunications Program because the investment no longer exists. Specifically, DOE said it decided in 2012 to separate this large investment into smaller, more manageable pieces--referred to by DOE as deconsolidation--to better provide insight into its departmentwide infrastructure, and that since the investment no longer exists, there is no reason to perform an OA on it. Nonetheless, consistent with our recommendation, DOE added that it will ensure that OAs are conducted on the O&M components of all current major IT investments in DOE's IT portfolio.
DOE stated that it had already performed OAs on applicable operational components that used to comprise the Consolidated Infrastructure, Office Automation, and Telecommunications Program. For example, DOE commented that one of the investments created during deconsolidation--called Consolidated Infrastructure--had most recently undergone an OA in August 2013. While DOE reported this progress in its comments to us, it did not provide us with documentation to support that this OA had been performed and whether it addressed all the OMB assessment factors. Consequently, we are revising our recommendation to DOE that it ensure OAs are performed on the applicable operational components that used to comprise the Consolidated Infrastructure, Office Automation, and Telecommunications Program, including the newly created Consolidated Infrastructure investment.

With regard to HHS and Treasury, HHS, in comments provided via e-mail from its GAO Intake Coordinator within the Office of the Assistant Secretary for Legislation, stated that it did not have any general comments on this report. Treasury, in its written response, said it had no comments on our report; the department's comments are reprinted in appendix VIII. DHS and HHS also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to interested congressional committees; the Secretaries of the Departments of Defense, Energy, Health and Human Services, Homeland Security, the Treasury, and Veterans Affairs; the Administrator of the National Aeronautics and Space Administration; and the Commissioner of the Social Security Administration. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX.

Our objectives were to (1) identify the federal IT O&M investments with the largest budgets, including their responsible agencies and how each investment supports its agency's mission; (2) determine the extent to which these investments have undergone OAs; and (3) assess whether the responsible agencies' major IT investments are in development, mixed life cycle, or steady state, and the extent to which funding for investments in O&M has been used to finance investments in development. To identify those federal IT O&M investments with the largest budgets, we used data reported to the Office of Management and Budget (OMB) as part of the budget process, and focused on the 10 largest reported budgets in O&M and the eight responsible agencies (the Departments of Defense, Energy, Homeland Security, Health and Human Services, the Treasury, and Veterans Affairs; the National Aeronautics and Space Administration; and the Social Security Administration) that operate these investments. In addition, to determine how these 10 investments support their agencies' missions, we reviewed OMB and agency documentation (e.g., exhibit 300s, exhibit 53s) and interviewed agency officials.
To determine the extent to which OAs were conducted to manage these investments in accordance with OMB guidance, we analyzed agency documentation and interviewed responsible agency officials to determine whether any operational analyses had been performed on these 10 investments during fiscal year 2012, the last full year for completing OAs. In those cases where an OA had been performed, we compared it against OMB guidance on conducting such analyses, including the 17 factors that are to be addressed as part of such assessments, to identify any variances. Where there were variances, we reviewed agency documentation and interviewed agency officials responsible for the OA to identify their causes. In those instances where an analysis was not performed, we reviewed documentation and interviewed agency officials to identify why it was not done.

To assess whether each of the eight agencies' major IT investments are in development, mixed life cycle, or steady state, we analyzed agencies' reported spending data provided to OMB as part of the budget process to determine what phase the majority of the investments were in and where the majority of funds were invested (i.e., development, mixed, or steady state). To assess the reliability of the data we analyzed, we corroborated them by interviewing investment and other agency officials to determine whether the OMB information we used was consistent with that reported by the agencies; based on this assessment, we determined the data were reliable for the purposes of this report. Further, to assess the extent to which these and other agency IT O&M investments involve development activities, we analyzed agency data and evaluated whether the eight agencies were using their O&M funds for development activities (i.e., through the reprogramming or reallocation of funds). Specifically, we compared what agencies planned to spend on development and O&M with what was reported to have been spent to identify any variances indicating that O&M funds were reprogrammed and used for development activities. In addition, we reviewed agencies' documentation to determine whether agencies had any processes in place to manage investments transitioning from development to O&M. Lastly, we reviewed agency documentation and interviewed agency IT budget and investment officials to verify whether any reprogramming occurred, its causes, and the extent to which any reprogramming was subject to management oversight.

We conducted this performance audit from December 2012 to October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact name above, individuals making contributions to this report included Gary Mountjoy (Assistant Director), Gerard Aflague, Camille Chaires, Rebecca Eyler, and Lori Martinez.
Of the over $82 billion that federal agencies plan to spend on IT in fiscal year 2014, at least $59 billion is to be spent on O&M, which consists of legacy systems (i.e., steady state) and systems that are in both development and O&M (known as mixed life cycle). OMB calls for agencies to perform annual OAs, which are a key method for examining the performance of O&M investments. GAO was asked to review IT O&M investments and agency use of OAs. The objectives of this report were, among other things, to (1) identify the federal IT O&M investments with the largest budgets, including their responsible agencies and how each investment supports its agency's mission; (2) determine the extent to which these investments have undergone OAs; and (3) assess whether the responsible agencies' major IT investments are in development, mixed life cycle, or steady state. To do so, GAO focused on the 10 IT investments with the largest budgets in O&M and the eight responsible agencies, and assessed whether OAs were conducted on the investments. In addition, GAO evaluated what agencies spent on mixed, development, and O&M investments and whether agencies were using O&M funds for development activities.

The 10 federal information technology (IT) operations and maintenance (O&M) investments with the largest budgets in fiscal year 2012--and the eight agencies that operate them--are identified by GAO in the table below. They support agencies by providing, for example, global telecommunications infrastructure and information transport services for the Department of Defense. Of the 10 investments, only the Department of Homeland Security (DHS) investment underwent an operational analysis (OA)--a key performance evaluation and oversight mechanism required by the Office of Management and Budget (OMB) to ensure O&M investments continue to meet agency needs. DHS's OA addressed most factors that OMB calls for; it did not address three factors (e.g., comparing current cost and schedule against original estimates). DHS officials attributed the omission of these factors to the department still being in the process of implementing its new OA policy. The remaining agencies did not assess their investments, which accounted for $7.4 billion in reported O&M spending. Agency officials cited several reasons for not doing so, including relying on budget submission and related management reviews that measure performance; however, OMB has noted that these are not a substitute for OAs. Until the agencies ensure their operational investments are assessed, there is a risk that they will not know whether these multibillion dollar investments fully meet intended objectives.

For the eight agencies in this review, the majority of their 401 major IT investments were mixed life cycle (i.e., having activities and systems that are in both development and O&M) with regard to both total spending and number of investments. Specifically, 193 (48 percent) of the investments were mixed investments, accounting for about $18 billion (61 percent) of planned spending. As such, successful oversight of such investments should involve a combination of conducting OAs to address operational portions of an investment and establishing IT governance and program management disciplines to manage those portions under development.
GAO's experience at the agencies and this report have identified agency inconsistencies in conducting OAs and establishing the capabilities that are key to effectively managing IT investments; accordingly, GAO has made prior recommendations to strengthen agency efforts in these areas. GAO is recommending that the seven agencies that did not perform OAs on their large IT O&M investments do so, and that DHS ensure that its OA is complete and addresses all OMB factors. Of the seven agencies, three agreed with GAO's recommendations; two partially agreed; and two had no comments. DHS agreed with the GAO recommendation to it.
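As a rough cross-check of the figures summarized above, the short Python sketch below recomputes the headline shares from the totals reported in this section. It uses only numbers stated in the report; the derived percentages are our arithmetic, not separately reported values.

# Recompute headline shares from figures reported in this summary.
planned_it_fy2014 = 82e9          # total planned federal IT spending, fiscal year 2014
planned_om_fy2014 = 59e9          # portion planned for O&M ("at least $59 billion")
unassessed_om_fy2012 = 7.4e9      # combined O&M budgets of the investments that lacked an OA

mixed_count, total_count = 193, 401          # major investments at the eight agencies
mixed_spending, total_spending = 18e9, 29e9  # approximate spending on the same investments

print(f"O&M share of planned FY2014 IT spending: {planned_om_fy2014 / planned_it_fy2014:.0%}")  # ~72% (a lower bound)
print(f"Mixed life-cycle share of investments:   {mixed_count / total_count:.0%}")              # 48%
print(f"Mixed life-cycle share of spending:      {mixed_spending / total_spending:.0%}")        # ~62% (report: 61%, from unrounded dollars)

The small difference between the recomputed 62 percent and the reported 61 percent reflects the rounding of the dollar figures to the nearest billion.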
Nationwide, VA provides or pays for veterans' nursing home care in three settings: CLCs, community nursing homes, and state veterans' nursing homes. These settings vary in terms of their characteristics, as well as the cost of care that is covered for eligible veterans.

VA provides nursing home care to veterans in 134 CLCs nationwide. CLCs are typically within or in close proximity to VA medical centers. VA requires CLCs to meet The Joint Commission's long-term care standards. (See Veterans Health Administration Handbook 1142.01, Criteria and Standards for VA Community Living Centers (Aug. 13, 2008). The Joint Commission is an independent organization that accredits and certifies health care organizations and programs in the United States.) VA pays the full cost of care for mandatory veterans in CLCs, while discretionary veterans may be required to pay a copayment depending on their income or other factors.

For community nursing homes, VA pays the full cost of care for mandatory veterans, while discretionary veterans may be subject to a similar copayment as in CLCs, depending on their income or other factors. However, VA is generally restricted by law from paying for more than 6 months of care for discretionary veterans. These veterans must therefore have other sources of payment, and, according to VA officials, most long-stay residents may enroll in Medicaid, have private long-term care insurance, or pay for care through out-of-pocket spending. (Medicare is the federal health insurance program for people age 65 and older, individuals under age 65 with certain disabilities, and individuals diagnosed with end-stage renal disease. Medicaid is a federal-state program that provides health care coverage to certain categories of low-income individuals.)

VA also pays for all or part of veterans' care in 140 state veterans' nursing homes nationwide. For state veterans' nursing homes, VA pays at least a portion of the cost of providing nursing home care for eligible veterans in these homes, but does not control the admission process. Veterans are admitted based on eligibility criteria established by state requirements. For state veterans' nursing homes to participate in VA's program, however, VA generally requires that at least 75 percent of the residents be veterans. In addition, VA requires state veterans' nursing homes to be certified by VA annually, and ensures compliance with its standards through surveys and audits. Each fiscal year, VA establishes the per diem rates paid to state veterans' nursing homes for care provided to veterans. For mandatory veterans, VA pays a higher per diem that covers the full cost of care, including medications. For discretionary veterans, VA pays the lesser of the basic per diem established by VA or one-half of the total daily cost of care. Unlike in community nursing homes, there is no restriction on the number of days for which VA may pay for care for discretionary veterans in state veterans' nursing homes. As part of VA's support and oversight of state veterans' nursing homes, VA medical centers of jurisdiction process and approve per diem reimbursements for the state veterans' nursing homes located in their geographic areas. In addition to paying some or all of the cost of providing nursing home care to veterans, VA supports state veterans' nursing homes by awarding grants to states for construction or renovation of facilities. These grants are awarded following VA's review and approval of proposals submitted by state officials.
In addition to per diem payments and construction grants from VA, state veterans' nursing homes may receive payments from a number of different sources, including Medicare and Medicaid.

While veterans of all ages may need VA nursing home care, the need for such care increases with age because elderly veterans are more likely to have functional or cognitive limitations that indicate a need for nursing home care. (Functional limitations are physical problems that limit a person's ability to perform routine daily activities, such as eating, bathing, dressing, paying bills, and preparing meals. Cognitive limitations are losses in mental acuity that may also restrict a person's ability to perform such activities. Institutionalization in a nursing home is more common at older ages--in 2010, about 1 in 8 people age 85 or older resided in institutions, compared with 1 percent of people ages 65 to 74. See Congressional Budget Office, Rising Demand for Long-Term Services and Supports for Elderly People (Washington, D.C.: June 2013).) The number of elderly veterans is projected to peak in 2014 and decline thereafter (see fig. 1). However, the percentage of elderly veterans is expected to remain relatively unchanged due to a decline in the overall veteran population. Although the need for VA nursing home care remains, for over a decade VA has highlighted the potential benefits of providing veterans with alternative options for long-term care--specifically, less costly home and community-based care--in an effort to lessen the need for more costly nursing home care. For example, in its 2014 budget justification, VA proposed legislation that would authorize VA to pay for care in VA-approved medical foster homes for veterans who would otherwise need nursing home care.

Decisions about the nursing home setting in which a veteran will receive care are decentralized to the local level because of several factors, including variability in the choice of available settings, the nursing home care services available, and admissions policies at each type of setting. In addition, veterans' nursing home service needs, eligibility status, and preferences about the location of care are considered in deciding which setting to choose. VA program officials told us that decisions as to which nursing home setting would be used are decentralized to the local level because they depend on several factors, including the type of setting available in each community, and availability varies considerably across locations. For example, veterans in need of nursing home care in Augusta, Maine, who wish to stay within about a 50-mile radius may have the option of receiving care in several settings, including 1 CLC, 12 community nursing homes, and 2 state veterans' nursing homes, assuming availability of beds and resources. However, veterans in Saginaw, Michigan, wishing to stay within a similar radius may have the option of receiving care in 1 CLC and 3 community nursing homes, depending on the availability of beds and resources, since the only 2 state veterans' nursing homes are over 100 miles away. In addition, VA officials told us that decisions about which setting is used are based upon veterans' specific nursing home service needs, and settings varied in the types of services offered. For example, officials told us that in certain geographic areas CLCs provide certain services that are not available in the community, such as dementia care, behavioral health services, and care for ventilator-dependent residents.
In other areas, however, officials told us that these specialized services might not be available in a CLC and instead might be available at a community nursing home. When an individual medical center has more than one CLC in its service area, each CLC may offer a unique set of services. Therefore, according to officials, the availability of different types of services in each nursing home setting depends largely on location. VA officials further told us that admissions policies are generally the same for all CLCs, but may vary for community nursing homes and state veterans' nursing homes. CLCs are required to provide care based on agency-wide policies; therefore, eligibility criteria and admissions policies are generally uniform across the country. An interdisciplinary team--including personnel such as a registered nurse, social worker, recreation therapist, medical provider, dietitian, and any other discipline(s) directly involved in the care of the resident--at the VA medical center of jurisdiction determines whether the veteran has a clinical need for nursing home care. This determination is to be based on a comprehensive clinical assessment of medical, nursing, and therapy needs; level of functional impairment; cognitive status; rehabilitation needs; and special emphasis care needs, such as spinal cord injury or end-of-life care. Each CLC is required to use a standardized instrument that is used by all Centers for Medicare & Medicaid Services-certified nursing homes for assessment, treatment planning, and documentation and evaluation of care and services. Three key factors are considered at the time of admission: (1) the specific services to be provided; (2) whether the stay is short or long; and (3) the setting to which the resident will be discharged. Admissions policies are generally standardized for community nursing homes, but may vary for state veterans' nursing homes based on state requirements. Community nursing homes are required to be certified for participation in the Medicare and Medicaid programs, and use the same standardized instrument for assessment, evaluation, and treatment planning as CLCs. VA officials told us that community nursing homes are required to accept all eligible veterans referred by VA, subject to availability of beds and required resources. State veterans' nursing homes are not required to provide VA with documentation of their admissions policies. Since these homes are state-owned and operated entities, they are subject to admissions and eligibility criteria that vary from state to state. For example, for admission, state veterans' nursing homes in Alabama require the veteran to have 90 days of service, at least one day of which was wartime service. In contrast, state veterans' nursing homes in New York require the veteran to have only 30 days of active service, while homes in Florida do not require any wartime service. In addition to the types of available settings, the nursing home care services available, and the admissions policies at each type of setting, VA officials told us that veterans' eligibility status and their preferences about remaining close to home and family, or their willingness to travel to a nursing home setting, were important considerations. For example, a discretionary veteran with a preference for staying close to home might be a candidate for admission to a community nursing home or a state veterans' nursing home if a CLC was too far away.
However, officials told us that because of the veteran's discretionary status, he or she would be informed of VA's restriction of coverage to only the first 180 days of care in the community nursing home, and staff would assist the veteran in obtaining Medicaid coverage. The veteran's eligibility status would be less important if admission was made to the state veterans' home since the restriction on length of coverage would not apply to this setting. However, officials emphasized that these considerations were made within the context of the availability of specific settings, specific services, and eligibility criteria and admissions policies across locations. Given the variability in these factors, veterans in two different communities with the same service needs, eligibility status and preferences might be admitted to different settings. Of the three VA nursing home settings, state veterans' nursing homes provided care for just over half of VA's nursing home workload in fiscal year 2012. VA's total nursing home workload was primarily long stay that year. Most of the nursing home care that VA provided or paid for in fiscal year 2012 was for discretionary veterans and for residents ages 65 to 84 years old. In addition, veterans' eligibility status and age varied by setting. State veterans' nursing homes accounted for 53 percent of the workload--measured by average daily census--for which VA provided or paid for care in fiscal year 2012. CLCs provided care for 28 percent of the total workload, and community nursing homes provided care for 19 percent of the workload. (See fig. 2.) In fiscal year 2012, the most recent year for which data were available, state veterans' nursing homes provided care to an average of 19,355 residents per day, out of the total average daily workload of 36,250 residents for whom VA provided or paid for nursing home care. This and other workload patterns we examined have been consistent in recent years based on VA data for fiscal years 2010-2012. At the network level, the proportion of nursing home workload in each setting varied widely by network, particularly the range of workload in state veterans' nursing homes compared to the other settings. For example, state veterans' nursing homes comprised 74 percent of total VA nursing home workload in Veterans Integrated Service Network (VISN) 16 (South Central VA Health Care Network), compared to 20 percent of the nursing home workload in VISN 21 (Sierra Pacific Network). (See app. I for more information on nursing home workload by network and setting.) Overall, long-stay care accounted for nearly 90 percent of VA's total nursing home workload in fiscal year 2012 (31,750 of the 36,250 residents for whom VA provided or paid for care each day), and long-stay care accounted for at least three-quarters of all workload in each of VA's three nursing home settings. (See fig. 3.) Of the three settings, state veterans' nursing homes had the largest proportion of long-stay workload (97 percent) compared to community nursing homes (80 percent) and VA's CLCs (76 percent). These patterns were consistent from fiscal year 2010 through fiscal year 2012. VA officials told us that they examine workload data by length of stay for planning purposes, but do not make these data available publicly. For all of the networks, the majority of workload was long-stay. 
The proportion of long-stay workload ranged from 80 percent to 95 percent, with the lowest proportion of long-stay workload in VISN 18 (VA Southwest Health Care Network), VISN 21 (Sierra Pacific Network), and VISN 22 (Desert Pacific Healthcare Network), and the highest proportion of long-stay workload in VISN 3 (VA New York/New Jersey Veterans Healthcare Network). (See app. II for information on workload by network and length of stay.) VA officials said that they thought long-stay care (measured by average daily census) accounted for a high proportion of CLC workload because CLCs provide a number of long-stay programs for veterans who are unable to access certain nursing home services in other settings. For example, according to VA officials, some CLCs offer specialized long-stay programs for residents with dementia or spinal cord injuries, and may also serve residents with mental or behavioral health conditions who are not eligible for nursing home care in other settings. Nearly two-thirds (62 percent) of VA's nursing home care in fiscal year 2012 was provided to discretionary veterans, while just over one-third (35 percent) was provided to mandatory veterans. When examined by age group, nursing home residents 65 through 84 years of age comprised a larger proportion of the workload than other age groups, amounting to 45 percent of VA's nursing home workload. Residents age 85 and older amounted to 37 percent, and those under 65 years of age amounted to 16 percent. (See fig. 4.) At the network level, workload by resident characteristics generally mirrored overall patterns. Workload for most networks was largely discretionary, with discretionary care comprising at least half of the workload in 20 of VA's 21 networks. Discretionary workload ranged from 40 percent of total workload in VISN 21 (Sierra Pacific Network) to 69 percent of total workload in VISN 7 (VA Southeast Network) and VISN 3 (VA New York/New Jersey Veterans Healthcare Network). (See app. III for information on workload by network and eligibility status.) In addition, residents 65 through 84 years of age comprised similar proportions of workload in each network. Specifically, the proportion of workload for residents 65 through 84 years of age ranged from 42 percent in VISN 1 (VA New England Healthcare System), VISN 3 (VA New York/New Jersey Veterans Healthcare Network), VISN 15 (VA Heartland Network), and VISN 21 (Sierra Pacific Network) to 50 percent in VISN 6 (VA Mid-Atlantic Health Care Network) and VISN 11 (Veterans in Partnership). (See app. IV for information on workload by network and resident age.) The proportion of nursing home workload by veterans' eligibility status--mandatory veterans compared to discretionary veterans--varied widely by setting. State veterans' nursing homes provided the highest proportion of discretionary care compared to the other nursing home settings--84 percent of workload in state veterans' nursing homes was for care provided on a discretionary basis, compared to 48 percent of workload in CLCs and just 18 percent in community nursing homes. Conversely, community nursing homes provided the highest proportion of mandatory care (82 percent of workload), followed by CLCs (52 percent) and state veterans' nursing homes (9 percent). (See table 1.) The proportion of workload by age group also varied among the three settings. Of the three settings, state veterans' nursing homes had the highest proportion of workload for veterans age 85 and older.
State veterans' nursing homes also had the smallest proportion of workload for residents under age 65, who constituted less than a tenth of the workload. CLCs and community nursing homes had about the same proportions of workload for each age group, and in contrast to state veterans' nursing homes, they had higher proportions of workload for residents under age 65 (27 and 24 percent of workload, respectively, compared to 8 percent). These patterns indicate that the characteristics of resident populations varied distinctly across settings. A higher proportion of workload in state veterans' nursing homes was for discretionary and older veterans than in the other two settings. In addition, while workload in CLCs and community nursing homes had a similar age distribution of residents, community nursing homes had a higher proportion of workload for mandatory veterans than CLCs. VA officials told us that, at the national level, they rely on workload data for planning and budgeting purposes, especially to ensure that there are adequate resources to serve mandatory veterans. Officials said that reviewing the mix of mandatory versus discretionary veterans is particularly important since VA is required to serve the needs of mandatory veterans. VA officials told us that age data are also becoming important because VA now has a cohort of younger residents, and VA needs to be attuned to these population changes to ensure the required services are available. However, VA does not currently publish data on nursing home workload disaggregated by length of stay and resident characteristics in its budget justification. As a result, VA is not providing workload data on nursing home care provided or paid for to the maximum extent possible as encouraged by OMB guidance to justify staffing and other requirements. Congressional stakeholders therefore have incomplete information on the type of workload (long-stay or short-stay) being provided in each nursing home setting, as well as how settings differ in eligibility status and age of residents they serve. The lack of such information could hinder congressional budgeting and program decision making and oversight regarding VA's staffing and resource requirements for providing nursing home care. Just under three-quarters of VA's total nursing home expenditures in fiscal year 2012 were for care provided in CLCs. In addition, per diem expenditures in CLCs--i.e., the average daily cost per resident--were significantly higher than the per diem expenditures in community nursing homes and state veterans' nursing homes. Over half of VA's nursing home spending was for discretionary care and nearly half of spending was for veterans age 65 to 84. However, spending by eligibility status and age cohort varied by VA nursing home setting. In fiscal year 2012, VA spent more for care provided in CLCs than in the other two settings combined. Seventy-one percent ($3.5 billion) of VA's total expenditures was spent on care provided in CLCs (see fig. 5), although CLCs accounted for 28 percent of the total workload. Conversely, VA spent 16 percent (about $800 million) for nursing home care in state veterans' nursing homes, although state veterans' nursing homes accounted for 53 percent of VA's nursing home workload. The share of VA expenditures in each setting and other expenditure patterns we examine below remained relatively unchanged between fiscal years 2010-2012. 
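The workload and expenditure shares reported above can be reproduced from the underlying figures in this section. The Python sketch below does so; the implied expenditure totals are our back-calculations from the reported CLC share and setting percentages, not separately reported amounts.

# Reproduce fiscal year 2012 workload and expenditure shares from figures reported above.
total_daily_census = 36250      # average residents per day, all three settings
state_home_census = 19355       # average residents per day in state veterans' nursing homes
long_stay_census = 31750        # average residents per day receiving long-stay care

clc_spending = 3.5e9            # CLC expenditures, reported as 71 percent of the total
clc_share = 0.71

print(f"State veterans' nursing homes: {state_home_census / total_daily_census:.0%} of workload")        # 53%
print(f"Long-stay care: {long_stay_census / total_daily_census:.0%} of workload")                        # ~88% ("nearly 90 percent")
print(f"Implied total nursing home expenditures: ${clc_spending / clc_share / 1e9:.1f} billion")         # ~$4.9 billion
print(f"Implied state veterans' home expenditures: ${0.16 * clc_spending / clc_share / 1e9:.2f} billion")  # ~$0.79 billion ("about $800 million")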
Similar to workload, the proportion of VA nursing home expenditures accounted for by each setting varied widely by network. For example, nearly 90 percent of VA's expenditures in VISN 5 (VA Capitol Health Care Network) were for care provided in CLCs, whereas in VISN 19 (Rocky Mountain Network) just 48 percent of expenditures were for care provided in CLCs. (See app. V for more information on nursing home expenditures by network and setting.) In addition to CLCs accounting for most of VA's total nursing home expenditures, the per diem expenditure in CLCs was considerably higher than that for community nursing homes and state veterans' nursing homes, as also reported in VA's annual budget justification. Specifically, while the per diem expenditure across all settings in fiscal year 2012 was $370, the per diem expenditure in CLCs was nearly 4 times higher than for community nursing homes, and about 8 times higher than for state veterans' nursing homes--$953 compared to $244 and $113, respectively. (See table 2.) We also found that the per diem expenditure for CLCs was substantially higher than the per diem expenditure for community nursing homes and state veterans' nursing homes, regardless of the resident's length of stay. Although the short-stay per diem expenditure in CLCs ($1,167) was substantially more than the per diem expenditure for long stays ($884), both were considerably higher than the per diem expenditures for community nursing homes and state veterans' nursing homes. VA officials told us that they have not done any studies comparing the reasons for differences in per diem expenditures across settings because such expenditures are not comparable. However, VA officials provided us with a breakdown of the various components of total and per diem expenditures in CLCs. (See table 3.) VA officials indicated that "core" CLC expenditures, which account for about 40 percent of total CLC per diem expenditures, would be comparable to the care that VA pays for in community nursing homes and state veterans' nursing homes. In addition to these core expenditures, VA's expenditures for CLCs in fiscal year 2012 included direct care expenditures for physicians and other medical personnel staffing, indirect care expenditures for education and research, and overhead expenditures related to VA national programs, among others. In particular, VA officials noted that CLCs are often located in or within close proximity to a VA medical center, and that the facility expenditures alone for CLCs are generally higher than those of community nursing homes and state veterans' nursing homes, which are generally stand-alone facilities. In fiscal year 2012, for example, the per diem expenditure for CLC facility costs alone was $234, roughly comparable to the entire per diem that VA paid for veterans to receive care in community nursing homes that year. VA officials also told us that while the amount of nursing staff in community nursing homes is similar to that of CLCs, the skill level may not be as high. For example, CLCs may hire more licensed and registered nurses due to the needs of the residents in CLCs. VA officials also told us that expenditures for emergency medical care would not be included in the community nursing home per diem expenditures, but that these expenditures are included for CLC residents. 
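The per diem comparisons above are straightforward ratios of the reported daily rates. The sketch below recomputes them and also derives the approximate "core" CLC per diem implied by VA's 40 percent figure; that derived dollar amount is our calculation, not a VA-reported number.

# Compare fiscal year 2012 per diem expenditures across the three settings (rates from table 2).
clc_per_diem = 953              # CLCs
community_per_diem = 244        # community nursing homes
state_home_per_diem = 113       # state veterans' nursing homes

print(f"CLC vs. community nursing home: {clc_per_diem / community_per_diem:.1f} times as high")          # ~3.9 ("nearly 4 times")
print(f"CLC vs. state veterans' nursing home: {clc_per_diem / state_home_per_diem:.1f} times as high")   # ~8.4 ("about 8 times")

# "Core" CLC costs are about 40 percent of the CLC per diem; the dollar figure below is derived, not reported.
core_clc_per_diem = 0.40 * clc_per_diem
print(f"Implied core CLC per diem: about ${core_clc_per_diem:.0f}")   # ~$381, much closer to the other settings' rates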
In addition, while the cost of routine medications is covered under the community nursing home per diem, any high-cost medications are not, although they are accounted for in the overall expenditures for operating the community nursing home program. Officials also noted that per diem expenditures for state veterans' nursing homes only represent a portion of the total expenditures for care, with the remainder being paid for by the state and the veteran. The majority of VA's spending for nursing home care in fiscal year 2012--$3.7 billion, or 75 percent--was on long-stay care. Long-stay care accounted for most of VA's expenditures in each nursing home setting, and accounted for all but a small percentage of spending in state veterans' nursing homes. (See fig. 6.) Although VA officials said they examine data on nursing home spending by length of stay for planning and budgeting purposes, VA does not include such data in its budget justification. Similarly, at the network level, the majority of expenditures for all networks were for long-stay care. The proportion of expenditures for long-stay care ranged from 62 percent for VISN 18 (VA Southwest Health Care Network) to 90 percent for VISN 3 (VA New York/New Jersey Veterans Healthcare Network). (See app. VI for information on expenditures by network and length of stay.) Overall, discretionary care accounted for just over half (52 percent, or $2.5 billion) of all VA nursing home spending--a slightly lower proportion than workload, of which discretionary care comprised 62 percent. Just under half (47 percent, or $2.3 billion) was spent on care for residents age 65 to 84. About one-quarter was spent on residents under age 65, and about the same percentage was spent on residents age 85 and over. (See fig. 7.) At the network level, total expenditures by eligibility status varied by network, with the proportion of spending for discretionary care ranging from 37 percent in VISN 6 (VA Mid-Atlantic Health Care Network) to 61 percent in VISN 15 (VA Heartland Network) and VISN 23 (VA Midwest Health Care Network). (See app. VII for information on expenditures by network and eligibility status.) Spending by age group did not vary as substantially between networks, however. Specifically, the proportion of spending for residents 65 through 84 ranged from 45 percent in VISN 1 (VA New England Healthcare System), VISN 15 (VA Heartland Network), and VISN 19 (Rocky Mountain Network), to 51 percent in VISN 6 (VA Mid-Atlantic Health Care Network). (See app. VIII for information on expenditures by network and resident age.) Similar to workload, spending for discretionary care varied widely by setting, with state veterans' nursing homes having the highest proportion of their total spending (84 percent) for discretionary care compared to the other two settings (50 percent in CLCs and 18 percent in community nursing homes). (See table 4.) Also, state veterans' nursing homes had the highest proportion (46 percent) of their total spending for residents age 85 and older, and the lowest proportion (8 percent) on residents under age 65. CLCs and community nursing homes had similar proportions of their total spending on care for residents in each age group. Similar to workload data, VA program officials told us that they rely on expenditure data by length of stay and resident characteristics for planning and budgeting purposes.
This type of analysis is important given the significant differences in short- and long-stay per diem expenditures, particularly for CLCs, as well as differences in the per diems that VA pays for mandatory and discretionary veterans in state veterans' nursing homes. In addition, according to officials, expenditure data on community nursing homes are especially important from a program perspective since VA looks at unit costs to help in rate negotiations. However, VA does not currently include expenditure data disaggregated by length of stay and resident characteristics in its budget justification, and therefore does not provide information on unit costs to the maximum extent possible as encouraged by OMB to justify staffing and other requirements. As a result, congressional stakeholders have incomplete information on the budget that is approved for VA nursing home care, including the proportion of expenditures that is allocated for long-stay and short-stay care, as well as expenditures by resident characteristics. The lack of such information could hinder congressional budgeting and program oversight regarding VA's staffing and resource requirements for providing nursing home care.

VA now has key data on workload and expenditures for its three nursing home settings that were lacking in the past, and VA officials told us that they use these data for budgeting and planning purposes. Our analysis of these data shows that the nursing home care that VA provides or pays for is primarily long-stay care of 90 days or more for residents with chronic physical and mental limitations across all three nursing home settings, rather than short-stay care for residents with postacute care needs. Most of VA's nursing home workload is for discretionary care, rather than mandatory care, and more care is provided for residents 65 to 84 years of age than for other age groups, though these patterns vary by setting. As VA determines budget estimates and plans for future care needs, these data provide a foundation for understanding the type of care provided, the characteristics of the residents receiving it, and differences among the three settings. We believe that having and using the key workload and expenditure data that we analyzed in this report provides VA with more complete data to better inform its budget estimates and conduct program oversight than in the past. VA is to be commended for collecting and using the information to improve its decision making. However, VA only includes data on total nursing home workload, total expenditures, and per diem expenditures by the three nursing home settings in its budget justification and does not include workload or expenditures disaggregated by length of stay or resident characteristics to the maximum extent possible to justify staffing and other requirements. As a result, Congress does not have complete nursing home data on workload and expenditures by the three settings. The lack of such information could hinder congressional decision making and oversight of budgeting for VA nursing home care staffing and resource needs--care that accounts for a significant portion of VA's health care budget and serves a vulnerable population.
To provide more complete data for Congress, we recommend that the Secretary of Veterans Affairs supplement nursing home workload and expenditure data currently included in VA's budget justification with the following information: Average daily census by length of stay and resident characteristics, including veterans' eligibility status and age. Total expenditures and per diem expenditures by length of stay and resident characteristics, including veterans' eligibility status and age. We provided a draft of this report to VA for comment. In its written comments--reproduced in appendix IX--VA concurred with our recommendation and stated that it will provide supplemental data on both nursing home workload and expenditures by length of stay and resident characteristics upon release of its fiscal year 2015 budget. VA stated that it would provide data for state veterans' nursing homes to the extent the data are available. We are sending copies of this report to the Secretary of Veterans Affairs, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Vijay D'Souza at (202) 512-7114, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. In addition to the contact named above, James C. Musselwhite, Assistant Director; Iola D'Souza; Linda Galib; Drew Long; and Hemi Tewarson made key contributions to this report. Veterans' Health Care Budget: Improvements Made, but Additional Actions Needed to Address Problems Related to Estimates Supporting President's Request, GAO-13-715 (Washington, D.C.: Aug. 8, 2013). Veterans' Health Care: Improvements Needed to Ensure That Budget Estimates Are Reliable and That Spending for Facility Maintenance Is Consistent with Priorities, GAO-13-220 (Washington, D.C.: Feb. 22, 2013). Veterans' Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification, GAO-12-908 (Washington, D.C.: Sept. 18, 2012). Veterans' Health Care Budget Estimate: Changes Were Made in Developing the President's Budget Request for Fiscal Years 2012 and 2013, GAO-11-622 (Washington, D.C.: Jun. 14, 2011). Veterans' Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President's Budget Request, GAO-11-205 (Washington, D.C.: Jan. 31, 2011). VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement, GAO-09-145 (Washington, D.C.: Jan. 23, 2009). VA Long-Term Care: Data Gaps Impede Strategic Planning for and Oversight of State Veterans' Nursing Homes, GAO-06-264 (Washington, D.C.: March 31, 2006). VA Long-Term Care: Trends and Planning Challenges in Providing Nursing Home Care to Veterans, GAO-06-333T (Washington, D.C.: Jan. 9, 2006). VA Long-Term Care: Oversight of Nursing Home Program Impeded by Data Gaps, GAO-05-65 (Washington, D.C.: Nov. 10, 2004).
In fiscal year 2012, about $4.9 billion of VA's $54 billion health care services budget was spent on nursing home care. To inform Congress of its budgeting priorities, VA prepares a budget justification, which is reviewed by OMB, that includes data on nursing home workload and expenditures in the three settings. VA also collects data on length of stay (long- and short-stay) and resident characteristics, including eligibility status, as VA is required to pay for mandatory veterans' nursing home care and may pay for discretionary care as resources permit. These data are important for Congress to understand how funding is allocated for long- and short-stay care and for residents in each setting. GAO was asked to examine VA's nursing home program. Among other things, GAO examined (1) VA's nursing home workload in each setting, by length of stay and resident characteristics; and (2) VA's expenditures for nursing home care in each setting, by length of stay and resident characteristics. GAO analyzed VA nursing home workload and expenditure data, including fiscal year 2012, by setting, length of stay, and resident characteristics; and interviewed VA officials. In fiscal year 2012, the Department of Veterans Affairs' (VA) nursing home workload--the average number of veterans receiving nursing home care per day--was 36,250 across all of the three nursing home settings in which VA provided or paid for veterans' nursing home care. The three settings include Community Living Centers (CLCs), which are VA-owned and operated; community nursing homes with which VA contracts to provide care for veterans; and state veterans' nursing homes, which are owned and operated by states. Over half (53 percent) of this workload was provided in state veterans' nursing homes, 28 percent in CLCs, and 19 percent in community nursing homes. Nearly 90 percent of total workload was long-stay (91 days or more for residents with chronic conditions), and at least 75 percent of care provided in each of VA's three settings was long-stay. In addition, 62 percent of VA's total workload was provided to discretionary veterans (those veterans without certain levels of service-connected disabilities). In fiscal year 2012, VA spent $3.5 billion (71 percent) of its total nursing home expenditures on care provided in CLCs, 16 percent in state veterans' nursing homes and 13 percent in community nursing homes. Seventy-five percent of total spending was for long-stay care, and at least 70 percent of spending in each setting was for long-stay care. About half of total VA spending was for discretionary veterans. GAO found that VA does not provide nursing home workload and expenditure data by length of stay and resident characteristics in its budget justification, although the Office of Management and Budget (OMB) encourages agencies to provide such information to the maximum extent possible to justify staffing and other requirements and improve congressional decision making. As a result, VA does not provide complete information, which could hinder Congress' budgeting and oversight of VA's nursing home staffing and resource requirements. To enhance congressional oversight of VA's nursing home program, GAO recommends that VA supplement data currently included in its budget justification with workload and expenditures by length of stay and resident characteristics. VA concurred with GAO's recommendation and stated it will provide these data upon release of its fiscal year 2015 budget.
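The workload and expenditure totals above relate to each other through a simple unit-cost (per diem) calculation, which is the kind of disaggregated figure the recommendation asks VA to report. The sketch below is illustrative only: the function name is hypothetical, the fiscal year is approximated at 365 days, and the blended figure it produces is not an official VA per diem, since VA computes per diems separately by setting, length of stay, and payer (and, as noted above, some costs fall outside the per diem).

```python
# Minimal sketch: deriving a blended unit cost (per diem) from the report's
# fiscal year 2012 totals. Illustrative only; not VA's per diem methodology.

DAYS_PER_YEAR = 365  # approximation; fiscal year 2012 actually had 366 days

def per_diem(total_expenditures: float, average_daily_census: float) -> float:
    """Unit cost per resident-day = expenditures / (average daily census x days)."""
    return total_expenditures / (average_daily_census * DAYS_PER_YEAR)

total_spending = 4.9e9         # dollars across all three settings, fiscal year 2012
average_daily_census = 36_250  # veterans receiving nursing home care per day

print(f"Blended per diem: ${per_diem(total_spending, average_daily_census):,.0f}")
# Roughly $370 per resident-day when blended; actual per diems vary widely by setting.
```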
DOD's readiness assessment and reporting system was designed to assess and report on military readiness at three levels--(1) the unit level; (2) the joint force level; and (3) the aggregate, or strategic, level. Unit-level readiness is assessed with the Global Status of Resources and Training System (GSORTS), which is an automated system that assesses the extent to which military units possess the required resources and training to undertake their wartime missions. To address joint readiness, the Chairman of the Joint Chiefs of Staff established the Joint Monthly Readiness Review (now called the Joint Quarterly Readiness Review or JQRR), which compiles readiness assessments from the combatant commands, the combat support agencies, and the military services. The Joint Staff and the services use these assessments to brief DOD's leadership on the Senior Readiness Oversight Council--an executive-level forum for monitoring emerging readiness issues at the strategic level. The briefings to the council are intended to present a view of readiness at the aggregate force level. From these briefings to the council, DOD prepares a legislatively mandated quarterly readiness report to Congress. Figure 1 provides an overview of DOD's readiness assessment process.

We have issued several reports containing recommendations for improving readiness reporting. In 1994, we recommended DOD develop a more comprehensive readiness system to include 26 specific readiness indicators. In 1998, we reported on shortcomings in DOD's readiness assessment system. At that time, we stated that GSORTS' limitations included lack of precision in measurements, late reporting, subjective input, and lack of standardization. We also reported that while the Quarterly Readiness Reports to the Congress accurately reflected briefs to the Senior Readiness Oversight Council, they lacked specific details on deficiencies and remedial actions and thus did not meet the requirements of 10 U.S.C. 482 (b). DOD concurred with our recommendation that the Secretary of Defense take steps to better fulfill the legislative reporting requirements under 10 U.S.C. 482 by providing (1) supporting data on key readiness deficiencies and (2) specific information on planned remedial actions. Finally, we reported that deficiencies identified as a result of the Joint Monthly Readiness Reviews remained open because the solutions require funding over the long term. In 2002, we issued a classified report on DOD's process for tracking the status of deficiencies identified in the Joint Monthly Readiness Reviews. We made recommendations to improve DOD's deficiency status reporting system and for DOD to develop funding estimates for correcting critical readiness deficiencies. In its comments, DOD generally agreed with the report's findings and recommendations.

Although DOD has made progress in resolving readiness reporting issues raised in our 1998 report, we found that some of the same issues still exist today. For example, DOD has added information to its Quarterly Readiness Reports to the Congress (hereafter referred to as the quarterly reports). However, we found that the reports still contain vague descriptions of readiness problems and remedial actions. Even though some report annexes contain detailed data, the data as presented are not "user friendly"--they are largely unevaluated, they are not linked to readiness issues mentioned in the report, and the report text does not explain how the data relate to units' readiness.
Thus, as we reported in 1998, these reports do not specifically describe readiness problems or remedial actions as required under 10 U.S.C. 482 (b). We believe that this kind of information would be useful for Congress to understand the significance of the information in these reports for use in its oversight role. DOD has improved some aspects of its unit-level reporting system, the Global Status of Resources and Training System (GSORTS). For example, in 1998 GSORTS' data were maintained in multiple databases and data were not synchronized. As of September 2002, the data are reported to a central site, and there is one database of record. Also in 1998, U.S. Army GSORTS review procedures delayed submission of Army data, and all the services' data entry was manual. As of September 2002, Army reporting procedures require reporting consistent with GSORTS' requirements, and all the services have automated data entry, which reduces errors. In 1998, combat units only reported on readiness for wartime missions. As of September 2002, combat units report on assigned mission readiness in addition to wartime mission readiness. Conversely, DOD has not resolved some issues we raised in 1998. For example, readiness ratings are still reported in broad bands and actual percentages of required resources are not externally reported. These issues remain because the manual specifying readiness reporting rules has not changed in these areas. The manual's definition of readiness levels for personnel has not changed since our 1998 report--it still defines readiness levels in bands of 10 percentage points or more and does not require external reporting of actual percentages. For example, the highest personnel rating can range from 90 percent to 100 percent, and there is no requirement to report the actual percentage outside of DOD. We have also reported that GSORTS does not always reflect training and equipment deficiencies. For example, we reported in April and June 2002 that readiness data do not reflect the effect of training range restrictions on unit readiness. We have also reported that GSORTS does not include whether a unit's chemical/biological equipment is usable. In commenting on our analysis, the OUSD P&R office responsible for readiness reporting stated that it recognized the imprecision of the current measurements. According to that office, an effort to develop the planned new readiness reporting system, which is discussed later in this report, includes working with the DOD components to enhance and expand readiness reporting. Since our 1998 report, the quarterly reports improved in some areas, but degraded in others. Although some information was added, we found that some of the same quality issues remain--namely, that the reports do not specifically describe readiness problems, their effects on readiness, or remedial actions. DOD has added information to the quarterly reports in response to legislative direction. For example, DOD added information on the services' cannibalization rates. Also, DOD added annual reports on infrastructure and institutional training readiness. However, some information was eliminated from the quarterly reports. For example, the law requires results of joint readiness reviews to be reported to Congress. DOD included these results until the July-September 2001 Quarterly Readiness Report to the Congress. Since that report, four quarterly reports have been issued without the joint force assessments. 
Defense officials responsible for readiness reporting said that the joint readiness reviews were not included because the scenarios were based on the former national security strategy of two major wars. The officials stated they plan to include results from the joint readiness reviews in future reports. In commenting on our analysis, the OUSD P&R office responsible for readiness reporting stated that it continues to seek better ways to provide concise, quality information.

As we reported in 1998, we found that the quarterly reports still contain broad statements of readiness issues and remedial actions, which are not supported by detailed examples and are not related to data in the reports' annexes. Among other things, the law requires the quarterly reports to specifically describe each readiness problem and deficiency as well as planned remedial actions. The reports did not specifically describe the nature of each readiness problem or discuss the effects of each on unit readiness. Also, the reports included only broad statements of remedial actions that lacked details on timelines, objectives, or funding requirements. For example, one report said that the Air Force continued to experience shortages in critical job skills that affected the service's ability to train. The report did not refer the reader to data in annexes showing training readiness ratings; it did not state which skills were short, which training was not accomplished, or whether this shortage had or was expected to affect units' readiness ratings. Further, the report did not explain the remedial actions taken or planned to reverse the skill shortage, how long it would take to solve this problem, or what funding was programmed to implement remedial actions. Defense readiness officials agreed, stating that information in the quarterly reports is summarized to the point that there are no details on readiness deficiencies, remedial actions, or funding programmed to implement remedial actions. We believe the Congress needs this type of information to understand the significance of the information reported.

Although some of the quarterly report annexes contain voluminous data, the data are not adequately explained or related to units' readiness. The law does not mandate specific explanations of these "readiness indicators," but we believe it is essential for Congress to understand the significance of the information in these reports for use in its oversight role. For example, DOD is required to report on the maintenance backlog. Although the report provides the quantity of the backlog, it does not explain the effect the backlog had on readiness. Specifically, the report did not explain whether units' readiness was affected because maintenance was not accomplished when needed. In addition, DOD is required to report on training commitments and deployments. The Expanded Quarterly Readiness Report to Congress Implementation Plan dated February 1998 stated that "either an excessive or a reduced level of commitment could be an indicator of potential readiness problems." However, OUSD P&R did not define what kinds of "readiness problems," such as degraded equipment or training, these data might indicate as a result of "excessive or reduced" levels of training and deployments. The data reported are the amount of training away from home station and the amount of deployments. However, these data are not explained or related to a unit's equipment or training ratings.
Further, criteria have not been established to distinguish between acceptable and unacceptable levels of the training and deployment data reported. As a result, the reader does not know whether the data reported indicate a problem or the extent of the problem. In commenting on our analyses, OUSD P&R acknowledged "the Department would be better served by providing more information as to how various data relates to readiness." Generally, the quarterly reports also do not contain information on funding programmed to implement specific remedial actions. For example, one quarterly report included the statement that budgets were revised "to address readiness and capabilities issues," but no examples were provided. Also, the report lacked statements explaining how this "budget revision" would improve readiness. Although not required by law, we believe it would prove useful for Congress to understand how DOD addresses specific readiness problems. In commenting on our analysis, OUSD P&R officials stated that they would continue to work with the services to provide more fidelity with the information presented in the quarterly report annexes. However, they also said that detailed examples require significant staff effort throughout DOD and that the added time for more detailed analysis could render the report a historical document. They further said that complete information would certainly be desired and agreed it is important for the Congress to understand the significance of the information in the quarterly reports for use in its oversight role. DOD has complied with most, but not all, of the readiness reporting requirements added by Congress in the National Defense Authorization Acts for Fiscal Years 1998 through 2002. Congress added readiness reporting requirements out of concern over contradictions between assessment of military unit readiness in official readiness reports and observations made by military personnel in the field. In a review of these acts, we identified both recurring readiness reporting requirements that were added to existing law and one-time reporting requirements related to military readiness. We compared current readiness reporting to the requirements in these acts to make an overall judgment on the extent of compliance. We did not develop a total count of the number of reporting requirements because the acts included a series of sections and subsections that could be totaled in various ways. Because DOD is not reporting on all the requirements added over the past several years, the Congress is not receiving all the information mandated by law. Our analysis showed that DOD has complied with most of the requirements added in the National Defense Authorization Acts for Fiscal Years 1998-2002. For example, DOD took the following actions in response to legislative requirements: DOD is now reporting on the readiness of prepositioned equipment and is listing individual units that have reported low readiness as required by the National Defense Authorization Act for Fiscal Year 1998. DOD is reporting on infrastructure and institutional training readiness as required by the National Defense Authorization Act for Fiscal Year 1999. DOD contracted for an independent study of requirements for a comprehensive readiness reporting system and submitted the study report to the Congress as required by the National Defense Authorization Act for Fiscal Year 2000. 
DOD has added quarterly information on the military services' cannibalization rates as required by the National Defense Authorization Act for Fiscal Year 2001. DOD is reporting on some, though not all, of the items Congress required be added to the quarterly readiness reports. For example, the National Defense Authorization Act for Fiscal Year 1998 required 19 specific items be reported that are consistent with our previously cited 1994 report on readiness reporting. The 1994 report included a list of 26 readiness indicators that DOD commanders said were important for a more comprehensive assessment of readiness. A 1994 DOD-funded study by the Logistics Management Institute found that 19 of the 26 indicators could help DOD monitor critical aspects of readiness. The 19 items listed in the National Defense Authorization Act for Fiscal Year 1998 are very similar to those identified in the 1994 Logistics Management Institute study. DOD is reporting on 11 of the 19 items and is not reporting on the other 8. The eight items are (1) historical personnel strength data and trends, (2) personnel status, (3) borrowed manpower, (4) personnel morale, (5) operations tempo, (6) training funding, (7) deployed equipment, and (8) condition of nonpacing equipment as required in the Act. In an implementation plan setting forth how it planned to comply with reporting on the 19 items, which was also required by the National Defense Authorization Act for Fiscal Year 1998, DOD stated that it would not report on these eight indicators for the following reasons: Deployed equipment was considered part of the status of prepositioned equipment indicator. Historical personnel strength data and trends were available from the annual Defense Manpower Requirements Report. Training funding and operations tempo were believed to be represented adequately in the budget requests as flying hours, steaming days, or vehicle miles and were not considered good measures of readiness output. Personnel strength status was considered to be part of the personnel rating, but DOD agreed to investigate other ways to evaluate the effect of service personnel working outside the specialty and grade for which they were qualified. Borrowed manpower data was only captured in a limited sector of the deployable force and may not be meaningful until a better method is developed to capture the data. Personnel morale had no existing data sources. The condition of nonpacing equipment had no reasonable measurement to use as an indicator. Notwithstanding the reasoning that DOD stated, these eight indicators continue to be required by law, and we saw no indication in our work that DOD is working to develop data for them. Also, DOD is not complying with some of the requirements in the National Defense Authorization Act for Fiscal Year 1999. Examples are as follows: The act required DOD to establish and implement a comprehensive readiness reporting system by April 2000. As of January 2003, DOD had not implemented a new system, and officials said it is not expected to be fully capable until 2007 or 7 years later than required. The act also required DOD to develop implementing regulations for the new readiness reporting system. DOD had not developed implementing regulations as of January 2003. The act required DOD to issue regulations for reporting changes in the readiness of training or defense infrastructure establishments within 72 hours. 
Although DOD has provided some guidance, officials stated they have not issued regulations because no mechanism exists for institutional training or defense infrastructure establishments to report changes and because these entities are not part of an established readiness reporting system. In commenting on our analyses, DOD officials acknowledged "the Department is not in full compliance" and stated that they plan to achieve compliance with the new readiness reporting system under development. OUSD P&R officials said that the shortfalls in reporting are unwieldy under the current system; OUSD P&R intends to correct these shortfalls when the new system is functional. However, as noted above, DOD does not plan to implement its new system until 2007. As of January 2003, DOD also had not targeted incremental improvements in readiness reporting during the period in which the new system is being developed. Until then, Congress will receive less readiness information than it mandated by law. DOD issued a directive in June 2002 to establish a new readiness reporting system. The Undersecretary of Defense for Personnel and Readiness is to oversee the system to ensure the accuracy, completeness, and timeliness of its information and data, its responsiveness, and its effective and efficient use of modern practices and technologies. Officials in the OUSD P&R readiness office responsible for developing the new system said that they plan to use the new system to comply with the requirements in the National Defense Authorization Acts and to address many of the recommendations contained in a congressionally directed independent study. However, as of January 2003, there are few details of what the new system would include. Although the new system may have the potential to improve readiness reporting, as of January 2003, it is only a concept without detailed plans to guide development and monitor implementation. As a result, the extent to which the new system will address existing shortcomings is unknown. The National Defense Authorization Act for Fiscal Year 1999 required DOD to establish a comprehensive readiness reporting system. In doing so, the Congress expressed concern about DOD's lack of progress in developing a more comprehensive readiness measurement system reflective of operational realities. The Congress also noted that past assessments have suffered from DOD's inability to create and implement objective and consistent readiness reporting criteria capable of providing a clear picture to senior officials and the Congress. Subsequently, the August 2001 Defense Planning Guidance for Fiscal Years 2003-2007 called for the development of a strategic plan for transforming DOD readiness reporting. In June 2002, DOD issued a directive establishing the Department of Defense Readiness Reporting System. The system will measure and report on the readiness of military forces and the supporting infrastructure to meet missions and goals assigned by the Secretary of Defense. All DOD components will align their readiness reporting processes in accordance with the directive. The directive assigns oversight and implementing responsibility to the Undersecretary of Defense for Personnel and Readiness. The Undersecretary is responsible for developing, fielding, maintaining, and funding the new system and scenario assessment tools. The Undersecretary--in collaboration with the Joint Chiefs of Staff, Services, Defense Agencies, and Combatant Commanders--is to issue implementing instructions. 
The Chairman of the Joint Chiefs of Staff, the Service Secretaries, the commanders of the combatant commands, and the heads of other DOD components are each assigned responsibilities related to readiness reporting. OUSD P&R established a timetable to implement the new readiness reporting system. OUSD P&R plans to achieve initial capability in 2004 and reach full capability in 2007. OUSD P&R officials involved in developing the system said that they have been briefing the concept for the new reporting system since October 2002. As of January 2003 these officials stated that they are continuing what they have termed the "concept demonstration" phase, which began in October 2002. This phase consists of briefing various offices within DOD, the Joint Staff, and the services to build consensus and refine the new system's concept. These officials also said that the new system will incorporate many, but not all, of the recommendations contained in a legislatively mandated independent study of readiness reporting, which concluded that improvements were needed to meet legislatively mandated readiness reporting requirements and included numerous recommendations for what a new system should include. For example, the study recommended that (1) DOD report on all elements essential to readiness, such as depots, combat support agencies, and Defense agencies; (2) reporting should be in terms of mission essential tasks; and (3) DOD should measure the capability to carry out the full range of National Security Strategy requirements--not just a unit's wartime mission. We believe that successfully developing and implementing a large-scale effort, such as DOD's new readiness reporting system, requires an implementation plan that includes measurable performance goals, identification of resources, performance indicators, and an evaluation plan. As discussed earlier, full implementation of DOD's new readiness reporting system is several years away, and much remains to be done. In January 2003 the OUSD P&R office responsible for developing the new system said that the new readiness reporting system is a large endeavor that requires buy-in from many users and that the development of the system will be challenging. This office also wrote that it had just been given approval to develop the new readiness reporting system, was targeting development of an implementing instruction in the March 2003 time frame, and had not developed an implementation plan to assess progress in developing and implementing the new reporting system. The directive establishing the new reporting system requires the Undersecretary of Defense for Personnel and Readiness, in collaboration with others, to issue implementing instructions for the new system. DOD has experienced delays in implementing smaller readiness improvements than envisioned in the new readiness reporting system. One such effort involved development of an interface to query the existing readiness data base (GSORTS). In a July 2002 report, the DOD Inspector General reported that the planned implementation of this interface slipped 44 months, or just over 3.5 years. Also, history has shown it takes DOD time to make changes in the readiness reporting system. As illustrated in figure 2, DOD began reporting on specific readiness indicators 4 years after it agreed with GAO recommendations to include them in readiness reporting (see fig. 2). Other DOD development efforts recognize the need for effective planning to guide development. 
For example, DOD is working to transform military training as directed by the Defense Planning Guidance for Fiscal Years 2003-07. A March 2002 Strategic Plan for Transforming DOD Training developed by a different office within OUSD P&R discusses a training transformation road map with major tasks subdivided into near-, mid-, and long-term actions. The plan includes a list of near-term actions to be completed by October 2003 and definition of mid- and long-term actions in a comprehensive implementation plan that will identify specific tasks, responsibilities, timelines, resources, and methods to assess completion and measure success. The May 2002 Defense Planning Guidance update for fiscal years 2004-2009 directs OUSD P&R, working with other DOD components, to develop a comprehensive program to implement the strategic training transformation plan and provide it to the Deputy Secretary of Defense by April 1, 2003. Since the directive for creating a new readiness reporting system established broad policy with no specifics and since DOD has not developed an implementation plan, the extent to which the new system will address the current system's shortcomings will remain unknown until the new system is fully capable in 2007. Until then, readiness reporting will continue to be based on the existing system. Commenting on its plans for the new system, OUSD P&R said that it is in the process of creating an Advanced Concept Technology Demonstration (ACTD) structure for the new system and will produce all necessary planning documents required within the established ACTD process. However, this process is intended to provide decision makers an opportunity to understand the potential of a new concept before an acquisition decision. We do not believe the ACTD process will necessarily result in an implementation plan to effectively monitor development and assess whether the new system is being implemented on schedule and achieving desired results. DOD's ACTD guidelines state the principal management tool for ACTDs is a management plan, which provides a top-level description of the objectives, critical events, schedule, funding, and measures of evaluation for the project. We reported in December 2002 that these guidelines contain advice and suggestions as opposed to formal directives and regulations. DOD's guidelines state that the ACTD should plan exercises or demonstrations to provide an adequate basis for utility assessment. We also reported in December 2002 that DOD lacks specific criteria to evaluate demonstration results, which may cause acquisition decisions to be based on too little knowledge. Therefore, we still believe an implementation plan is necessary since the ACTD process does not require a detailed implementation plan and does not always include specific criteria to evaluate effectiveness. While DOD has made some improvements in readiness reporting since 1998, some of the same issues remain unresolved today. Although DOD is providing Congress more data than in 1998, the voluminous data are neither evaluated nor explained. The quarterly reports do not link the effects of "readiness issues" or deficiencies to changes in readiness at the unit level. Also, as in 1998, the reports contain vague descriptions of remedial actions not linked to specific deficiencies. Finally, the quarterly reports do not discuss funding that is programmed to implement specific remedial actions. As a result, the information available to Congress is not as effective as it could be as an oversight tool. 
Even though DOD directed development of a new readiness reporting system, it has not yet developed an implementation plan identifying objective and measurable performance goals, the resources and personnel needed to achieve the goals, performance indicators, and an evaluation plan to compare program results with goals, and milestones to guide overall development of the new readiness system. Even though the new system may have the potential to improve readiness reporting, without an implementation plan little assurance exists that the new system will actually improve readiness assessments by the time full capability is planned in 2007. Without such a plan, it will also remain difficult to gauge progress toward meeting the 2007 target date. This concern is reinforced in light of the (1) years-long delays in implementing other readiness reporting improvements and (2) the deficiencies in existing reporting that OUSD P&R plans to rectify with the new system. Furthermore, without an implementation plan neither senior DOD leadership nor the Congress will be able to determine if the resources spent on this system are achieving their desired results. To improve the information available to Congress for its use in its oversight role, we recommend that the Secretary of Defense direct the OUSD P&R to improve the quality of information contained in the quarterly reports. Specifically, we recommend that DOD's reports explain (in the unclassified section) the most critical readiness issues that are of greatest concern to the department and the services. For each issue, we recommend that DOD's reports include an analysis of the readiness deficiencies, including a clear explanation of how the issue affects units' readiness; a statement of the specific remedial actions planned or implemented; and clear statements of the funding programmed to implement each remedial action. To be able to assess progress in developing the new readiness system, we recommend that the Secretary of Defense direct the OUSD P&R to develop an implementation plan that identifies performance goals that are objective, quantifiable, and measurable; the cost and personnel resources needed to achieve the goals, including an identification of the new system's development and implementation costs in the President's Budget beginning in fiscal year 2005 and Future Years Defense Plan; performance indicators to measure outcomes; an evaluation plan to compare program results with established goals; and milestones to guide development to the planned 2007 full capability date. To assist Congress in its oversight role, we recommend that the Secretary of Defense give annual updates to the Congress on the new readiness reporting system's development to include performance measures, progress toward milestones, comparison of progress with established goals, and remedial actions, if needed, to maintain the implementation schedule. In written comments on a draft of this report, which are reprinted in appendix II, the Department of Defense did not agree with our recommendations. In response to our recommendation that DOD improve the quality of information contained in its quarterly readiness reports, DOD said that the Quarterly Readiness Report to the Congress is one of the most comprehensive and detailed reports submitted to the Congress that discusses serious readiness issues and ways in which these issues are being addressed. 
DOD further stated that the department presents briefings on specific readiness issues to the Congress and that spending more time and resources expanding the existing written report would be counterproductive. We recognize that the Quarterly Readiness Reports to the Congress contain voluminous data. However, as discussed in this report, we found that the quarterly reports' annexes are large and mostly consist of charts or other data that are not adequately explained and are not related to units' readiness. In some cases, criteria have not been established to enable the reader to distinguish between acceptable and unacceptable levels of the data reported. As a result, the reader cannot assess the significance of the data because it is not at all clear whether the data reported indicate a problem or the extent of the problem. Considering that the quarterly reports contain inadequately explained data and that much of the information is not "user friendly," we continue to believe the quality of information in the quarterly reports can be improved. In fact, we reviewed all the quarterly reports provided to Congress since 1998 and found that through the January-June 2001 report the reports did include an unclassified summary of readiness issues for each service addressing four topics--personnel, equipment, training, and enablers (critical units or capabilities, such as specialized aircraft, essential to support operations). However, the reports did not include supporting data or a discussion of remedial actions. Since that time, these summaries have been eliminated from the quarterly reports. For example, the unclassified narratives of the last two reports available at the time we performed our work--January-March 2002 and April-June 2002--were less than two pages long, and neither discussed readiness issues nor ways in which these issues are being addressed. One report discussed the new readiness reporting system, and the other discussed a review of seven environmental laws. Given that DOD has highlighted key issues in the past, we believe that improving the quarterly reports would be beneficial if DOD were to focus on the most critical readiness issues that are of greatest concern to the services and include supporting data and a discussion of remedial actions. Therefore, we have modified our recommendation that DOD improve the quality of readiness reporting to focus on readiness issues deemed to be critical by the Secretary and the military services and to provide more detailed data and analyses of those issues and the remedial actions planned for each one.

DOD did not agree with our recommendations that it (1) develop an implementation plan with, among other things, performance goals that are objective, quantifiable, and measurable and (2) provide annual updates to the Congress on the new readiness reporting system's development. DOD said that it had undertaken an initiative to develop better tools for assessing readiness and that it intended to apprise Congress of its efforts to develop tools for readiness assessment. DOD further stated that the effort to improve readiness reporting is in its infancy, but that it has established milestones, cost estimates, functional responsibilities, and expected outcomes. DOD believes that further planning and a prescriptive annual update to the Congress are unnecessary. We agree that the new readiness reporting system may have the potential to improve readiness reporting.
However, as discussed in this report, the directive establishing the new system contains very broad, high-level statements of overall functional responsibilities and outcomes, but no details on how these will be accomplished. Further, DOD has established two milestones--initial capability in 2004 and full capability in 2007. DOD does not have a road map explaining the steps needed to achieve full capability by 2007, which is seven years after Congress mandated a new system be in place. In addition, as discussed earlier in this report, DOD has experienced delays in implementing much smaller readiness improvements. While DOD has undertaken an initiative to develop better tools for assessing readiness and intends to routinely and fully apprise the Congress on its development efforts, tools are the mechanics for evaluating readiness data. As such, tools are not the same thing as the comprehensive readiness reporting system mandated by Congress that DOD has said will include new metrics and will evaluate entities within DOD that currently do not report readiness. Considering that Congress expressed concern about DOD's lack of progress in developing a comprehensive system, that developing and implementing DOD's planned new system is scheduled to take 4 more years, and that delays have been experienced in earlier efforts to make small improvements in readiness reporting, we continue to believe that it is important for DOD to develop an implementation plan to gauge progress in developing and implementing the new readiness reporting system and to provide annual updates to the Congress. Such a plan would be consistent with DOD's approach to other major initiatives such as transforming training. We have therefore retained these two recommendations. DOD also provided technical corrections and we have modified the report where appropriate. We are sending copies of this report to the Ranking Minority Member, Subcommittee on Readiness, House Committee on Armed Services; the Chairman and Ranking Minority Member, Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; other interested congressional committees; Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me on (757) 552-8111 or by E-mail at [email protected]. Major contributors to this report were Steven Sternlieb, Brenda Waterfield, James Lewis, Dawn Godfrey, and Herbert Dunn. To assess the progress the Department of Defense (DOD) has made in resolving issues raised in our prior report concerning both the unit level readiness reporting system and the lack of specificity in DOD's Quarterly Readiness Reports to the Congress, we met with DOD officials and reviewed regulations and quarterly reports. Specifically, we met with officials of the Office of the Undersecretary of Defense for Personnel and Readiness (OUSD P&R) responsible for readiness reporting, the Joint Staff, and the military services to discuss their individual progress in each of these areas. 
To assess progress regarding unit level readiness reporting, we reviewed the Chairman of the Joint Chiefs of Staff manual governing this system and the related service implementing instructions to determine if these documents had changed since our 1998 report or if the manual and service instructions continued to allow reporting in the same manner as reflected in our earlier report. Through a comparison of the current and prior documents, discussions with pertinent officials, and our analysis, we determined whether the readiness reporting issues we raised in 1998 had been resolved. We also reviewed the content of quarterly reports to assess their quality and usefulness, and assess whether the problems we reported in 1998 had been rectified. We discussed our analysis with OUSD P&R officials and provided them with our analyses in order that they could fully consider and comment on our methodology and conclusions. We did not assess the accuracy of reported readiness data. To determine the extent to which DOD has complied with legislative reporting requirements enacted since our prior report, we compared a complete listing of these requirements to DOD's readiness reporting. First, we identified the legislatively mandated readiness reporting requirements enacted since our 1998 report. To accomplish this, we reviewed the National Defense Authorization Acts for Fiscal Years 1998-2002 to list the one-time and recurring reporting requirements related to military readiness. We also requested congressional staff and OUSD P&R to review the list, and officials from both offices agreed it was accurate. We did not develop a total count of the number of reporting requirements because the acts included a series of sections and subsections that could be totaled in various ways. Once we obtained concurrence that this listing was complete and accurate, we compared this list to current readiness reporting to make an overall judgment on the extent of compliance. To assess how DOD plans to improve readiness reporting, we reviewed the June 2002 DOD directive establishing a new readiness reporting system and a progress update briefing on the new system. We also obtained readiness briefings from each of the services, OUSD P&R, and Joint Staff officials. We performed several electronic searches of the Deputy Under Secretary of Defense (Readiness) electronic Web site to determine the status of readiness reporting. To assess how smoothly other readiness improvements progressed, we reviewed DOD audit reports. We discussed our findings with OUSD P&R officials and worked proactively with them in conducting our analyses. Specifically, we provided them drafts of our analyses for their comments and corrections. We conducted our review from June 2002 through January 2003 in accordance with generally accepted government auditing standards.
The Department of Defense's (DOD) readiness assessment system was designed to assess the ability of units and joint forces to fight and meet the demands of the national security strategy. In 1998, GAO concluded that the readiness reports provided to Congress were vague and ineffective as oversight tools. Since that time, Congress added reporting requirements to enhance its oversight of military readiness. Therefore, the Chairman asked GAO to examine (1) the progress DOD made in resolving issues raised in the 1998 GAO report on both the unit-level readiness reporting system and the lack of specificity in DOD's Quarterly Readiness Reports to the Congress, (2) the extent to which DOD has complied with legislative reporting requirements enacted since 1997, and (3) DOD's plans to improve readiness reporting. Since 1998, DOD has made some progress in improving readiness reporting--particularly at the unit level--but some issues remain. For example, DOD uses readiness measures that vary 10 percentage points or more to determine readiness ratings and often does not report the precise measurements outside DOD. DOD included more information in its Quarterly Readiness Reports to the Congress. But quality issues remain--in that the reports do not specifically describe readiness problems, their effects on readiness, or remedial actions to correct problems. Nor do the reports contain information about funding programmed to address specific remedial actions. Although current law does not specifically require this information, Congress could use it for its oversight role. DOD complied with most, though not all, of the legislative readiness reporting requirements enacted by Congress in the National Defense Authorization Acts for Fiscal Years 1998-2002. For example, DOD (1) is now listing the individual units that have reported low readiness and reporting on the readiness of prepositioned equipment, as required by the fiscal year 1998 Act; (2) is reporting on 11 of 19 readiness indicators that commanders identified as important and that Congress required to be added to the quarterly reports in the fiscal year 1998 Act, but is not reporting on the other 8 readiness indicators; and (3) has not yet implemented a new comprehensive readiness reporting system as required in the fiscal year 1999 Act. As a result, Congress is not receiving all the information mandated by law. DOD issued a directive in June 2002 to establish a new comprehensive readiness reporting system that DOD officials said they plan to use to comply with the reporting requirements specified by Congress. The new system is intended to implement many of the recommendations included in a congressionally directed independent study for establishing such a system. However, the extent to which the new system will actually address the current system's shortcomings is unknown, because the new system is currently only a concept, and full capability is not scheduled until 2007. As of January 2003, DOD had not developed an implementation plan containing measurable performance goals, identification of resources, performance indicators, and an evaluation plan to assess progress in developing the new reporting system. Without such a plan, neither DOD nor the Congress will be able to fully assess whether the new system's development is on schedule and achieving desired results.
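The summary above notes that readiness ratings are reported in bands of 10 percentage points or more and that precise measurements are often not reported outside DOD. The sketch below illustrates the loss of precision that this banding introduces; the band labels and lower cutoffs are assumptions for illustration, based only on the example in this report (the highest personnel rating spans 90 to 100 percent), not the actual GSORTS definitions.

```python
# Minimal sketch of banded readiness reporting, with illustrative thresholds.
# The report states the highest personnel rating spans 90-100 percent and that
# bands are 10 percentage points or more wide; the labels and lower cutoffs
# here are placeholders, not the GSORTS manual's definitions.

ILLUSTRATIVE_BANDS = [
    (90.0, "highest band"),   # 90-100 percent fill
    (80.0, "second band"),
    (70.0, "third band"),
    (0.0,  "lowest band"),
]

def banded_rating(fill_percent: float) -> str:
    """Return the band a unit's actual personnel fill rate falls into."""
    for floor, label in ILLUSTRATIVE_BANDS:
        if fill_percent >= floor:
            return label
    return "lowest band"

# Two units that differ by nearly 10 points of actual fill report identically
# once banded, which is the imprecision described in this report.
for fill in (90.1, 99.9):
    print(f"{fill:5.1f}% fill -> {banded_rating(fill)}")
```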
The United States has the largest, most extensive aviation system in the world with over 19,000 airports ranging from large commercial transportation centers handling millions of passengers annually to small grass airstrips serving only a few aircraft each year. Of these, roughly 3,300 airports are designated by FAA as part of the national airport system and thus are eligible for federal assistance. The national airport system consists of two primary types of airports--commercial service airports, which have scheduled service and enplane 2,500 or more passengers per year, and general aviation (GA) airports, which have no scheduled service and enplane fewer than 2,500 passengers annually. FAA divides commercial service airports into primary airports (enplaning more than 10,000 passengers annually) and commercial service nonprimary airports. The 395 current primary airports are classified by hub type--large-, medium-, small-, and nonhub--based on passenger traffic. Passenger traffic is highly concentrated: 88 percent of all passengers in the United States enplaned at the 63 large- or medium-hub airports in 2013 (see fig. 1). More than 2,900 airports in the national system are designated as GA airports. These airports range from large business aviation and cargo shipment centers that handle thousands of operations a year to small rural airports that may handle only a few hundred operations per year but may provide important access to the national transportation system for their communities.

Generally, the level of aviation activity, whether commercial passenger and cargo or general aviation business and private aircraft, helps to generate the funds that finance airport development. The three primary sources of funding for airport development are Airport Improvement Program (AIP) grants, PFCs, and locally generated revenue. All three sources of funds are linked to passenger aviation activity. AIP is supported by the Airport and Airway Trust Fund (AATF), which is funded by airline ticket taxes and fees; GA flights contribute to the AATF through a tax on aviation jet fuel. Airports included in FAA's National Plan of Integrated Airport Systems (NPIAS) are eligible to receive AIP entitlement (apportionment) grants based on airports' size and can also compete for AIP discretionary grants. AIP grants can only be used for eligible capital projects, generally those that enhance capacity, safety, and environmental conditions, such as runway construction and rehabilitation, airfield lighting and marking, and airplane noise mitigation. The amount made available in AIP appropriations totaled $3.35 billion in fiscal year 2014. The grants generally require a local funding match ranging from 10 to 25 percent, depending on the size of the airport and the type of project.

PFCs, another source of funding for airport development projects, are a federally authorized, statutorily capped, airport-imposed fee of up to $4.50 per enplaned passenger per flight segment, with a maximum of $18 per round trip ticket. The PFC is collected by the airline on the passenger ticket and remitted to the airports (minus a small administrative fee retained by the airline). Introduced in 1991 and initially capped at $3.00 per flight segment, PFC collections can be used by airports for the same types of projects as AIP grants, but also to pay interest costs on debt issued for those projects.
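Because PFCs are capped both per segment and per round trip, the amount collected from a passenger depends on the fee level each enplaning airport imposes and on the statutory caps noted above. The sketch below applies only those two caps as described in this report; the function and the sample itineraries are hypothetical, and actual collection rules (for example, which segments are charged on multi-leg trips, and the airline's administrative fee) are more detailed than this simplification.

```python
# Minimal sketch of the PFC caps described in this report: up to $4.50 per
# enplaned passenger per flight segment, with a maximum of $18 per round trip.
# Itineraries and airport fee levels are hypothetical.

PER_SEGMENT_CAP = 4.50
ROUND_TRIP_CAP = 18.00

def pfc_for_round_trip(segment_fees: list[float]) -> float:
    """Total PFC collected for one passenger's round-trip itinerary.

    segment_fees: the PFC level imposed by the enplaning airport on each segment.
    """
    charged = sum(min(fee, PER_SEGMENT_CAP) for fee in segment_fees)
    return min(charged, ROUND_TRIP_CAP)

nonstop_round_trip = [4.50, 4.50]                  # one enplanement each way
connecting_round_trip = [4.50, 4.50, 4.50, 4.50]   # one connection each way
print(pfc_for_round_trip(nonstop_round_trip))      # 9.00
print(pfc_for_round_trip(connecting_round_trip))   # 18.00, the round-trip maximum
```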
Since its inception, landside development projects--including, for example, new terminal projects--and interest payments on debt used to finance eligible projects have each accounted for 34 percent of total PFC collections spent. The maximum level of PFCs was last increased in 2000. Collections totaled almost $2.8 billion in calendar year 2014. According to FAA, 358 commercial service airports are collecting PFCs as of February 2015. Airports also fund development projects from revenues generated directly by the airport. Airports generate revenues from aviation activities such as aircraft landing fees and terminal rentals, and non-aviation activities such as concessions, parking, and land leases. Aviation revenues are the traditional method for funding airport development and, along with PFCs, are used to finance the issuance of local tax-exempt debt. Because of the size and duration of some airport development projects--for example, a new runway can take more than a decade and several billion dollars to complete--long-term debt can be the only way to finance these types of projects.

FAA's main planning tool for identifying future airport capital projects is the NPIAS. FAA relies on airports, through their planning processes, to identify individual projects for funding consideration. According to FAA officials, FAA reviews input from individual airports and state aviation agencies and validates both eligibility and justification for the project over the ensuing five-year period. Because the estimated cost of eligible airport projects that airports plan to perform greatly exceeds the grant funding available for these projects, FAA uses a priority system based on airport and project type to allocate the available funds. The Airports Council International-North America (ACI-NA), a trade association for airports, also estimates the cost of planned airport capital projects. While almost all airport sponsors in the United States are states, municipalities, or specially created public authorities, there is still a significant reliance on the private sector for finance, expertise, and control of airport assets. For example, we have previously reported that the majority of airport employees at the nation's major airports are employed by private sector firms, such as concessionaires, and some airports are also operated by private companies. Pursuant to statutory authorization, since 1996, FAA has been piloting an airport privatization program that relaxes certain restrictions on the sale or lease of airports to private entities.

A variety of factors has had a substantial impact on the airline industry. We reported in June 2014 that economic issues such as volatile fuel prices and the economic recession have affected the industry, as have airlines' consolidation and an adoption of business models that focus more on capacity management. For instance, the 2007-2009 recession, combined with a spike in fuel prices, helped spur industry mergers and a change in airline business models. Specifically, Delta acquired Northwest in 2008, United and Continental merged in 2010, Southwest acquired AirTran in 2011, and US Airways and American Airlines merged in 2014. Although passenger traffic has generally rebounded as the economy has recovered, the number of commercial aircraft operations has not returned to 2007 levels as airlines are flying larger and fuller aircraft. In June 2014, we found that one outcome of economic pressures and industry changes had been reductions in U.S.
passenger aircraft operations as measured by scheduled flight operations. Many airports have lost both available seats and flights since 2007, when aircraft operations last peaked. However, medium- and small-hub airports have proportionally lost more service than large-hub or nonhub airports, as major airlines merged and consolidated their flight schedules at the largest airports. In June 2014, we found--based on our analysis of Department of Transportation's (DOT) data--that there were about 1.2 million fewer scheduled domestic flights in 2013 than in 2007 at large-, medium-, small-hub, and nonhub airports. The greatest reduction in scheduled flights occurred at medium-hub airports, which decreased nearly 24 percent from 2007 to 2013, compared to a decrease of about 9 percent at large-hub airports and about 20 percent at small-hub airports. Medium-hub airports also experienced the greatest percentage reduction in air service as measured by available seats (see fig. 2). While 2014 passenger activity as represented by the number of passengers onboard aircraft departing U.S. airports has rebounded nearly back to 2007 levels (down 4 percent), the total number of commercial passenger and cargo aircraft departures (operations) in 2014 is still down 18.5 percent since 2007. Declining operations reduce pressure on airports' airside capacity, while rebounding passenger traffic could put pressure on airports' terminals and gates to accommodate passengers. We found in June 2014 that air service to small airports, which generally serve small communities, has declined since 2007 due, in part, to volatile fuel costs and declining populations in small communities. According to a study by the Massachusetts Institute of Technology (MIT), regional aircraft--those mostly used to provide air service to small communities--are 40 to 60 percent less fuel efficient than the aircraft used by mainline carriers at larger hub airports. Further, from 2002 to 2012, fuel costs quadrupled and became the airlines' largest expense at nearly 30 percent of airlines' operating costs. While oil prices have dropped more recently, it remains uncertain whether currently low oil prices will continue. The second major factor affecting small community service is declining population in many regions of the country over the last 30 years. As a result, in previous work, we have found that population movement has decreased demand for air service to certain small communities. For example, geographic areas, especially in the Midwest and Great Plains states, lost population from 1980 through 2010, as illustrated in figure 3 below. Consequently, certain areas of the country are less densely populated than they were 35 years ago when the airlines were deregulated and the Essential Air Service (EAS) program was created. For small communities located close to larger cities and larger airports, a lack of local demand can be exacerbated by passengers choosing to drive to airports in larger cities to access better service and lower fares. The EAS program was created in 1978 to provide subsidies to some small communities that had service at the time of deregulation. We reported last year that EAS has grown in cost but did help stem the declines in service to those communities as compared to other airports. In June 2014, we reported that GA activity has also declined since 2007, particularly affecting airports that rely on general aviation activity for a large share of their revenue. 
For GA airports--which generate revenues from landing fees, fuel sales, and hangar rents--the loss of traffic can have a significant effect on their ability to fund development. A 2012 MIT study that examined trends for GA operations at U.S. airports with air-traffic control towers indicated that from 2000 to 2010, total GA operations dropped 35 percent. According to the MIT study, the number of annual hours flown by GA pilots, as estimated by FAA, has also decreased over the past decade. Numerous factors affect the level of GA operations, including the level of fuel prices, the costs of owning and operating personal aircraft, and the total number of private pilots and GA aircraft. For example, we recently reported on the availability of airline pilots and found that the GA pilot supply pipeline has decreased as fewer students enter and complete collegiate pilot-training programs and fewer military pilots are available than in the past. Earlier this year, FAA reported on airport capacity needs through 2030. The focus of FAA's analysis was not on the broad range of investments airports make to serve passengers and aircraft, but on the capacity of airports to operate without significant delay. Therefore, the primary focus was on airside capacity, especially runway capacity. To do this, FAA modeled recent and forecasted changes in aviation activity, current and planned FAA investments in air-traffic-control modernization, and airport investments in infrastructure, such as new runways, to determine which airports are likely to be congested or capacity constrained in future years, with previous studies in 2004 and 2007 following a similar methodology. The most recent study found that the number of capacity-constrained airports expected in the future has fallen dramatically from the number projected in earlier reports, referred to as FACT1 and FACT2 (see fig. 4). For example, in 2004, FAA projected that 41 airports would be capacity constrained by 2020 unless additional investment occurred. However, in the 2015 report, FAA projected that 6 airports will be capacity constrained in 2020. FAA attributed this improvement to changes in aviation activity, investment in air-traffic-control modernization, and the addition of airport runways. In the September 2014 NPIAS, FAA estimated that airports have roughly $33.5 billion in planned development projects for the period 2015 through 2019 that are eligible for federal support in the form of AIP grants. This estimate is roughly 21 percent less than FAA's previous estimate of $42.5 billion for the period 2013 through 2017 (see fig. 5). FAA reported a decrease in estimated needs for most hub-airport categories and all types of airport development except projects to reconstruct or rehabilitate airport facilities, security-related infrastructure projects, and safety projects (see fig. 6). Notably, according to FAA, planned capacity-related development decreased to $4.9 billion, a 50-percent decrease. Planned terminal-related development also saw a major decline, down by 69 percent from the previous estimate. The ACI-NA also estimated airports' planned development for the 2015 through 2019 period for projects both eligible and not eligible for AIP funding. According to ACI-NA, the total estimated planned-development cost for 2015 through 2019 is $72.5 billion, more than twice FAA's estimate for just AIP-eligible projects. ACI-NA's estimate increased 6.2 percent over its prior estimate of $68.7 billion for the 2013-2017 estimating period. 
According to ACI-NA, the difference in the respective estimates is attributable to ACI-NA's including all projects rather than just AIP-eligible projects like the NPIAS, as well as including projects with identified funding sources, which the NPIAS excludes. For example, ACI-NA's estimate includes AIP-ineligible projects such as parking facilities, airport hangars, and commercial space in large passenger terminal buildings. ACI-NA attributed more than half of the development costs to the need to accommodate growth in passenger and cargo activity. ACI-NA estimated that 36 percent of planned development costs were for terminal projects. We are currently analyzing FAA's and ACI-NA's most recent plan estimates and will be reporting later this year on the results. In fiscal year 2015, Congress made $3.35 billion available in appropriations acts for AIP funding, a reduction from the annual appropriations of $3.52 billion for fiscal years 2007 through 2011. The President's 2016 budget proposal calls for a reduction in annual AIP funding to $2.9 billion in conjunction with an increase in the PFC cap. As we testified in June 2014, if amounts made available in appropriations acts for AIP fall below the $3.2 billion level established in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century of 2000 and no adjustments are made, under the 2000 Act the amount of AIP entitlement grants would be reduced, but more AIP discretionary grants could be made as a result. The larger amount of AIP funding that would go to discretionary grants would give FAA greater decision-making power over the development projects that receive funding. Previous proposals have considered changing how GA airports are allocated their share of AIP funds, which represented approximately one-quarter of total AIP funds in fiscal year 2014. For example, in 2007, the Administration's FAA reauthorization proposal suggested changing the funding structure for GA airports. Specifically, FAA would have tiered GA airports' funding based on the level and type of aviation activities. AIP entitlement funding would then range, based on the tier, up to $400,000. While this proposal was not adopted, FAA recently undertook an exercise to classify GA airports based on their activity levels. In a 2012 report, FAA categorized GA airports as National (84), Regional (467), Local (1,236), and Basic (668); another 497 GA airports were unclassified (Federal Aviation Administration, General Aviation Airports: A National Asset (ASSET 1), May 2012). FAA reported that 281 airports remained unclassified because they did not meet the criteria for inclusion in any of the new categories, thus having no clearly defined federal role; these are generally airports with few or no based aircraft. According to the most recent NPIAS report, many of these 227 airports have received AIP funding in the past and may be considered for future funding if and when their activity levels meet FAA's criteria for inclusion. Moreover, large- and medium-hub airports collecting PFCs must return a portion of their AIP entitlement grants, which are then redistributed to smaller airports through the AIP. As previously noted, 68 percent of PFCs have been used to pay for landside development (terminals) and interest charges on debt. In addition, many airports' future PFC collections are already committed to pay off debt for past projects, leaving little room for new development. For example, at least 50 airports have leveraged their PFCs through 2030 or later, according to FAA data. 
The President's fiscal year 2016 budget proposal and airports have called for increasing the PFC cap to $8--which is intended to account for inflation since 2000, when the maximum PFC cap was last raised--and eliminating AIP entitlements for large-hub airports. We have reported on the effects of increasing PFCs on airport revenues and passenger demand. Specifically, we found that increasing the PFC cap would significantly increase PFC collections available to airports under the three scenarios we modeled but could also marginally slow passenger growth and therefore the growth in revenues to the AATF. We modeled the potential economic effects of increased PFC caps for fiscal years 2016 through 2024 as shown in figure 7 below. Under all three scenarios, trust fund revenues, which totaled $12.9 billion in 2013 and fund FAA activities, would likely continue to grow overall based on current projections of passenger growth; however, the modeled cap increases could reduce the growth in total AATF revenues by roughly 1 percent because of reduced passenger demand if airlines pass the full amount of the PFC increase along to consumers in the form of increased ticket prices. Airport trade associations, the ACI-NA and the American Association of Airport Executives, have made prior proposals to raise the PFC cap to $8.50 with periodic adjustments for inflation. Airports also generate substantial revenues directly: aviation revenues totaled $5.2 billion, while non-aviation revenues were just over $5 billion. According to ACI-NA, non-aviation revenue has grown faster than passenger growth since 2004, over 4 percent on average for non-aviation revenue versus 1.5 percent average growth in passenger boardings over the same period. Further, some airports have developed unique commercial activities with stakeholders from local jurisdictions and the private sector to help develop airport properties into retail, business, and leisure destinations. Some examples include: Non-aviation development on airport property: Airports have turned to an increasing range of unique developments on airport property, including high-end commercial retail and leisure activities, hotels and business centers, and medical facilities for non-aviation revenues. For example, airports in Denver, Miami, and Indianapolis have built cold storage facilities on airport property in an effort to generate revenue by leasing cold storage space to freight forwarders and businesses that transport low-volume, high-valued goods, including pharmaceuticals, produce, and other time-sensitive or perishable items. Public-private partnerships: Airports can fund airport improvements with private sector participation. Public-private partnerships, involving airports and developers, have been used to finance airport development projects without increasing the amount of debt already incurred by airports. For example, the Port Authority of New York and New Jersey has recently received responses to its request for proposals for the private sector to demolish old terminal buildings and construct, partially finance, operate, and maintain a new Central Terminal Building for LaGuardia Airport in New York City. Privatization: FAA's Airport Privatization Pilot Program (APPP), which was established in 1997 to reduce barriers to airport privatization that we identified in 1996, has generated limited interest from the public and private sectors. 
As we reported in November 2014, 10 airports have applied to be part of the pilot program and one airport--San Juan Luis Munoz Marin International Airport in Puerto Rico--has been privatized (see fig. 8). In our report, we noted that several factors reduce interest in the APPP--such as higher financing costs for privatized airports, the lack of state and local property tax exemptions, and the length of time to complete a privatization under the program. Public sector airport owners have also found ways to gain some of the potential benefits of privatization without full privatization, such as entering airport management contracts and joint development agreements for managing and building an airport terminal. In conclusion, last year commemorated one century since the first commercial airline flight, and in that relatively short time span commercial aviation has grown at an amazing pace to become a ubiquitous and mature industry in the United States. While commercial aviation still has many exciting growth prospects for its second century, it also faces many challenges--among them how to ensure that the aviation system can accommodate millions of flights and hundreds of millions of passengers every year in the midst of shifting aviation activity and constrained federal funding. Despite recent declines in airport operations, airports must still be maintained and upgraded to preserve safety and accommodate future growth. Declines in airport operations have reduced demands on AIP, but rebounding passenger activity could continue to put pressure on PFCs to finance terminal and other projects. Developing airports will require the combined resources of federal, state, and local governments, as well as private companies' capital and expertise. Effectively supporting this development involves focusing federal resources on FAA's key priorities of maintaining the world's safest aviation system and providing adequate system capacity, while allowing sufficient flexibility for local airport sponsors to maximize local investment and revenue opportunities. In deciding the best course for future federal investment in our national airport system, Congress is faced with weighing the interests of all aviation stakeholders, including airports, airlines, other airport users, and most importantly passengers, to help ensure a safe and vibrant aviation system. Madam Chair Ayotte, Ranking Member Cantwell, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this testimony, please contact Gerald L. Dillingham at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Paul Aussendorf (Assistant Director), Amy Abramowitz, David Hooper, Delwen Jones, Josh Ormond, Melissa Swearingen, and Russell Voth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. airports are key contributors to the national and regional economies, providing mobility for people and goods, both domestically and internationally. Since 2007, when GAO last reported on airport funding, airports of all sizes have experienced significant changes in aviation activity. Financing for airport capital improvements is based on a mix of federal AIP grants, federally authorized but statutorily-capped PFCs, and locally generated aviation-related and non-aviation-related revenues. As deliberations begin in advance of FAA's 2015 reauthorization, Congress is faced with considering the most appropriate type, level, and distribution of federal support for development of the National Airspace System. This testimony discusses trends in (1) aviation activity at airports since 2007, (2) forecasted airport capacity needs and airports' planned development costs, and (3) financing for airport development. This testimony is based on previous GAO reports issued from June 2007 through December 2014, with selected updates conducted through April 2015. To conduct these updates, GAO reviewed recent information on FAA's program activities and analyses outlined in FAA reports, including the 2015 aviation forecast and the 2015-2019 planned airport-development estimates. Economic factors, since 2007, have led to fewer scheduled commercial flights, a trend more pronounced for some types of airports. These economic factors include not just the volatile fuel prices and the 2007 to 2009 recession but also evolving airline practices, such as airline mergers and the adoption of business models that emphasize capacity management. For example, as GAO reported in June 2014, the number of scheduled flights at medium- and small-hub airports declined at least 20 percent from 2007 to 2013, compared to about a 9 percent decline at large-hub airports. General aviation (GA) activity has also declined, as measured by the number of GA aircraft operations and hours flown, due to similar economic factors. In recent years, however, passenger growth has rebounded. According to the Federal Aviation Administration's (FAA) projections, U.S. airline passenger traffic is predicted to grow 2 percent per year through 2035--a growth rate that is slightly lower than that of past forecasts. According to FAA estimates, the number of airports that require additional capacity to handle flight operations and avoid delays has declined since 2004. Similarly, the future cost of planned airport development has also declined in recent years. Earlier this year, FAA projected that 6 airports will be capacity constrained in 2020 compared to 41 in the 2004 projection. Even with this improvement, some airports--like those in the New York City area--will remain capacity constrained, according to FAA. The overall improved capacity situation is also reflected in reduced estimates of future airport-development costs that are eligible for federal grants. In September 2014, FAA estimated that for the period 2015 through 2019, airports have about $33.5 billion in planned development projects eligible for federal Airport Improvement Program (AIP) grants--a 21 percent reduction from the $42.5 billion estimate for the time period 2013 through 2017. The biggest decline in planned development costs among project categories is in capacity projects such as new runway projects. 
However, an airport industry association estimated planned airport capital project costs, both those eligible and not eligible for AIP, of $72.5 billion for 2015 through 2019, an increase of 6.2 percent from the association's prior 5-year estimate for 2013 through 2017. As traditional funding sources for airport development have generally declined, airports have increasingly relied on other sources of financing. Specifically, federal AIP grants and Passenger Facility Charges (PFC) are two primary sources of federally authorized funding for airports. The amount made available for AIP decreased from over $3.5 billion for fiscal years 2007 through 2011 to less than $3.4 billion for fiscal year 2015. Further, the President's 2016 proposed budget calls for additional reductions in AIP, though these would be offset with a proposed increase in the PFC cap, which is currently $4.50 per flight segment. Airports have also sought additional opportunities to collect non-aviation revenues. As a result, according to FAA, non-aviation revenue has increased each year from 2008 through 2014. For example, airports have (1) partnered with the private sector to fund airport improvements; (2) identified new business ventures on airport property, including the development of commercial retail, leisure activities, and medical facilities; and (3) explored options for privatization.
The JWST project continues to report that it remains on schedule and budget, with its overall schedule reserve currently above its plan. However, the project is now entering a difficult phase of development--integration and testing--which is expected to take another 3.5 years to complete. Maintaining as much schedule reserve as possible is critical during this phase to resolve known risks and unknown problems that may be discovered. As one of the most complex projects in NASA's history, the project faces significant risks ahead, because it is during integration and testing that problems are most likely to be found and, as a result, schedules tend to slip. As seen in figure 1, only two of five elements and major subsystems--ISIM and OTE--have entered the integration and testing phase. Integration and testing for the spacecraft and sunshield and for the ISIM and OTE when they are integrated together begins in 2016, and the entire observatory will begin this phase in late 2017. In December 2014, we reported that schedule risk was increasing for the project because it had lost schedule reserve across all elements and major subsystems. As a result, all were within weeks of becoming the critical path of the project and driving the project's overall schedule. Figure 2 shows the different amounts of schedule reserve remaining on all elements and major subsystems, their proximity to the critical path, and the total schedule reserve for the critical path at the time of our review. The proximity of all the elements and major subsystem schedules to the critical path means that a delay on any of the elements or major subsystems may reduce the overall project schedule reserve further, which could put the overall project schedule at risk. As a result, the project has less flexibility to choose which issues to mitigate. While the project has been able to reorganize work when necessary to mitigate schedule slips thus far, with further progression into subsequent integration and testing periods, flexibility will be diminished because work during integration and testing tends to be more serial, as the initiation of work is often dependent on the successful and timely completion of the prior work. This is particularly the case with JWST given its complexity. Challenges with the development and manufacturing of the sunshield and the cryocooler were the most significant causes of the decline in schedule reserve that we reported on in December 2014. The sunshield experienced a significant manufacturing problem during the construction of the large composite panel that forms part of the sunshield's primary support structure. Delivery of the cryocooler compressor assembly--one component of the cryocooler--is a top issue for the project, and its development has required a disproportionate amount of cost reserves to fund additional work, caused in part by issues such as a manufacturing error and a manufacturing process mistake that delayed the schedule. The development of the cryocooler has been a concern for project officials as far back as 2006. Since that time, the cryocooler has faced a number of technical challenges, including valve leaks and cryocooler underperformance, which required two subcontract modifications and significant cost reserves to fund. The contractor and subcontractor were focused on addressing valve problems, which limited their attention to the cooling underperformance issue. 
This raised questions about the oversight of the cryocooler and why it did not get more attention sooner, before significant delays occurred. In August 2013, the cryocooler subcontract was modified to reflect a 69 percent cost increase, and the workforce dedicated to the cryocooler effort at the subcontractor increased from 40 staff to approximately 110 staff. Since we issued our December 2014 report, JWST schedule reserve has continued to decline: project schedule reserve decreased by 1 month, leaving 10 months of schedule reserve remaining, and the critical path switched from the cryocooler to the ISIM. The project is facing additional challenges with the testing of the ISIM and OTE and the manufacturing of the spacecraft, in addition to continuing challenges with the cryocooler compressor assembly, which further demonstrates continued schedule risk for the project. For example, after the second test of the ISIM--the element of JWST that contains the telescope's four different scientific instruments--electronic, sensor, and heat strap problems were identified that impact two of the four instruments. Mitigating some of these issues, to allow officials time to replace the unusable and damaged parts, led to a 1.5-month slip to the ISIM schedule and made ISIM the current critical path of the project. As a result, ISIM's third and final cryovacuum test, scheduled to begin in August 2015, has slipped until September 2015. The OTE and spacecraft efforts are also experiencing challenges that may impact the schedules for those efforts. For example, it was discovered that over 70 harnesses on the OTE potentially had nicks on some wires, and the majority will need to be repaired or rebuilt. The effects of these challenges on the project's schedule are still being determined. Finally, the cryocooler compressor assembly has yet to be delivered and will be more than 16 months late if the current delivery date holds. Since our December 2014 report, the cryocooler compressor assembly's delivery slipped almost an additional 2 months due to manufacturing and build issues and an investigation of a leak at a joint with the pulse tube pre-cooler. Currently, the cryocooler compressor assembly is expected to be delivered in mid-June 2015 and is only 1 week off of the project's critical path. Entering fiscal year 2015, the JWST project had limited short-term cost reserves to address technical challenges and maintain schedule. We reported that the project had committed approximately 40 percent of the fiscal year 2015 cost reserves before the start of the fiscal year. As a result, one of the project's top issues for fiscal year 2015 is its cost reserve posture, which the project reported is less than desired and will require close monitoring. At the end of February, project officials had committed approximately 60 percent of the fiscal year 2015 cost reserves and noted that maintaining fiscal year 2016 reserves needed close watching. The types of technical problems JWST is experiencing are not unusual for a project that is unique and complex. They are an inherent aspect of pushing technological, design, and engineering boundaries. What is important when managing such a project is having a good picture of risks, which can shift from day to day, and having effective tools for mitigating risks as they surface. Using up-to-date and thorough data on risks is also integral to estimating resources needed to complete the project. 
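To illustrate the schedule-reserve bookkeeping described above--where the element with the least remaining slack defines the critical path, and any slip beyond that slack consumes the project's overall reserve--the following is a minimal Python sketch. The element names come from the testimony, but the slack values and the slip scenario are hypothetical and are not the project's actual figures.

from datetime import timedelta

# Remaining slack of each element relative to the project's critical path.
# ISIM is on the critical path (zero slack); the other values are illustrative.
slack = {
    "ISIM": timedelta(weeks=0),
    "Cryocooler compressor assembly": timedelta(weeks=1),
    "OTE": timedelta(weeks=3),
    "Spacecraft and sunshield": timedelta(weeks=4),
}
project_reserve = timedelta(weeks=43)  # roughly the 10 months cited above


def apply_slip(slack, project_reserve, element, slip):
    """Charge a slip to an element; any slip beyond the element's slack
    erodes the project's overall schedule reserve."""
    overrun = max(slip - slack[element], timedelta(0))
    slack[element] = max(slack[element] - slip, timedelta(0))
    return project_reserve - overrun


critical_path = min(slack, key=slack.get)
print("Critical path:", critical_path)  # ISIM

# A hypothetical 5-week slip on the OTE exceeds its 3-week slack by 2 weeks,
# so the project's overall reserve drops by 2 weeks.
project_reserve = apply_slip(slack, project_reserve, "OTE", timedelta(weeks=5))
print("Project reserve now:", project_reserve)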
Given the cost of JWST, its previous problems with oversight, and the fact that the program is entering its most difficult phases of development, risk analysis and risk management have been a key focus of our work. JWST officials have taken an array of actions following the 2011 replan to enable the program to have better insight into risks and to mitigate them. For instance, we reported in 2012 that the project had implemented a new risk management system after it found the previous system lacked rigor and was ineffective for managing risks. The project instituted meetings at various levels throughout NASA and its contractors and subcontractors to facilitate communication about risks. The project also added personnel at contractor facilities, which allowed for more direct interaction and quicker resolution of issues. However, we reported in December 2014 that neither NASA nor the prime contractor had updated the cost risk analysis that underpinned the cost and schedule estimates for the 2011 replan. A cost risk analysis quantifies the cost impacts of risks and should be used to develop and update a credible estimate that accounts for all possible risks--technical, programmatic, and those associated with budget and funding. Moreover, conditions have changed significantly since the replan. For example, the delivery of the cryocooler compressor assembly is one of the project's top issues and was not an evident risk when the cost risk analysis was conducted in 2011. On the prime contract, our analysis found that 67 percent of risks tracked by Northrop Grumman in April 2014, at the time of our analysis, were not present in September 2011 at the time of the replan. We determined that a current and independent cost risk analysis was needed to provide Congress with insight into JWST's remaining work on the Northrop Grumman prime contract--the largest (most expensive) portion of work remaining. A key reason for this determination was and continues to be the significant potential impact that any additional cost growth on JWST would have on NASA's broader portfolio of science projects. To provide updated and current insight into the project's cost status, we took steps to conduct an independent, unbiased analysis. We were, however, unable to conduct the analysis because Northrop Grumman did not allow us to conduct anonymous interviews of technical experts without a manager present. In order to collect unbiased data, interviewees must be assured that their opinions on risks and opportunities remain anonymous. Unbiased data would have allowed us to provide a credible assessment of risks for Northrop Grumman's remaining work. NASA and the JWST project disagreed that an independent cost risk analysis conducted by an outside organization at this point in the project would be useful. Neither believed that an organization external to NASA could fully comprehend the project's risks. Further, they noted that any such analysis would be overly conservative due to the complexities of the risks and not representative of the real risk posture of the project. GAO's best practices call for cost estimates to be compared to independent cost estimates in addition to being regularly updated. Without an independent and updated analysis, both the committee members' and NASA's oversight and management of JWST will be constrained since the impact of newer risks has not been reflected in key tools, including the cost estimate. 
Moreover, our methodology would have provided both NASA and Northrop Grumman with several opportunities to address concerns with our findings, including concerns about conservatism. After we were unable to conduct the cost risk analysis, NASA decided to conduct its own cost risk analysis of the Northrop Grumman remaining work. However, a NASA project official said that they did not plan to use data from the cost risk analysis to manage the project. Instead, they indicated that they planned to use the information to inform committee members of the project's cost risk and would continue to rely on other tools already in place to project the future costs of the project, such as earned value management (EVM) analysis. To maintain quality cost estimates over the life of a project, best practices state that cost risk analyses should be updated regularly to incorporate new risks and be used in conjunction with EVM analysis to validate cost estimates. While EVM is a very useful tool for tracking contractor costs and potential overruns, the analyses are based on past performance and do not reflect the potential impact of future risks. We reported that if the project did not follow best practices in conducting its cost risk analysis or use it to inform project management, the resulting information may be unreliable and may not provide substantive insight into JWST's potential cost to allow either Congress or project officials to take any warranted action. To better ensure NASA's efforts would produce a credible cost risk analysis, in December 2014, we recommended that officials follow best practices while conducting a cost risk analysis on the prime contract for the work remaining and update it as significant risks emerged. Doing so would ensure it provided information to effectively manage the program. NASA partially concurred with our recommendation, again noting that it has a range of tools in place to assess all contractors' performance, that the approach the project has in place is consistent with best practices, and that officials will update the cost risk analysis again when required by NASA policy. We found that NASA best practices for cost estimating recommend updating the cost risk analysis while a project is being designed, developed, and tested, as changes can impact the estimate and the risk assessment. Since our report was published, NASA completed its analysis and provided the results to us. We are currently examining the analysis to assess its quality and reliability and the extent to which it was done in accordance with NASA and GAO best practices. Our initial examination indicates the JWST project took the cost risk analysis seriously and took into account best practices in the execution of the analysis. The project has also recently begun conducting a new analysis of EVM data, which it terms a secondary estimate at completion analysis, for two of its largest contractors--Northrop Grumman and Exelis--on work to go. This analysis should provide the project additional insight on the probabilities of outcomes while incorporating current risks against the cost reserves that remain. The initial analysis we have reviewed indicates that both contracts are forecasted to generally cost more at completion than the information produced using EVM analysis alone, but within the JWST life-cycle cost. However, we still have work to do to understand how NASA is analyzing the information and what assumptions it is putting into its analysis. 
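As a rough illustration of the distinction drawn above between past-performance-based EVM projections and risk-informed estimates, the sketch below applies the standard, textbook earned value relationships and then adds the expected cost of discrete future risks. This is a generic formulation, not NASA's or Northrop Grumman's actual "secondary estimate at completion" method, and all dollar figures and risk values are hypothetical.

def estimate_at_completion(bac, ev, ac):
    """Standard EVM estimate at completion: actual cost to date plus the
    remaining budgeted work performed at the historical efficiency (CPI)."""
    cpi = ev / ac                      # cost performance index
    return ac + (bac - ev) / cpi


def risk_adjusted_eac(eac, risks):
    """Add the expected cost of future risks (probability x cost impact),
    which EVM based on past performance alone does not capture."""
    return eac + sum(probability * impact for probability, impact in risks)


# Hypothetical values, in millions of dollars.
bac, ev, ac = 3_500.0, 2_100.0, 2_300.0
risks = [(0.4, 60.0), (0.2, 150.0)]    # hypothetical (probability, impact) pairs

eac = estimate_at_completion(bac, ev, ac)
print(f"EVM-based EAC:     ${eac:,.0f} million")
print(f"Risk-adjusted EAC: ${risk_adjusted_eac(eac, risks):,.0f} million")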
In conclusion, with approximately 3.5 years until launch, project officials have made much progress building and testing significant pieces of hardware and are currently on schedule--achieving important milestones--and on budget. They have also taken important steps to increase their insight and oversight into potential problems. What is important going forward is having good insight into risks and preserving as much schedule reserve as possible--particularly given the complexity of the project, the fact it is entering deeper into its integration and testing cycle, and the fact that it has limited funds available in the short term to preserve schedule. Any cost growth on JWST may have wider implications on NASA's other major programs. While we are concerned about NASA's reluctance to accept an independent cost risk assessment, particularly in light of past problems with oversight, we are also encouraged that NASA took steps to conduct an updated risk analysis of Northrop Grumman's work and that NASA has sustained and enhanced its use of other tools to monitor and manage risk. As we undertake this year's review of JWST, we will continue to focus on risk management, the use of cost reserves, progress with testing, as well as the extent to which its cost risk analysis followed best practices. We look forward to continuing to work with NASA on this important project and reporting to Congress on the results of our work. Chairman Palazzo, Ranking Member Edwards, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to answer questions related to our work on JWST and acquisition best practices at this time. For questions about this statement, please contact Cristina Chaplain at (202) 512-4841, or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Shelby Oakley, Assistant Director; Karen Richey, Assistant Director; Jason Lee, Assistant Director; Patrick Breiding; Laura Greifner; Silvia Porres; Carrie Rogers; Ozzy Trevino; and Sylvia Schatz. James Webb Space Telescope: Project Facing Increased Schedule Risk with Significant Work Remaining. GAO-15-100. Washington, D.C.: December 15, 2014. James Webb Space Telescope: Project Meeting Commitments but Current Technical, Cost, and Schedule Challenges Could Affect Continued Progress. GAO-14-72. Washington, D.C.: January 8, 2014. James Webb Space Telescope: Actions Needed to Improve Cost Estimate and Oversight of Test and Integration. GAO-13-4. Washington, D.C.: December 3, 2012. NASA's James Webb Space Telescope: Knowledge-Based Acquisition Approach Key to Addressing Program Challenges. GAO-06-634. Washington, D.C.: July 14, 2006. NASA: Assessments of Selected Large-Scale Projects. GAO-15-320SP. Washington, D.C.: March 24, 2015. NASA: Assessments of Selected Large-Scale Projects. GAO-14-338SP. Washington, D.C.: April 15, 2014. NASA: Assessments of Selected Large-Scale Projects. GAO-13-276SP. Washington, D.C.: April 17, 2013. NASA: Assessments of Selected Large-Scale Projects. GAO-12-207SP. Washington, D.C.: March 1, 2012. NASA: Assessments of Selected Large-Scale Projects. GAO-11-239SP. Washington, D.C.: March 3, 2011. NASA: Assessments of Selected Large-Scale Projects. GAO-10-227SP. Washington, D.C.: February 1, 2010. NASA: Assessments of Selected Large-Scale Projects. GAO-09-306SP. Washington, D.C.: March 2, 2009. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
JWST is one of the National Aeronautics and Space Administration's (NASA) most complex and expensive projects. At an anticipated cost of $8.8 billion, JWST is intended to revolutionize understanding of star and planet formation, advance the search for the origins of the universe, and further the search for earth-like planets. Since entering development in 1999, JWST has experienced significant schedule delays and increases to project costs and was rebaselined in 2011. With significant integration and testing planned during the approximately 3.5 years until the launch date in October 2018, the JWST project will need to address many challenges before NASA can conduct the science the telescope is intended to produce. GAO has reviewed JWST for the last 3 years as part of an annual mandate and for the last 7 years as part of another annual mandate to review all of NASA's major projects. Prior to this, GAO also issued a report on JWST in 2006. This testimony is based on GAO's third annual report on JWST (GAO-15-100), issued in December 2014, with limited updated information provided where applicable. That report assessed, among other issues, the extent to which (1) technical challenges were impacting the JWST project's ability to stay on schedule and budget, and (2) budget and cost estimates reflected current information about project risks. To conduct that work, GAO reviewed monthly JWST reports, interviewed NASA and contractor officials, reviewed relevant policies, and conducted independent analysis of NASA and contractor data. James Webb Space Telescope (JWST) project officials report that the effort remains on track toward the schedule and budget established in 2011. However, the project is now in the early stages of its extensive integration and testing period. Maintaining as much schedule reserve as possible during this phase is critical to resolve challenges that will likely surface and negatively impact the schedule. JWST has begun integration and testing for only two of five elements and major subsystems. While the project has been able to reorganize work when necessary to mitigate schedule slips thus far, this flexibility will diminish as work during integration and testing tends to be more serial, with the initiation of work often dependent on the successful and timely completion of the prior work. (The cryocooler chills an infrared light detector on one of JWST's four scientific instruments.) In December 2014, GAO reported that delays had occurred on every element and major subsystem schedule, each was at risk of driving the overall project schedule, and the project's schedule reserve had decreased from 14 to 11 months. As a result, further delays on any element or major subsystem would increase the overall schedule risk for the project. At the time of the report, challenges with manufacturing of the cryocooler had delayed that effort and it was the driver of the overall project schedule. Since the December report, the project's overall schedule reserve decreased to 10 months as a result of several problems that were identified following a test of the Integrated Science Instrument Module (ISIM), which contains the telescope's scientific instruments. ISIM is now driving the overall project schedule. Furthermore, additional schedule impacts associated with challenges on several other elements and major subsystems are still being assessed. 
At the time of the December 2014 report, the JWST project and prime contractor's cost risk analyses used to validate the JWST budget were outdated and did not account for many new risks identified since 2011. GAO best practices for cost estimating call for regularly updating cost risk analyses to validate that reserves are sufficient to account for new risks. GAO recommended, among other actions, that officials follow best practices while conducting a cost risk analysis on the prime contract and update the analysis as significant risks emerged. NASA partially concurred, noting that it has a range of tools in place to assess performance and would update the analysis as required by policy. Since then, officials completed the analysis and GAO is currently examining the results.
Charter schools are a new and increasingly popular entrant in the debate on restructuring and improving U.S. public education. The model offered by charter schools differs substantially from the traditional model for governing and funding public schools. Charter schools operate under a charter or contract that specifies the terms under which the schools may operate and the student outcomes they are expected to achieve. Charter schools may be exempt from most local and state rules, hire their own staff, determine their own curriculum, receive funding directly from the state, and control their own budgets. In contrast, traditional public schools are subject to substantial external controls, such as local, state, and federal requirements, which limit their authority over curriculum and personnel decisions. Federal, state, and local funding for traditional public schools usually flows through the district, and individual schools often have little control over their budgets. Between 1991 and 1994, 11 states enacted legislation authorizing charter schools to achieve a variety of purposes, including encouraging innovative teaching, promoting performance-based accountability, expanding choices in the types of public schools available, creating new professional opportunities for teachers, improving student learning, and promoting community involvement. The federal government has also acted on behalf of charter schools. Two major pieces of federal education legislation passed in 1994 include provisions on charter schools. The Improving America's Schools Act, which reauthorized and amended the ESEA of 1965, includes a new federal grant program to support the design and implementation of charter schools (see app. II for a description of this program). The Improving America's Schools Act also specifies the conversion of a school to charter school status as a possible corrective action that a school district can require of a school that has been identified for school improvement. The Goals 2000: Educate America Act allows states to use federal funds provided under the act to promote charter schools. As of January 1995, nine states had approved 134 charter schools with diverse instructional and operating characteristics. Another two states--Georgia and Kansas--had adopted laws authorizing charter schools but had not yet approved any. (See table 1.) As many as 14 more states may consider legislation in 1995. Approved charter schools include 85 new schools and 49 conversions of existing schools (see fig. 1), with some states only allowing such conversions (see table 2). Charter schools' diverse instructional programs include approaches such as instructing children of multiple ages in the same classroom, known as multiage grouping; teaching subjects in the context of a certain theme, known as thematic instruction; and using the Internet as an instructional tool. Some charter schools specialize in certain subject areas, such as the arts, sciences, or technology; others emphasize work experience through internships or apprenticeships. Some charter schools target specific student populations, including students at risk of school failure, dropouts, limited English proficient students, noncollege-bound students, or home-schooled students. Under the state laws in California, Colorado, Kansas, and Wisconsin, charter schools that target students at risk of school failure receive preference for approval. 
As some advocates envision them, charter schools would operate with far greater autonomy than traditional schools. They would operate independently from the school districts where they are located and unconstrained by government regulations; they would control their own budgets, personnel, curriculum, and instructional approaches. While this is the case for charter schools in some states, other states have laws that authorize charter schools with more limited autonomy. State laws influence charter schools' autonomy by how they provide for their (1) legal status, (2) approval, (3) funding, and (4) exemption from rules. Charter schools under four states' laws are legally independent from the school districts where they are located; that is, the charter schools are legally responsible for their operations (see table 3). Charter schools in Minnesota, for example, operate as nonprofit corporations or cooperatives. In five states, charter schools must be part of a school district that is legally responsible for the school's operations (see table 3). In one state, California, a charter school's legal status is determined through negotiation with the local school board that approves its charter. Some charter schools in California have organized as legally independent nonprofit corporations; others are legally part of a district; and some schools' legal status remains to be determined. In one state, Hawaii, the legal status of charter schools remains uncertain and awaits a decision by the State Attorney General. The legal status of a charter school may influence its authority over budgeting and personnel decisions. Legally independent charter schools generally control their own budgets and make their own hiring and firing decisions. Charter schools that remain legally part of a school district may have little control over budgeting or personnel, although this varies. All charter schools must be approved by some public institution. Most have been approved by a school district or state board of education, although some states involve neither. State laws vary considerably in the options they give to charter schools seeking approval. State laws also vary in allowing applicants to appeal a decision to reject a charter school application. (See table 4.) Required school district approval could result in less autonomous charter schools if districts use their leverage with the schools to maintain more traditional relationships with them. The availability of multiple approval options could result in more autonomous charter schools because applicants could seek the least restrictive situation. As a condition for approving a charter, for example, one district required charter schools' terms of employment--for teacher tenure, salary, and schedule for advancement--to be the same as those for other schools in the district. Evidence from California indicates that districts were least supportive of charter schools seeking the most independence. Charter schools' funding arrangements vary in (1) the extent to which the funding amounts are negotiable and (2) how funds flow to the schools. Charter schools' autonomy could be limited when funding amounts are subject to negotiation with the school district that approves the charter. Districts may seek to retain control over some funds as a condition for approval. In six states, the amount of state or local funding for charter schools is subject to negotiation with the school districts that approve the charters. 
In four states, funding for charter schools is set by the state, and the amount is not subject to negotiation with school districts. In one state, Arizona, funding is subject to negotiation when charter schools are approved by school districts, but not when they are approved by the state. In states in which funding is not subject to negotiation, funds flow from the state directly to the charter school, with the exception of Massachusetts and Michigan. In states in which funding is subject to negotiation, funds flow from the state to the district to the charter school. (See table 5.) Charter schools' autonomy from state and district rules varies considerably across states. Some state laws exempt charter schools from most state education rules; that is, charter schools receive a blanket exemption. Other states require charter schools to request exemption from specific rules (rule-by-rule exemption), requests that are subject to district or state approval or both. (See table 6.) Legally independent charter schools are not subject to district rules unless agreed to as part of negotiations leading to charter approval. In contrast, charter schools that are legally part of a district are subject to district rules unless waivers are negotiated. Some districts have denied waivers from local rules requested by charter schools. The extent to which charter schools can be held accountable depends on how the schools assess student performance and report results to the public institutions responsible for their oversight and contract renewal. The schools' charters indicate plans to use a wide variety of assessment methods to measure a wide variety of student outcomes. Some of these assessments and outcomes were subject to negotiation with the charter-granting institution; others are mandated under law in some states (see table 7). Some charter schools state their plans for assessment in great detail, have their assessment systems in place, and have begun collecting data. Others--including some schools already open--state their plans in more general terms and are still developing their assessment systems. Student assessments used by charter schools include portfolios, exhibitions, demonstrations of students' work, and often standardized achievement tests. Student outcomes include objective outcomes--such as specific achievement levels or gains on standardized tests, attendance and graduation rates--and subjective outcomes, such as becoming an independent learner, understanding how science is applied to the real world, participating in community service, and understanding the responsibilities of citizenship. Because charter schools' efforts to assess and report student performance are fairly recent, several important questions about accountability are unanswered. First, are charter schools collecting adequate baseline data to judge changes in student performance? Accurate judgments may be difficult in schools that opened before their assessment methods were developed. Second, will charter schools report data by race, sex, or socioeconomic status so that the performance of specific student groups can be assessed? No state laws require charter schools to do so; some include no reporting requirements; and most leave the type of reporting to local discretion (see table 7). Third, what are the implications of requiring charter schools to meet state performance standards and to use standardized, norm-referenced tests? 
Will it discourage charter schools with specialized purposes or that target low-achieving student populations? Will it encourage charter schools to have more traditional instructional programs? Charter schools pose new challenges for federal programs in allocating funds, providing services, and assigning legal responsibility. These challenges stem from the lack of connection of some charter schools to school districts--the usual local point of federal program administration. School districts are considered LEAs for the purposes of federal program administration; they receive allocations of federal funds from their states and are held legally responsible for meeting program requirements. However, an important issue is whether some charter schools--those with legal independence--can be considered LEAs. While legally independent charter schools appear to meet the definition of an LEA, states are uncertain about this and have approached the issue differently. Title I and special education programs illustrate challenges posed by charter schools to federal education program administration. As an LEA, a charter school would be eligible to receive Title I funds directly from its state education agency (SEA) and held legally responsible for its Title I program. As a school considered part of a traditional school district, a charter school would be eligible for Title I funds just as any other school in a district and would not be eligible to receive funds directly from its SEA. Current law provides SEAs flexibility in allocating grants to LEAs that could apply to charter schools considered LEAs. However, SEAs using census data to calculate LEA allocations face a complication because census data do not exist for charter schools, and SEAs must use the same measure of low income throughout the state. It is uncertain, for example, whether an SEA could use other data adjusted to be equivalent to census data for this purpose. An SEA might be able to apply for a waiver under the new charter schools grant program to permit use of such adjusted data; however, language in different waiver provisions makes this unclear. In commenting on a draft of this report, the Department of Education stated that it intends to use the broader authority to grant waivers under the charter schools provision to promote flexibility in charter schools (see app. III). Of those states that authorized legally independent charter schools, Arizona and Massachusetts have not yet decided on how to treat them concerning Title I. California, Minnesota, and Michigan have decided on contrasting approaches. The California Department of Education has not decided whether its legally independent charter schools are LEAs for Title I purposes. To avoid creating a new funding structure, it treats all charter schools as regular schools within a district for Title I funding. If a charter school is eligible for Title I funding, then the district must determine the charter school's share the same way it does for other eligible schools. While state officials in Minnesota consider charter schools LEAs, the state Title I office has delegated responsibility for Title I to districts and given them two options for serving charter schools. Under the first option, the district employs the Title I staff and provides services at the charter school. Under the second option, the district allocates part of its Title I funds to the charter school, and the charter school employs the Title I staff. 
Under either option, the state considers the district legally responsible for the charter school's Title I program. The state adopted this arrangement because it lacked census data on charter schools but was required to use census data as part of its statewide distribution approach to allocating Title I funds to LEAs. The state Title I office in Michigan considers charter schools LEAs and plans to allocate Title I funds directly to them; it considers the schools legally responsible for administering their own Title I programs. To ensure that charter schools get a fair share of Title I funding, the state Title I office devised a way to divide a traditional LEA's Title I allocation with a charter school within its boundaries. As of September 1994, Michigan had used this method in one charter school, the charter school at Wayne State University in Detroit. The state Title I office, with the consent of the district and the charter school, allocated part of Detroit's Title I allocation to the charter school on the basis of the number of students eligible for free or reduced-price lunch at the school. The state expects to use the same method for other charter schools, although this may be more difficult when students from more than one district attend a charter school, the state coordinator said.

Whether charter schools are LEAs or part of a traditional school district has implications for (1) which institution--the school or the district--is legally responsible for meeting federal special education requirements and (2) how states and districts fund special education services. Under the IDEA, LEAs must provide a "free appropriate public education" to disabled children. Regulations implementing the act specify requirements that LEAs must follow in identifying children with disabilities and selecting their special education services. While the IDEA provides some federal funding for special education, most funding comes from state and local sources. Charter schools pose a particular challenge to funding special education when local revenues are used for this purpose. Since charter schools do not levy taxes, another institution must provide the revenue. Minnesota, which treats its charter schools as individual LEAs, resolved issues of legal responsibility and funding after some uncertainty and may serve as a useful example for other states. The SEA in Minnesota decided that legal responsibility for meeting federal special education requirements for children in charter schools depends on whether the district or the parent places the child in the charter school. If the district where the student lives places the child in a charter school, then the district remains legally responsible. If the parent places the student in a charter school, then this is "akin to the child moving to another district," and the charter school becomes legally responsible. These decisions were established in rulings on complaint investigations. In one case, the complainant alleged that the district where the student lived failed to implement the student's individualized education plan (IEP) at the Metro School for the Deaf. The Minnesota Department of Education ruled that the district was in violation and was responsible for ensuring service provision because it had placed the student in the charter school. In another case, the complainant also alleged that the district had failed to implement the student's IEP at a charter school, specifically, that the student had received no speech services during the school year.
The Minnesota Department of Education ruled that, because the student was placed at the Cedar Riverside Charter School by parental choice, the district of residence was not responsible for providing the student a free appropriate public education and that the charter school was now responsible for doing so. In Minnesota, the SEA allocates state funds directly to charter schools as a partial reimbursement for special education costs. Charter schools, in turn, bill unreimbursed costs to the districts where the students live. The districts are expected to use revenues from property taxes or federal special education funds to fund the unreimbursed amount. In the future, the SEA may allocate federal special education funds directly to charter schools. Officials in several districts said they were unhappy with the state's expectation that they use local property taxes for unreimbursed costs for charter schools' special education programs because the charter schools are legally independent. Charter schools offer a new model for autonomous public schools that provides opportunities for diverse and innovative approaches to education. A great deal, however, remains to be learned about these schools, for example, whether limits on their autonomy will stifle innovation. Furthermore, this autonomy poses challenges for holding charter schools accountable for student performance and administering federal programs. Accountability for student performance is a critical aspect of the charter schools model, given the schools' autonomy from external controls that govern traditional public schools. Whether charter schools can be held accountable for student performance depends in part on how well student performance is assessed and reported. Important issues for future evaluations of these schools include whether charter schools (1) collect adequate baseline data to judge changes in student performance and (2) report data by race, sex, or socioeconomic status to assess the performance of specific student groups. The challenges charter schools pose for federal program administration concern their status as single schools operating as LEAs. Current law and regulations did not anticipate such an arrangement. Unless the Department of Education clarifies (1) whether charter schools may be considered LEAs and (2) how these schools can be treated for purposes of administering Title I and special education programs, uncertainty will persist that could impede charter schools' implementation. We recommend that the Secretary of Education determine whether states may consider charter schools LEAs for federal program administration. In addition, if charter schools may be LEAs, the Secretary should provide guidance that specifies how states may allocate Title I funds to charter schools, particularly in states that use census data to count low-income children, and how states may determine charter schools' legal responsibility for providing special education services. The Department of Education provided written comments on a draft of this report (see app. III). The Department said our report raised thoughtful issues about the challenges facing charter schools and presented an informative survey of their development nationally. The Department also commented on our recommendations to the Secretary and questions we raised about the applicability of different waiver provisions. 
In its comments on our recommendations, the Department stated that it (1) encourages states to develop legal arrangements that best support state and local strategies and (2) intends to work with states on a case-by-case basis to address issues raised in our report concerning federal program administration in charter schools. We support the Department's intention to work with states to resolve these issues. However, the Department's response does not fully clarify whether, and under what conditions, charter schools can be considered LEAs and we believe the Department should do so. In the draft reviewed by the Department, we also noted that the applicability of different waiver provisions in the Improving America's Schools Act was uncertain in regard to charter schools. In its comments, the Department stated that it intends to use the broader authority to grant waivers under the charter schools provision of the act to promote flexibility in charter schools. We revised the report to incorporate the Department's comments on this matter. We are sending copies of this report to congressional committees, the Secretary of Education, and other interested parties. Please call Richard Wenning, Evaluator-in-Charge, at (202) 512-7048, or Beatrice Birman, Assistant Director, at (202) 512-7008 if you or your staff have any questions about this report. Other staff who contributed to this report are named in appendix V.

Vistas-Bear Valley Charter School, P.O. Box 6057, Big Bear Lake, CA 92315
El Dorado Charter Community, 6767 Green Valley Road, Placerville, CA 95667
Early Intervention-Healthy Start Charter School, Folsom Middle School, 500 Blue Ravine Road, Folsom, CA 95630
Grass Valley Alternative, 10840 Gilmore Way, Grass Valley, CA 95945
Accelerated School, P.O. Box 341105, Los Angeles, CA 90034
Canyon School, 421 Entrada Drive, Santa Monica, CA 90402
Edutrain, 1100 S. Grand Avenue, Los Angeles, CA 90015
Fenton Avenue School, 11828 Gain Street, Lake View Terrace, CA 91342
Marquez School, 16821 Marquez Avenue, Pacific Palisades, CA 90272
The Open School, 1034 Steams Drive, Los Angeles, CA 90035
Palisades Elementary Charter School, 800 Via De La Paz, Pacific Palisades, CA 90272
Palisades High School, 15777 Bowdoin Street, Pacific Palisades, CA 90272
210 in charter school component; 9-10 in charter school component
Vaughn Next Century Learning Center, 13330 Vaughn Street, San Fernando, CA 91340
Westwood School, Los Angeles Unified School District, CA, 2050 Selby Avenue, Los Angeles, CA 90025
Natomas Charter School, 3700 Del Paso Road, Sacramento, CA 95834
Jingletown Middle School, 2506 Truman Avenue, Oakland, CA 94605
Linscott Charter School, 220 Elm Street, Watsonville, CA 95076
Sonoma County Charter, 1825 Willowside Road, Santa Rosa, CA 95401
Pioneer Primary/Pioneer Middle, 8810 14th Avenue, Stanford, CA 93230
Schnell, 2871 Schnell School Road, Placerville, CA 95667
Ready Springs Home Study, Ready Springs Union School District, CA, 10862 Spenceville Road, Penn Valley, CA 95946
The Eel River School, P.O. Box 218, Covelo, CA 95428
July 1993 (two charter schools housed together but working independently); 150 (90 in Homeschool and 60 in White Oak)
Peabody Charter School, 3018 Calle Noguera, Santa Barbara, CA 93105
Santa Barbara Charter School, 6100 Stow Canyon Road, Goleta, CA 93117
Altimira, P.O. Box 1546, Sonoma, CA 95476
Twin Ridges Alternative Charter School, P.O. Box 529, North San Juan, CA 95960
Options for Youth, 29 Foothill, La Placenta, CA 91214
176 students in two centers (Victor Valley - 103 and Hesperia Unified District - 73); new school serving K-12 and adults, no adults presently enrolled
Lincoln High, 1081 7th Street, Lincoln, CA 95648
Sheridan Elementary, 4730 H Street, Sheridan, CA 95681 (mailing address: P.O. Box 268, Sheridan, CA 95681)
Yucca Mesa, P.O. Box 910, Yucca Valley, CA 92286
GAO was unable to get this information before publication. Planning to open in fall 1995; 120 (expected)
GAO was unable to get this information before publication.
Benjamin Franklin Classical, 390 Oakland Parkway, Franklin, MA 02038; 270 (expected)
Boston Renaissance, 529 5th Avenue, New York, NY 10017; 700 (expected)
Boston University, 775 Commonwealth Avenue, Boston, MA 02115; 150 (expected)
Cape Cod Lighthouse, P.O. Box 968, South Orleans, MA 02662; 100 (expected)
City on a Hill Charter School, 39 Jordan Road, Brookline, MA 02146; 60 (expected)
Community Day, 190 Hampshire Street, Lawrence, MA 01840; 140 (expected)
Fenway II, 250 Rutherford Avenue, Charlestown, MA 02129
Francis W. Parker, 234 Massachusetts Avenue, Harvard, MA 01451
Lowell Charter School, 529 5th Avenue, New York, NY 10017; 400 (expected)
Lowell Middlesex Academy, 33 Kearney Square, Lowell, MA 01852; 100 (expected)
Neighborhood House, 232 Centre Street, Dorchester, MA 02124; 45 (expected)
South Shore, 936 Nantasket Avenue, Hull, MA 02045; 60 (expected)
Western Massachusetts Hilltown, 3 Edward Street, Haydenville, MA 01039; 35 (expected)
Worcester, 529 5th Avenue, New York, NY 10017; 500 (expected)
YouthBuild, 173A Norfolk Avenue, Roxbury, MA 02119; 50 (expected)
GAO was unable to get this information before publication.
Toivola-Meadowlands, 7705 Western Avenue, P.O. Box 215, Meadowlands, MN 55765
City Academy, St. Paul, MN School District, 1109 Margaret Street, St. Paul, MN 55106
New Heights Schools, Inc., 614 W. Mulberry, Stillwater, MN 55082
Minnesota New Country School, P.O. Box 423, Henderson, MN 56044
Parents Allied With Children and Teachers (PACT), 600 East Main Street, Anoka, MN 55303 (school site: 440 Pierce Street, Anoka, MN)
GAO was unable to get this information before publication.

The Improving America's Schools Act, which reauthorized the Elementary and Secondary Education Act of 1965, includes a provision establishing a new federal grant program to support the design and implementation of charter schools. The text of this provision appears here. SEC. 10301. FINDINGS AND PURPOSE.
(a) FINDINGS.--The Congress finds that (1) enhancement of parent and student choices among public schools can assist in promoting comprehensive educational reform and give more students the opportunity to learn to challenging State content standards and challenging State student performance standards, if sufficiently diverse and high-quality choices, and genuine opportunities to take advantage of such choices, are available to all students; (2) useful examples of such choices can come from States and communities that experiment with methods of offering teachers and other educators, parents, and other members of the public the opportunity to design and implement new public schools and to transform existing public schools; (3) charter schools are a mechanism for testing a variety of educational approaches and should, therefore, be exempted from restrictive rules and regulations if the leadership of such schools commits to attaining specific and ambitious educational results for educationally disadvantaged students consistent with challenging State content standards and challenging State student performance standards for all students; (4) charter schools, as such schools have been implemented in a few States, can embody the necessary mixture of enhanced choice, exemption from restrictive regulations, and a focus on learning gains; (5) charter schools, including charter schools that are schools-within-schools, can help reduce school size, which reduction can have a significant effect on student achievement; (6) the Federal Government should test, evaluate, and disseminate information on a variety of charter schools models in order to help demonstrate the benefits of this promising education reform; and (7) there is a strong documented need for cash flow assistance to charter schools that are starting up, because State and local operating revenue streams are not immediately available. (b) PURPOSE.--It is the purpose of this part to increase national understanding of the charter schools model by-- (1) providing financial assistance for the design and initial implementation of charter schools; and (2) evaluating the effects of such schools, including the effects on students, student achievement, staff, and parents. SEC. 10302. PROGRAM AUTHORIZED. (a) IN GENERAL.--The Secretary may award grants to State educational agencies having applications approved pursuant to section 10303 to enable such agencies to conduct a charter school grant program in accordance with this part. (b) SPECIAL RULE.--If a State educational agency elects not to participate in the program authorized by this part or does not have an application approved under section 10303, the Secretary may award a grant to an eligible applicant that serve such State and has an application approved pursuant to section 10303(c). (c) PROGRAM PERIODS.-- (1) GRANTS TO STATES.--Grants awarded to State educational agencies under this part shall be awarded for a period of not more than 3 years. (2) GRANTS TO ELIGIBLE APPLICANTS.--Grants awarded by the Secretary to eligible applicants or subgrants awarded by State educational agencies to eligible applicants under this part shall be awarded for a period of not more than 3 years, of which the eligible applicant may use-- (A) not more than 18 months for planning and (B) not more than 2 years for the initial implementation of a charter school. 
(d) LIMITATION.--The Secretary shall not award more than one grant and State educational agencies shall not award more than one subgrant under this part to support a particular charter school. SEC. 10304. ADMINISTRATION. (a) SELECTION CRITERIA FOR STATE EDUCATIONAL AGENCIES.--The Secretary shall award grants to State educational agencies under this part on the basis of the quality of the applications submitted under section 10303(b), after taking into consideration such factors as (1) the contribution that the charter schools grant program will make to assisting educationally disadvantaged and other students to achieving State content standards and State student performance standards and, in general, a State's education improvement plan; (2) the degree of flexibility afforded by the State educational agency to charter schools under the State's charter schools law; (3) the ambitiousness of the objectives for the State charter school grant program; (4) the quality of the strategy for assessing achievement of those objectives; and (5) the likelihood that the charter school grant program will meet those objectives and improve educational results for students. (b) SELECTION CRITERIA FOR ELIGIBLE APPLICANTS.--The Secretary shall award grants to eligible applicants under this part on the basis of the quality of the applications submitted under section 10303(c), after taking into consideration such factors as-- (1) the quality of the proposed curriculum and (2) the degree of flexibility afforded by the State educational agency and, if applicable, the local educational agency to the charter school; (3) the extent of community support for the application; (4) the ambitiousness of the objectives for the charter school; (5) the quality of the strategy for assessing achievement of those objectives; and (6) the likelihood that the charter school will meet those objectives and improve educational results for students. (c) PEER REVIEW.--The Secretary, and each State educational agency receiving a grant under this part, shall use a peer review process to review applications for assistance under this part. (d) DIVERSITY OF PROJECTS.--The Secretary and each State educational agency receiving a grant under this part, shall award subgrants under this part in a manner that, to the extent possible, ensures that such grants and subgrants-- (1) are distributed throughout different areas of the Nation and each State, including urban and rural areas; and (2) will assist charter schools representing a variety of educational approaches, such as approaches designed to reduce school size. (e) WAIVERS.--The Secretary may waive any statutory or regulatory requirement over which the Secretary exercises administrative authority except any such requirement relating to the elements of a charter school described in section 10306(1), if-- (1) the waiver is requested in an approved application under this part; and (2) the Secretary determines that granting such a waiver will promote the purpose of this part. (f) USE OF FUNDS.-- (1) STATE EDUCATIONAL AGENCIES.--Each State educational agency receiving a grant under this part shall use such grant funds to award subgrants to one or more eligible applicants in the State to enable such applicant to plan and implement a charter school in accordance with this part. (2) ELIGIBLE APPLICANTS.--Each eligible applicant receiving funds from the Secretary or a State educational agency shall use such funds to plan and implement a charter school in accordance with this part. 
(3) ALLOWABLE ACTIVITIES.--An eligible applicant receiving a grant or subgrant under this part may use the grant or subgrant funds only for-- (A) post-award planning and design of the educational program, which may include-- (i) refinement of the desired educational results and of the methods for measuring progress toward achieving those results; and (ii) professional development of teachers and other staff who will work in the charter school; and (B) initial implementation of the charter school, (i) informing the community about the school; (ii) acquiring necessary equipment and educational materials and supplies; (iii) acquiring or developing curriculum (iv) other initial operational costs that cannot be met from State or local sources. (4) ADMINISTRATIVE EXPENSES.--Each State educational agency receiving a grant pursuant to this part may reserve not more than 5 percent of such grant funds for administrative expenses associated with the charter school grant program assisted under this part. (5) REVOLVING LOAN FUNDS.--Each State educational agency receiving a grant pursuant to this part may reserve not more than 20 percent of the grant amount for the establishment of a revolving loan fund. Such fund may be used to make loans to eligible applicants that have received a subgrant under this part, under such terms as may be determined by the State educational agency, for the initial operation of the charter school grant program of such recipient until such time as the recipient begins receiving ongoing operational support from State or local financing sources. SEC. 10305. NATIONAL ACTIVITIES. The Secretary may reserve not more than ten percent of the funds available to carry out this part for any fiscal year for-- (1) peer review of applications under section 10304(c); (2) an evaluation of the impact of charter schools on student achievement, including those assisted under this part; and (3) other activities designed to enhance the success of the activities assisted under this part, such as-- (A) development and dissemination of model State charter school laws and model contracts or other means of authorizing and monitoring the performance of charter schools; and (B) collection and dissemination of information on successful charter schools. SEC. 10306. 
DEFINITIONS. As used in this part: (1) The term 'charter school' means a public school that-- (A) in accordance with an enabling State statute, is exempted from significant State or local rules that inhibit the flexible operation and management of public schools, but not from any rules relating to the other requirements of this paragraph; (B) is created by a developer as a public school, or is adapted by a developer from an existing public school, and is operated under public supervision and direction; (C) operates in pursuit of a specific set of educational objectives determined by the school's developer and agreed to by the authorized public chartering agency; (D) provides a program of elementary or secondary education, or both; (E) is nonsectarian in its programs, admissions policies, employment practices, and all other operations, and is not affiliated with a sectarian school or religious institution; (F) does not charge tuition; (G) complies with the Age Discrimination Act of 1975, title VI of the Civil Rights Act of 1964, title IX of the Education Amendments of 1972, section 504 of the Rehabilitation Act of 1973, and part B of the Individuals with Disabilities Education Act; (H) admits students on the basis of a lottery, if more students apply for admission than can be accommodated; (I) agrees to comply with the same Federal and State audit requirements as do other elementary and secondary schools in the State, unless such requirements are specifically waived for the purpose of this program; (J) meets all applicable Federal, State, and local health and safety requirements; and (K) operates in accordance with State law. (2) The term 'developer' means an individual or group of individuals (including a public or private nonprofit organization), which may include teachers, administrators and other school staff, parents, or other members of the local community in which a charter school project will be carried out. (3) The term 'eligible applicant' means an authorized public chartering agency participating in a partnership with a developer to establish a charter school in accordance with this part. (4) The term 'authorized public chartering agency' means a State educational agency, local educational agency, or other public entity that has the authority pursuant to State law and approved by the Secretary to authorize or approve a charter school. SEC. 10307. AUTHORIZATION OF APPROPRIATIONS. For the purpose of carrying out this part, there are authorized to be appropriated $15,000,000 for fiscal year 1995 and such sums as may be necessary for each of the four succeeding fiscal years. GAO would like to acknowledge the assistance of the following experts. These individuals provided valuable insights on the issues discussed in this report; however, they do not necessarily endorse the positions taken in the report. In addition to those named above, the following individuals made important contributions to this report: Patricia M. Bundy, Evaluator; Sarah Keith, Intern; Julian P. Klazkin, Senior Attorney; Sheila Nicholson, Evaluator; Diane E. Schilder, Senior Social Science Analyst.
Pursuant to a congressional request, GAO provided information on the growth of charter schools, focusing on: (1) the number of charter schools that have been approved under state laws; (2) the characteristics of charter schools' instructional programs; (3) whether charter schools operate autonomously and are held accountable for student performance; and (4) the challenges charter schools pose for federal education programs. GAO found that: (1) 9 states have approved 134 charter schools developed by teachers, school administrators, parents, and private corporations; (2) as charter schools increase in number, so do their diversity and innovation; (3) charter school instructional programs focus on multiage classes and often teach subjects within a common theme; (4) some charter schools specialize in certain subjects, while other charter schools target specific student populations; (5) charter schools' autonomy varies among the states based on their legal status, approval, funding, and exemption from rules; (6) charter schools vary in how they measure student performance, and it is too soon to determine whether these schools will meet their student performance objectives; (7) the major challenge for federal program administration is determining whether those charter schools that are legally independent of their school districts can be considered local education agencies (LEAs) for program administration purposes; and (8) although states have taken different approaches to address charter schools' status as LEAs, further clarification is needed on how charter schools can be treated for federal program administration and whether these schools are eligible for educational funds.
DOT's proposal to reauthorize surface transportation included a 6-year, $600 million Access to Jobs program to support new transportation services for low-income people seeking jobs. The funding levels and other program details of such an initiative may change as the Congress completes final action in 1998 to reauthorize surface transportation programs. The House and Senate reauthorization proposals would authorize appropriations of $900 million over 6 years for similar programs to be administered by DOT. The Senate proposal would also authorize appropriations of an additional $600 million (bringing the total to $1.5 billion) over the same period for a reverse commute program that the Department could use to support its welfare-to-work initiatives. While these programs have not been established, several federal departments currently provide states and localities with federal funds to support transportation welfare reform initiatives. The Department of Health and Human Services (HHS) administers the Temporary Assistance for Needy Families (TANF) program--a $16.5 billion program of annual block grants to the states that replaced Aid to Families With Dependent Children (AFDC). The states may use TANF funds to provide transportation assistance to people on or moving off of public assistance. However, the states generally may not use TANF funds to provide assistance to a family for more than 60 months and must require parents to work within 24 months of receiving assistance. The Balanced Budget Act of 1997 established a 2-year, $3 billion Welfare-to-Work program administered by the Department of Labor (DOL). Among other things, this grant program provides funding for job placement, on-the-job training, and support services (including transportation) for those who are the most difficult to move from welfare to work. The states receive about 75 percent of the funds on the basis of a formula, while local governments, private industry councils, and private, community-based organizations receive most of the remaining 25 percent on a competitive basis. Although not specifically designed to address welfare-to-work issues, HUD's $17 million Bridges to Work program provides funds to support transportation, job placement, and counseling services for a small number of low-income people living in the central cities of Baltimore, Chicago, Denver, Milwaukee, and St. Louis. HUD provided an $8 million grant for the program in fiscal year 1996, while the Ford, Rockefeller, and MacArthur Foundations provided $6 million and local public and private organizations contributed the remaining $3 million. The demonstration program began in late 1996 and will be completed in 2000. Access to transportation is generally recognized by social service and transportation professionals as a prerequisite for work and for welfare reform. According to the Census Bureau, in 1992, welfare recipients were disproportionately concentrated in inner cities--almost half of all people who received AFDC or state assistance lived in central cities, compared with 30 percent of the U.S. population. However, as cited in the 1998 report entitled Welfare Reform and Access to Jobs in Boston (the 1998 Boston study), national trends since 1970 show that most new jobs have been created in the suburbs rather than in the inner cities. In addition, this study indicated that about 70 percent of the jobs in manufacturing, retailing, and wholesaling--sectors employing large numbers of entry-level workers--were located in the suburbs. 
Many of these newly created entry-level suburban jobs should attract people moving from welfare to work since many welfare recipients lack both higher education and training. However, most welfare recipients seeking employment live in central cities that are located away from these suburban jobs. Thus, the less-educated, urban poor need either a car or public transportation to reach new suburban employment centers. However, both modes of transportation have posed challenges to welfare recipients. The 1998 Boston study and a 1995 GAO study found that the lack of transportation is one of the major barriers that prevent welfare recipients from obtaining employment. A significant factor limiting welfare recipients' job prospects has been their lack of an automobile. According to a 1997 HHS study, less than 6 percent of welfare families reported having a car in 1995 and the average reported value of the car was $620. According to DOT's Bureau of Transportation Statistics (BTS), these figures are probably low because previous welfare eligibility rules limiting the value of assets may have led some recipients to conceal car ownership. Under AFDC, families that received assistance were not allowed to accumulate more than $1,000 in resources such as bank accounts and real estate. This limit excluded the value of certain assets, including vehicles up to $1,500 in value. However, a 1997 study of welfare mothers found that car ownership ranged from 20 to 40 percent. Without a car, welfare recipients must rely on existing public transportation systems to move them from their inner-city homes to suburban jobs. However, recent studies show important gaps between existing transit system routes and the location of entry-level jobs. For example, the 1998 study of Boston's welfare recipients found that while 98 percent of them lived within one-quarter mile of a bus route or transit station, just 32 percent of potential employers (those companies located in high-growth areas for entry-level employment) were within one-quarter mile of public transit. The study noted that it was presumed that welfare recipients living in or near a central city with a well-developed transit system could rely on public transit to get to jobs. However, the study found that Boston's transit system was inadequate because (1) many high-growth areas for entry-level employment were in the outer suburbs, beyond existing transit service; (2) some areas were served by commuter rail, which was expensive and in most cases did not provide direct access to employment sites; and (3) when transit was available, the trips took too long or required several transfers, or transit schedules and hours did not match work schedules, such as those for weekend or evening work. Similar findings were reported in a July 1997 study of the Cleveland-Akron metropolitan area. The study found that since inner-city welfare recipients did not own cars, they had to rely on public transit systems to get to suburban jobs. The study found that welfare recipients traveled by bus at times outside the normal rush-hour schedule and often had significant walks from bus stops to their final employment destinations. The study concluded that these transportation barriers would be difficult to overcome using traditional mass transit since the locations of over one-half of the job openings were served by transit authorities other than the one serving inner-city Cleveland residents. 
The study further indicated that even within areas where employers were concentrated, such as in industrial parks, employers' locations were still too dispersed to be well served by mass transit systems. According to BTS, transportation for welfare mothers is particularly challenging because they do not own cars and must make more trips each day to accommodate their child care and domestic responsibilities. According to 1997 Census and Urban Institute information, most adult welfare recipients were single mothers, about half of these mothers had children under school age, and more than three-fourths had a high school diploma or less education. To reach the entry-level jobs located in the suburbs without access to a car, they would have to make a series of public transit trips to drop children off at child care or schools, go to work, pick their children up, and shop for groceries. According to BTS, traditional transit service is unlikely to meet the needs of many welfare mothers, given their need to take complex trips. For those who do not live in a city, transportation to jobs is also important. In 1995, the National Transit Resource Center, a federally funded technical assistance resource, found that about 60 million rural Americans were underserved or unserved by public transportation. Forty-one percent of rural Americans lived in counties that lacked any public transportation services, and an additional 25 percent of rural residents lived in areas with below-average public transit service. According to the Community Transportation Association of America--a nationwide network of public and private transportation providers, local human services agencies, state and federal officials, transit associations, and individuals--the rural poor have less access to public transportation than their urban counterparts and must travel greater distances to commute to work, obtain essential services, and make needed purchases. In addition, members of low-income rural groups generally own cars that are not maintained well enough for long-distance commutes.

Both DOT and HUD have implemented initiatives to support transportation strategies for moving welfare recipients off federal assistance and into full-time employment. Primarily through FTA's demonstration programs and seminars and HUD's Bridges to Work program, these agencies have provided limited funding for transportation research and demonstration programs aimed at helping the poor move from welfare to work. While the number of welfare recipients moved into jobs has been low, the programs have identified programmatic and demographic factors that local transportation and welfare officials should consider to ensure that the most effective transportation strategies are employed to support welfare reform. According to an FTA official, the agency is supporting welfare-to-work initiatives by funding demonstration projects, working with state and local partners to encourage the development of collaborative transportation plans, providing states and localities with technical assistance, and developing a program that would increase the financial resources available for welfare initiatives. Of the estimated $5 million that FTA has provided for welfare initiatives in 1993 through 1998, the agency's largest effort has been its JOBLINKS demonstration program.
JOBLINKS, a $3.5 million demonstration program administered by the Community Transportation Association of America, began in 1995 to fund projects designed to help people obtain jobs or attend employment training and to evaluate which types of transportation services are the most effective in helping welfare recipients get to jobs. As of March 1998, JOBLINKS had funded 16 projects located in urban and rural areas of 12 states. Ten projects are completed and six are ongoing. While the projects' objectives are to help people obtain jobs or attend employment training, the projects' results have differed. For example, a JOBLINKS project in Louisville, Kentucky, was designed to increase by 25 percent the number of inner-city residents hired at an industrial park. The JOBLINKS project established an express bus from the inner city to the industrial park, thereby reducing a 2-hour commute for inner-city residents to 45 minutes. Although an April 1997 evaluation of the project did not indicate if the project had met the 25-percent new-hire goal, it stated that 10 percent of the businesses in the industrial park were able to hire inner-city employees as a result of the express service. Another JOBLINKS project--in Fresno, California--was established to provide transportation services to employment training centers and thereby reduce dropout rates and increase the number of individuals who found jobs. The April 1997 evaluation of the project found that of the 269 participants in a job training program, 20 had completed the program and 3 had found jobs. FTA has also helped state and local transportation agencies develop plans for addressing the transportation needs of their welfare recipients. In 1997, FTA and the Federal Highway Administration provided the National Governors' Association (NGA) with $330,000 to develop plans that identify the issues, costs, and benefits associated with bringing together the transportation components of various social service programs. In January 1997, NGA solicited grant applications and 24 states and one territory applied for grants. All 25 applicants received grants and are participating in the demonstration project; final plans are expected by September 1998. FTA has also sponsored regional seminars that focus on the transportation issues involved in welfare reform and the actions that states and local agencies need to take to address these issues. The seminars are intended to encourage the states to develop transportation strategies to support their welfare reform programs and to facilitate transportation and human services agencies working together to develop plans that link transportation, jobs, and support services. In addition, FTA helps fund the National Transit Resource Center, which provides technical assistance to communities. For example, the Resource Center developed an Internet site that provides up-to-date information on federal programs, transportation projects, and best practices. HUD's Bridges to Work program is a 4-year research demonstration program that began in late 1996 with $17 million in public and private funding. This program is intended to link low-income, job-ready, inner-city residents with suburban jobs by providing them with job placement, transportation, and support services (such as counseling). The program was conceived by Public/Private Ventures, a nonprofit research and program development organization located in Philadelphia. 
Under the program, a total of about 3,000 participants in five cities--Baltimore, Chicago, Denver, Milwaukee, and St. Louis--will receive employment, transportation, and support services. According to HUD, it became involved in welfare reform because a large portion of its clients are low-income or disadvantaged persons who rely upon welfare benefits. Several HUD programs, according to Bridges to Work program documents, are intended to address the geographic mismatch between where the jobless live and where employment centers operate. Bridges to Work researchers identified three solutions to this mismatch: (1) disperse urban residents by moving them closer to suburban jobs, (2) develop more jobs in the urban community, or (3) bridge the geographic gap by providing urban residents with the mobility to reach suburban jobs. HUD's Bridges to Work program is intended to address the third solution. It was designed to determine whether the geographic separation of jobs and low-income persons could be overcome by the coordinated provision of job, transportation, and support services. The program's goal is to place 3,000 low-income people in jobs during the 4 years of the program. Through March 1998, the Bridges to Work program had placed 429 low-income, urban residents in suburban jobs. According to the project's sponsors, the number of placements has been low in part because the program accepts only job-ready applicants--a criterion that limits the number of eligible participants when unemployment rates are low and job-ready people are already employed. A Bridges to Work participant must meet the following criteria: He/she must be at least 18, have a family income of 80 percent or less of the median family income for the metropolitan area (e.g., $29,350 for a family of one in Milwaukee), live in the designated urban area, and be able to work in the designated suburban area. In addition, no more than one-third of the participants can be former AFDC recipients. The pilot phase of the program found jobs paying between $6.00 and $7.99 per hour for over 70 percent of the first 239 placements and one-way commutes of between 31 and 60 minutes each day for over 76 percent of these placements. Bridges to Work officials have found that the five demonstration sites have encountered two key challenges. First, each site needed to establish a collaborative network consisting of transportation, employment, and social services agencies working together with employers to ensure the successful placement of applicants. Baltimore's network, for example, includes the state transportation agency, the area's Metropolitan Planning Organization, employment service providers, the city's employment office, a community-based organization, the Private Industry Council, and the Baltimore-Washington International Business Partnership. Second, recruiting job-ready participants has been difficult. During the current healthy economy, many potential job-ready individuals can find their own jobs closer to home because jobs are plentiful and unemployment is low. The Bridges to Work project's co-director noted that, in some instances, the sites did not identify an adequate pool of job-ready individuals and therefore needed to change their recruiting and marketing strategies to better locate potential participants for the program. 
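The participant criteria just described amount to a simple screening computation. The sketch below, written in Python, is purely illustrative: the function names, the input fields, and the Milwaukee area median income of roughly $36,688 (back-calculated from the $29,350 threshold cited above) are assumptions for illustration, not the Bridges to Work program's actual intake procedure.

# Illustrative sketch only; not the program's actual intake procedure.
# Field names and the derived Milwaukee median-income figure are assumptions.

def is_eligible(age, family_income, area_median_income,
                lives_in_designated_urban_area, can_work_in_designated_suburb):
    """Check the four individual-level criteria cited in the report."""
    return (age >= 18
            and family_income <= 0.80 * area_median_income
            and lives_in_designated_urban_area
            and can_work_in_designated_suburb)

def afdc_cap_allows(current_participants, current_former_afdc, applicant_is_former_afdc):
    """Check the site-level cap: no more than one-third of participants
    may be former AFDC recipients, counting the prospective enrollee."""
    total = current_participants + 1
    former = current_former_afdc + (1 if applicant_is_former_afdc else 0)
    return former <= total / 3

# Example using the Milwaukee figure cited above: $29,350 is 80 percent of an
# assumed area median of roughly $36,688 for a family of one.
print(is_eligible(age=25, family_income=18000, area_median_income=36688,
                  lives_in_designated_urban_area=True,
                  can_work_in_designated_suburb=True))          # True
print(afdc_cap_allows(current_participants=30, current_former_afdc=10,
                      applicant_is_former_afdc=True))           # False (11 > 31/3)

The split between an individual-level check and a site-level cap mirrors the way the criteria are stated in the report: four conditions apply to each applicant, while the one-third limit on former AFDC recipients applies to the participant pool as a whole.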
FTA's JOBLINKS program, HUD's Bridges to Work program, individual cities' projects, and past research have reported common strategies for designing and implementing a transportation program that supports welfare to work. Preliminary results show that the following factors appear to support a program's success: (1) collaboration among transportation, employment, and other human services organizations; (2) an understanding of local job markets; and (3) flexible transportation systems. According to the 1997 JOBLINKS evaluation report and Bridges to Work project managers, welfare-to-work programs must establish a collaborative network among transportation, employment, and other human services organizations to ensure a successful program. Officials noted that for welfare recipients and the poor to move from welfare to work, they need employers' support, transportation services, and human services organizations' support to find child care and resolve workplace conflicts. A Bridges to Work director in St. Louis noted that the area's metropolitan planning organization was motivated to participate in the program because prior welfare-to-work attempts focused on transportation alone, rather than providing participants with the job placement and counseling services needed to find and retain jobs. In addition, the JOBLINKS program concluded in a 1997 evaluation of its 10 projects that coordination among transportation providers, human services agencies, and employers was an important element of successful welfare-to-work programs. Studies conducted in the late 1960s to early 1970s support this experience. For example, in the late 1960s, the Los Angeles Transportation-Employment Project found that improved public transportation alone was not sufficient to increase employment opportunities; other factors, such as the shortage of suitable jobs, obsolete skills, or inadequate education, also had to be addressed. According to the 1997 JOBLINKS evaluation report and Bridges to Work officials, analyses of the local labor and job markets are essential before local welfare-to-work sponsors select transportation strategies to serve their projects' participants. According to officials, these market analyses should first identify which employers are willing to participate in the program and if their locations provide program participants with reasonable commutes. Next, each employer's needs, such as shift times and the willingness to offer "living wages," must be evaluated. For example, a Chicago official said that requiring participants to commute 2 hours each way is not reasonable, particularly for a low-wage job. Milwaukee's Bridges to Work officials developed a bus schedule to meet the 12-hour shift times of a large employer participating in the program. JOBLINKS' and Bridges to Work's preliminary experiences also show that flexible transportation systems are needed to address employers' locations and shift times. As explained earlier, many studies, including BTS' study of Boston, showed that lower-income residents could not rely on mass transit to go from the inner city to suburban employment in a timely manner. Mass transit systems ran infrequently to the suburbs, or at night, and often did not stop close to employers. The Denver Bridges to Work site illustrates the importance of a flexible transportation strategy. Denver originally extended the hours of service and added stops to its existing bus system to address a variety of shift times. 
However, Denver officials soon found that the bus system could not address all the employers' and employees' needs and added vanpools and shuttles.

Under DOT's Access to Jobs proposal, as well as the proposals passed by the House of Representatives and the United States Senate, DOT's financial support of welfare-to-work initiatives would increase substantially. The attention given to the transportation component of welfare reform would increase dramatically as well. However, the Access to Jobs program, as currently defined by DOT, does not contain key information about the program's objectives and expected outcomes or explain how the results from JOBLINKS and other federal welfare-to-work programs will be reflected in the program's operation. Accordingly, it is difficult to evaluate how funds provided for an Access to Jobs program would effectively support national welfare reform goals. Details may not be available until after a program is authorized and DOT begins implementation. DOT's proposal and related documents generally indicate what the Access to Jobs program is to accomplish. The program would provide grants to the states, local governments, and private, nonprofit organizations to help finance transportation services for low-income people seeking jobs and job-related services. The program would provide localities with flexibility in determining the transportation services and providers most appropriate for their areas. Among other things, grant recipients could use the funds to pay for the capital and operating costs of transportation services for the poor, promote employer-provided transportation, or integrate transportation and welfare planning activities. However, the lack of specific information on the program's purpose, objectives, performance criteria, and evaluation approach makes it difficult to assess how the program would improve mobility for low-income workers and contribute to overall welfare reform objectives. The Government Performance and Results Act of 1993 (Results Act), enacted to improve the effectiveness of and accountability for federal programs, requires agencies to identify annual performance goals and measures for their program activities. DOT's fiscal year 1999 performance plan under the Results Act showcases the Access to Jobs program under DOT's goals to improve mobility, but the plan does not define performance goals for measuring the program's success. In contrast, the plan establishes benchmarks for other mobility goals, such as the average age of bus and rail vehicles or the percentage of facilities and vehicles that meet the requirements of the Americans With Disabilities Act. Since an Access to Jobs program is intended to move people to jobs, rather than build and sustain public transportation systems, evaluation criteria that correspond to this goal would be needed. In addition, DOT's Access to Jobs program, as currently defined, does not fully describe how lessons learned through the JOBLINKS and Bridges to Work programs would be incorporated into an Access to Jobs program. For example, although the proposal would require DOT to consider grant applicants' coordination of transportation and human resource services planning, the proposal would not specifically require grant recipients to carry out such coordination. However, the proposal would allow other federal transportation-eligible funds to be used to meet the program's matching requirement.
According to DOT officials, this provision will help promote coordination between transportation and social service funding. In addition, the proposed program does not specify that grant recipients evaluate the local job and labor markets before selecting the optimal transportation services to provide welfare recipients. Bridges to Work officials expressed concern that FTA would provide Access to Jobs grants primarily to local transportation agencies that may be unwilling to support nontraditional transportation services. For example, in Denver, traditional mass transit systems did not provide sufficient flexibility to transport Bridges to Work participants to their jobs. Accordingly, program officials had to add private van pools and shuttle services to take participants from public transit stops to their new jobs. FTA's challenge in efficiently managing the Access to Jobs program would be to go beyond its customary mass transit community and work with different local groups (employment, community services) to support non-mass-transit solutions to welfare-to-work mobility problems. Finally, under its proposal, DOT would be required to coordinate its Access to Jobs program with other federal agencies' efforts. This requirement is particularly important to ensure that FTA's welfare reform funds are working with, rather than duplicating, those of other federal agencies. HHS and DOL have significant levels of funding that the states and localities can use for transportation services in their welfare-to-work programs. In addition, smaller programs, such as HUD's Bridges to Work program, have been used to transport welfare recipients to jobs. For example, in Chicago, a local organization has received $1.6 million through the Bridges to Work program; another local organization has applied for a $5.4 million DOL grant to assist welfare recipients in paying for their transportation to work; and these and other local organizations would probably be eligible for grants under the proposed Access to Jobs program. It is therefore important that DOT's new program ensure that grant recipients are effectively applying and coordinating their federal welfare-to-work grants to successfully move people from welfare to work. Welfare and transportation experts agree that current welfare recipients need many supporting services, such as transportation, job counseling, and child care, to successfully make the transition from welfare to work. An Access to Jobs program would authorize significant funding ($900 million) to support the transportation element of welfare reform. However, the program's success will depend in part on how FTA defines the program's specific objectives, performance criteria, and measurable goals and the extent to which the program balances two national needs: the need to provide a supportive framework for helping welfare recipients and the need to oversee federal dollars so that the program does not duplicate other federal and state welfare programs. In addition, a successful Access to Jobs program should build on lessons learned from existing welfare-to-work programs. These lessons learned focus on the need to coordinate transportation strategies with other local job placement and social services, the importance of assessing the local labor and employer markets, and the inclusion of many transportation strategies (not just existing mass transit systems) in implementing welfare reform. 
If the Congress authorizes an Access to Jobs program, we recommend that the Secretary of Transportation (1) establish specific objectives, performance criteria, and measurable goals for the program when the Department prepares its Fiscal Year 2000 Performance Plan; (2) require that grant recipients coordinate transportation strategies with local job placement and other social service agencies; and (3) work with other federal agencies, such as the departments of Health and Human Services, Labor, and Housing and Urban Development, to coordinate welfare-to-work activities and to ensure that program funds complement and do not duplicate other welfare-to-work funds available for transportation services. To obtain information about the need for transportation in welfare reform, we interviewed FTA, HUD, Community Transportation Association of America, Public/Private Ventures, and National Governors' Association officials. These officials also provided insights into identifying transportation strategies that programs like FTA's JOBLINKS, HUD's Bridges to Work demonstration project, and the NGA's Transportation Coordination Demonstration project have used to help low-income people secure jobs. In addition, we interviewed program staff at each of the five Bridges to Work demonstration sites and visited one of the sites--the suburban office of Chicago's Bridges to Work program. We examined the Bridges to Work program's documentation, preliminary reports, brochures on individual programs, and other descriptive materials. We also reviewed the results of two studies that FTA's Coordinator for Welfare-to-Work activities identified as significant studies on transportation and welfare reform--BTS' January 1998 report entitled Welfare Reform and Access to Jobs in Boston and the July 1997 report entitled Housing, Transportation, and Access to Suburban Jobs by Welfare Recipients in the Cleveland Area. To obtain information on the DOL's grant applications, we spoke with transportation officials in Chicago and Los Angeles. Finally, we reviewed legislative proposals and spoke to transportation and federal officials to obtain information about FTA's proposed Access to Jobs program. We performed our review from December 1997 through May 1998 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOT and HUD for review and comment. We met with DOT officials from the Office of the Secretary and the Federal Transit Administration's Coordinator for Welfare-to-Work activities to discuss the Department's comments on the draft report. DOT agreed with our recommendations and stated that it has begun to take actions to implement our recommendations related to coordinating with local and federal agencies providing welfare-to-work services. First, DOT provided a May 4, 1998, memorandum signed by the Secretaries of Transportation, Health and Human Services, and Labor that encourages coordination among transportation, workforce development, and social service providers. Second, DOT provided examples of how it has begun to encourage collaboration among state and local transit and social service providers and how provisions in the Access to Jobs proposal would foster collaboration further. We have included information in the report on DOT's collaboration efforts and the provisions of the Access to Jobs proposal that will foster collaboration. 
Finally, DOT disagreed with our assessment that an Access to Jobs program will require the Federal Transit Administration to undergo a cultural change--a change whereby the agency may have to accept nontraditional transportation solutions to address barriers to welfare-to-work programs. DOT noted that innovative or nontraditional transportation strategies do not exclusively offer the best strategies for helping welfare recipients; traditional mass transit systems may also provide welfare recipients with the means to reach employment centers. In addition, DOT stated that as a result of its collaborative efforts on welfare reform with local and other federal agencies, it believes that it has been a cultural change leader. First, we agree that states and localities should not routinely exclude traditional bus and rail transit systems as one approach to helping welfare recipients get to jobs. Nonetheless, the DOT and HUD studies cited in this report consistently emphasized the limitations of existing mass transit systems as the transportation solution to welfare-to-work barriers. These systems do not adequately serve job-rich suburban markets that inner-city welfare recipients must reach to find employment. Second, we acknowledge the initial work that the Federal Transit Administration has undertaken to prepare state and local transportation officials for their new welfare-to-work responsibilities and included examples of this effort in this report. However, the Access to Jobs program would represent a significant federal commitment. Accordingly, a change in the traditional mass transit culture at the Federal Transit Administration will still be needed to ensure that Access to Jobs funds address innovative and nontraditional transportation solutions to welfare-to-work problems. DOT had additional technical comments that we incorporated throughout the report, where appropriate. In its comments, HUD stated that we should expand our recommendations to the Secretary of Transportation to include HUD's suggested changes to the Access to Jobs program. (See app. I.) These suggested changes would allow Access to Jobs grant recipients to (1) use program funds for planning and coordination purposes and (2) apply "soft expenditures" (such as the value of staff reassigned to the program) to fund their required local match. In addition, HUD suggested that it be included among the federal agencies with which DOT must coordinate program implementation. HUD's first two suggestions may be important for the Congress to consider as it completes programmatic and funding decisions for the Access to Jobs program through its reauthorization of surface transportation programs. However, we have not included these as recommendations in our report because they address policy issues that were not part of our review's scope. We agree with HUD's last suggested change and have modified our recommendations to include HUD as one of the federal agencies that DOT should work with when it begins implementing the Access to Jobs program. HUD also had minor technical comments that we incorporated throughout the report, where appropriate. We will send copies of this report to interested congressional committees, the Secretary of Transportation, the Secretary of Housing and Urban Development, and the Administrator of the Federal Transit Administration. We will also make copies available to others on request. If you have any questions about this report, please call me at (202) 512-2834. 
Major contributors to this report were Ruthann Balciunas, Joseph Christoff, Catherine Colwell, Gail Marnik, and Phyllis F. Scheinberg.
Pursuant to a congressional request, GAO reviewed: (1) whether current studies and research demonstrate the importance of transportation services in implementing welfare reform; (2) the preliminary results of the Federal Transit Administration's (FTA) current welfare-to-work programs and the Department of Housing and Urban Development's (HUD) Bridges to Work program; and (3) how an Access to Jobs program would support welfare reform. GAO noted that: (1) transportation and welfare studies show that without adequate transportation, welfare recipients face significant barriers in trying to move from welfare to work; (2) existing public transportation systems cannot always bridge the gap between where the poor live and where jobs are located; (3) the majority of entry-level jobs that the welfare recipients and the poor would be likely to fill are located in suburbs that have limited or no accessibility through existing public transportation systems; (4) FTA has funded welfare-to-work demonstration projects, planning grants, and regional seminars, while HUD's Bridges to Work research program is in the early stages of placing inner-city participants in suburban jobs; (5) although these programs began recently and have limited funding, they have identified programmatic and demographic factors that state and local officials should consider when they select the best transportation strategies for their welfare-to-work programs; (6) these factors include: (a) collaboration among transportation providers and employment and human services organizations; (b) analyses of local labor markets to help design transportation strategies that link employees to specific jobs; and (c) flexible transportation strategies that may not always rely on existing mass transit systems; (7) if authorized, an Access to Jobs program would bring additional resources and attention to the transportation element of welfare reform; (8) however, limited information about the program's objectives or expected outcomes makes it difficult to evaluate how the program would improve mobility for low-income workers or support national welfare-to-work goals; (9) the new program may require FTA and local transit agencies to undergo a cultural change whereby they are willing to accept nontraditional approaches for addressing welfare-to-work barriers; (10) the agency must ensure that the millions of dollars it contributes to welfare reform support rather than duplicate the transportation funds provided through other federal and state agencies; and (11) while FTA has begun to consider some of these important issues, addressing all of them before the program is established would help ensure that the transportation funds provided for an Access to Jobs program would be used efficiently and effectively in support of national welfare goals.
Approximately 4 percent of discretionary spending in the United States' federal budget is appropriated for the conduct of foreign affairs activities. This includes funding for bilateral and multilateral assistance, military assistance, and State Department activities. Spending for State, taken from the "150 Account," makes up the largest share of foreign affairs spending. Funding for State's Diplomatic and Consular Programs--State's chief operating account, which supports the department's diplomatic activities and programs, including salaries and benefits--comprises the largest portion of its appropriations. Embassy security, construction, and maintenance funding comprises another large portion of State's appropriation. Funding for the administration of foreign affairs has risen dramatically in recent fiscal years, due, in part, to enhanced funding for security-related improvements worldwide, including personnel, construction, and equipment following the bombings of two U.S. embassies in 1998 and the events of September 11, 2001. For example, State received about $2.8 billion in fiscal year 1998, but by fiscal year 2003, State's appropriation was approximately $6 billion. For fiscal year 2004, State is seeking approximately $6.4 billion, which includes $4 billion for diplomatic and consular affairs and $1.5 billion for embassy security, construction, and maintenance. In addition, State plans to spend $262 million over fiscal years 2003 and 2004 on information technology modernization initiatives overseas. Humanitarian and economic development assistance is an integral part of U.S. global security strategy, particularly as the United States seeks to diminish the underlying conditions of poverty and corruption that may be linked to instability and terrorism. USAID is charged with overseeing U.S. foreign economic and humanitarian assistance programs. In fiscal year 2003, Congress appropriated about $12 billion--including supplemental funding--to USAID, and the agency managed programs in about 160 countries, including 71 overseas missions with USAID direct-hire presence. Fiscal year 2004 foreign aid spending is expected to increase due, in part, to substantial increases in HIV/AIDS funding and security- related economic aid. I would like to discuss State's performance in managing its overseas real estate, overseeing major embassy construction projects, managing its overseas presence and staffing, modernizing its information technology, and developing and implementing strategic plans. State manages an overseas real property portfolio valued at approximately $12 billion. The management of real property is an area where State could achieve major cost savings and other operational efficiencies. In the past, we have been critical of State's management of its overseas property, including its slow disposal of unneeded facilities. Recently, officials at State's Bureau of Overseas Buildings Operations (OBO), which manages the government's real property overseas, have taken a more systematic approach to identifying unneeded properties and have significantly increased the sale of these properties. For example, in 2002, OBO completed sales of 26 properties totaling $64 million, with contracts in place for another $40 million in sales. But State needs to dispose of more facilities in the coming years as it embarks on an expensive plan to replace embassies and consulates that do not meet State's security requirements and/or are in poor condition. 
Unneeded property and deteriorating facilities present a real problem-- but also an opportunity to improve U.S. operations abroad and achieve savings. We have reported that the management of overseas real estate has been a continuing challenge for State, although the department has made improvements in recent years. One of the key weaknesses we found was the lack of a systematic process to identify unneeded properties and to dispose of them in a timely manner. In 1996, we identified properties worth hundreds of millions of dollars potentially excess to State's needs or of questionable value and expensive to maintain that the department had not previously identified for potential sale. As a result of State's inability to resolve internal disputes and sell excess property in an expeditious manner, we recommended that the Secretary of State appoint an independent panel to decide which properties should be sold. The Secretary of State created this panel in 1997. As of April 2002, the Real Property Advisory Board had reviewed 41 disputed properties and recommended that 26 be sold. By that time, State had disposed of seven of these properties for about $21 million. In 2002, we again reviewed State's processes for identifying and selling unneeded overseas real estate and found that it had taken steps to implement a more systematic approach that included asking posts to annually identify properties for disposal and increasing efforts by OBO and officials from State's OIG to identify such properties when they visit posts. For example, the director of OBO took steps to resolve disputes with posts that have delayed the sale of valuable property. OBO has also instituted monthly Project Performance Reviews to review all aspects of real estate management, such as the status of acquisitions and disposal of overseas property. However, we found that the department's ability to monitor property use and identify potentially unneeded properties was hampered by errors and omissions in its property inventory. Inaccurate inventory information can result in unneeded properties not being identified for potential sale. Therefore, we recommended that the department improve the accuracy of its real property inventory. In commenting on our report, OBO said that it had already taken action to improve its data collection. For example, State sent a cable to all overseas posts reminding them of their responsibilities to maintain accurate real estate records. State has significantly improved its performance in selling unneeded property. In total, between fiscal years 1997 through 2002, State sold 129 properties for more than $459 million. Funds generated from property sales are being used to help offset embassy construction costs in Berlin, Germany; Luanda, Angola; and elsewhere. State estimates it will sell additional properties between fiscal years 2003 and 2008 valued at approximately $300 million. More recently, State has taken action to sell two properties (a 0.4 acre parking lot and an office building) in Paris identified in a GAO report as potentially unneeded. After initially resisting the sale of the parking lot, the department reversed its decision and sold both properties in June 2003 for a total of $63.1 million--a substantial benefit to the government. The parking lot alone was sold conditionally for $20.7 million. Although this may be a unique case, it demonstrates how scrutiny of the property inventory could result in potential savings. 
The department should continue to look closely at property holdings to see if other opportunities exist. If State continues to streamline its operations and dispose of additional facilities over the next several years, it can use those funds to help offset the cost of replacing about 160 embassies and consulates for security reasons in the coming years. In the past, State has had difficulties ensuring that major embassy construction projects were completed on time and within budget. For example, in 1991 we reported that State's previous construction program suffered from delays and cost increases due to, among other things, poor program planning and inadequate contractor performance. In 1998, State embarked on the largest overseas embassy construction program in its history in response to the bombings of U.S. embassies in Africa. From fiscal years 1999 through 2003, State received approximately $2.7 billion for its new construction program and began replacing 25 of 185 posts identified as vulnerable by State. To better manage this program, OBO has undertaken several initiatives aimed at improving State's stewardship of its funds for embassy buildings, including cutting costs of planned construction projects, using standard designs, and reducing construction duration through a "fast track" process. Moreover, State hopes that additional management tools aimed at ensuring that new facilities are built in the most cost-effective manner, including improvements in how agencies determine requirements for new embassies, will help move the program forward. State is also pursuing a cost-sharing plan that would charge other federal agencies for the cost of their overall overseas presence and provide additional funds to help accelerate the embassy construction program. While State has begun replacing many facilities, OBO officials estimated that beginning in fiscal year 2004, it will cost an additional $17 billion to replace facilities at remaining posts. As of February 2003, State had begun replacing 25 of 185 posts identified by State as vulnerable after the 1998 embassy bombings. To avoid the problems that weakened the previous embassy construction program, we recommended that State develop a long-term capital construction plan that identifies (1) proposed construction projects' cost estimates and schedules and (2) estimated annual funding requirements for the overall program. Although State initially resisted implementing our recommendation, OBO's new leadership reconsidered this recommendation and has since produced two annual planning documents titled the "Long-Range Overseas Building Plan." According to OBO, the long-range plan is the roadmap by which State, other departments and agencies, the Office of Management and Budget (OMB), the Congress, and others can focus on defining and resolving the needs of overseas facilities. In addition to the long-range plan, OBO has undertaken several initiatives aimed at improving State's stewardship of its embassy construction funds. These measures have the potential to result in significant cost savings and other efficiencies. For example, OBO has developed Standard Embassy Designs (SED) for use in most embassy construction projects. 
The SEDs provide OBO with the ability to contract for shortened design and construction periods and to control costs through standardization. OBO has also shifted from "design-bid-build" contracting toward "design-build" contracts, which have the potential to reduce project costs and construction time frames; developed and implemented procedures to enforce cost planning during the design phase and ensure that the final designs are within budget; and increased the number of contractors eligible to bid for construction projects, thereby increasing competition for contracts, which could potentially result in lower bids. OBO has set a goal of a 2-year design and construction period for its mid-sized, standard embassy design buildings, which, if met, could reduce the amount of time spent in design and construction by almost one year. We reported in January 2003 that these cost-cutting efforts allowed OBO to achieve $150 million in potential cost savings during fiscal year 2002. These savings, according to OBO, resulted from the application of the SEDs and increased competition for the design and construction of these projects. Despite these gains, State will face continuing hurdles throughout the life of the embassy construction program. These hurdles include meeting construction schedules within the estimated costs and ensuring that State has the capacity to manage a large number of projects simultaneously. Because of the high costs associated with this program and the importance of providing secure facilities overseas, we believe this program merits continuous oversight by State, GAO, and the Congress. In addition to ensuring that individual construction projects meet cost and performance schedules, State must also ensure that new embassies are appropriately sized. Given that the size and cost of new facilities are directly related to agencies' anticipated staffing needs, it is imperative that future requirements be predicted as accurately as possible. Embassy buildings that are designed too small may require additional construction and funding in the future; buildings that are too large may have unused space--a waste of government funds. State's construction program in the late 1980s encountered lengthy delays and cost overruns in part because it lacked coordinated planning of post requirements prior to approval and budgeting for construction projects. As real needs were determined, changes in scope and increases in costs followed. OBO now requires that all staffing projections for new embassy compounds be finalized prior to submitting funding requests, which are sent to Congress as part of State's annual budget request each February. In April 2003, we reported that U.S. agencies operating overseas, including State, were developing staffing projections without a systematic approach. We found that State's headquarters gave embassies little guidance on factors to consider when developing projections, and thus U.S. agencies did not take a consistent or systematic approach to determining long-term staffing needs. Based on our recommendations, State in May 2003 issued a "Guide to Developing Staffing Projections for New Embassy and Consulate Compound Construction," which requires a more serious, disciplined approach to developing staffing projections. When fully implemented, this approach should ensure that overseas staffing projections are more accurate and minimize the financial risks associated with building facilities that are designed for the wrong number of people.
Historically, State has paid all costs associated with the construction of overseas facilities. Following the embassy bombings, the Overseas Presence Advisory Panel (OPAP) noted a lack of cost sharing among agencies that use overseas facilities. As a result, OPAP recommended that agencies be required to pay rent in government-owned buildings in foreign countries to cover operating and maintenance costs. In 2001, an interagency group put forth a proposal that would require agencies to pay rent based on the space they occupy in overseas facilities, but the plan was not enacted. In 2002, OMB began an effort to develop a mechanism that would require users of overseas facilities to share the construction costs associated with those facilities. The administration believes that if agencies were required to pay a greater portion of the total costs associated with operating overseas facilities, they would think more carefully before posting personnel overseas. As part of this effort, State has presented a capital security cost-sharing plan that would require agencies to help fund its capital construction program. State's proposal calls for each agency to fund a proportion of the total construction program cost based on its respective proportion of total overseas staffing. OBO has reported that its proposed cost-sharing program could result in additional funds, thereby reducing the duration of the overall program. State maintains a network of approximately 260 diplomatic posts in about 170 countries worldwide and employs a direct-hire workforce of about 30,000 employees, about 60 percent of those overseas. The costs of maintaining staff overseas vary by agency but in general are extremely high. In 2002, the average annual cost of placing one full-time direct-hire American family of four in a U.S. embassy was approximately $339,000. These costs make it critical that the U.S. overseas presence is sized appropriately to conduct its work. We have reported that State and most other federal agencies overseas have historically lacked a systematic process for determining the right number of personnel needed overseas-- otherwise known as rightsizing. Moreover, in June 2002, we reported that State faces serious staffing shortfalls at hardship posts--in both the number of staff assigned to these posts and their experience, skills, and/or language proficiency. Thus, State has been unable to ensure that it has "the right people in the right place at the right time with the right skills to carry out America's foreign policy"--its definition of diplomatic readiness. However, since 2001, State has directed significant attention to improving weaknesses in the management of its workforce planning and staffing issues that we and others have noted. Because personnel salaries and benefits consume a huge portion of State's operating budget, it is important that the department exercise good stewardship of its human capital resources. Around the time GAO designated strategic human capital management as a governmentwide high-risk area in 2001, State, as part of its Diplomatic Readiness Initiative (DRI), began directing significant attention to addressing its human capital needs, adding 1,158 employees over a 3-year period (fiscal years 2002 through 2004). In fiscal year 2002, Congress allocated nearly $107 million for the DRI. State requested nearly $100 million annually in fiscal years 2003 and 2004 to hire approximately 400 new staff each year. The DRI has enabled the department to boost recruitment. 
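To make the mechanics of the proposed capital security cost-sharing plan concrete, the following minimal sketch (in Python) allocates an illustrative annual construction program cost in proportion to each agency's share of total overseas staffing, as the proposal describes. The agency names, staffing counts, and the $1.4 billion program cost are hypothetical placeholders, not figures from State's proposal.

def allocate_construction_cost(total_cost, staffing_by_agency):
    # Each agency funds total_cost multiplied by its share of total overseas staffing.
    total_staff = sum(staffing_by_agency.values())
    return {agency: total_cost * staff / total_staff
            for agency, staff in staffing_by_agency.items()}

# Hypothetical staffing counts; an actual allocation would use real agency totals.
shares = allocate_construction_cost(
    1_400_000_000,
    {"State": 18_000, "Defense": 6_000, "USAID": 2_000, "Other agencies": 4_000},
)
for agency, amount in shares.items():
    print(f"{agency}: ${amount:,.0f}")

Under this proportional rule, an agency with 60 percent of overseas staff would bear 60 percent of the program cost, which is the incentive effect the cost-sharing proposal is intended to create.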
However, State has historically lacked a systematic approach to determine the appropriate size and location of its overseas staff. To move the rightsizing process forward, the August 2001 President's Management Agenda identified rightsizing as one of the administration's priorities. Given the high costs of maintaining the U.S. overseas presence, the administration has instructed U.S. agencies to reconfigure the number of overseas staff to the minimum necessary to meet U.S. foreign policy goals. This OMB-led initiative aims to develop cost-saving tools or models, such as increasing the use of regional centers, revising the Mission Performance Planning (MPP) process, increasing overseas administrative efficiency, and relocating functions to the United States. According to the OPAP, although the magnitude of savings from rightsizing the overseas presence cannot be known in advance, "significant savings" are achievable. For example, it said that reducing all agencies' staffing by 10 percent could yield governmentwide savings of almost $380 million a year.
GAO's Rightsizing Framework
In May 2002, we testified on our development of a rightsizing framework. The framework is a series of questions linking staffing levels to three critical elements of overseas diplomatic operations: security of facilities, mission priorities and requirements, and cost of operations. It also addresses consideration of rightsizing options, such as relocating functions back to the United States or to regional centers, competitively sourcing functions, and streamlining operations. Rightsizing analyses could lead decision makers to increase, decrease, or change the mix of staff at a given post. For example, based on our work at the U.S. embassy in Paris, we identified positions that could potentially be relocated to regional centers or back to the United States. On the other hand, rightsizing analyses may indicate the need for increased staffing, particularly at hardship posts. In a follow-up report to our testimony, we recommended that the Director of OMB ensure that our framework is used as a basis for assessing staffing levels in the administration's rightsizing initiative. In commenting on our rightsizing reports, State endorsed our framework and said it plans to incorporate elements of our rightsizing questions into its future planning processes, including its MPPs. State also has begun to take further actions in managing its overseas presence--along the lines that we recommended in our June 2002 report on hardship posts--including revising its assignment system to improve staffing of hardship posts and addressing language shortfalls by providing more opportunities for language training. In addition, State has already taken some rightsizing actions to improve the cost-effectiveness of its overseas operating practices. For example, State plans to spend at least $80 million to purchase and renovate a 23-acre, multi-building facility in Frankfurt, Germany--slated to open in mid-2005--for use as a regional hub to conduct and support diplomatic operations; has relocated more than 100 positions from the Paris embassy to the regional Financial Services Center in Charleston, South Carolina; and is working with OMB on a cost-sharing mechanism, as previously mentioned, that will give all U.S. agencies an incentive to weigh the high costs to taxpayers associated with assigning staff overseas.
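The order of magnitude of the OPAP savings estimate cited above can be checked with simple arithmetic. In the sketch below, the $339,000 average annual cost per overseas position comes from the report; the governmentwide base of roughly 11,200 direct-hire positions is an illustrative assumption chosen only to show how a 10-percent reduction would approach the "almost $380 million a year" figure, and is not a number drawn from the report.

avg_annual_cost_per_position = 339_000  # reported 2002 average cost of one direct-hire family of four
assumed_overseas_positions = 11_200     # hypothetical governmentwide staffing base (an assumption)
reduction_rate = 0.10                   # the 10-percent reduction scenario cited by the OPAP

positions_eliminated = assumed_overseas_positions * reduction_rate
annual_savings = positions_eliminated * avg_annual_cost_per_position
print(f"Positions eliminated: {positions_eliminated:,.0f}")   # 1,120
print(f"Estimated annual savings: ${annual_savings:,.0f}")    # $379,680,000 -- almost $380 million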
In addition to these rightsizing actions, there are other areas where the adoption of industry best practices could lead to cost reductions and streamlined services. For example, in 1997, we reported that State could significantly streamline its employee transfer and housing relocation processes. We also reported in 1998 that State's overseas posts could potentially save millions of dollars by implementing best practices such as competitive sourcing. In light of competing priorities as new needs emerge, particularly in Iraq and Afghanistan, State must be prepared to make difficult strategic decisions on which posts and positions it will fill and which positions it could remove, relocate, or regionalize. State will need to marshal and manage its human capital to facilitate the most efficient, effective allocation of these significant resources. Up-to-date information technology, along with adequate and modern office facilities, is an important part of diplomatic readiness. We have reported that State has long been plagued by poor information technology at its overseas posts, as well as weaknesses in its ability to manage information technology modernization programs. State's information technology capabilities provide the foundation of support for U.S. government operations around the world, yet many overseas posts have been equipped with obsolete information technology systems that prevented effective interagency information sharing. The Secretary of State has made a major commitment to modernizing the department's information technology. In March 2003, we testified that the department invested $236 million in fiscal year 2002 on key modernization initiatives for overseas posts and plans to spend $262 million over fiscal years 2003 and 2004. State reports that its information technology is now in the best shape it has ever been, including improved Internet access and upgraded computer equipment. The department is now working to replace its antiquated cable system with a new integrated messaging and retrieval system, which it acknowledges is an ambitious effort. State's OIG and GAO have raised a number of concerns regarding the department's management of information technology programs. For example, in 2001, we reported that State was not following proven system acquisition and investment practices in attempting to deploy a common overseas knowledge management system. This system was intended to provide functionality ranging from basic Internet access and e-mail to mission-critical policy formulation and crisis management support. We recommended that State limit its investment in this system until it had secured stakeholder involvement and buy-in. State has since discontinued the project due to a lack of interagency buy-in and commitment, thereby avoiding additional costs of more than $200 million. Recognizing that interagency information sharing and collaboration can pay off in terms of greater efficiency and effectiveness of overseas operations, State's OIG reported that the department recently decided to merge some of the objectives associated with the interagency knowledge management system into its new messaging system. We believe that the department should try to eliminate the barriers that prevented implementation of this system. 
As State continues to modernize information technology at overseas posts, it is important that the department employ rigorous and disciplined management processes on each of its projects to minimize the risks that the department will spend large sums of money on systems that do not produce commensurate value. Linking performance and financial information is a key feature of sound management--reinforcing the connection between resources consumed and results achieved--and an important element in giving the public a useful and informative perspective on federal spending. A well-defined mission and clear, well understood strategic goals are essential in helping agencies make intelligent trade-offs among short- and long-term priorities and ensure that program and resource commitments are sustainable. In recent years, State has made improvements to its strategic planning process both at headquarters and overseas that are intended to link staffing and budgetary requirements with policy priorities. For instance, State has developed a new strategic plan for fiscal years 2004 through 2009, which, unlike previous strategic plans, was developed in conjunction with USAID and aligns diplomatic and development efforts. At the field level, State revised the MPP process so that posts are now required to identify key goals for a given fiscal year, and link staffing and budgetary requirements to fulfilling these priorities. State's compliance with the Government Performance and Results Act of 1993 (GPRA), which requires federal agencies to prepare annual performance plans covering the program activities set out in their budgets, has been mixed. While State's performance plans fell short of GPRA requirements from 1998 through 2000, the department has recently made strides in its planning and reporting processes. For example, in its performance plan for 2002, State took a major step toward implementing GPRA requirements, and it has continued to make improvements in its subsequent plans. As we have previously reported, although connections between specific performance and funding levels can be difficult to make, efforts to infuse performance information into budget deliberations have the potential to change the terms of debate from simple outputs to outcomes. Continued improvements to strategic and performance planning will ensure that State is setting clear objectives, tying resources to these objectives, and monitoring its progress in achieving them--all of which are essential to efficient operations. Now I would like to discuss some of the challenges USAID faces in managing its human capital, evaluating its programs and measuring their performance, and managing its information technology and financial systems. I will also outline GAO's findings from our reviews of USAID's democracy and rule of law programs in Latin America and the former Soviet Union. Since the early 1990s, we have reported that USAID has made limited progress in addressing its human capital management issues and managing the changes in its overseas workforce. A major concern is that USAID has not established a comprehensive workforce plan that is integrated with the agency's strategic objectives and ensures that the agency has skills and competencies necessary to meet its emerging foreign assistance challenges. Developing such a plan is critical due to a reduction in the agency's workforce during the 1990s and continuing attrition--more than half of the agency's foreign service officers are eligible to retire by 2007. 
According to USAID's OIG, the steady decline in the number of foreign service and civil service employees with specialized technical expertise has resulted in insufficient staff with needed skills and experience and less experienced personnel managing increasingly complex programs. Meanwhile, USAID's program budget has increased from $7.3 billion in 2001 to about $12 billion in fiscal year 2003, due primarily to significant increases in HIV/AIDS funding and supplemental funding for emerging programs in Iraq and Afghanistan. The combination of continued attrition of experienced foreign service officers, increased program funding, and emerging foreign policy priorities raises concerns regarding USAID's ability to maintain effective oversight of its foreign assistance programs. USAID's lack of progress in institutionalizing a workforce planning system has led to certain vulnerabilities. For example, as we reported in July 2002, USAID lacks a "surge capacity" that enables it to quickly hire the staff needed to respond to emerging demands and post-conflict or post- emergency reconstruction situations. We also reported that insufficient numbers of contract officers affected the agency's ability to deliver hurricane reconstruction assistance in Latin America in the program's early phases. USAID is aware of its human capital management and workforce planning shortcomings and is now beginning to address some of them with targeted hiring and other actions. USAID continues to face difficulties in identifying and collecting the data it needs to develop reliable performance measures and accurately report the results of its programs. Our work and that of USAID's OIG have identified a number of problems with the annual results data that USAID's operating units have been reporting. USAID has acknowledged these concerns and has undertaken several initiatives to correct them. Although the agency has made a serious effort to develop improved performance measures, it continues to report numerical outputs that do not gauge the impact of its programs. Without accurate and reliable performance data, USAID has little assurance that its programs achieve their objectives and related targets. In July 1999, we commented on USAID's fiscal year 2000 performance plan and noted that because the agency depends on international organizations and thousands of partner institutions for data, it does not have full control over how data are collected, reported, or verified. In April 2002, we reported that USAID had evaluated few of its experiences in using various funding mechanisms and different types of organizations to achieve its objectives. We concluded that with better data on these aspects of the agency's operations, USAID managers and congressional overseers would be better equipped to analyze whether the agency's mix of approaches takes full advantage of nongovernmental organizations to achieve the agency's purposes. USAID's information systems do not provide managers with the accurate information they need to make sound and cost-effective decisions. USAID's OIG has reported that the agency's processes for procuring information technology have not followed established guidelines, which require executive agencies to implement a process that maximizes the value and assesses the risks of information technology investments. In addition, USAID's computer systems are vulnerable and need better security controls. USAID management has acknowledged these weaknesses and the agency is making efforts to correct them. 
Effective financial systems and controls are necessary to ensure that USAID management has timely and reliable information to make effective, informed decisions and that assets are safeguarded. USAID has made progress in correcting some of its systems and internal control deficiencies and is in the process of revising its plan to remedy financial management weaknesses as required by the Federal Financial Management Improvement Act of 1996. To achieve its goal, however, USAID needs to continue efforts to resolve its internal control weaknesses and ensure that planned upgrades to its financial systems are in compliance with federal financial system requirements. Our reviews of democracy and rule of law programs in Latin America and the former Soviet Union demonstrate that these programs have had limited results and suggest areas for improving the efficiency and impact of these efforts. In Latin America, we found that U.S. assistance has helped bring about important criminal justice reforms in five countries. This assistance has also helped improve transparency and accountability of some government functions, increase attention to human rights, and support elections that observation groups have considered free and fair. In several countries of the former Soviet Union, U.S. agencies have helped support a variety of legal system reforms and introduced some innovative legal concepts and practices in the areas of legislative and judicial reform, legal education, law enforcement, and civil society. In both regions, however, sustainability of these programs is questionable. Establishing democracy and rule of law in these countries is a complex undertaking that requires long-term host government commitment and consensus to succeed. However, host governments have not always provided the political support and financial and human capital needed to sustain these reforms. In other cases, U.S.-supported programs were limited, and countries did not adopt the reforms and programs on a national scale. In both of our reviews, we found that several management issues shared by USAID and the other agencies have affected implementation of these programs. Poor coordination among the key U.S. agencies has been a long-standing management problem, and cooperation with other foreign donors has been limited. U.S. agencies' strategic plans do not outline how these agencies will overcome coordination problems and cooperate with other foreign donors on program planning and implementation to maximize scarce resources. Also, U.S. agencies, including USAID, have not consistently evaluated program results and have tended to stress output measures, such as the numbers of people trained, over indicators that measure program outcomes and results, such as reforming law enforcement practices. Further, U.S. agencies have not consistently shared lessons learned from completed projects, thus missing opportunities to enhance the outcomes of their programs. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For future contacts regarding this testimony, please call Jess Ford or John Brummet at (202) 512-4128. Individuals making key contributions to this testimony include Heather Barker, David Bernet, Janey Cohen, Diana Glod, Kathryn Hartsburg, Edward Kennedy, Joy Labez, Jessica Lundberg, and Audrey Solis.
Overseas Presence: Conditions of Overseas Diplomatic Facilities. GAO-03-557T. Washington, D.C.: March 20, 2003.
Overseas Presence: Rightsizing Framework Can Be Applied at U.S. Diplomatic Posts in Developing Countries. GAO-03-396. Washington, D.C.: April 7, 2003.
Embassy Construction: Process for Determining Staffing Requirements Needs Improvement. GAO-03-411. Washington, D.C.: April 7, 2003.
Overseas Presence: Framework for Assessing Embassy Staff Levels Can Support Rightsizing Initiatives. GAO-02-780. Washington, D.C.: July 26, 2002.
State Department: Sale of Unneeded Property Has Increased, but Further Improvements Are Necessary. GAO-02-590. Washington, D.C.: June 11, 2002.
Embassy Construction: Long-Term Planning Will Enhance Program Decision-making. GAO-01-11. Washington, D.C.: January 22, 2001.
State Department: Decision to Retain Embassy Parking Lot in Paris, France, Should Be Revisited. GAO-01-477. Washington, D.C.: April 13, 2001.
State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002.
Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002.
Information Technology: State Department-Led Overseas Modernization Program Faces Management Challenges. GAO-02-41. Washington, D.C.: November 16, 2001.
Foreign Affairs: Effort to Upgrade Information Technology Overseas Faces Formidable Challenges. GAO-T-AIMD/NSIAD-00-214. Washington, D.C.: June 22, 2000.
Electronic Signature: Sanction of the Department of State's System. GAO/AIMD-00-227R. Washington, D.C.: July 10, 2000.
Major Management Challenges and Program Risks: Department of State. GAO-03-107. Washington, D.C.: January 2003.
Department of State: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-02-42. Washington, D.C.: December 7, 2001.
Observations on the Department of State's Fiscal Year 1999 Performance Report and Fiscal Year 2001 Performance Plan. GAO/NSIAD-00-189R. Washington, D.C.: June 30, 2000.
Major Management Challenges and Program Risks: Department of State. GAO-01-252. Washington, D.C.: January 2001.
U.S. Agency for International Development: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-721. Washington, D.C.: August 17, 2001.
Observations on the Department of State's Fiscal Year 2000 Performance Plan. GAO/NSIAD-99-183R. Washington, D.C.: July 20, 1999.
Major Management Challenges and Program Risks: Implementation Status of Open Recommendations. GAO/OCG-99-28. Washington, D.C.: July 30, 1999.
The Results Act: Observations on the Department of State's Fiscal Year 1999 Annual Performance Plan. GAO/NSIAD-98-210R. Washington, D.C.: June 17, 1998.
Major Management Challenges and Program Risks: U.S. Agency for International Development. GAO-03-111. Washington, D.C.: January 2003.
Foreign Assistance: Disaster Recovery Program Addressed Intended Purposes, but USAID Needs Greater Flexibility to Improve Its Response Capability. GAO-02-787. Washington, D.C.: July 24, 2002.
Foreign Assistance: USAID Relies Heavily on Nongovernmental Organizations, but Better Data Needed to Evaluate Approaches. GAO-02-471. Washington, D.C.: April 25, 2002.
Major Management Challenges and Program Risks: U.S. Agency for International Development. GAO-01-256. Washington, D.C.: January 2001.
In recent years, funding for the Department of State has increased dramatically, particularly for security upgrades at overseas facilities and a major hiring program. The U.S. Agency for International Development (USAID) has also received more funds, especially for programs in Afghanistan and Iraq and HIV/AIDS relief. Both State and USAID face significant management challenges in carrying out their respective missions, particularly in areas such as human capital management, performance measurement, and information technology management. Despite increased funding, resources are not unlimited. Thus, State, USAID, and all government agencies have an obligation to ensure that taxpayer resources are managed wisely. Long-lasting improvements in performance will require continual vigilance and the identification of widespread opportunities to improve the economy, efficiency, and effectiveness of State's and USAID's existing goals and programs. GAO was asked to summarize its findings from reports on State's and USAID's management of resources, actions taken in response to our reports, and recommendations to promote cost savings and more efficient and effective operations at the department and agency. Overall, State has increased its attention to managing resources, and its efforts are starting to show results, including potential cost savings and improved operational effectiveness and efficiency. For example, in 1996, GAO criticized State's performance in disposing of its overseas property. From fiscal years 1997 through 2002, State sold 129 properties for more than $459 million, with plans to sell additional properties worth approximately $300 million from fiscal years 2003 through 2008. Additional sales would help offset the costs of replacing about 160 insecure and deteriorating embassies. State is now taking a more businesslike approach with its embassy construction program, which is estimated to cost an additional $17 billion beginning in fiscal year 2004. Cost-cutting efforts allowed State to achieve $150 million in potential cost savings during fiscal year 2002. State should continue its reforms as it determines requirements for, designs, and builds new embassies. The costs of maintaining staff overseas are generally very high. In response to management weaknesses GAO identified, State has begun addressing workforce planning issues to ensure that the government has the right people in the right places at the right times. State should continue this work and adopt industry best practices that could reduce costs and streamline services overseas. GAO and others have highlighted deficiencies in State's information technology. State invested $236 million in fiscal year 2002 on modernization initiatives overseas and plans to spend $262 million over fiscal years 2003 and 2004. Ongoing oversight of this investment will be necessary to minimize the risks of spending large sums of money on systems that do not produce commensurate value. State has improved its strategic planning to better link staffing and budgetary requirements with policy priorities. Setting clear objectives and tying resources to them will make operations more efficient. GAO and others have also identified some management weaknesses at USAID, mainly in human capital management and workforce planning, program evaluation and performance measurement, information technology, and financial management. While USAID is taking corrective actions, better management of critical systems is essential to safeguard the agency's funds.
Given the added resources State and USAID must manage, current budget deficits, and new requirements since September 11, 2001, oversight is needed to ensure continued progress toward effective management practices. This focus could result in cost savings or other efficiencies.
Governments are heavily involved in most defense export transactions, and they support exports for a variety of reasons. European governments support defense exports primarily to maintain a desired level of defense production capability. Their national markets are not large enough to sustain the full range of weapon systems they believe necessary for their national security. The United States has traditionally supported defense exports to meet national security and foreign policy objectives through its security assistance program. In the United States more recently, however, the impact of exports on maintaining the industrial base has gained support as a rationale for providing additional assistance to defense exporters. Defense exports in general have a positive impact on the balance of trade. In 1993 defense exports represented about 0.3 percent of total exports for Germany, 1.7 percent for France, 2.2 percent for the United States, and 2.4 percent for the United Kingdom. The ratio of defense exports to total exports, however, shows a general downward trend since 1990 for three of the four countries we reviewed. During 1990 defense exports represented 0.4 percent of total exports for Germany, 3.2 percent for France, and 3.4 percent for the United States. In the United Kingdom, the ratio of defense exports to total exports remained at about 2.4 percent in 1990 and 1993. Deliveries of global defense exports have declined 64 percent since 1987, when deliveries were $77 billion. In 1993 deliveries were $28 billion. The end of the Cold War and changes in the political and economic structure of the former Soviet Union were considered significant factors contributing to the overall decrease in arms trade. While the global defense export market has declined since the late 1980s, the United States has become the world's leading defense exporter. The United States had the largest share of global arms deliveries at 32 percent in 1990 and increased its share to 49 percent in 1993. The overall increase in the U.S. market share from 1990 to 1993 was due, in part, to decreased sales by the former Soviet Union. In 1990 the Soviet Union's arms deliveries were $17 billion. By 1993 Russia's defense exports had decreased 82 percent to less than $3 billion. The dollar value of U.S. arms deliveries also decreased during this time, declining 22 percent from $18 billion in 1990 to $14 billion in 1993. Arms delivery data for calendar year 1994 is not yet available. However, the Department of Defense (DOD), which collects data on a fiscal year basis, reported that fiscal year 1994 U.S. arms deliveries were about $10 billion. According to defense analysts, U.S. arms deliveries are likely to remain at about $10 billion annually for the rest of the decade. The market share of France, Germany, and the United Kingdom combined has increased from 26 percent of total arms deliveries in 1990 to 32 percent in 1993. Of these three countries, only the United Kingdom increased its market share, raising it from 9 percent in 1990 to 15 percent in 1993. The French market share declined from 14 percent to 13 percent during the same period, while Germany remained constant at about 4 percent of the arms market in 1990 and 1993. The total value of arms deliveries for the three European countries combined declined 40 percent, from $15 billion in 1990 to about $9 billion in 1993. Preliminary 1994 delivery data for France and the United Kingdom suggests a decline from 1993 levels. French and U.K.
defense exports for 1994, in terms of deliveries, are estimated at $2.2 billion and $2.8 billion, respectively. Delivery data for Germany for 1994 is not yet available. Figures 1 and 2 show the percentage of global arms deliveries for 1990 and 1993 by supplier country. (A note to both figures states that the "other European countries" category includes all other European countries, except France, Germany, and the United Kingdom.) In the short term, at least, it is likely that the United States will remain strong in the world market; it has $86 billion in defense orders placed from 1990 to 1993, while France, Germany, and the United Kingdom combined have $27 billion in defense orders from the same period. Although 1994 data for the three European competitor nations, in terms of defense orders, is not yet available, U.S. defense orders for fiscal year 1994 were about $13 billion--a 59-percent decrease from fiscal year 1993 levels, when orders were $32 billion. Figure 3 shows the total value of defense orders placed with France, Germany, the United Kingdom, and the United States from 1990 to 1993. Further growth in the U.S. market share will be limited by several factors, including U.S. national security and export control policies. For example, in order to reduce dangerous or destabilizing arms transfers, the United States does not sell its defense products to certain countries, as part of its national security objectives. Those countries include Cuba, Iran, Iraq, Libya, North Korea, Syria, and several countries of the former Soviet Union. According to the State Department, U.S. sales to other countries are reviewed on a case-by-case basis against U.S. conventional arms transfer policy criteria. Certain major foreign buyers' practices of diversifying weapons purchases among multiple suppliers further limit U.S. market share. For example, Kuwait announced in 1994 that it planned to diversify its weapons purchases among all five permanent members of the United Nations Security Council. Prior studies conducted by the Office of Management and Budget (OMB), the Office of Technology Assessment (OTA), and our office have concluded that there are numerous factors affecting defense export sales and that no one factor is paramount in every sale. These studies indicate that (1) each sale has its own unique set of circumstances and (2) the outcome is dependent on various factors. For example, the OMB study on financing defense exports concluded that each customer's decision-making process on defense acquisitions is sufficiently different that it is impossible to draw definitive conclusions about the relative importance of any one factor. While the study was conducted to determine the need for defense export financing, it found that other factors influence defense sales, such as price, technical sophistication of the equipment, the cost and availability of follow-on support, system performance, lead time from placement of order to delivery, the availability of training, political influence, and the financial and economic conditions of purchasing countries. The OTA study identified co-production and technology transfer as factors that can influence a defense sale. This study noted that countries that desire to develop their own defense industries are likely to consider access to technology when buying defense goods.
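As a cross-check on the market figures cited above, the reported declines and shares can be reproduced directly from the delivery and order values in this section; a minimal sketch follows. The computed 1993 U.S. share comes to 50 percent rather than the reported 49 percent, a difference attributable to rounding in the underlying dollar figures.

def pct_change(old, new):
    # Percent change from the earlier value to the later value.
    return (new - old) / old * 100

print(f"Global deliveries, 1987 ($77B) to 1993 ($28B): {pct_change(77, 28):.0f}%")         # about -64 percent
print(f"U.S. deliveries, 1990 ($18B) to 1993 ($14B): {pct_change(18, 14):.0f}%")           # about -22 percent
print(f"U.S. share of 1993 global deliveries: {14 / 28:.0%}")                              # 50 percent (reported as 49)
print(f"U.S. defense orders, FY 1993 ($32B) to FY 1994 ($13B): {pct_change(32, 13):.0f}%") # about -59 percent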
In our May 1991 testimony before the House Committee on Banking, Finance and Urban Affairs on a proposal to finance defense export sales, we pointed out that it is difficult to quantify the effect of financing on defense sales because of all the other factors involved in the decision-making process. In addition to the factors cited by OMB and OTA, we noted the importance of offsets to a buying country when deciding between competitors in a defense sale. Industry representatives and government officials in the United States and Europe cited numerous factors that are important to defense export sales, but had differing views on what factors contributed to winning a specific defense sale. These officials cited the same factors identified by earlier government studies, including offsets, political ties, and price and quality of a product. However, when discussing the outcome of any particular sale, U.S. government officials and industry representatives identified different reasons for that outcome. For example, in the recent German tank sale to Sweden, U.S. government officials identified offsets as the deciding factor in the sale, while an industry representative believed that the historical ties between Sweden and Germany were the reason why the German tank was chosen. In a sale of French tanks to the United Arab Emirates, U.S. government officials considered offsets to be the more important determinant in the sale, while an industry representative cited historical relationships between the buyer and the seller as the primary factor. Moreover, several U.S. and European government officials and industry representatives stated that potential customers abroad view domestic procurement of a product as an important endorsement of confidence and one that helps lower unit costs by increasing the economies of scale associated with a system. These officials added that it is very difficult for a company to sell a defense article if its own country's defense department or ministry does not use the equipment. For example, according to a U.S. government official, Northrop's F-20 was designed specifically for export; however, Northrop was unable to sell the aircraft overseas, in part, because the U.S. government did not purchase it for domestic use. Further, because of the large size of the U.S. domestic defense market, European businesses feel that they are at a disadvantage with respect to their U.S. competitors, according to a 1992 survey conducted by the major French land-defense industry association and the consulting firm Ernst & Young. We found that France, Germany, the United Kingdom, and the United States generally provide the same types of assistance, but the extent and structure of the assistance vary. All three European countries provide some form of government-backed export credit guarantees for both non-defense and defense exports as a means to provide security assistance and promote sales of their defense products. Data on the value of guarantees for defense exports, however, was available only in the United Kingdom. During fiscal year 1993/1994, the United Kingdom guaranteed $2.9 billion in defense exports. France and Germany report total export financing and do not differentiate between defense and non-defense export financing. Therefore, we were unable to obtain information on the extent of guarantees provided to defense exports in either country. In the United States, government financing is provided through the Foreign Military Financing (FMF) program.
According to DOD officials, FMF is provided as an instrument to advance U.S. foreign policy and national security interests rather than a means to promote U.S. exports. In fiscal year 1994 the United States used the program to provide about $3.1 billion in grants, mostly to Israel and Egypt, and $0.8 billion in loans to Greece, Turkey, and Portugal. Applicable U.S. legislation provides that FMF grants are generally intended to fund purchases of U.S. military goods and related services. It is unlikely U.S. contractors would lose sales to foreign competitors for FMF grant-funded purchases. The U.S. government is fully funding the purchase of U.S. military goods and services by other countries, thus giving U.S. companies an advantage over foreign competitors that are only offering government guarantees on loans. In addition, in fiscal year 1994, the Defense Security Assistance Agency waived about $273 million in research and development costs on foreign military sales to nine allied countries. U.S. commercial banks provide some financing of defense exports; however, the U.S. government does not guarantee such financing. The Export-Import Bank of the United States is prohibited from providing loans or guarantees for purchasing defense articles or services unless requested to do so by the President. Limited export financing is also provided at the state level. For example, from July 1988 to November 1994 the state of California provided about $26 million in loan guarantees to California-based defense companies. The French and U.K. governments have historically sent high-level government officials, such as ministers of defense, ambassadors, or prime ministers, to persuade foreign buyers to buy their national defense products. The German government has generally avoided using high-level government officials to promote defense exports, in part because defense exports are a politically sensitive issue in Germany. In the United States, defense exports have traditionally been approved to further U.S. national security and foreign policy goals. Nevertheless, as part of the U.S. government's emphasis on overall export promotion efforts, high-ranking U.S. officials have been increasingly willing to intervene to influence competitions in favor of U.S. defense companies. However, DOD policy indicates that U.S. officials should support the marketing efforts of U.S. companies but maintain strict neutrality between U.S. competitors. During the competition for the United Kingdom's Skynet-4 Satellite launch vehicle, U.S. government officials intervened at a high level on behalf of U.S. defense exporters. According to an industry representative involved in this sale, the U.K. Ministry of Defence split the contract between the U.S. company and the French as a result of intervention by the U.S. Ambassador and the Secretary of Commerce. The official stated that without U.S. government involvement, the French manufacturer would have received the entire $1-billion contract. France and the United Kingdom each have a single organization within their respective defense ministries with responsibility for identifying defense export opportunities abroad, promoting and facilitating defense exports, providing assistance with defense equipment demonstrations and trade shows, and providing advice to industry regarding offsets. In France this organization is known as the Delegation for International Relations. In the United Kingdom this organization is known as the Defence Export Services Organisation. 
Although Germany does not have a defense ministry organization comparable to that of France or the United Kingdom, German companies involved in cross-border collaborative efforts with those countries are able to benefit indirectly from the export promotion activities of the French and U.K. organizations. While the United States has no centralized government organization with a comparable export promotion role, the Departments of Defense, Commerce, and State each provide similar support for U.S. defense exports. The Departments of Commerce, Defense, and State were given the opportunity to comment on a draft of this report. Defense concurred with the report. Commerce wrote that it had reviewed the draft report and did not have any comments. State, in general, agreed with our analysis and conclusions and found the draft report to be an accurate reflection of the international competition for military export contracts. State also commented that offsets play a major role in determining which firms obtain contracts and foreign governments are eager to support offset arrangements to obtain a competitive advantage. In addition, State noted that sales of conventional arms are a legitimate instrument of U.S. foreign policy deserving U.S. government support when they help friends and allies deter aggression, promote regional stability, and increase interoperability of U.S. and allied forces. However, State pointed out that an examination of the dynamics of regional power balances and the potential for destabilizing changes in the region is required for each specific sale. We have made minor factual revisions to the report where appropriate based on technical comments provided by Defense and State. We did our work between January 1994 and February 1995 in accordance with generally accepted government auditing standards. A discussion of our scope and methodology is in appendix I. More information on government support to enhance the competitiveness of defense products is provided in appendix II. The comments of the Departments of Defense, State, and Commerce are presented in appendixes III, IV, and V, respectively. We are sending copies of this report to the Secretaries of Defense, Commerce, and State and the appropriate congressional committees. Copies will also be available to other interested parties on request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Other major contributors to this report are listed in appendix VI. Because of the continuing debate on how much support to provide to defense exporters, we reviewed conditions in the global defense export market and the tools used by France, Germany, the United Kingdom, and the United States to enhance the competitiveness of their defense exports. Specifically, we compared the U.S. position in the global defense market relative to its major competitors and analyzed the various factors that can contribute to a sale, including export financing and other types of government support. For our review, we selected France, Germany, and the United Kingdom because they (1) represent the major competitors to U.S. defense exporters in terms of the value of exports sold and (2) sell to approximately the same buyers. In 1993 these four countries represented 81 percent of the world's total defense market. 
Together, Russia and China represented 13 percent of the total market, but were not part of this review because a large share of Russian and Chinese defense products are sold to countries to which the United States would not sell. While several U.S. government agencies collect information on defense exports, it is difficult to compare their analyses because each agency uses different methodologies for collecting and reporting the data. We used mostly Congressional Research Service (CRS) data on defense exports for calendar years 1990 to 1993 to compare the U.S. position in the global defense market relative to its European competitors. We also used more current data on French defense exports, in terms of deliveries, provided by the U.S. government. This newer data increased the level of French defense exports, in both absolute and relative terms, above that previously reported by CRS. Further, we used calendar year data rather than fiscal year data because data on European defense exports is reported on a calendar year basis. We did not independently verify CRS data, but the data is generally accepted among government agencies as dependable. In addition, we used the State Department's Office of Defense Trade Controls data on deliveries of U.S. direct commercial sales, because CRS does not include that data in its annual reports on global arms sales. To determine the U.S. position in the global defense market in the near future, we used the value of U.S. defense orders as reported by CRS. However, the value of these orders includes only those placed through the Foreign Military Sales program and does not include orders placed by direct commercial means. While the State Department reports the value of export licenses approved for direct commercial sales, it does not report the value of actual defense orders placed as a result of those licenses. The value of direct commercial sales deliveries as a result of those licenses, according to government documents, may be as little as 40 to 60 percent of the value originally reported when the license was approved. The State Department reported that it issued about $87 billion in licenses from fiscal year 1990 to 1993. In analyzing the various factors that contributed to winning a defense sale, we held discussions with U.S. government and defense company officials responsible for tracking U.S. defense sales. In addition, we reviewed prior government reports on the subject. To obtain information on U.S. defense export promotion efforts, we reviewed numerous government and nongovernment studies and reports on the subject. In addition, we interviewed officials at the Departments of Defense, Commerce, and State, and the Defense Security Assistance Agency; U.S. defense company officials located in the United States and Europe; and trade organizations. We also spoke to officials from the Office of Management and Budget, the Export-Import Bank, the Banker's Association for Foreign Trade, and six commercial banks, to obtain additional information on defense export financing. To obtain information on European countries' export promotion programs, we discussed with, and analyzed documents from, officials involved in their countries' defense export promotion activities. This group included officials from national governments, academia, and European defense companies. We also met with officials from the Department of Defense's Office of Defense Cooperation and the Department of Commerce's U.S. and Foreign Commercial Service offices.
We also attended the Eurosatory Land Show in Paris, France, to observe U.S. exporters and their competitors at a major defense trade show. To convert French francs and British pounds to U.S. dollars, we used the following exchange rates. To report on France's Delegation for International Relations annual budget, we used the average calendar year 1994 exchange rate. To report on the U.K.'s Defence Export Services Organisation annual budget and the amount of defense export financing provided by the Export Credits Guarantee Department, we used the exchange rate at the end of the U.K. fiscal years ending March 31, 1993, and March 31, 1994. We sought to report on multilateral agreements on defense trade and found that no such agreements exist. Approaches to financing defense exports vary among the four countries. Such financing includes the use of various financial instruments, including grants, loans, and guarantees. In the United States, most financing is provided through the government's Foreign Military Financing (FMF) program, with limited financing provided by commercial banks. Some financing is also available at the state level. Also, to increase the competitiveness of U.S. firms, the government decided in 1992 to cancel, on some sales, fees that recovered part of the government's investment in a weapon system. In fiscal year 1994 the United States used the FMF program to provide about $3.1 billion in grants--mostly to Israel and Egypt--and $0.8 billion in loans to Greece, Turkey, and Portugal. The FMF program enables U.S. allies to buy U.S. defense goods and related services and training. Congress often specifies the extent of assistance to certain countries. Most grants and loans are used to purchase U.S. defense products, although a designated amount of FMF funding is permitted to be spent on procurement in Israel. In fiscal year 1994 Israel was permitted to spend at least $475 million of its grant assistance on procurement in Israel. FMF program funding has decreased since 1990, when the program provided over $4.8 billion in loans and grants. The U.S. government does not guarantee commercial financing for defense exports. Further, the Export-Import Bank of the United States is prohibited from providing loans or guarantees for purchasing defense equipment. Therefore, according to U.S. bank officials, U.S. commercial banks provide few financial services for defense exports, partly because of concerns that such services might generate negative publicity. Senior bank managers approve defense export financing transactions on a case-by-case basis. Financing is provided for defense transactions that are low risk and carry a short repayment schedule. According to bank officials, repayment terms of commercial loans for defense exports generally do not exceed 2 years. These officials further stated that commercial banks are reluctant to provide financing to foreign countries without some type of U.S. government guarantee program. Moreover, even with such a program, some banks would still be reluctant to provide financing for defense exports because of concerns about negative publicity. Some export financing is provided at the state level. For example, the state of California provides export financing for its defense companies. From July 1988 through November 1994 California provided about $26 million in loan guarantees for 77 transactions to California-based defense companies. At the time of this review, 30 states provided export financing.
However, data on export financing is not separated out by defense and nondefense exports; therefore, we were not able to determine how many states, other than California, provided financing for defense exports. For years the price of U.S. military exports generally included a Department of Defense (DOD) charge to recover a portion of its nonrecurring research and development costs. In 1992, the policy of recovering these costs on sales made directly between a U.S. contractor and a foreign government was canceled in an effort to increase the competitiveness of U.S. firms in the world market. In addition, the Arms Export Control Act, which generally requires recovery of such costs on government-to-government sales, permits DOD to waive or reduce such charges on sales to North Atlantic Treaty Organization countries, Australia, New Zealand, and Japan in furtherance of standardization and mutual defense treaties. In fiscal year 1994, DOD recovered $181 million in such costs but waived about $273 million. Recently, the executive branch has proposed that Congress repeal the requirement to collect such charges on future government-to-government sales. All three European countries provide some form of government-backed export credit guarantees for both nondefense and defense exports. Export credit guarantees are a form of insurance covering risk of loss due to such factors as exchange rate fluctuations or buyer nonpayment. They can allow access to financing for exporters extending credit to their buyers and for overseas buyers borrowing directly from banks. Data on the value of guarantees for defense exports, however, was available only in the United Kingdom. France and Germany report total export financing and do not differentiate between defense and nondefense export financing. Thus, we were unable to obtain information on the extent of guarantees provided for defense exports in either country. During fiscal year 1993/1994, the United Kingdom's Export Credits Guarantee Department (ECGD) guaranteed about $6.1 billion in exports, of which $2.9 billion (or 48 percent) was for defense exports. About 90 percent of the $2.9 billion was for defense equipment sold to countries in the Middle East, mostly to Kuwait, Oman, Qatar, and Saudi Arabia. Among industry sectors, military aircraft represented about 40 percent of the $2.9 billion total, military vehicles represented about 39 percent, and naval vessels represented about 21 percent. In fiscal year 1992/1993, ECGD guaranteed about $5.8 billion in exports, of which $2.4 billion (or 42 percent) was for defense exports. About 57 percent of the $2.4 billion was for defense equipment sold to countries in the Far East and about 43 percent of the total was for equipment sold to the Middle East. Among industry sectors, naval vessels represented about 39 percent of the $2.4 billion total, military aircraft represented about 32 percent, and munitions and missiles represented about 27 percent. The French and U.K. governments have historically sent ministers of defense, ambassadors, or prime ministers to persuade foreign buyers to buy their national defense products. The German government has generally avoided using high-level government officials to promote defense exports, in part because such exports are a sensitive political issue in Germany. In the United States, defense exports have been approved to further U.S. national security and foreign policy goals. Nevertheless, as part of the U.S.
government's emphasis on overall export promotion efforts, high-ranking U.S. officials have been increasingly willing to intervene to influence competitions in favor of U.S. defense companies. An example of high-level government advocacy is the Swedish government's purchase of the German Leopard 2 tank. The German Chancellor and Minister of Defense advocated on behalf of the German Leopard 2 tank, which, according to U.S. government officials, led to Sweden purchasing it over the French or U.S. tank. Other factors contributing to Sweden's choice included the German manufacturer's promise to buy Swedish defense material and services worth the full value of the tanks it was exporting to Sweden. France and the United Kingdom each have a single organization within their respective defense ministries with responsibility for identifying defense export opportunities abroad, promoting and facilitating defense exports, providing assistance with defense equipment demonstrations and trade shows, and providing advice to industry regarding offsets. Although Germany does not have a defense ministry organization comparable to those of France or the United Kingdom, German companies involved in cross-border collaborative efforts with those countries are able to benefit indirectly from the export promotion activities of the French and U.K. organizations. While the United States has no centralized government organization with a comparable export promotion role, several U.S. government agencies provide similar support for U.S. defense exports. In France, the Ministry of Defense's Delegation for International Relations (DRI) is responsible for facilitating and promoting French global defense sales. DRI assigns defense attaches overseas to promote military and armament relations with other countries. DRI also subsidizes missions for small businesses to participate in events such as trade shows. DRI employs roughly 200 staff--about 60 are involved in facilitating and promoting defense sales, with the remaining staff involved in export control activities and oversight of cooperation activities with allied nations. DRI has an annual budget of $7 million, which is used in a variety of ways, including Ministry of Defense participation in trade shows and subsidizing small business missions to participate in those shows. DRI also serves as a liaison between the Ministries of Defense and Industry, which, according to DRI officials, is the most important support provided to the French defense industry. While DRI promotes and facilitates sales, sales are primarily handled either by defense companies themselves or by various marketing and sales organizations. The French government owns 49.9 percent of the Defense Conseil International (DCI); the remaining 50.1 percent is owned by private-sector marketing and sales organizations. DCI serves as a consultant to buying countries to help them define their operational needs, weapon requirements, and specifications. In the United Kingdom, the Ministry of Defence's Defence Export Services Organisation (DESO) is responsible for assisting in the marketing and sales efforts of U.K. defense companies overseas, whether the equipment is manufactured nationally or in collaboration with others.
DESO serves as a focal point for all defense sales and service matters, including advising firms on defense market prospects on a worldwide, regional, or country basis; providing marketing and military assistance in support of sales; organizing exhibitions, missions, and demonstrations; providing advice on export and project financing; ensuring that overseas sales consideration is given due weight in the U.K. Ministry of Defence's own procurement process; briefing companies new to the defense sector and to exporting; and monitoring offset agreements. DESO's budget for fiscal year 1992/1993 was about $25.9 million. DESO has approximately 350 staff--about 100 in marketing services, 50 in general policy, and 200 in direct project work. DESO concentrates primarily on supporting higher-value exports, although smaller companies also benefit from DESO guidance on such matters as how best to pursue potential subcontracts. In addition, larger companies rely on DESO to serve as a liaison with high-level U.K. and foreign government officials. The Departments of Defense, Commerce, and State each provide support in promoting U.S. defense exports. Moreover, the U.S. government has long recognized the positive impact that defense exports can have on the defense industrial base. In 1990 the U.S. government began to give more prominence to the economic value of defense exports. At that time, the Secretary of State directed overseas personnel to assist defense companies in marketing efforts. The Secretary added that individuals marketing U.S. defense products should receive the same courtesies and support offered to persons marketing any other U.S. product. More recently, the U.S. government announced its National Export Strategy, which is designed to establish a framework for strengthening U.S. export promotion efforts. Although the strategy does not target defense exports, some recommendations for improving export promotion activities could benefit defense exports. For example, the strategy recommended that overseas posts prepare country commercial guides. The guides are to include information on the host country's best export prospects for U.S. companies, which may include defense exports. These guides are to be made available to the public through the Department of Commerce's National Trade Data Bank. In February 1995, the President announced his conventional arms transfer policy, which included, as one of its principal goals, enhancing the U.S. defense industry's ability to meet U.S. defense requirements and maintain long-term military technological superiority at lower costs. The announcement indicated that once a proposed arms transfer is approved, the U.S. government will take such steps as (1) tasking U.S. embassy personnel to support overseas marketing efforts of American companies bidding on defense contracts, (2) actively involving senior government officials in promoting sales of particular importance to the United States, and (3) supporting DOD participation in international air and trade shows. As part of the U.S. security assistance program, the Defense Security Assistance Agency and the military services implement the Foreign Military Sales program, through which most U.S. defense sales are made. U.S. security assistance personnel stationed overseas are primarily responsible for security assistance and defense cooperation activities in the host country. When requested, these personnel provide information and support to U.S.
industry on business opportunities in the host country, including information on the buying country's defense budget cycle, national procurement process, and estimates of equipment the country currently needs to fill defense requirements or likely future procurement plans. In addition, the Defense Security Assistance Agency coordinates DOD participation in international air shows and trade exhibitions. The military services lease equipment to U.S. defense companies for display or demonstration at such events. The Department of Commerce has primary responsibility for export promotion and has recently expanded its export promotion activities to include defense exports. For example, Commerce prepares market research reports on various countries. These reports identify trade opportunities in the host country, including those in defense trade. The reports also include information on market assessment, best sales prospects, the competitive situation, and market access, and are made available to the public through the National Trade Data Bank. Other activities include preparing U.S. and Foreign Commercial Service Officer guidance on supporting defense exports. This guidance directs officers to provide information similar to that provided by the Defense Security Assistance Agency and the military services. Moreover, the Departments of Commerce, State, and Defense participate in defense industry liaison working groups to assess ways of improving U.S. government support for U.S. defense exporters. The following is GAO's comment on the Department of Defense's (DOD) letter dated March 8, 1995. 1. We have not included DOD's technical annotations to our draft report but have incorporated them in the text where appropriate. The following are GAO's comments on the Department of State's letter dated March 17, 1995. 1. We have modified the report to reflect this comment. 2. We have not included the attached list of suggested editorial changes but have incorporated them in the text where appropriate. Major contributors to this report included Mary R. Offerdahl and Cherie M. Starck.
GAO reviewed the global defense export market and the tools used by the United States and three major foreign competitors to enhance the competitiveness of their defense exports. GAO found that: (1) the United States has been the world's leading defense exporter since 1990; by 1993 its market share had increased to 49 percent of the global market; (2) the increased U.S. market share occurred during a period of worldwide decreases in total defense exports; (3) the three European countries reviewed (France, Germany, and the United Kingdom) had in 1993 a combined global market share of about 32 percent of total defense exports, which had also increased since 1990; (4) in the short term, at least, the United States will likely remain strong in the world market; however, further growth in its market share will be limited by a number of factors, including U.S. policies to reduce dangerous or destabilizing arms transfers to certain countries and the practice among certain major foreign buyers of diversifying weapons purchases among multiple suppliers; (5) government involvement in the defense industry's sales affects the position of defense manufacturers in overseas markets, but other factors also influencing defense sales include technical sophistication and performance, the cost and availability of follow-on support and training, price, financing, and offset arrangements; (6) government policies and programs can also affect these other factors; (7) because each sale has its own unique set of circumstances, it is not possible to quantify or rank the contribution of any one factor across the board; (8) the U.S. government has long recognized the positive impact that defense exports can have on the defense industrial base; (9) in 1990, the Secretary of State directed overseas missions to support the marketing efforts of U.S. defense companies as in all other areas of commercial activity; (10) governments in France, Germany, the United Kingdom, and the United States generally provide comparable types of support, including: (a) government-backed or -provided export financing; (b) advocacy on behalf of defense companies by high-level government officials; and (c) organizational entities that promote defense exports; (11) although all four countries generally provide comparable types of assistance to their defense exporters in these areas, the extent and structure of such assistance vary; (12) central organizations support defense exports in France and the United Kingdom, while in the United States several government agencies share in supporting defense exports; and (13) all three European countries provide government-backed guarantees for commercial bank loans, while in the United States, financing is provided primarily through the Foreign Military Financing Program in the form of grants and loans and is available only to a small group of countries.
Over the past 8 years, DOD has designated over 33,000 servicemembers involved in Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) as wounded in action. The severity of injuries can result in a lengthy process for a patient to either return to duty or to transition to veteran status. The most seriously injured servicemembers from these conflicts usually receive care at Walter Reed Army Medical Center or the National Naval Medical Center. According to DOD officials, once they are stabilized and discharged from the hospital, servicemembers may relocate closer to their homes or military bases and be treated as outpatients by the closest military or VA facility. Recovering servicemembers potentially navigate two different disability evaluation systems that serve different purposes. DOD's system serves a personnel management purpose by identifying servicemembers who are no longer medically fit for duty. If a servicemember is found unfit because of medical conditions incurred in the line of duty, the servicemember is assigned a disability rating and can be discharged from duty. This disability rating, along with years of service and other factors, determines subsequent disability and health care benefits from DOD. Under VA's system, disability ratings help determine the level of disability compensation a veteran receives and priority status for enrollment for health care benefits. To determine eligibility for disability compensation, VA evaluates all claimed medical conditions, whether they were evaluated previously by the military service's evaluation process or not. If VA finds that a veteran has one or more service-connected disabilities that together result in a final rating of at least 10 percent, VA will pay monthly compensation and the veteran will be eligible to receive medical care from VA. Efforts have been made to address the deficiencies reported at Walter Reed related to the care provided to, and the transitioning of, recovering servicemembers. After the press reports about Walter Reed, several high-level review groups were established to study the care and benefits provided to recovering servicemembers by DOD and VA. The studies produced by all of these groups, released from April 2007 through June 2008, contained over 400 recommendations covering a broad range of topics, including case management, disability evaluation systems, data sharing between the departments, and the need to better understand and diagnose traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD). In May 2007, DOD and VA established the Senior Oversight Committee (SOC) as a temporary, 1-year committee with responsibility for addressing recommendations from these reports. To conduct its work, the SOC established eight work groups called lines of action (LOA). Each LOA is co-chaired by representatives from DOD and VA and has representation from each military service. LOAs are responsible for specific issues, such as disability evaluation systems and case management. (See table 1 for an overview of the LOAs.) The committee was originally intended to expire in May 2008, but it was extended to January 2009. Then, the National Defense Authorization Act for Fiscal Year 2009 (NDAA 2009) extended the SOC through December 2009. In addition to addressing the published recommendations, the SOC assumed responsibility for addressing the policy development and reporting requirements contained in the NDAA 2008.
Section 1611(a) of the NDAA 2008 directs DOD and VA, to the extent feasible, to develop and implement a comprehensive policy covering four areas--(1) care and management, (2) medical evaluation and disability evaluation, (3) the return of servicemembers to active duty, and (4) the transition of recovering servicemembers from DOD to VA. The specific requirements for each of these four areas are further enumerated in sections 1611 through 1614 of the law and would include the development of multiple policies. Table 2 summarizes the requirements for the jointly developed policies. Since its inception, the SOC has completed many initiatives, such as establishing the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury and creating a National Resource Directory, which is an online resource for recovering servicemembers, veterans, and their families. In addition, the SOC has undertaken initiatives specifically related to the requirements contained in sections 1611 through 1614 of the NDAA 2008. Specifically, the SOC supported the development of several programs to improve the care and management of benefits to recovering servicemembers, including the disability evaluation system pilot and the Federal Recovery Coordination Program. These programs are currently in pilot or beginning phases: Disability evaluation system pilot: DOD and VA are piloting a joint disability evaluation system to improve the timeliness and resource use of their separate disability evaluation systems. Key features of the pilot include a single physical examination conducted to VA standards by the medical evaluation board that documents medical conditions that may limit a servicemember's ability to serve in the military, disability ratings prepared by VA for use by both DOD and VA in determining disability benefits, and additional outreach and nonclinical case management provided by VA staff at the DOD pilot locations to explain VA results and processes to servicemembers. DOD and VA anticipate a final report on the pilot in August 2009. Federal Recovery Coordination Program: In 2007, DOD and VA established the Federal Recovery Coordination Program in response to the report by the President's Commission on Care for America's Returning Wounded Warriors, commonly referred to as the Dole-Shalala Commission. The commission's report highlighted the need for better coordination of care and additional support for families. The Federal Recovery Coordination Program serves the most severely injured or ill servicemembers, or those who are catastrophically injured. These servicemembers are highly unlikely to be able to return to duty and will have to adjust to permanent disabling conditions. The program was created to provide uniform and seamless care, management, and transition of recovering servicemembers and their families by assigning recovering servicemembers to coordinators who manage the development and implementation of a recovery plan. Each servicemember enrolled in the Federal Recovery Coordination Program has a Federal Individual Recovery Plan, which tracks care, management, and transition through recovery, rehabilitation, and reintegration. Although the Federal Recovery Coordination Program is operated as a joint DOD and VA program, VA is responsible for the administrative duties and program personnel are employees of the agency. 
Beyond these specific initiatives, the SOC took responsibility for issues related to electronic health records through the work of LOA 4, the SOC's work group focused on DOD and VA data sharing. This LOA also addressed issues more generally focused on joint DOD and VA data needs, including developing components for the disability evaluation system pilot and the individual recovery plans for the Federal Recovery Coordination Program. LOA 4's progress on these issues was monitored and overseen by the SOC. The NDAA 2008 established an interagency program office (IPO) to serve as a single point of accountability for both departments in the development and implementation of interoperable electronic health records. Subsequently, management oversight of many of LOA 4's responsibilities was transferred to the IPO. Also, the IPO's scope of responsibility was broadened to include personnel and benefits data sharing between DOD and VA. As of April 2009, DOD and VA have completed 60 of the 76 requirements we identified for jointly developing policies for recovering servicemembers on (1) care and management, (2) medical and disability evaluation, (3) return to active duty, and (4) servicemember transition from DOD to VA. The two departments have completed all requirements for developing policy for two of the policy areas--medical and disability evaluation and return to active duty. Of the 16 requirements that are in progress, 10 are related to care and management and 6 are related to servicemembers transitioning from DOD to VA. (See table 3.) We found that more than two-thirds of the requirements for DOD's and VA's joint policy development to improve the care and management of recovering servicemembers have been completed, while the remaining requirements are in progress. (See table 4.) We identified 38 requirements for this policy area and grouped them into five categories. Although 28 of the 38 requirements had been completed, one category--improving access to medical and other health care services--had most of its requirements in progress. Most of the completed requirements were addressed in DOD's January 2009 Directive-Type Memorandum (DTM), which was developed in consultation with VA. This DTM, entitled Recovery Coordination Program: Improvements to the Care, Management, and Transition of Recovering Service Members, establishes interim policy for improvements to the care, management, and transition of recovering servicemembers in response to sections 1611 and 1614 of the NDAA 2008. In consultation with VA, DOD created the Recovery Coordination Program in response to the NDAA 2008 requirements. This program, which was launched in November 2008, extended the same comprehensive coordination and transition support provided under the Federal Recovery Coordination Program to servicemembers who were less severely injured or ill, yet who still were unlikely to return to duty and continue their careers in the military. This program follows the same structured process as the Federal Recovery Coordination Program. However, DOD oversees this program and the coordinators are DOD employees. DOD's January 2009 DTM includes information on the scope and program elements of the Recovery Coordination Program as well as on the roles and responsibilities of the recovery care coordinators, federal recovery coordinators, and medical care case managers and non-medical care managers.
According to DOD officials, DOD took the lead in developing policy to address the requirements for care and management because it interpreted most of the requirements to refer to active duty servicemembers. According to DOD and VA officials, the January 2009 DTM serves as the interim policy for care, management, and transition until the completion of DOD's comprehensive policy instruction, which is estimated to be completed by June 2009. This policy instruction will contain more detailed information on the policies outlined in the DTM. A VA official told us that VA also plans to issue related policy guidance as part of a VA handbook in June 2009. The VA official noted that the final form of the policy document would correspond with DOD's instruction. DOD and VA have completed all of the requirements for developing policy to improve the medical and physical disability evaluation of recovering servicemembers. (See table 5.) We identified 18 requirements for this policy area and grouped them into three categories: (1) policy for improved medical evaluations, (2) policy for improved physical disability evaluations, and (3) reporting on the feasibility and advisability of consolidating DOD and VA disability evaluation systems. DOD issued a series of memoranda that addressed the first two categories starting in May 2007. These memoranda, some of which were developed in collaboration with VA, contained policies and implementing guidance to improve DOD's existing disability evaluation system. To address the third category in this policy area, DOD and VA have issued a report to Congress that describes the organizing framework for consolidating the two departments' disability evaluation systems and states that the departments are hopeful that consolidation would be feasible and advisable even though the evaluation of this approach through the disability evaluation system pilot is still ongoing. According to an agency official, further assessment of the feasibility and advisability of consolidation will be conducted. DOD and VA anticipate issuing a final report on the pilot in August 2009. However, as we reported in September 2008, it was unclear what specific criteria DOD and VA will use to evaluate the success of the pilot, and when sufficient data will be available to complete such an evaluation. DOD has completed the requirement for establishing standards for determining the return of recovering servicemembers to active duty. (See table 6.) On March 13, 2008, DOD issued a DTM amending its existing policy on retirement or separation due to a physical disability. The revised policy states that the disability evaluation system will be the mechanism for determining both retirement or separation and return to active duty because of a physical disability. An additional revision to the existing DOD policy allows DOD to consider requests for permanent limited active duty or reserve status for servicemembers who have been determined to be unfit because of a physical disability. Previously, DOD could consider such cases only as exceptions to the general policy. According to a DOD official, it is too early to tell whether the revisions will have an effect on retirement rates or return-to-duty rates. DOD annually assesses the disability evaluation system and tracks retirement and return to duty rates. 
However, because of the length of time a servicemember takes to move through the disability evaluation system--sometimes over a year--it will take a while before changes due to the policy revisions register in the annual assessment of the disability evaluation system. DOD and VA have completed more than two-thirds of the requirements for developing procedures, processes, or standards for improving the transition of recovering servicemembers. (See table 7.) We identified 19 requirements for this policy area, and we grouped them into five categories. We found that 13 of the 19 policy requirements have been completed, including all of the requirements for two of the categories--the development of a process for a joint separation and evaluation physical examination and development of procedures for surveys and other mechanisms to measure patient and family satisfaction with services for recovering servicemembers. The remaining three categories contain requirements that are still in progress. Most of the requirements for improving the transition from DOD to VA were addressed in DOD's January 2009 DTM--Recovery Coordination Program: Improvements to the Care, Management, and Transition of Recovering Service Members--that establishes interim policy for the care, management, and transition of recovering servicemembers through the Recovery Coordination Program. However, we found that DOD's DTM includes limited detail related to the procedures, processes, and standards for transition of recovering servicemembers. As a result, we could not always directly link the interim policy in the DTM to the specific requirements contained in section 1614 of the NDAA 2008. DOD and VA officials noted that they will be further developing the procedures, processes, and standards for the transition of recovering servicemembers in a subsequent comprehensive policy instruction, which is estimated to be completed by June 2009. A VA official reported that VA plans to separately issue policy guidance addressing the requirements for transitioning servicemembers from DOD to VA in June 2009. DOD and VA officials told us that they experienced numerous challenges as they worked to jointly develop policies to improve the care, management, and transition of recovering servicemembers. According to officials, these challenges contributed to the length of time required to issue policy guidance, and in some cases the challenges have not yet been completely resolved. In addition, challenges have arisen during the initial implementation of some of the NDAA 2008 policies. Finally, recent changes to the SOC staff, including DOD's organizational changes for staff supporting the SOC, could pose challenges to the development of policy affecting recovering servicemembers. DOD and VA officials encountered numerous challenges during the course of jointly developing policies to improve the care, management, and transition of recovering servicemembers, as required by sections 1611 through 1614 of the NDAA 2008, in addition to responding to other requirements of the law. Many of these challenges have been addressed, but some have yet to be completely resolved. DOD and VA officials cited the following examples of issues for which policy development was particularly challenging. Increased support for family caregivers. The NDAA 2008 includes a number of provisions to strengthen support for families of recovering servicemembers, including those who become caregivers. 
However, DOD and VA officials on a SOC work group stated that before they could develop policy to increase support for such families, they had to obtain concrete evidence of their needs. Officials explained that while they did have anecdotal information about the impact on families who provide care to recovering servicemembers, they lacked the systematic data needed for sound policy decisions--such as frequency of job loss and the economic value of family-provided medical services. A work group official told us that their proposals for increasing support to family caregivers were rejected twice by the SOC, due in part to the lack of systematic data on what would be needed. The work group then contracted with researchers to obtain substantiating evidence, a study that required 18 months to complete. In January 2009, the SOC approved the work group's third proposal and family caregiver legislation is being prepared, with anticipated implementation of new benefits for caregivers in fiscal year 2010. Establishing standard definitions for operational terms. One of the important tasks facing the SOC was the need to standardize key terminology relevant to policy issues affecting recovering servicemembers. DOD took the lead in working with its military services and VA officials to identify and define key terms. DOD and VA officials told us that many of the key terms found in existing DOD and VA policy, the reports from the review groups, and the NDAA 2008, as well as those used by the different military services are not uniformly defined. Consequently, standardized definitions are needed to promote agreement on issues such as identifying the recovering servicemembers who are subject to NDAA 2008 requirements, identifying categories of servicemembers who would receive services from the different classes of case managers or be eligible for certain benefits, managing aspects of the disability evaluation process, and establishing criteria to guide research. In some cases, standardized definitions were critical to policy development. The importance of agreement on key terms is illustrated by an issue encountered by the SOC's work group responsible for family support policy. In this case, before policy could be developed for furnishing additional support to family members that provide medical care to recovering servicemembers, the definition of "family" had to be agreed upon. DOD and VA officials said that they considered two options: to define the term narrowly to include a servicemember's spouse, parents, and children, or to use broader definitions that included distant relatives and unrelated individuals with a connection to the servicemember. These two definitions would result in significantly different numbers of family members eligible to receive additional support services. DOD and VA officials decided to use a broader definition to determine who would be eligible for support. Of the 41 key definitions identified for reconciliation, DOD and VA had concurred on 33 as of March 2009 and these 33 standardized definitions are now being used. Disagreement remains over the remaining definitions, including the definition of "mental health." A DOD official stated that given the uncertainty associated with the organizational and procedural changes recently introduced to the SOC (which are discussed below), obtaining concurrence on the remaining definitions has been given lower priority. Improving TBI and PTSD screening and treatment. 
Requirements related to screening and treatment for TBI and PTSD were embedded in several sections of the NDAA 2008, including section 1611, and were also discussed extensively in a task force report on mental health. DOD and VA officials told us that policy development for these issues was difficult. For example, during development of improved TBI and PTSD treatment policy, policymakers often lacked sufficient scientific information needed to help achieve consensus on policy decisions. Also, members of the SOC work group told us that they disagreed on appropriate models for screening and treatment and struggled to reorient the military services to patient-focused treatment. A senior DOD official stated that the adoption of patient-focused models is particularly difficult for the military services because, historically, the needs of the military have been given precedence over the needs of individual servicemembers. To address these challenges, the SOC oversaw the creation of the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury--a partnership between DOD and VA. While policies continue to be developed on these issues, TBI and PTSD policy remains a challenge for DOD and VA. However, DOD officials told us that the centers of excellence have made progress with reducing knowledge gaps in psychological health and TBI treatment, identifying best practices, and establishing clinical standards of care. Release of psychological health treatment records to DOD by VA health care providers who treat members of the National Guard and Reserve. Section 1614 of the NDAA 2008 requires the departments to improve medical and support services provided to members of the National Guard and Reserves. In pursuing these objectives, VA faced challenges related to the release of medical information to DOD on reservists and National Guard servicemembers who have received treatment for PTSD or other mental health conditions from VA. DOD requests medical information from VA to help make command decisions about the reactivation of servicemembers, but VA practitioners face an ethical dilemma if the disclosure of medical treatment information could compromise servicemembers' medical conditions, particularly for those at risk of suicide. The challenge of sharing and protecting sensitive medical information on servicemembers who obtain treatment at VA was reviewed by the Blue Ribbon Work Group on Suicide Prevention convened in 2008 at the behest of the Secretary of Veterans Affairs. DOD and VA are continuing their efforts to develop policy to clarify the privacy rights of patients who receive medical services from VA while serving in the military, and to protect the confidential records of VA patients who may also be treated by the military's health care system. The need to resolve this challenge assumes even greater importance in light of DOD's and VA's increasing capability to exchange medical records electronically, which will expand DOD's ability to access records of servicemembers who have received medical treatment from VA. In addition to challenges encountered during the joint development of policy for recovering servicemembers, additional challenges have arisen as DOD and VA have begun implementing NDAA 2008 policy initiatives. Medical examinations conducted as part of the DOD/VA disability evaluation system pilot.
In 2007, DOD and VA jointly began to develop policy to improve the disability evaluation process for recovering servicemembers and began pilot testing these new procedures in the disability system. One significant innovation of the disability evaluation system pilot is the use of a single physical examination for multiple purposes, such as for both disability determinations and disability benefits from both departments. In our review of the disability evaluation system pilot, we reported that DOD and VA had tracked challenges that arose during implementation of the pilot but had not yet resolved all of them. For example, one unresolved issue was uncertainty about who will conduct the single physical examination when a VA medical center is not located nearby. Another challenge that could emerge in the future is linked to VA's announcement in November 2008 that it would cease providing physical reexaminations for recovering servicemembers placed on the Temporary Disability Retired List (TDRL). However, VA made an exception to its decision and will continue to provide reexaminations for TDRL servicemembers participating in the disability evaluation system pilot. In March 2009, VA officials told us that they were developing a policy to clarify this issue. Electronic health information sharing between DOD and VA. The two departments have been working for over a decade to share electronic health information and have continued to make progress toward increased information sharing through ongoing initiatives and activities. However, the departments continue to face challenges in managing the activities required to achieve this goal. As we previously reported, the departments' plans to further increase their electronic sharing capabilities do not consistently identify results-oriented performance measures, which are essential for assessing progress toward the delivery of that capability. Further challenging the departments is the need to complete all necessary activities to fully set up their IPO, including hiring a permanent Director and Deputy Director. Defining results-oriented performance goals in its plans and ensuring that they are met is an important responsibility of this office. Until these challenges are fully addressed, the departments and their stakeholders may lack the comprehensive understanding that they need to effectively manage their progress toward achieving increased sharing of information between the departments. Moreover, not fully addressing these challenges increases the risk that DOD and VA may not develop and implement comprehensive policies to improve the care, management, and transition of recovering servicemembers and veterans. Recent changes to staff and working relationships within the SOC could pose future challenges to DOD's and VA's efforts to develop joint policy. Since December 2008, the SOC has experienced turnover in leadership and changes in policy development responsibilities. The SOC is undergoing leadership changes caused by the turnover in presidential administrations as well as turnover in some of its key staff. For example, the DOD and VA deputy secretaries who previously co-chaired the SOC departed in January 2009. As a short-term measure, the Secretaries of VA and DOD have co-chaired a SOC meeting. DOD also introduced other staffing changes to replace personnel who had been temporarily detailed to the SOC and needed to return to their primary duties.
DOD had relied on temporarily-assigned staff to meet SOC staffing needs because the SOC was originally envisioned as a short-term effort. In a December 2008 memo, DOD outlined the realignment of its SOC staff. This included the transition of responsibilities from detailed, temporary SOC staff and executives to permanent staff in existing DOD offices that managed similar issues. For example, the functions of LOA 7 (Legislation and Public Affairs) will now be overseen by the Assistant Secretary of Defense for Legislative Affairs, the Assistant Secretary of Defense for Public Affairs, and the DOD General Counsel. DOD also established two new organizational structures--the Office of Transition Policy and Care Coordination and an Executive Secretariat office. The Office of Transition Policy and Care Coordination oversees transition support for all servicemembers and serves as the permanent entity for issues being addressed by LOA 1 (Disability Evaluation System), LOA 3 (Case Management), and LOA 8 (Personnel, Pay, and Financial Support). The Executive Secretariat office is responsible for performance planning, performance management, and SOC support functions. According to DOD officials, the new offices were created to establish permanent organizations that address a specific set of issues and to enhance accountability for policy development and implementation as these offices report directly to the Office of the Under Secretary of Defense for Personnel and Readiness. Currently, many of the positions in these new offices, including the director positions, are staffed by officials in an acting capacity or are unfilled. DOD's changes to the SOC are important because of the potential effects these changes could have on the development of policy for recovering servicemembers. However, officials in both DOD and VA have mixed reactions about the consequences of these changes. Some DOD officials consider the organizational changes to the SOC to be positive developments that will enhance the SOC's effectiveness. They point out that the SOC's temporary staffing situation needed to be addressed, and also that the two new offices were created to support the SOC and provide focus on the implementation of key policy initiatives developed by the SOC--primarily the disability evaluation system pilot and the new case management programs. In contrast, others are concerned by DOD's changes, stating that the new organizations disrupt the unity of command that once characterized the SOC's management because personnel within the SOC organization now report to three different officials within DOD and VA. However, it is too soon to determine how well DOD's new structure will work in conjunction with the SOC. DOD and VA officials we spoke with told us that the SOC's work groups continue to carry out their roles and responsibilities. Finally, according to DOD and VA officials, the roles and scope of responsibilities of both the SOC and the DOD and VA Joint Executive Council appear to be in flux and may evolve further still. According to DOD and VA officials, changes to the oversight responsibilities of the SOC and the Joint Executive Council are causing confusion. While the SOC will remain responsible for policy matters directly related to recovering servicemembers, a number of policy issues may now be directed to the Joint Executive Council, including issues that the SOC had previously addressed. 
For example, management oversight of many of LOA 4's responsibilities (DOD and VA Data Sharing) has transitioned from the SOC to the IPO, which reports primarily to the Joint Executive Council. LOA 4 continues to be responsible for developing a component for the disability evaluation system pilot and the individual recovery plans for the Federal Recovery Coordination Program. It is not clear how the IPO will ensure effective coordination with the SOC's LOAs for the development of IT applications for these initiatives. Given that IT support for two key SOC initiatives is identified in the joint DOD/VA Information Interoperability Plan, if the IPO and the SOC do not effectively coordinate with one another, the result could negatively affect the development of improved policies for recovering servicemembers. Mr. Chairman, this completes our prepared remarks. We would be happy to respond to any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Randall B. Williamson at (202) 512-7114 or [email protected], Daniel Bertoni at (202) 512-7215 or [email protected], or Valerie C. Melvin at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are listed in appendix II. To summarize the status of the efforts of the Departments of Defense (DOD) and Veterans Affairs (VA) to jointly develop policies for each of the four policy areas outlined in sections 1611 through 1614 of the NDAA 2008, we identified 76 requirements in these sections and grouped related requirements into 14 logical categories. Tables 8 through 11 enumerate the requirements in each of GAO's categories and provide the status of DOD's and VA's efforts to develop policy related to each requirement, as of April 2009. In addition to the contacts named above, Bonnie Anderson, Assistant Director; Mark Bird, Assistant Director; Susannah Bloch; Catina Bradley; April Brantley; Frederick Caison; Joel Green; Lisa Motley; Elise Pressma; J. Michael Resser; Regina Santucci; Kelly Shaw; Eric Trout; and Gregory Whitney made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) requires the Departments of Defense (DOD) and Veterans Affairs (VA) to jointly develop and implement comprehensive policies on the care, management, and transition of recovering servicemembers. The Senior Oversight Committee (SOC)--jointly chaired by DOD and VA leadership--has assumed responsibility for these policies. The NDAA 2008 also requires GAO to report on the progress DOD and VA make in developing and implementing the policies. This statement provides preliminary information on (1) the progress DOD and VA have made in jointly developing the comprehensive policies required in the NDAA 2008 and (2) the challenges DOD and VA are encountering in the joint development and initial implementation of these policies. GAO determined the current status of policy development by assessing the status reported by SOC officials and analyzing supporting documentation. To identify challenges, GAO interviewed the Acting Under Secretary of Defense for Personnel and Readiness, the Executive Director and Chief of Staff of the SOC, the departmental co-leads for most of the SOC work groups, the Acting Director of DOD's Office of Transition Policy and Care Coordination, and other knowledgeable officials. DOD and VA have made substantial progress in jointly developing policies required by sections 1611 through 1614 of the NDAA 2008 in the areas of (1) care and management, (2) medical and disability evaluation, (3) return to active duty, and (4) transition of care and services received from DOD to VA. Overall, GAO's analysis showed that as of March 2009, 60 of the 76 requirements GAO identified have been completed and the remaining 16 requirements are in progress. DOD and VA have completed all of the policy development requirements for medical and disability evaluations, including issuing a report on the feasibility and advisability of consolidating the DOD and VA disability evaluation systems, although the pilot for this approach is still ongoing. DOD has also completed establishing standards for returning recovering servicemembers to active duty. More than two-thirds of the policy development requirements have been completed for the remaining two policy areas--care and management and the transition of recovering servicemembers from DOD to VA. Most of these requirements were addressed in a January 2009 DOD Directive-Type Memorandum that was developed in consultation with VA. DOD officials reported that more information will be provided in a subsequent policy instruction, which will be issued in June 2009. VA also plans to issue related policy guidance in June 2009. DOD and VA officials told GAO that they have experienced numerous challenges as they worked to jointly develop policies to improve the care, management, and transition of recovering servicemembers. According to officials, these challenges contributed to the length of time required to issue policy guidance, and in some cases the challenges have not yet been completely resolved. For example, the SOC must still standardize key terminology relevant to policy issues affecting recovering servicemembers. DOD and VA agreement on key definitions for what constitutes "mental health," for instance, is important for developing policies that define the scope, eligibility, and service levels for recovering servicemembers. Recent changes affecting the SOC may also pose future challenges to policy development. 
Some officials have expressed concern that DOD's recent changes to staff supporting the SOC have disrupted the unity of command because SOC staff now report to three different officials within DOD and VA. However, it is too soon to determine how DOD's staffing changes will work. Additionally, according to DOD and VA officials, the SOC's scope of responsibilities appears to be in flux. While the SOC will remain responsible for policy matters for recovering servicemembers, a number of policy issues may now be directed to the DOD and VA Joint Executive Council. Despite this uncertainty, DOD and VA officials told GAO that the SOC's work groups continue to carry out their responsibilities. GAO shared the information contained in this statement with DOD and VA officials, and they agreed with the information GAO presented.
Demand for GAO's analysis and advice remains strong across the Congress. During the past 3 years, GAO has received requests for work or mandates from all of the standing committees of the House and the Senate and over 80 percent of their subcommittees. In fiscal year 2007, GAO received over 1,200 requests for studies. This demand is a direct result of the high quality of work the Congress has come to expect from GAO, as well as the difficult challenges facing the Congress, for which it believes objective information and professional advice from GAO are instrumental. Not only has demand for our work continued to be strong, but it is also steadily increasing. The total number of requests in fiscal year 2007 was up 14 percent from the preceding year. This trend has accelerated in fiscal year 2008 as requests rose 26 percent in the first quarter and are up 20 percent at the mid-point of this fiscal year from comparable periods in 2007. As a harbinger of future congressional demand, potential mandates for GAO work included in proposed legislation as of February 2008 totaled over 600, an 86 percent increase from a similar period in the 109th Congress. The following examples illustrate this demand: Over 160 new mandates for GAO reviews were embedded in law, including the Consolidated Appropriations Act of 2008, the Defense Appropriations Act of 2008, and 2008 legislation implementing the 9/11 Commission recommendations; new recurring responsibilities were given to GAO under the Honest Leadership and Open Government Act of 2007 to report annually on lobbyists' compliance with registration and reporting requirements; and expanded bid protest provisions were applied to GAO that (1) allow federal employees to file protests concerning competitive sourcing decisions (A-76), (2) establish exclusive bid protest jurisdiction at GAO over issuance of task and delivery orders valued at over $10 million, and (3) provide GAO bid protest jurisdiction over contracts awarded by the Transportation Security Administration. Further evidence of GAO's help in providing important advice to the Congress is found in the increased number of GAO appearances at hearings on topics of national significance and keen interest (see table 1). In fiscal year 2007 GAO testified at 276 hearings, 36 more than in fiscal year 2006. The fiscal year 2007 figure was an all-time high for GAO on a per capita basis and among the highest levels of demand for GAO testimony in the last 25 years. This increased tempo of GAO appearances at congressional hearings has continued, with GAO already appearing at 140 hearings this fiscal year, as of April 4th. Our FTE level in fiscal year 2008 is 3,100--the lowest level ever for GAO. We are proud of the results we deliver to the Congress and our nation at this level, but with a slightly less than 5 percent increase in our FTEs, to 3,251, we could better meet increased congressional requests for GAO assistance. While this increase would not bring GAO back to the 3,275 FTE level of 10 years ago, it would allow us to respond to the increased workload facing the Congress. GAO staff are stretched in striving to meet Congress's increasing needs. People are operating at a pace that cannot be sustained over the long run. I am greatly concerned that if we try to provide more services with the existing level of resources, the high quality of our work could be diminished in the future. But I will not allow this to occur; it is in neither the Congress's nor GAO's interest. One consequence of this demand-versus-
supply imbalance is the growing list of congressional requests that we are not able to staff promptly. While we continue to work with congressional committees to identify their areas of highest priority, we remain unable to staff important requests. This limits our ability to provide timely advice to congressional committees dealing with certain issues that they have slated for oversight, including: safety concerns, such as incorporating behavior-based security programs into TSA's aviation passenger screening process, updating our 2006 study of FDA's post-market drug safety system, and reviewing state investigations of nursing home complaints; operational improvements, such as the effectiveness of Border Security checkpoints to identify illegal aliens, technical and programmatic challenges in DOD's space radar programs, oversight of federally funded highway and transit projects, and the impact of the 2005 Bankruptcy Abuse Prevention and Consumer Protection Act; and opportunities to increase revenues or stop wasteful spending, including reducing potential overstatements of charitable deductions and curbing potential overpayments and contractor abuses in food assistance programs. Our fiscal year 2009 budget request seeks to better position us to maintain our high level of support for the Congress and better meet increasing requests for help. This request would help replenish our staffing levels at a time when almost 20 percent of all GAO staff will be eligible for retirement. Accordingly, the request seeks funds to ensure that we have the increased staff capacity to effectively support the Congress's agenda, cover pay and uncontrollable inflationary cost increases, and undertake critical investments, such as technology improvements. GAO is requesting budget authority of $545.5 million to support a staff level of 3,251 FTEs needed to serve the Congress. This is a fiscally prudent increase of 7.5 percent over our fiscal year 2008 funding level, as illustrated in table 2. Our request includes about $538.1 million in direct appropriations and authority to use about $7.4 million in offsetting collections. This request also reflects a reduction of about $6 million in nonrecurring fiscal year 2008 costs. Our request includes funds needed to increase our staffing level by less than 5 percent to help us provide more timely responses to congressional requests for studies; enhance employee recruitment, retention, and development programs, which increase our competitiveness for a talented workforce; recognize the dedicated contributions of our hardworking staff through awards and recognition programs; address critical human capital components, such as knowledge capacity building, succession planning, and staff skills and competencies; pursue critical structural and infrastructure maintenance and restore program funding levels to regain our lost purchasing power; and undertake critical initiatives to increase our productivity. Key elements of our proposed budget increase are outlined as follows. Pay and inflationary cost increases: We are requesting funds to cover anticipated pay and inflationary cost increases resulting primarily from annual across-the-board and performance-based increases and the annualization of prior fiscal year costs. These costs also include uncontrollable, inflationary increases imposed by vendors as the cost of doing business. GAO generally loses about 10 percent of its workforce annually to retirements and attrition. 
This annual loss places GAO under continual pressure to replace staff capacity and renew institutional memory. In fiscal year 2007, we were able to replace only about half of our staff loss. In fiscal year 2008, we plan to replace only departing staff. Our proposed fiscal year 2009 staffing level of 3,251 FTEs would restore our staff capacity through a modest FTE increase, which would allow us to initiate congressional requests in a more timely manner and begin reducing the backlog of pending requests. Critical technology and infrastructure improvements: We are requesting funds to undertake critical investments that would allow us to implement technology improvements, streamline and reengineer work processes to enhance the productivity and effectiveness of our staff, make essential investments that have been deferred year after year but cannot continue to be delayed, and implement responses to changing federal conditions. Human capital initiatives and additional legislative authorities: GAO is working with the appropriate authorization and oversight committees to make reforms that are designed to benefit our employees and to provide a means to continue to attract, retain, and reward a top-flight workforce, as well as help us improve our operations and increase administrative efficiencies. Among the requested provisions, GAO supports the adoption of a "floor guarantee" for future annual pay adjustments similar to the agreement governing 2008 pay adjustments reached with the GAO Employees Organization, IFPTE. The floor guarantee reasonably balances our commitment to performance-based pay with an appropriate degree of predictability and equity for all GAO employees. At the invitation of the House federal workforce subcommittee, we also have engaged in fruitful discussions about a reasonable and practical approach should the Congress decide to include a legislative provision to compensate GAO employees who did not receive the full base pay increases of 2.6 percent in 2006 and 2.4 percent in 2007. We appreciate their willingness to provide us with the necessary legal authorities to address this issue and look forward to working together with you and our oversight committee to obtain the necessary funding to cover these payments. The budget authority to cover the future impact of these payments is not reflected in this budget request. As you know, on September 19, 2007, our Band I and Band II Analysts, Auditors, Specialists, and Investigators voted to be represented by the GAO Employees Organization, IFPTE, for the purpose of bargaining with GAO management on various terms and conditions of employment. GAO management is committed to working constructively with employee union representatives to forge a positive labor-management relationship. Since September, GAO management has taken a variety of steps to ensure it is following applicable labor relations laws and has the resources in place to work effectively and productively in this new union environment. Our efforts have included delivering specialized labor-management relations training; establishing a new Workforce Relations Center to provide employee and labor relations advice and services; hiring a Workforce Relations Center director, who also serves as our chief negotiator in collective bargaining deliberations; and postponing work on several initiatives regarding our current performance and pay programs. 
In addition, we routinely notify union representatives of meetings that may qualify as formal discussions, so that a representative of the IFPTE can attend the meeting. We also regularly provide the IFPTE with information about projects involving changes to terms and conditions of employment over which the union has the right to bargain. We are pleased that GAO and the IFPTE reached a prompt agreement on 2008 pay adjustments. The agreement was overwhelmingly ratified by bargaining unit members on February 14, 2008, and we have applied the agreed-upon approach to the 2008 adjustments to all GAO staff, with the exception of the SES and Senior Level staff, regardless of whether they are represented by the union. In fiscal year 2007, we addressed many difficult issues confronting the nation, including the conflict in Iraq, domestic disaster relief and recovery, national security, and criteria for assessing lead in drinking water. For example, GAO has continued its oversight on issues directly related to the Iraq war and reconstruction, issuing 20 products in fiscal year 2007 alone--including 11 testimonies to congressional committees. These products covered timely issues such as the status of Iraqi government actions, the accountability of U.S.-funded equipment, and various contracting and security challenges. GAO's work spans the security, political, economic, and reconstruction prongs of the U.S. national strategy in Iraq. Highlights of the outcomes of GAO work are outlined below. See appendix II for a detailed summary of GAO's annual measures and targets. Additional information on our performance results can be found in Performance and Accountability Highlights Fiscal Year 2007 at www.gao.gov. GAO's work in fiscal year 2007 generated $45.9 billion in financial benefits. These financial benefits, which resulted primarily from actions agencies and the Congress took in response to our recommendations, included about $21.1 billion resulting from changes to laws or regulations, $16.3 billion resulting from improvements to core business processes, and $8.5 billion resulting from agency actions based on our recommendations to improve public services. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2007, we recorded a total of 1,354 other improvements in government resulting from GAO work. For example, in 646 instances federal agencies improved services to the public, in 634 other cases agencies improved core business processes or governmentwide reforms were advanced, and in 74 instances information we provided to the Congress resulted in statutory or regulatory changes. These actions spanned the full spectrum of national issues, from strengthened screening procedures for all VA health care practitioners to improved information security at the Securities and Exchange Commission. See table 4 for additional examples. In January 2007, we also issued our High-Risk Series: An Update, which identifies federal areas and programs at risk of fraud, waste, abuse, and mismanagement and those in need of broad-based transformations. Issued to coincide with the start of each new Congress, our high-risk list focuses on major government programs and operations that need urgent attention. Overall, this program has served to help resolve a range of serious weaknesses that involve substantial resources and provide critical services to the public. GAO added the 2010 Census as a high-risk area in March 2008. 
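As a quick arithmetic cross-check of the fiscal year 2007 figures cited above, the component financial benefits sum to the $45.9 billion total and the counts of other improvements sum to 1,354. The short Python sketch below is purely illustrative--our own tabulation of the numbers reported in this statement, not a GAO analysis tool.

```python
# Illustrative cross-check of the fiscal year 2007 figures cited above.
# All numbers come from the text; this is not an official GAO computation.

financial_benefits_billions = {
    "changes to laws or regulations": 21.1,
    "improvements to core business processes": 16.3,
    "agency actions to improve public services": 8.5,
}

other_improvements = {
    "improved services to the public": 646,
    "improved business processes or governmentwide reforms": 634,
    "statutory or regulatory changes": 74,
}

total_benefits = sum(financial_benefits_billions.values())
total_improvements = sum(other_improvements.values())

print(f"Financial benefits: ${total_benefits:.1f} billion")  # $45.9 billion
print(f"Other improvements: {total_improvements:,}")         # 1,354
```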
GAO's achievements are of great service to the Congress and American taxpayers. With your support, we will be able to continue to provide the high level of performance that has come to be expected of GAO. GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. (GAO strategic plan framework: Provide timely, quality service to the Congress and the federal government to address current and emerging challenges to the well-being and financial security of the American people, related to viable communities, natural resource use and environmental protection, and physical infrastructure; respond to changing security threats and the challenges of global interdependence, involving the advancement of U.S. interests and global market forces; help transform the federal government's role and how it does business to meet 21st century challenges, assessing key management challenges and program risks and the fiscal position and financing of the government; and maximize the value of GAO by being a model federal agency and a world-class professional services organization.) Our employee feedback survey asks staff how often the following occurred in the last 12 months: (1) my job made good use of my skills, (2) GAO provided me with opportunities to do challenging work, and (3) in general, I was utilized effectively. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The budget authority GAO is requesting for fiscal year 2009--$545.5 million--represents a prudent increase of 7.5 percent over GAO's fiscal year 2008 funding level to support the Congress as it confronts a growing array of difficult challenges. GAO will continue to reward the confidence Congress places in us by providing a strong return on this investment. In fiscal year 2007, for example, in addition to delivering hundreds of reports and briefings to aid congressional oversight and decisionmaking, our work yielded: financial benefits, such as increased collection of delinquent taxes and civil fines, totaling $45.9 billion--a return of $94 for every dollar invested in GAO; over 1,300 other improvements in government operations spanning the full spectrum of national issues, ranging from helping Congress create a center to better locate children after disasters, to strengthening computer security over sensitive government records and assets, to encouraging more transparency over nursing home fire safety, to strengthening screening procedures for VA health care practitioners; and expert testimony at 276 congressional hearings to help Congress address a variety of issues of broad national concern, such as the conflict in Iraq and efforts to ensure drug and food safety. GAO's work in fiscal year 2007 generated $45.9 billion in financial benefits. These financial benefits, which resulted primarily from actions agencies and the Congress took in response to our recommendations, included about $21.1 billion resulting from changes to laws or regulations, $16.3 billion resulting from improvements to core business processes, and $8.5 billion resulting from agency actions based on our recommendations to improve public services. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2007, we recorded a total of 1,354 other improvements in government resulting from GAO work. For example, in 646 instances federal agencies improved services to the public, in 634 other cases agencies improved core business processes or governmentwide reforms were advanced, and in 74 instances information we provided to the Congress resulted in statutory or regulatory changes. These actions spanned the full spectrum of national issues, from strengthened screening procedures for all VA health care practitioners to improved information security at the Securities and Exchange Commission. In January 2007, we also issued our High-Risk Series: An Update, which identifies federal areas and programs at risk of fraud, waste, abuse, and mismanagement and those in need of broad-based transformations. Issued to coincide with the start of each new Congress, our high-risk list focuses on major government programs and operations that need urgent attention. Overall, this program has served to help resolve a range of serious weaknesses that involve substantial resources and provide critical services to the public. GAO added the 2010 Census as a high-risk area in March 2008. GAO's achievements are of great service to the Congress and American taxpayers. With congressional support, we will be able to continue to provide the high level of performance that has come to be expected of GAO.
As you know, Mr. Chairman, the decennial census is a constitutionally mandated enterprise critical to our nation. Census data are used to apportion seats and redraw congressional districts, and to help allocate over $400 billion in federal aid to state and local governments each year. We added the 2010 Census to our list of high-risk areas in March 2008, because improvements were needed in the Bureau's management of IT systems, the reliability of handheld computers (HHC) that were designed in part to collect data for address canvassing, and the quality of the Bureau's cost estimates. Compounding the risk was that the Bureau canceled a full dress rehearsal of the census that was scheduled in 2008, in part, because of performance problems with the HHCs during the address canvassing portion of the dress rehearsal, which included freeze-ups and unreliable data transmissions. In response to our findings and recommendations, the Bureau has strengthened its risk management efforts, including the development of a high-risk improvement plan that described the Bureau's strategy for managing risk and key actions to address our concerns. Overall, since March 2008, the Bureau has made commendable progress in getting the census back on track, but still faces a number of challenges moving forward. One of the Bureau's long-standing challenges has been building an accurate address file, especially locating unconventional and hidden housing units, such as converted basements and attics. For example, as shown in figure 1, what appears to be a single-family house could contain an apartment, as suggested by its two doorbells. The Bureau has trained address listers to look for extra mailboxes, utility meters, and other signs of hidden housing units, and has developed training guides for 2010 to help enumerators locate hidden housing. Nonetheless, decisions on what is a habitable dwelling are often difficult to make--what is habitable to one worker may seem uninhabitable to another. If the address lister thought the house in figure 1 was a single family home, but a second family was living in the basement, the second family is at greater risk of being missed by the census. Conversely, if the lister thought a second family could be residing in the home, when in fact it was a single family house, two questionnaires would be mailed to the home and costly nonresponse follow-up visits could ensue in an effort to obtain a response from a phantom housing unit. Under the LUCA program, the Bureau partners with state, local, and tribal governments, tapping into their knowledge of local populations and housing conditions in order to secure a more complete count. Between November 2007 and March 2008, over 8,000 state, local, and tribal governments provided approximately 42 million addresses for potential addition, deletion, or other actions. Of those submissions, approximately 36 million were processed as potential address additions to the MAF--or what the Bureau considers "adds." According to Bureau officials, one reason LUCA is important is because local government officials may be better positioned than the Bureau to identify unconventional and hidden housing units due to their knowledge of particular neighborhoods, or because of their access to administrative records in their jurisdictions. For example, local governments may have alternate sources of address information (such as utility bills, tax records, information from housing or zoning officials, or 911 emergency systems). 
In addition, according to Bureau officials, providing local governments with opportunities to actively participate in the development of the MAF can enhance local governments' understanding of the census and encourage them to support subsequent operations. The preliminary results of address canvassing show that the Bureau added relatively few of the address updates submitted for inclusion in the MAF through LUCA. Of approximately 36 million addresses submitted, about 27.7 million were already in the MAF. Around 8.3 million updates were not in the MAF and needed to be field-verified during address canvassing. Of these, about 5.5 million were not added to the MAF because they did not exist, were duplicate addresses, or were nonresidential. Address canvassing confirmed the existence of around 2.4 million addresses submitted by LUCA participants that were not already in the MAF (or about 7 percent of the 36 million proposed additions). Bureau officials have indicated that on October 8, 2009, they began shipping detailed feedback to eligible LUCA participants, including information on which addresses were accepted. On November 1, 2009, the Office of Management and Budget is scheduled to open the LUCA appeals office that will enable LUCA participants who disagree with the Bureau's feedback to challenge the Bureau's decisions. This appeals process allows governments to provide evidence of the existence of addresses that the Bureau missed. If a government's appeal is sustained, the Bureau will include those addresses in later enumeration activities and enumerate them if they are located in the field. The LUCA program is labor intensive for both localities and the Bureau because it involves data reviews, on-site verification, quality control procedures, and other activities, but it produced marginal returns. While these were unique additions to the MAF that may not have been identified in any other MAF-building operation, they were costly additions nonetheless. As a result, as the Bureau prepares for the 2020 Census, it will be important for it to explore options that help improve the efficiency of LUCA, especially by reducing the number of duplicate and nonexistent addresses submitted by localities. The Bureau conducted address canvassing from March to July 2009. During that time, about 135,000 address listers went door to door across the country, comparing the housing units they saw on the ground to what was listed in the database of their HHCs. Depending on what they observed, listers could add, delete, or update the location of housing units. Although the projected length of the field operation ranged from 9 to 14 weeks, most early opening local census offices completed the effort in less than 10 weeks. Moreover, the few areas that did not finish early were delayed by unusual circumstances such as access issues created by flooding. The testing and improvements the Bureau made to the reliability of the HHCs prior to the start of address canvassing, including a final field test that was added to the Bureau's preparations in December 2008, played a key role in the pace of the operation; but other factors, once address canvassing was launched, were important as well, including (1) the prompt resolution of problems with the HHCs as they occurred and (2) lower-than-expected employee turnover. With respect to the prompt resolution of problems, the December 2008 field test indicated that the more significant problems affecting the HHCs had been resolved. 
However, various glitches continued to affect the HHCs in the first month of address canvassing. For example, listers or crew leaders in 14 early opening local census offices informed us that they had encountered transmission problems, freeze-ups, and other glitches. Moreover, in 10 early opening local census offices we visited, listers said they had problems using the Global Positioning System function on their HHCs to precisely locate housing units. When such problems occurred, listers called their crew leaders and/or the Bureau's help desk to resolve the problems. When the issues were more systemic in nature, such as a software issue, the Bureau was able to quickly fix them using software patches. Moreover, to obtain an early warning of trouble, the Bureau monitored key indicators of the performance of the HHCs, such as the number of successful and failed HHC transmissions. This approach proved useful as Bureau quality control field staff were alerted to the existence of a software problem when they noticed that the devices were taking a long time to close out completed assignment areas. The Bureau also took steps to address procedural issues. For example, in the course of our field observations, we noticed that in several locations listers were not always adhering to training for identifying hidden housing units. Specifically, listers were instructed to knock on every door and ask, "Are there any additional places in this building where people live or could live?" However, we found that listers did not always ask this question. On April 28, 2009, we discussed this issue with senior Bureau officials. The Bureau, in turn, transmitted a message to its field staff emphasizing the importance of following training and querying residents if possible. Lower-than-expected attrition rates and listers' willingness to work more hours than planned also contributed to the Bureau's ability to complete the address canvassing operation ahead of schedule. For example, the Bureau had planned for 25 percent of new hires to quit before, during, or soon after training; however, the national average was 16 percent. Bureau officials said that not having to replace listers with inexperienced staff accelerated the pace of the operation. Additionally, the Bureau assumed that employees would be available 18.5 hours a week. Instead, they averaged 22.3 hours a week. The Bureau's address list at the start of address canvassing consisted of 141.8 million housing units. Listers added around 17 million addresses and marked about 21 million for deletion because, for example, the address did not exist. All told, listers identified about 4.5 million duplicate addresses, 1.2 million nonresidential addresses, and about 690,000 addresses that were uninhabitable structures. Importantly, these preliminary results represent actions taken during the production phase of address canvassing and do not reflect actual changes made to the Bureau's master address list, as the actions are first subject to a quality control check and then processed by the Bureau's Geography Division. The preliminary analysis of addresses flagged for add and delete shows that the results of the operation (prior to quality control) were generally consistent with the results of address canvassing for the 2008 dress rehearsal. Table 1 compares the add and delete actions for the two operations. 
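The attrition and availability figures cited above largely explain the faster-than-expected pace of the operation. The following back-of-the-envelope calculation is our own rough illustration, not a Bureau estimate, of how much additional field capacity those two factors implied.

```python
# Rough, illustrative estimate of extra lister capacity during address
# canvassing, based only on the planning assumptions and actuals cited above.
# This is not a Bureau calculation.

planned_attrition = 0.25   # Bureau planned for 25 percent of new hires to quit
actual_attrition = 0.16    # the national average turned out to be 16 percent

planned_hours_per_week = 18.5  # assumed availability per lister
actual_hours_per_week = 22.3   # observed average

# Share of hires still working, actual vs. planned.
retention_ratio = (1 - actual_attrition) / (1 - planned_attrition)  # ~1.12

# Hours worked per retained lister, actual vs. planned.
hours_ratio = actual_hours_per_week / planned_hours_per_week        # ~1.21

# Combined effect on weekly field hours, holding the number of hires fixed.
capacity_ratio = retention_ratio * hours_ratio

print(f"Roughly {capacity_ratio - 1:.0%} more weekly field hours than planned")
```

Under these assumptions, the workforce delivered roughly a third more weekly field hours than planned, which is consistent with most early opening local census offices completing the effort in under 10 weeks.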
According to the Bureau's preliminary analysis, the estimated cost for address canvassing field operations was $444 million, or $88 million (25 percent) more than its initial budget of $356 million. As shown in table 2, according to the Bureau, the cost overruns were because of several factors. One such factor was that the address canvassing cost estimate was not comprehensive, which resulted in a cost increase of $41 million: the Bureau underestimated the address canvassing workload in its initial cost estimate and fiscal year 2009 budget by 11 million addresses. Further, the additional 11 million addresses increased the Bureau's quality control workload, where the Bureau verifies certain actions taken to correct the address list. Specifically, the Bureau did not fully anticipate the impact these additional addresses would have on the quality control workload, and therefore did not revise its cost estimate accordingly. Moreover, under the Bureau's procedures, addresses that failed quality control would need to be recanvassed, but the Bureau's cost model did not account for the extra cost of recanvassing addresses. As a result, the Bureau underestimated its quality control workload by 26 million addresses, which resulted in $34 million in additional costs, according to the Bureau. Bringing aboard more staff than was needed also contributed to the cost overruns. For example, according to the Bureau's preliminary analysis, training additional staff accounted for about $7 million in additional costs. Bureau officials attributed the additional training cost to inviting extra candidates to initial training based on past experience and anticipated no-show and dropout rates, even though (1) the Bureau's staffing plans already accounted for the possibility of high turnover and (2) the additional employees were not included in the cost estimate or budget. The largest census field operation will be next summer's nonresponse follow-up, when the Bureau is to go door to door in an effort to collect data from households that did not mail back their census questionnaire. Based on the expected mail response rate, the Bureau estimates that over 570,000 enumerators will need to be hired for that operation. To better manage the risk of staffing difficulties while simultaneously controlling costs, several potential lessons learned for 2010 can be drawn from the Bureau's experience during address canvassing. For example, we found that the staffing authorization and guidance provided to some local census managers were unclear and did not specify that there was already a cushion in the hiring plans for local census offices to account for potential turnover. Also, basing the number of people invited to initial training on factors likely to affect worker hiring and retention, such as the local unemployment rate, could help the Bureau better manage costs. According to Bureau officials, they are reviewing the results from address canvassing to determine whether they need to revisit the staffing strategy for nonresponse follow-up and have already made some changes. For example, in recruiting candidates, when a local census office reaches 90 percent of its qualified applicant goal, it is to stop blanket recruiting and instead focus its efforts on areas that need more help, such as tribal lands. 
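To put the cost figures reported above in one place, the sketch below is our own illustrative reconciliation of the overrun using only the numbers cited in this statement; it is not the Bureau's cost model. The three factors named above account for $82 million of the $88 million overrun, leaving roughly $6 million attributable to other factors in the Bureau's analysis that are not itemized here.

```python
# Illustrative reconciliation of the address canvassing cost overrun,
# using only the figures cited above; not the Bureau's cost model.

initial_budget_millions = 356
estimated_actual_millions = 444

overrun = estimated_actual_millions - initial_budget_millions  # 88
overrun_pct = overrun / initial_budget_millions                # ~0.25

named_factors_millions = {
    "workload underestimated by 11 million addresses": 41,
    "quality control workload underestimated by 26 million addresses": 34,
    "training more staff than needed": 7,
}

explained = sum(named_factors_millions.values())  # 82
unexplained = overrun - explained                 # 6, from other factors

print(f"Overrun: ${overrun} million ({overrun_pct:.0%} of the initial budget)")
print(f"Named factors explain ${explained} million; about ${unexplained} million "
      "is attributed to other factors in the Bureau's analysis")
```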
In hiring candidates, however, Bureau officials pointed out that they are cautious not to underestimate resource needs for nonresponse follow-up based on address canvassing results, because nonresponse follow-up poses different operational challenges than address canvassing did. For example, for nonresponse follow-up, the Bureau needs to hire enumerators who can work in the evenings when people are more likely to be at home and who can effectively deal with reluctant respondents, whereas with address canvassing, there was less interaction with households and the operation could be completed during the day. Problems with accurately estimating the cost of address canvassing are indicative of long-standing weaknesses in the Bureau's ability to develop credible and accurate cost estimates for the 2010 Census. Accurate cost estimates are essential to a successful census because they help ensure that the Bureau has adequate funds and that Congress, the administration, and the Bureau itself can have reliable information on which to base decisions. However, in our past work, we noted that the Bureau's estimate lacked detailed documentation on data sources and significant assumptions, and was not comprehensive because it did not include all costs. Following best practices from our Cost Estimating and Assessment Guide, such as defining necessary resources and tasks, could have helped the Bureau recognize the need to update address canvassing workload and other operational assumptions, resulting in a more reliable cost estimate. To better screen its workforce of hundreds of thousands of temporary census workers, the Bureau plans to fingerprint temporary workers for the first time in the 2010 Census. In past censuses, temporary workers were subject to a name background check that was completed at the time of recruitment. For the 2010 Census, the Federal Bureau of Investigation (FBI) will provide the results of a name background check when temporary workers are first recruited. At the end of the workers' first day of training, Bureau employees who have received around 2 hours of fingerprinting instruction are to capture two sets of fingerprints on ink fingerprint cards from each temporary worker. The cards are then sent to the Bureau's National Processing Center in Jeffersonville, Indiana, to be scanned and electronically submitted to the FBI. If the results show a criminal record that makes an employee unsuitable for employment, the Bureau is to either terminate the person immediately or place the individual in nonworking status until the matter is resolved. If the first set of prints is unclassifiable, the National Processing Center is to send the FBI the second set of prints. Fingerprinting during address canvassing was problematic. Of the over 162,000 employees hired for the operation, 22 percent--or approximately 35,700 workers--had unclassifiable prints that the FBI could not process. The FBI determined that the unclassifiable prints were generally the result of errors that occurred when the prints were first made. Factors affecting the quality of the prints included difficulty in first learning how to effectively capture the prints and the adequacy of the Bureau's training. Further, the workspace and environment for taking fingerprints were unpredictable, and factors such as the height of the workspace on which the prints were taken could affect the legibility of the prints. 
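To make the screening sequence described above easier to follow, here is a minimal sketch of the decision flow in Python. It is our own illustrative model of the process as described in this statement, including the fallback to the name check discussed in the next paragraph; the function and field names are hypothetical, not Bureau or FBI system names.

```python
# Illustrative model of the temporary-worker screening flow described above.
# Function and field names are hypothetical; this is not Bureau or FBI software.

def screen_temporary_worker(name_check_clear: bool,
                            card1_classifiable: bool,
                            card2_classifiable: bool,
                            fingerprint_record_disqualifying: bool) -> str:
    """Return the employment decision for one temporary census worker."""
    # Step 1: name background check at recruitment.
    if not name_check_clear:
        return "not hired (name check)"

    # Step 2: two ink fingerprint cards captured at the end of the first day
    # of training, scanned at the National Processing Center, sent to the FBI
    # (the second card is submitted only if the first is unclassifiable).
    if card1_classifiable or card2_classifiable:
        # Step 3: FBI fingerprint results drive the decision.
        if fingerprint_record_disqualifying:
            # Terminate immediately or place in nonworking status for review.
            return "terminated or nonworking status"
        return "cleared to work"

    # Step 4: both cards unclassifiable -- per FBI guidance, the Bureau
    # falls back on the name check results alone.
    return "cleared to work (name check only)"

# Example: a worker whose first card is unreadable but whose second card
# is classifiable and shows a disqualifying record.
print(screen_temporary_worker(True, False, True, True))
# -> terminated or nonworking status
```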
Consistent with FBI guidance, the Bureau relied on the results of the name background check for the nearly 36,000 employees with unclassifiable prints. Of the prints that could be processed, fingerprint results identified approximately 1,800 temporary workers (1.1 percent of total hires) with criminal records that name check alone failed to identify. Of the 1,800 workers with criminal records, approximately 750 (42 percent) were terminated or were further reviewed because the Bureau determined their criminal records--which included crimes such as rape, manslaughter, and child abuse--disqualified them from census employment. Projecting these percentages to the 35,700 temporary employees with unclassifiable prints, it is possible that more than 200 temporary census employees might have had criminal records that would have made them ineligible for census employment. Importantly, this is a projection, and the number of individuals with criminal backgrounds that were hired for address canvassing, if any, is not known. Applying these same percentages to the approximately 600,000 people the Bureau plans to fingerprint for nonresponse follow-up, unless the problems with fingerprinting are addressed, approximately 785 employees with unclassifiable prints could have disqualifying criminal records but still end up working for the Bureau. Aside from public safety concerns, there are cost issues as well. The FBI charged the Bureau $17.25 per person for each background check, whether or not the fingerprints were classifiable. The Bureau has taken steps to improve image quality for fingerprints captured in future operations by refining instruction manuals and providing remediation training on proper procedures. In addition, the Bureau is considering activating a feature on the National Processing Center's scanners that can check the legibility of the image and thus prevent poor quality prints from reaching the FBI. These are steps in the right direction. As a further contingency, it might also be important for the Bureau to develop a policy for refingerprinting employees to the extent that both cards cannot be read. The scale of the destruction in those areas affected by Hurricanes Katrina, Rita, and Ike made address canvassing in parts of Mississippi, Louisiana, and Texas especially challenging (see fig. 2). Hurricane Katrina alone destroyed or made uninhabitable an estimated 300,000 homes. Recognizing the difficulties associated with address canvassing in these areas because of shifting and hidden populations and changes to the housing stock, the Bureau, partly in response to recommendations made in our June 2007 report, developed supplemental training materials for natural disaster areas to help listers identify addresses where people are, or may be, living when census questionnaires are distributed. For example, the materials noted the various situations listers might encounter, such as people living in trailers, homes marked for demolition, converted buses and recreational vehicles, and nonresidential space such as storage areas above restaurants. The training material also described the clues that could alert listers to the presence of nontraditional places where people are living and provided a script they should follow when interviewing residents on the possible presence of hidden housing units. Additional steps taken by the city of New Orleans also helped the Bureau overcome the challenge of canvassing neighborhoods devastated by Hurricane Katrina. 
As depicted in figure 3 below, city officials replaced the street signs even in abandoned neighborhoods. This assisted listers in locating the blocks they were assigned to canvass and expedited canvassing in these deserted areas. To further ensure a quality count in the hurricane-affected areas, the Bureau plans to hand-deliver an estimated 1.2 million questionnaires (and simultaneously update the address list) to housing units in much of southeast Louisiana and south Mississippi that appear inhabitable, even if they do not appear on the address list updated by listers during address canvassing. Finally, the Bureau stated that it must count people where they are living on Census Day and emphasized that if a housing unit gets rebuilt and people move back before Census Day, then that is where those people will be counted. However, if they are living somewhere else on Census Day, they will be counted there. To help ensure group quarters are accurately included in the census, the Bureau is conducting an operation called Group Quarters Validation, an effort that is to run during September and October 2009 and has a workload of around 2 million addresses in the United States and Puerto Rico. During this operation, census workers are to visit each group quarter and interview its manager or administrator using a short questionnaire. The goal is to determine whether each address should be classified as a group quarter, housing unit, transitory location, nonresidential, vacant, or delete. If the dwelling is in fact a group quarter, census workers must then determine what category it fits under (e.g., boarding school, correctional facility, health care facility, military quarters, or residence hall or dormitory) and confirm its correct geographic location. The actual enumeration of group quarters is scheduled to begin April 1, 2010. According to the 2005-2007 American Community Survey 3-year estimates, more than 8.1 million people, or approximately 2.7 percent of the population, live in group quarters facilities. Group quarters with the largest populations include college and university housing (2.3 million), adult correctional facilities (2.1 million), and nursing facilities (1.8 million). The Bureau drew from a number of sources to build its list of group quarters addresses, including data from the 2000 Census, LUCA submissions, internet-based research, and group quarters located during address canvassing. During the 2000 Census, the Bureau did not always accurately enumerate group quarters. For example, in our prior work, we found that the population count of Morehead, Kentucky, increased by more than 1,600 after it was determined that a large number of students from Morehead State University's dormitories had been erroneously excluded from the city's population because the Bureau incorrectly identified the dormitories as being outside city limits and in an unincorporated area of Rowan County. Similarly, North Carolina's population count was reduced by 2,828 people, largely because the Bureau had to delete duplicate data on almost 2,700 students in 26 dormitories at the University of North Carolina at Chapel Hill. Precision is critical because, in some cases, small differences in population totals could potentially affect apportionment and/or redistricting decisions. The Bureau developed and tested new group quarters procedures in 2004 and 2006 that were designed to address the difficulties the Bureau had in trying to identify and count this population during the 2000 Census. 
For example, the Bureau integrated its housing unit and group quarters address lists in an effort to reduce the potential for duplicate counting, as group quarters would sometimes appear on both address lists. Moreover, the Bureau has refined its definition of the various types of group quarters to make it easier to accurately categorize them. The operation began on September 28, as planned, in all 151 early opening local census offices and was 95 percent complete as of October 16, 2009. We have begun observations and will report our findings at a later date. With the cost of enumerating each housing unit continuing to grow, it will be important for the Bureau to determine which of its multiple MAF-building operations provide the best return on investment in terms of contributing to accuracy and coverage. According to the Bureau, it is planning to launch over 70 evaluations and assessments of critical 2010 Census operations and processes, many of which are focused on improving the quality of the MAF. For example, the Bureau plans to study options for targeted address canvassing as an alternative to canvassing every block in the country. The Bureau considered two major criteria for determining which studies to include in its evaluation program--the possibility of significant cost savings in 2020 and/or the possibility of significant quality gains in 2020. As the Bureau makes plans for the 2020 Census, these and other studies could prove useful in helping the Bureau streamline and consolidate operations, with an eye toward controlling costs and improving accuracy. Automation and IT systems will play a critical role in the ability of MAF/TIGER to extract address lists and maps and to provide other geographic support services. In our prior work, however, we have called on the Bureau to strengthen its testing of the MAF/TIGER system. In March 2009, for example, we reported and testified that while the MAF/TIGER program had partially completed testing activities, test plans and schedules were incomplete and the program's ability to track progress was unclear. Specifically, while the Bureau had partially completed testing for certain MAF/TIGER products (e.g., database extracts) related to address canvassing, subsequent test plans and schedules did not cover all of the remaining products needed to support the 2010 Census. Further, Bureau officials stated that although they were estimating the number of products needed, the exact number would not be known until the requirements for all of the 2010 Census operations were determined. As such, without knowing the total number of products and when the products would be needed, the Bureau risked not being able to effectively measure the progress of MAF/TIGER testing activities. This in turn increased the risk that there may not be sufficient time and resources to adequately test the system and that the system may not perform as intended. At that time, we recommended that the MAF/TIGER program establish the number of products required and establish testing plans and schedules for 2010 operations. In response to our recommendations, the Bureau has taken several steps to improve its MAF/TIGER testing activities, but substantial work remains to be completed. For example, the MAF/TIGER program has established the number of products and when the products are needed for key operations. 
Furthermore, the program has finalized five of eight test plans for 2010 operations; testing under one of these plans (address canvassing) has been completed, three are under way, and one has not yet started. Lastly, the program's test metrics for MAF/TIGER have recently been revised; however, only two of five finalized test plans include detailed metrics. While these activities demonstrate progress made in testing the MAF/TIGER system, the lack of finalized test plans and metrics still presents a risk that there may not be sufficient time and resources to adequately test the system and that the system may not perform as intended. Given the importance of MAF/TIGER to establishing where to count U.S. residents, it is critical that the Bureau ensure this system is thoroughly tested. Bureau officials have repeatedly stated that the limited amount of time remaining will make completing all testing activities challenging.

The Bureau recognizes the critical importance of an accurate address list and maps, and continues to put forth tremendous effort to help ensure MAF/TIGER is complete and accurate. That said, the nation's housing inventory is large, complex, and diverse, with people residing in a range of different circumstances, both conventional and unconventional. The operations we included in this review generally have proceeded, or are proceeding, as planned. Nevertheless, accurately locating each and every dwelling in the nation is an inherently challenging endeavor, and the overall quality of the Bureau's address list will not be known until the Bureau completes various assessments later in the census. Moreover, while the Bureau has improved its management of MAF/TIGER IT systems, we continue to be concerned about the lack of finalized test plans, incomplete metrics to gauge progress, and an aggressive testing and implementation schedule going forward. Given the importance of MAF/TIGER to an accurate census, it is critical that the Bureau ensure this system is thoroughly tested.

On October 15, 2009, we provided the Bureau with a statement of facts for our ongoing audit work pertaining to this testimony, and on October 16, 2009, the Bureau forwarded written comments. The Bureau made some suggestions where additional context or clarification was needed and, where appropriate, we made those changes. Mr. Chairman and members of this Subcommittee, this concludes my statement. I would be happy to respond to any questions that you might have at this time.

If you have any questions on matters discussed in this statement, please contact Robert N. Goldenkoff at (202) 512-2757 or by e-mail at [email protected]. Other key contributors to this testimony include Assistant Director Signora May, Peter Beck, Steven Berke, Virginia Chanley, Benjamin Crawford, Jeffrey DeMarco, Dewi Djunaidy, Vijay D'Souza, Elizabeth Fan, Amy Higgins, Richard Hung, Kirsten Lauber, Andrea Levine, Naomi Mosser, Catharine Myrick, Lisa Pearson, David Reed, Jessica Thomsen, Jonathan Ticehurst, Kate Wulff, and Timothy Wexler.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The decennial census is a constitutionally mandated activity that produces data used to apportion congressional seats, redraw congressional districts, and help allocate billions of dollars in federal assistance. A complete and accurate master address file (MAF), along with precise maps--the U.S. Census Bureau's (Bureau) mapping system is called Topologically Integrated Geographic Encoding and Referencing (TIGER)--are the building blocks of a successful census. If the Bureau's address list and maps are inaccurate, people can be missed, counted more than once, or included in the wrong location. This testimony discusses the Bureau's readiness for the 2010 Census and covers: (1) the Bureau's progress in building an accurate address list; and (2) an update of the Bureau's information technology (IT) system used to extract information from its MAF/TIGER database. Our review included observations at 20 early opening local census offices in hard-to-count areas. The testimony is based on previously issued and ongoing work.

The Bureau has taken, and continues to take, measures to build an accurate MAF and to update its maps. From an operational perspective, the Local Update of Census Addresses (LUCA) and address canvassing generally proceeded as planned, and GAO did not observe any significant flaws or operational setbacks. Group quarters validation got underway in late September as planned. A group quarters is a place where people live or stay that is normally owned or managed by an entity or organization providing housing and/or services for the residents (such as a boarding school, correctional facility, health care facility, military quarters, residence hall, or dormitory). LUCA made use of local knowledge to enhance MAF accuracy. Between November 2007 and March 2008, over 8,000 state, local, and tribal governments participated in the program. However, LUCA submissions generated a relatively small percentage of additions to the MAF. For example, of approximately 36 million possible additions to the MAF that localities submitted, 2.4 million (7 percent) were not already in the MAF. The other submissions were duplicate, nonexistent, or nonresidential addresses. Address canvassing (an operation where temporary workers go door to door to verify and update address data) finished ahead of schedule, but was over budget. Based on initial Bureau data, the preliminary figure on the actual cost of address canvassing is $88 million higher than the original estimate of $356 million, an overrun of 25 percent (the arithmetic behind these figures is reproduced in the sketch following this summary). The testing and improvements the Bureau made to the reliability of the handheld computers prior to the start of address canvassing played a key role in the pace of the operation, but other factors were important as well, including the prompt resolution of technical problems and lower-than-expected employee turnover. The Bureau's address list at the start of address canvassing consisted of 141.8 million housing units. Listers added around 17 million addresses and marked about 21 million for deletion. All told, listers identified about 4.5 million duplicate addresses, 1.2 million nonresidential addresses, and about 690,000 addresses that were uninhabitable structures. The overall quality of the address file will not be known until later in the census when the Bureau completes various assessments. While the Bureau has made some improvements to its management of MAF/TIGER
IT, such as finalizing five of eight test plans, GAO continues to be concerned about the lack of finalized test plans, incomplete metrics to gauge progress, and an aggressive testing and implementation schedule going forward. Given the importance of MAF/TIGER to an accurate census, it is critical that the Bureau ensure this system is thoroughly tested.
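The percentages cited in this summary follow directly from the rounded figures reported above. As a quick check, here is a minimal sketch in Python--not Bureau code; the variable names and the treatment of addresses marked for deletion as outright removals are simplifications of our own--that reproduces the arithmetic:

    # Minimal sketch, not Bureau code: reproduces the rounded percentages cited above.
    original_estimate = 356_000_000   # original address canvassing cost estimate, in dollars
    overrun = 88_000_000              # preliminary cost overrun, in dollars
    print(f"Cost overrun: {overrun / original_estimate:.0%}")                      # about 25 percent

    luca_submissions = 36_000_000     # possible additions submitted by localities under LUCA
    luca_new = 2_400_000              # submissions not already in the MAF
    print(f"LUCA submissions new to the MAF: {luca_new / luca_submissions:.0%}")   # about 7 percent

    maf_start = 141_800_000           # housing units on the list at the start of address canvassing
    added = 17_000_000                # addresses listers added
    marked_for_deletion = 21_000_000  # addresses listers marked for deletion
    # Treating marked-for-deletion addresses as removals is a simplification.
    print(f"Approximate list size after canvassing: {maf_start + added - marked_for_deletion:,}")

Any divergence from the Bureau's own published totals reflects rounding in the figures cited above.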
The abuse of anabolic steroids differs from the abuse of other illicit substances. When users initially begin to abuse anabolic steroids, they typically are not driven by a desire to achieve an immediate euphoria like that which accompanies most abused drugs such as cocaine, heroin, and marijuana. The abuse of anabolic steroids is typically driven by the desire of users to improve their athletic performance and appearance--characteristics that are important to many teenagers. Anabolic steroids can increase strength and boost confidence, leading users to overlook the potentially serious and long-term damage to their health that these substances can cause. In addition, the methods and patterns of use for anabolic steroids differ from those of other drugs. Anabolic steroids are most often taken orally or injected, typically in cycles of weeks or months (referred to as "cycling"), rather than continuously. Cycling involves taking multiple doses of anabolic steroids over a specific period of time, stopping for a period, and starting again. In addition, users often combine several different types of anabolic steroids to maximize their effectiveness (referred to as "stacking"). While anabolic steroids can enhance certain types of performance or appearance, when used inappropriately they can cause a host of severe, long-term, and, in some cases, irreversible health consequences. The abuse of anabolic steroids can lead to heart attacks, strokes, liver tumors, and kidney failure. In addition, because anabolic steroids are often injected, users who share needles or use nonsterile injection techniques are at risk for contracting dangerous infections, such as HIV/AIDS and hepatitis B and C. There are also numerous side effects that are gender-specific, including reduced sperm count, infertility, baldness, and development of breasts among men; and growth of facial hair, male-pattern baldness, changes in or cessation of the menstrual cycle, and deepened voice among women. There is also concern that teenagers who abuse anabolic steroids may face the additional risk of halted growth resulting from premature skeletal maturation and accelerated puberty changes. The abuse of anabolic steroids may also lead to aggressive behavior and other psychological side effects. Many users report feeling good about themselves while on anabolic steroids, but for some users extreme mood swings also can occur, including manic-like symptoms leading to violence. Some users also may experience depression when the drugs are stopped, which may contribute to dependence on anabolic steroids. Users may also suffer from paranoia, jealousy, extreme irritability, delusions, and impaired judgment stemming from feelings of invincibility.

Two national surveys showed increasing prevalence of teenage steroid abuse throughout the 1990s until about 2002 and a decline since then (see fig. 1). One of these two national surveys, the Monitoring the Future (MTF) survey, is an annual survey conducted by the University of Michigan and supported by NIDA funding. The MTF survey measures drug use and attitudes among students in grades 8, 10, and 12, and asks several questions about the use of and attitudes toward anabolic steroids, such as perceived risk, disapproval, and availability. The survey's questions are designed to assess respondents' use of steroids in the last 30 days, the past year, and over the course of the respondent's lifetime. Questions about steroid use were added to the study beginning in 1989.
The most recent results from this survey showed that in 2006, 2.7 percent of 12th graders said they had used anabolic steroids without a prescription at least once. The second national survey, the Youth Risk Behavior Survey (YRBS), is a biennial survey conducted since 1991 by CDC. The YRBS is part of a surveillance system consisting of national, state, and local surveys of students in grades 9 through 12. These surveys collect information about a wide variety of risk behaviors, including sexual activity and alcohol and drug use. The most recent available national YRBS survey--conducted in 2005--asked one question related to lifetime steroid use without a prescription, which showed that 3.3 percent of 12th graders had used steroids at least once. The MTF and YRBS surveys indicate a low abuse rate for anabolic steroids among teenagers when compared with the abuse rates for other drugs. However, the reported easy availability of steroids and the potential for serious health effects make anabolic steroid abuse a health concern for teenagers, particularly among males. In general, the reported rates of anabolic steroid abuse are higher for males than for females (see fig. 2). Data from the 2006 MTF survey showed that 1.7 percent of teenage males reported abusing anabolic steroids in the past year, as compared with 0.6 percent of females. Data from the 2005 YRBS survey showed that 4.8 percent of high school males reported abusing steroids in their lifetime, as compared with 3.2 percent of females. There are two categories of federally funded efforts that address teenage abuse of anabolic steroids. Efforts are either designed to focus on preventing the abuse of anabolic steroids among teenagers or are broader and designed to prevent substance abuse in general--which can include abuse of anabolic steroids among teenagers. Two programs that received federal research funding for their development and testing, ATLAS and ATHENA, are designed to focus on preventing or reducing teen abuse of anabolic steroids. In addition, there are various research efforts and education and outreach activities that focus on this issue. Two federal grant programs--ONDCP's Drug-Free Communities Support program and Education's School-Based Student Drug Testing program--are designed to support state and local efforts to prevent substance abuse in general and may include anabolic steroid abuse among teenagers as part of the programs' substance abuse prevention efforts. See appendix I for a list of the federally funded efforts discussed below. There are various federally funded efforts--programs, research, and educational activities--that address teenage abuse of anabolic steroids. Some of these efforts are designed to focus on preventing or reducing anabolic steroid abuse among teenagers. As part of our review we identified two programs, the ATLAS and ATHENA programs, which received federal research funding during their development and testing and are designed to focus on preventing the abuse of anabolic steroids among male and female high school athletes, respectively. ATLAS is a student-led curriculum designed to prevent male high school athletes from abusing anabolic steroids and other performance-enhancing substances. The program's intervention strategy relies on peer pressure and providing information on healthy alternatives for increasing muscle strength and size. 
The ATLAS curriculum is typically delivered during a sport team's season in a series of 45-minute sessions scheduled at the coaches' discretion and integrated into the usual team practice activities. The athletes meet as a team in groups of six or eight students with one student functioning as the assigned group leader. Coaches, group leaders, and student athletes all work from manuals and workbooks, which provide brief, interactive activities that focus on drugs used in sports, sport supplements, strength training, sport nutrition, and decision making. The ATHENA program is designed to prevent the abuse of body-shaping substances such as diet pills and anabolic steroids, although abuse of the latter is less common in females than in males. Like ATLAS, the ATHENA curriculum is integrated into a sport team's usual practice activities and uses workbooks and student group leaders. The ATHENA curriculum takes into account that female athletes are less likely than males to abuse anabolic steroids but are more likely to have problems with eating disorders and to use drugs such as diet pills and tobacco. As a result, ATHENA's curriculum gives more attention than ATLAS's to addressing these behaviors. The ATLAS and ATHENA curricula were developed and tested with funding provided by NIDA. From fiscal years 1993 through 2001, NIDA provided more than $3.4 million to fund the research that developed and tested the effectiveness of the ATLAS curriculum. Similarly, from fiscal years 1999 through 2003 NIDA provided $4.7 million in research funding to develop and test the effectiveness of the ATHENA curriculum. While ATLAS and ATHENA were developed and tested with federal funding, the programs are implemented at the local level. Schools in at least 25 states have chosen to implement the programs with local and private funds, and the National Football League and Sports Illustrated magazine together have supported the programs in more than 70 schools nationwide. In addition to the ATLAS and ATHENA programs, there are various federally funded research efforts that focus on preventing or reducing anabolic steroid abuse among teenagers. NIDA has funded several research projects examining the factors that influence teenagers to abuse anabolic steroids and the effectiveness of interventions used to prevent teenage steroid abuse. From fiscal years 2000 through 2006, NIDA awarded nearly $10.1 million in grants to support an average of four research projects each year related to anabolic steroid abuse with a specific focus on adolescents. In fiscal year 2006, for example, NIDA awarded a total of nearly $638,000 to three research projects that examined risk factors for anabolic steroid abuse among teenagers or the effects of steroid abuse in this population. Like NIDA, the United States Anti-Doping Agency (USADA)--an independent, nonprofit corporation funded primarily by ONDCP--supports research related to the abuse of anabolic steroids and other performance-enhancing drugs by athletes, including teenage athletes. In fiscal year 2006, USADA spent $1.8 million for research, and an ONDCP official estimated that about one-third of that research funding was directed to anabolic steroids and another performance-enhancing drug, human growth hormone. In addition to research, there are various education and outreach activities that focus on preventing anabolic steroid abuse among teenagers. Many of these efforts have been supported by NIDA. 
Since 2000, NIDA has provided nearly $500,000 in funding for a variety of education and outreach efforts in support of this goal. For example, in April 2000, in response to an upward trend in steroid abuse among students, NIDA launched a multimedia educational initiative intended to prevent anabolic steroid abuse among teenagers. Along with several national partners, including the National Collegiate Athletic Association, the American College of Sports Medicine, and the American Academy of Pediatrics, the initiative produced a Web site, a research report on steroid abuse, and postcard-sized messages about steroids for placement in gyms, movie theaters, shopping malls, bookstores, and restaurants in selected areas. By 2007, NIDA funding for this particular initiative totaled about $124,000. In addition to NIDA, other federal agencies and organizations have supported educational and outreach activities that focus on preventing anabolic steroid abuse among teenagers, as the following examples illustrate. ONDCP has funded six informational briefings since 2001 to encourage journalists, entertainment writers, and producers to accurately cover anabolic steroids and drug abuse among teenage athletes. ONDCP also has Web sites for teens and parents with information about anabolic steroids and links to NIDA resources. Since 2003, USADA has produced written publications and annual reports on anabolic steroid abuse and has distributed those publications through high schools and state high school associations. In addition, some USADA public service announcements to be aired during televised sports events and movie trailers have targeted anabolic steroid abuse. In fiscal years 2007 and 2008, SAMHSA expects to spend a total of $99,000 under a contract to develop and disseminate educational materials addressing the abuse of anabolic steroids by adolescent athletes. These materials, which are intended for use by high school athletic and health science departments, include brochures, a video, and 10 high school outreach seminars.

As part of our review, we identified two federal grant programs that are designed to support state and local efforts to prevent various forms of substance abuse and that may include teenagers. Grantees of these programs may address teenage anabolic steroid abuse as part of the programs' general substance abuse prevention efforts. The Drug-Free Communities Support program, funded by ONDCP and administered by SAMHSA under an interagency agreement, provides grants to community coalitions to address drug abuse problems identified in their communities. Many community coalitions choose to implement school-based drug prevention programs with their grant funding and are allowed to tailor these programs to address the drug prevention needs of their communities. In 2007, about one-quarter of more than 700 grantees reported that they were addressing steroid abuse as one of their program's objectives. Each community coalition is eligible for grants of up to $125,000 per year, renewable for up to 4 more years, and requiring dollar-for-dollar community matching funds. In 2007, the Drug-Free Communities Support program is providing about $80 million in grants to 709 community coalitions for drug prevention activities based on the needs of the communities.
Another federal grant program that supports substance abuse prevention efforts for teenagers and that may also include efforts to address anabolic steroid abuse in this population is the School-Based Student Drug Testing program in Education's Office of Safe and Drug-Free Schools. Since 2003, this program has provided grants to school districts and public and private entities to establish school-based drug-testing efforts. For fiscal years 2003 through 2007, the Office of Safe and Drug-Free Schools awarded $32.2 million in grants to 87 individual School-Based Student Drug Testing grantees. According to information provided in the grantees' grant applications, 34 of the grantees (representing 180 middle, junior, and high schools and at least 70,000 students) proposed using their grant-supported drug testing to test for anabolic steroids in addition to other substances such as amphetamines, marijuana, and cocaine. Education officials told us that although grantees generally identify the drugs for which they are testing in their annual performance reports, there has been no independent verification by Education staff that confirms that the 34 grantees actually have implemented anabolic steroid testing or whether additional grantees have included steroid testing in their efforts. Of the 16 studies we reviewed, nearly half focused on linking certain risk factors and behaviors to teenagers' abuse of anabolic steroids, including the use of other drugs, risky sexual behaviors, and aggressive behaviors. Most of the other studies we reviewed were assessments of the ATLAS and ATHENA prevention programs and in general suggested that the programs may reduce abuse of anabolic steroids and other drugs among high school athletes immediately following participation in the programs. Appendix II is a list of the articles we reviewed. Almost half of the studies we reviewed identified certain risk factors and behaviors linked to the abuse of anabolic steroids among teenagers. Risk factors, such as antisocial behavior, family violence, and low academic achievement, are linked to youths' likelihood of engaging in risky behaviors, including drug abuse. Several studies found that the use of alcohol and other drugs--such as tobacco, marijuana, and cocaine--is associated with the abuse of anabolic steroids among teenagers, including teenage athletes and non-athletes. One 2005 study found that the use of other drugs was more likely to predict anabolic steroid abuse than participation in athletic activities. Several studies we reviewed found no difference between athletes and non-athletes in their abuse of anabolic steroids, and one 2007 study of teenage girls found that female athletes were less likely than female non-athletes to abuse anabolic steroids. A few studies we reviewed found a positive correlation between anabolic steroid abuse and risky sexual behaviors such as early initiation of sexual activity and an increased number of sexual partners. Some studies found that aggressive behaviors such as fighting were related to anabolic steroid abuse by both males and females. Moreover, one 1997 study found that adolescents (both male and female) who reported abusing anabolic steroids in the past year were more likely to be perpetrators of sexual violence. However, the cause-and-effect relationships between anabolic steroid abuse and other risky behaviors, such as violence, have not been determined. 
About half of the studies we reviewed were assessments of the ATLAS and ATHENA prevention programs, and in general these studies suggested that these programs may reduce abuse of anabolic steroids and other drugs among high school athletes immediately following participation in the programs. Researchers assessing the ATLAS program reported that both the intention to abuse anabolic steroids and the reported abuse of steroids were lower among athletes who participated in the ATLAS program than among athletes who did not participate in the program. The most recent study found that although the intention to abuse anabolic steroids remained lower at follow-up 1 year later for athletes who participated in the ATLAS program, the effectiveness of the program in reducing reported use diminished with time. Similarly, researchers assessing the ATHENA program found that girls who participated in the program reported less ongoing and new abuse of anabolic steroids as well as a reduction in the abuse of other performance-enhancing and body-shaping substances. The authors note that these results are short term, and the long-term effectiveness of the ATHENA program is not known. The authors of the one study in our review that looked at student drug-testing programs found that the abuse of anabolic steroids and other illicit drugs and performance-enhancing substances was decreased among athletes at schools that implemented mandatory, random drug-testing programs. However, this group of athletes also showed an increase in risk factors that are generally associated with greater abuse of illicit drugs, including anabolic steroids. For example, athletes at schools with drug-testing programs were more likely to believe that peers and authority figures were more tolerant of drug abuse, had less belief in the negative consequences of drug abuse, and had less belief in the efficacy of drug testing. Based on these seemingly inconsistent findings, the study's authors called for caution in interpreting the findings.

Experts identified gaps in the research that addresses anabolic steroid abuse among teenagers. Experts identified gaps in the current research on the outcomes of prevention programs that focus on anabolic steroids. Experts also identified gaps in the research on the long-term health effects of initiating the abuse of anabolic steroids as teenagers. According to experts, available research does not establish the extent to which the ATLAS and ATHENA programs are effective over time in preventing anabolic steroid abuse among teenage athletes. Experts acknowledge that both programs appear promising in their ability to prevent the abuse of anabolic steroids among teenage athletes immediately following participants' completion of the programs. Assessment of the effectiveness of the ATLAS program 1 year later, however, found that the lower incidence of anabolic steroid use was not sustained, although participants continued to report reduced intentions to use anabolic steroids. The long-term effectiveness of the ATHENA program has not been reported. The effectiveness of these programs has been assessed only in some schools in Oregon, and therefore experts report that the effectiveness of the programs may not be generalizable. In another example, experts identified the need for additional research to assess the effectiveness of drug-testing programs, such as those funded under Education's School-Based Student Drug Testing program, in reducing anabolic steroid abuse among teenagers.
According to experts, there are several gaps in research on the health effects of teenage abuse of anabolic steroids. Experts report that while there is some research that has examined the health effects of anabolic steroid abuse among adults--for example, the harmful effects on the cardiovascular, hormonal, and immune systems--there is a lack of research on these effects among teenagers. There is also a lack of research on the long-term health effects of initiating anabolic steroid abuse during the teenage years. Some health effects of steroid abuse among adults, such as adverse effects on the hormonal system, have been shown to be reversible when the adults have stopped abusing anabolic steroids. Experts point out, however, that it is not known whether this reversibility holds true for teenagers as well. While some experts suggest that anabolic steroid abuse may do more lasting harm to teenagers, due to the complex physical changes unique to adolescence, according to other experts there is no conclusive evidence of potentially permanent health effects. Experts also report that the extent of the psychological effects of anabolic steroid abuse and, in particular, of withdrawal from steroid abuse, is unclear due to limited research. Some experts we consulted noted a need to better inform primary care physicians and pediatricians about anabolic steroid abuse among teenagers, so these providers would be better able to recognize steroid abuse in their patients and initiate early intervention and treatment. We provided a draft of this report to HHS and Education for comment and received technical comments only, which we incorporated into the report as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services and to the Secretary of Education. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. Table 1 lists selected federally funded efforts--including programs, research, and educational and outreach activities--that are designed to focus on preventing or reducing the abuse of anabolic steroids by teenagers (focused efforts), as well as other broader efforts that may address teenage abuse of anabolic steroids as part of the programs' general substance abuse prevention efforts. The list includes programs funded by two departments and the Office of National Drug Control Policy (ONDCP), in the Executive Office of the President. Borowsky, I.W., M. Hogan, and M. Ireland. "Adolescent sexual aggression: risk and protective factors." Pediatrics, vol. 100, no. 6 (1997): e71-e78. Dukarm, C.P., R.S. Byrd, P. Auinger, and M. Weitzman. "Illicit substance use, gender, and the risk of violent behavior among adolescents." Archives of Pediatric & Adolescent Medicine, vol. 150, no. 8 (1996): 797-801. DuRant, R.H., L.G. Escobedo, and G.W. Heath, "Anabolic-steroid use, strength training, and multiple drug use among adolescents in the United States." Pediatrics, vol. 
96, no. 1 (1995): 23-28. Elliot, D., J. Cheong, E.L. Moe, and L. Goldberg. "Cross-sectional study of female students reporting anabolic steroid use." Archives of Pediatric & Adolescent Medicine, vol. 161, no. 6 (2007): 572-577. Elliot, D., and L. Goldberg. "Intervention and prevention of steroid use in adolescents." American Journal of Sports Medicine, vol. 24, no. 6 (1996): S46-S47. Elliot, D.L., L. Goldberg, E.L. Moe, C.A. DeFrancesco, M.B. Durham, and H. Hix-Small. "Preventing substance use and disordered eating: Initial outcomes of the ATHENA (Athletes Targeting Healthy Exercise and Nutrition Alternatives) program." Archives of Pediatric & Adolescent Medicine, vol. 158, no. 11 (2004): 1043-1049. Elliot, D.L., E.L. Moe, L. Goldberg, C.A. DeFrancesco, M.B. Durham, and H. Hix-Small. "Definition and outcome of a curriculum to prevent disordered eating and body-shaping drug use." The Journal of School Health, vol. 76, no. 2 (2006): 67-73. Fritz, M.S., D.P. MacKinnon, J. Williams, L. Goldberg, E.L. Moe, and D.L. Elliot. "Analysis of baseline by treatment interactions in a drug prevention and health promotion program for high school male athletes." Addictive Behaviors, vol. 30, no. 5 (2005): 1001-1005. Goldberg, L., D. Elliot, G.N. Clarke, D.P. MacKinnon, E. Moe, L. Zoref, E. Greffrath, D.J. Miller, and A. Lapin. "Effects of a multidimensional anabolic steroid prevention intervention: the Adolescents Training and Learning to Avoid Steroids (ATLAS) program." JAMA, vol. 276, no. 19 (1996): 1555-1562. Goldberg, L., D. Elliot, G.N. Clarke, D.P. MacKinnon, L. Zoref, E. Moe, C. Green, and S.L. Wolf. "The Adolescent Training and Learning to Avoid Steroids (ATLAS) prevention program: background and results of a model intervention." Archives of Pediatric & Adolescent Medicine, vol. 150 (1996): 713-721. Goldberg, L., D.L. Elliot, D.P. MacKinnon, E. Moe, K.S. Kuehl, L. Nohre, and C.M. Lockwood. "Drug testing athletes to prevent substance abuse: Background and pilot study results of the SATURN (Student Athlete Testing Using Random Notification) study." Journal of Adolescent Health, vol. 32, no. 1 (2003): 16-25. Goldberg, L., D.P. MacKinnon, D.L. Elliot, E.L. Moe, G. Clarke, and J. Cheong. "The Adolescents Training and Learning to Avoid Steroids Program: Preventing drug use and promoting health behaviors." Archives of Pediatric & Adolescent Medicine, vol. 154, no. 4 (2000): 332-338. MacKinnon, D.P., L. Goldberg, G. Clarke, D.L. Elliot, J. Cheong, A. Lapin, E.L. Moe, and J.L. Krull. "Mediating mechanisms in a program to reduce intentions to use anabolic steroids and improve exercise self-efficacy and dietary behavior." Prevention Science, vol. 2, no. 1 (2001): 15-28. Miller, K.E., J.H. Hoffman, G.M. Barnes, D. Sabo, M.J. Melnick, and M.P. Farrell. "Adolescent anabolic steroid use, gender, physical activity, and other problem behaviors." Substance Use & Misuse, vol. 40, no. 11 (2005): 1637-1657. Naylor, A.H., D. Gardner, and L. Zaichkowsky. "Drug use patterns among high school athletes and nonathletes." Adolescence, vol. 36, no. 144 (2001): 627-639. Rich, J.D., C.K. Foisie, C.W. Towe, B.P. Dickinson, M. McKenzie, and C.M. Salas. "Needle exchange program participation by anabolic steroid injectors." Drug and Alcohol Dependence, vol. 56, no. 2 (1999): 157-160.

Laurie Ekstrand, at (202) 512-7114 or [email protected]. In addition to the contact named above, key contributors to this report were Christine Brudevold, Assistant Director; Ellen M. Smith; Julie Thomas; Rasanjali Wickrema; and Krister Friday.
The abuse of anabolic steroids by teenagers--that is, their use without a prescription--is a health concern. Anabolic steroids are synthetic forms of the hormone testosterone that can be taken orally, injected, or rubbed on the skin. Although a 2006 survey funded by the National Institute on Drug Abuse (NIDA) found that less than 3 percent of 12th graders had abused anabolic steroids, it also found that about 40 percent of 12th graders described anabolic steroids as "fairly easy" or "very easy" to get. The abuse of anabolic steroids can cause serious health effects and behavioral changes in teenagers. GAO was asked to examine federally funded efforts to address the abuse of anabolic steroids among teenagers and to review available research on this issue. This report describes (1) federally funded efforts that address teenage abuse of anabolic steroids, (2) available research on teenage abuse of anabolic steroids, and (3) gaps or areas in need of improvement that federal officials and other experts identify in research that addresses teenage anabolic steroid abuse. To do this work, GAO reviewed federal agency materials and published studies identified through a literature review and interviewed federal officials and other experts. There are two categories of federally funded efforts that address teenage abuse of anabolic steroids. Efforts are either designed to focus on preventing the abuse of anabolic steroids among teenagers or are broader and designed to prevent substance abuse in general--which can include abuse of anabolic steroids among teenagers. Two programs that received federal funding during their development and testing, Athletes Training and Learning to Avoid Steroids (ATLAS) and Athletes Targeting Healthy Exercise & Nutrition Alternatives (ATHENA), are designed to focus on preventing or reducing teen abuse of anabolic steroids through use of gender-specific student-led curricula. In addition, there are various research efforts and education and outreach activities that focus on this issue. Two federal grant programs--the Office of National Drug Control Policy's Drug-Free Communities Support program and the Department of Education's School-Based Student Drug Testing program--are designed to support state and local efforts to prevent substance abuse in general and may include anabolic steroid abuse among teenagers as part of the programs' substance abuse prevention efforts. In 2007, about one-quarter of more than 700 Drug-Free Communities Support program grantees reported that they were addressing steroid abuse as one of their program's objectives. Almost half of the 16 studies GAO reviewed identified certain risk factors and behaviors linked to the abuse of anabolic steroids among teenagers. Several of these studies found connections between anabolic steroid abuse and risk factors such as use of other drugs, risky sexual behaviors, and aggressive behaviors. Most of the other studies were assessments of the ATLAS and ATHENA prevention programs and in general suggested that the programs may reduce abuse of anabolic steroids and other drugs among high school athletes immediately following participation in the programs. Experts identified gaps in the research addressing teenage abuse of anabolic steroids. Experts identified a lack of conclusive evidence of the sustained effectiveness over time of available prevention programs, for example at 1 year following participants' completion of the programs. 
Experts also identified gaps in the research on the long-term health effects of initiating anabolic steroid abuse as a teenager--including research on effects that may be particularly harmful in teens--and in research on psychological effects of anabolic steroid abuse.
As we reported in September 2004, improvements in information technology, decreasing data transmission costs, and expanded infrastructure in developing countries have facilitated services offshoring. Offshoring is reflected in services import data because when a company replaces work done domestically with work done overseas, such as in India or China, the services are now being imported from overseas. For example, when a U.S.-based company pays for a service (such as computer and data processing services in India), the payment is recorded as a services import (from India in this example). BEA reports data on trade in services that are frequently associated with offshoring. BEA's trade in services data consist of cross-border transactions between U.S. and foreign residents and comprise five broad categories of services. One of these five categories of services is "other private services," which includes key sectors associated with offshoring under the subcategory of BPT services. In 2003, BPT services accounted for $40.8 billion, or 48 percent, of U.S. imports of "other private services," which totaled $85.8 billion. (See fig. 1.)

U.S. data on BPT services differentiate between affiliated and unaffiliated trade. Affiliated trade occurs between U.S. parent firms and their foreign affiliates and between foreign parent firms and their affiliates in the United States, while unaffiliated trade occurs between U.S. entities and foreigners that neither own nor are owned by the U.S. entity. In 2003, total U.S. imports of affiliated BPT services accounted for approximately $29.9 billion, or about 73 percent of all U.S. imports of these services. BEA does not disaggregate affiliated trade in particular types of services by country because of its concerns about the accuracy and completeness of the data that firms report. Total U.S. imports of unaffiliated BPT services amounted to approximately $11.0 billion in 2003, or about 27 percent of total U.S. imports of BPT services. According to U.S. data, the growth of U.S. trade in BPT services has been rapid. For example, from 1994 to 2003, total unaffiliated U.S. imports of these services more than doubled. In addition, U.S. exports of unaffiliated BPT services almost doubled during the same period.

To report data on trade in BPT services, BEA conducts mandatory quarterly, annual, and 5-year benchmark surveys of firms in the United States. In administering its services surveys, BEA seeks to collect information from the entire universe of firms with transactions in BPT services above certain threshold levels for the period covered by each survey. The mailing lists for the surveys include firms in the United States that have previously filed a survey and other firms that BEA believes may have had transactions in the services covered by the survey. The mailing lists of firms receiving surveys are derived, in part, from U.S. government sources, industry associations, business directories, and various periodicals. Firms receiving the surveys are required to report transactions above a certain threshold value, which BEA believes, in theory, captures virtually the entire universe of transactions in the services covered by its surveys. Those firms with transactions falling below the threshold value are exempt from reporting data by type of service, but they are asked to voluntarily provide estimates of the aggregate value of their transactions for all services covered by the survey.
The trade data that BEA produces help government officials, business decision makers, researchers, and the American public to follow and understand the performance of the U.S. economy. For example, analysts and policy makers use U.S. trade data to assess the impact of international trade on the U.S. balance of payments and the overall economy. In addition, U.S. trade data are used by trade policy officials to negotiate international trade agreements. U.S. data show a significantly smaller volume of trade in BPT services between India and the United States than Indian data show. BEA data on U.S. imports of unaffiliated BPT services from India indicate that U.S. firms import only a small fraction of the total that India reported in exports of similar services to the United States. In addition, this gap has grown between 2002 and 2003. This gap does not exist just for U.S. and Indian data. A similar gap also exists between other developed countries' import data and Indian export data. BEA data show a rapid increase in U.S. imports of unaffiliated BPT services from India. For 2002, the total unaffiliated U.S. imports of BPT services from India totaled approximately $240 million. For 2003, the total unaffiliated U.S. imports of BPT services from India increased to about $420 million. India reports exports to the United States of similar services of about $6.5 billion for 2002 and $8.7 billion for 2003. Thus, the value of the gap between U.S. and Indian data in 2002 was approximately $6.2 billion and, in 2003, was about $8.3 billion, an increase of about one-third. (See fig. 2.) RBI, which is India's central bank, is responsible for reporting official Indian data on trade in services. However, RBI data on trade in services incorporate the data collected by India's primary information technology association--the National Association of Software and Service Companies (NASSCOM). To improve the completeness of the data NASSCOM provides to RBI, NASSCOM includes data on the software services exports it receives from an Indian government program, the Software Technology Parks of India (STPI). While RBI does not provide country-specific data on India's exports of services to the United States, NASSCOM's data do provide a country-specific breakdown. Thus, the data cited above for India come from NASSCOM. According to a recent RBI report, a technical group recommended in 2003 that RBI compile data on software and information technology exports through quarterly surveys, and through a comprehensive survey to be conducted every 3 years. The first of these studies was released in September 2005, as our report was being finalized, and provides data on Indian exports of computer services for 2002. The 2005 RBI report showed that India reported approximately $4.3 billion in computer services exports to the United States and Canada for 2002 (2003 data have not yet been provided). Although RBI's report did not provide an estimate of the U.S. share of these exports, on the basis of NASSCOM's estimate that 80 to 85 percent of exports to North America were destined for the United States in 2002, we estimate that India exported approximately $3.5 billion in computer services to the United States. Those examining trends in offshoring often compare U.S. and Indian data series; however, there are at least five factors that make this comparison difficult and affect the difference between U.S. and Indian data. 
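The gap figures above follow directly from the rounded values reported by BEA and NASSCOM, and the estimated U.S. share follows from applying NASSCOM's 80 to 85 percent assumption to RBI's North America total. Before turning to those factors, the following minimal sketch in Python--not a BEA or RBI calculation; the variable names and rounding are ours--reproduces the arithmetic:

    # Minimal sketch, not BEA or RBI code: reproduces the gap arithmetic cited above.
    us_imports = {2002: 0.24, 2003: 0.42}    # U.S.-reported unaffiliated BPT imports from India ($ billions)
    india_exports = {2002: 6.5, 2003: 8.7}   # India-reported (NASSCOM) exports of similar services to the U.S. ($ billions)

    gaps = {year: round(india_exports[year] - us_imports[year], 2) for year in us_imports}
    print(gaps)                              # {2002: 6.26, 2003: 8.28} -- roughly $6.2 billion and $8.3 billion

    growth = (gaps[2003] - gaps[2002]) / gaps[2002]
    print(f"Growth in the gap, 2002 to 2003: {growth:.0%}")    # about 32 percent, i.e., roughly one-third

    # RBI's 2005 study reported about $4.3 billion in 2002 computer services exports to the
    # United States and Canada; NASSCOM estimates 80 to 85 percent went to the United States.
    rbi_north_america = 4.3
    low, high = rbi_north_america * 0.80, rbi_north_america * 0.85
    print(f"Estimated U.S. share: ${low:.1f} billion to ${high:.1f} billion")      # about $3.5 billion

The five factors discussed next help explain why the two data series are not directly comparable.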
These factors relate to (1) the treatment of services provided by foreign temporary workers in the United States; (2) the definition of some services, such as computer programs embedded in goods and certain information technology-enabled services; (3) the treatment of transactions between firms in India and the overseas offices of U.S. firms; (4) the reporting of country-specific data on trade in affiliated services; and (5) the sources of data and other methodological differences in the collection of services trade data. According to U.S. and Indian officials, U.S. and Indian data differ in their treatment of salaries paid to certain temporary foreign workers providing services to clients in the United States. U.S. data do not include such salaries as cross-border trade in services. The United States only includes the salaries paid to temporary foreign workers who have been in the United States less than 1 year and are not on the payrolls of firms in the United States. However, Indian data do include, as Indian exports, the value of services provided by Indian workers employed in the United States for more than 1 year, according to Indian officials. The U.S. approach accords with the international standards of IMF. According to BEA and international standards, cross-border trade in services occurs between residents of a country and nonresidents, or "foreigners," and residency of a temporary foreign worker employed abroad is based, in part, on the worker's length of stay in the country. Therefore, according to these standards, if a temporary foreign worker stays or intends to stay in the United States for 1 year or more, that worker is considered a U.S. resident, and the value of the work performed is not included in U.S. import data. The treatment of services provided by temporary foreign workers in the United States is likely a significant factor contributing to the difference between U.S. and Indian data, according to Indian officials. Some Indian officials estimated that in past years, approximately 40 percent of India's exports to the United States of services corresponding to BPT services were delivered by temporary Indian workers in the United States. For example, for 2002, RBI found that approximately 47 percent of India's global exports of computer services occurred through the on-site delivery of services by temporary Indian workers. U.S. and Indian data differ, in part, due to differences in how both countries count services trade. India counts as trade in services certain transactions in software that are classified as trade in goods in U.S. data. For example, Indian data on trade in services include software embedded on computer hardware, which the United States classifies as trade in goods. Consistent with internationally recommended standards, the United States does not separate the value of embedded software that is physically shipped to or from the United States from the overall value of the media or computer in which it is installed. Thus, the value of such software is not recorded as trade in services but is included in the value of the physical media and hardware--which are counted as trade in goods in U.S. data. We were not able to determine the extent to which this factor contributes to the difference in U.S. and Indian data because we found no estimates of the proportion of embedded software in Indian data on services exports to the United States. 
Indian officials stated that the difference in the treatment of embedded software likely does not significantly contribute to the difference in data because India exports a relatively low value of embedded software. For example, according to Indian officials, the portion of India's global services exports delivered through physical media and hardware accounts for 10 to 15 percent of the total value of India-reported exports of services corresponding to BPT services.

U.S. and Indian data also differ in how they define services in their respective data series. Unlike BEA, RBI and NASSCOM do not report data under the category of BPT services. RBI officials stated that RBI reports trade data on services similar to BPT services under the category of Software Services. RBI does not report a breakdown of its data on software services into subcategories of services. According to a NASSCOM official, NASSCOM classifies its trade data on services that most closely correspond to BPT services under Information Technology and Information Technology-Enabled Services (IT-ITES). The subcategories of services under this classification do not directly correspond to the subcategories of BPT services, but are similar. For example, under its IT-ITES classification, NASSCOM reports data on IT Services and Software, while BPT services include computer and data processing, and database and other information services. However, NASSCOM includes data on certain information technology-enabled services, such as certain financial services, that are not included in BEA's definition of BPT services, but are recorded separately. Although these categories are roughly comparable, a reconciliation of these subcategories has not yet been done. Thus, we were not able to determine the extent to which these definitional differences contribute to the difference between U.S. and Indian data.

The treatment of services involving the overseas offices of U.S. firms by BEA and India is another factor explaining some of the difference between U.S. and Indian data. Unlike the United States, India counts the sales of services from firms in India to U.S.-owned firms outside the United States as exports to the United States. U.S. data do not count such sales as U.S. imports of services from India, because BEA considers the overseas offices of U.S. firms to be residents of the countries where they are located rather than residents of the country of the firm's owners. The U.S. approach is consistent with international standards. U.S. and Indian officials could not provide us with an estimate of the extent to which the treatment of transactions involving the overseas offices of U.S.-owned firms contributes to the difference in U.S. and Indian data. However, one high-level Indian official stated that it is likely a significant factor.

The reporting of affiliated trade in services differs in U.S. and Indian data. BEA reports country-specific data only for unaffiliated U.S. imports of BPT services, while Indian data include both affiliated and unaffiliated trade in services but do not separate the two. BEA reports detailed data only for unaffiliated trade because it has concerns about the accuracy and completeness of the data that firms report about affiliated trade in BPT services by country. For example, multinational firms with global offices may find it difficult to establish where, between whom, and what type of services have been transacted, and to report these data along national lines to a statistical agency.
BEA does collect data on overall affiliated services trade, but it reports only the total value across all countries due to its concerns about the reliability of how companies are allocating these totals to specific countries. In addition, due to concerns over the reporting burden on U.S. companies, BEA collects less detailed data on affiliated transactions than on unaffiliated transactions. U.S. data on overall affiliated trade across all countries show that a significant majority of total U.S. imports of BPT services take the form of trade between parents and affiliates. For example, for 2003, approximately three-quarters of all U.S. imports of BPT services--about $29.9 billion--represented trade within multinational firms. If U.S.-Indian trade in these services reflects this overall share of trade through affiliates, then unreported affiliated trade with India may be much larger than the unaffiliated trade that is reported. Therefore, the lack of reported data on affiliated imports of BPT services contributes to the difference in data.

There are differences in the sources of data the United States and India use to collect data on trade in services, which may contribute to overcounting or undercounting of services trade. While both BEA and NASSCOM prepare estimates of cross-border trade in services by surveying qualifying firms, U.S. and Indian data differ in the universe of such firms covered by their survey methodologies. The universe of firms in India exporting services is relatively easily identified because these firms have an incentive to report data on their exports of services and tend to be concentrated in certain industries. For example, firms exporting software services are required to report export data to the government of India's STPI program. STPI requires firms to report these data in order to comply with India's foreign exchange controls and to qualify for certain tax incentives and infrastructure benefits. To improve the completeness of its own survey data from its member firms, NASSCOM incorporates information on other exporters collected under the STPI program prior to providing these data to RBI. In addition, services exporting firms tend to be concentrated in certain industries. For instance, according to Indian officials, NASSCOM surveys its member firms in India to collect the annual dollar value of these firms' exports. The member firms that NASSCOM surveys number approximately 900 and, according to a NASSCOM official, these firms contribute a large share of India's total exports of these services. In addition, RBI has begun its own comprehensive survey of companies, which, according to RBI, covered all of the identified companies engaged in software and IT services export activities. RBI identified these companies on the basis of lists provided by NASSCOM, STPI, and the Electronics and Computer Software Export Promotion Council (ESC). In contrast to how India identifies firms exporting services, BEA does not have an easily available list of services importers. Instead, it must identify firms from public sources. BEA acknowledges that its survey methodology may contribute to the undercounting of U.S. imports of services due, in part, to the difficulty it faces in identifying the universe of services importers. The firms in the United States that BEA surveys to estimate U.S. imports are in many different industries and number in the thousands. Thus, BEA notes that it is difficult to establish and maintain a comprehensive mailing list for all U.S.
firms importing services from foreign sources, particularly if the group of firms that import services changes substantially from year to year. In addition, maintaining accurate coverage using surveys is particularly difficult when there is rapid growth in the activity, as is the case with BPT services imports from India. Under BEA regulations, BEA exempts smaller importers from reporting their imports. Instead, it estimates these imports on the basis of a sample. If the value of smaller transactions is higher than BEA assumes in its estimation procedures, then imports of services would be understated. BEA, therefore, may undercount the total value of U.S. imports of services. The data collection entities--BEA and NASSCOM--also differ significantly in mission and scope. BEA is the U.S. agency charged with collecting, analyzing, and reporting official statistics on a broad range of U.S. imports and exports of services. BEA is regarded as a leading statistical organization, and it provides both statistical concepts and best practices to other countries and statistical organizations worldwide. NASSCOM is not a government statistical agency. It is a private trade association that represents the interests of the software and services industry in India, and data collection is but one element of a broader mission that focuses on representing that industry. Recently, RBI has recognized a need to reexamine its current methodology for collecting software export data and is using a methodology for collecting services data that accords with IMF standards. As a U.S. government agency, we were not able to fully review India's methodologies, but we did further examine in the next section of this report the challenges BEA faces in collecting services statistics.

BEA faces challenges in collecting services import data, including identifying the full universe of services importers. To test its survey coverage, we provided BEA with lists of firms that we identified from public sources as likely importing BPT services from India. Although the BEA mailing lists included most of the firms we identified, they did not include all of these firms. In addition, BEA may be undercounting imports because it is challenging to identify all of the applicable surveys to send to firms. BEA also has not always received quality survey responses from firms. BEA has taken action to improve survey coverage and responses through outreach to survey respondents and by attempting to collaborate with other federal agencies, but it has not been able to access data that could assist in identifying the universe of firms importing services. Services offshoring presents its own challenges for statistical agencies. As previously discussed, identifying services importers becomes difficult if the group of firms and individuals importing services changes over time, or if there is a rapid increase in services imports. In the case of BPT services, both U.S. and Indian data show a rapid increase in India's exports of these services to the United States, and BEA may be undercounting U.S. firms importing such services from India due to this growth. (See fig. 3.) BEA acknowledges that it is able to identify a higher proportion of U.S. exporters than U.S. importers. This is because exporters tend to be large firms providing one particular type of service and are concentrated in certain industries, while importers vary in size and industry affiliation.
Thus, BEA officials expressed concern that they are not able to identify and survey small firms that import BPT services infrequently, and are potentially undercounting U.S. trade in these services. To test for potential undercounting of U.S. imports, we provided BEA with lists of firms that we identified through publicly available sources as likely to be importing BPT services from India. BEA then (1) reviewed its mailing lists of firms that were sent surveys to verify that it had previously identified and surveyed these firms and (2) verified whether the firms we identified reported imports from India. Table 1 shows the following: BEA had included in its mailing lists 87 of the 104 firms we identified as likely importing BPT services from India; thus, BEA did not send surveys to 17 of these firms. After further analysis, BEA added 13 of these firms to its mailing lists and has sent them surveys, thus improving the universe of services importers. Of the 66 affiliated firms that received surveys, 48 firms received the quarterly survey for affiliated imports; thus, BEA did not send 18 affiliated firms this quarterly survey, although they received other surveys. Of the 21 unaffiliated firms that received surveys, 6 received the quarterly survey for unaffiliated imports; thus, BEA did not send 15 unaffiliated firms this quarterly survey, although they received other surveys. BEA may miss some BPT services imports because it is difficult to identify the total number of surveys that apply to all of the services transactions for which each firm was qualified. On the basis of the review of our lists, it appears that some of the firms that BEA identified in at least one of its comprehensive mailing lists were not on the mailing lists for other surveys that we expected. These firms likely had transactions covered by surveys other than the one they received. For example, several companies we identified as having an affiliate office in India did not receive one of the surveys for affiliated transactions, although these firms received a survey for unaffiliated transactions. With respect to BEA's effort to verify whether firms that we identified actually reported imports from India, of the 51 firms responding to the quarterly surveys, 15 firms indicated imports from India. Thus, 15 of the 104 firms we identified on the basis of public-source data as likely importing BPT services from India, reported those imports to BEA. High-level BEA officials indicated that it is possible that companies are not reporting country information because they fall below the survey exemption levels and, thus, were not required to provide such detailed data to BEA. BEA requests firms falling below survey exemption levels to voluntarily report aggregate transactions for all countries combined, without a country- specific breakdown. While these results cannot be generalized, they confirm the challenges of collecting services import data. However, they do not provide an indication of the magnitude or extent of these challenges. In addition, our lists of firms were based on a review of multiple sources of publicly available information. Without directly surveying each firm, however, it is not possible to confirm that they actually purchased BPT services from India. BEA is addressing concerns related to the identification of U.S. importers, the undercounting of services, and the administration of its surveys. 
For example, BEA contracted with a private firm to undertake an external review of its data sources and methods of identifying these services importers. The review will examine the extent of undercounting in both affiliated and unaffiliated services transactions, including the possible sources of undercounting, and any additional methods or sources of information that will improve survey coverage. The goals of this effort include identifying the extent of qualified firms that are not currently on the survey mailing lists and improving the estimates of international transactions. BEA expects the results of this review early in fiscal year 2006. BEA also has made efforts to ensure that firms receive the surveys for which they are qualified. BEA routinely sends surveys to firms that may be exempt from reporting in order to make a determination that they are still exempt. In addition, firms having transactions in services not covered in the surveys they receive are required to request additional surveys from BEA. In order to report data on trade in services, BEA needs to receive accurate and complete survey responses. However, BEA notes that the information it receives from firms on their affiliated imports of particular types of services has not proved sufficiently reliable to support the release of country-level estimates. As previously discussed, BEA is able to report overall affiliated trade for specific countries, but it is not able to report BPT trade for specific countries. This is because BEA has concerns over the quality of responses it receives from firms when they allocate affiliated imports to detailed types of services. Global firms may have difficulty accurately attributing services exported to the United States when their operations are spread across multiple countries. In addition, a high-level BEA official said that firms may not fully report all of the affiliated transactions that they should report. This official noted that these reporting difficulties may reflect business record-keeping practices, which are designed to meet financial reporting requirements rather than the needs of government surveys. In order to address these challenges, BEA is taking action to improve the quality of survey responses and to overcome the difficulty of reporting detailed data on affiliated imports of services. For example, an examination of BEA's data on affiliated transactions is a component of BEA's contract with a private firm that is conducting an external review of BEA's data sources and methods of identifying services importers. In addition, BEA has requested Census to conduct an external review of its survey forms and instructions, and to make recommendations that would improve clarity and promote accurate reporting. BEA is also performing its own review of its surveys to determine the clarity of survey instructions and is providing training to survey recipients on how to complete the surveys accurately. In addition, to improve the quality of its data on affiliated services imports, including affiliated imports of BPT services, BEA is considering collecting data on both affiliated and unaffiliated transactions on the same survey form. BEA is also considering expanding the types of affiliated BPT services for which it requests data to match the detailed data it collects on unaffiliated imports of BPT services.
BEA is currently negotiating access to data from other federal agencies to expand its existing sources of data and to improve its survey coverage, but it has so far been unable to obtain these data. According to BEA officials, other federal agencies, such as Census, possess data that could assist BEA in preparing its estimates of trade in services, including information on firms in the United States that could be importing services. For example, Census surveys firms to collect data on firms' business expenses, which include the purchase of BPT services. These surveys may be useful for identifying importers because large purchasers of services may also be importing these services. The survey data that Census currently collects are not directly useful for BEA because the data on business expenses do not separate domestic from international expenses and do not distinguish between affiliated and unaffiliated transactions. However, these surveys would give BEA the names and addresses of potential services importers. In addition, BEA could potentially request that Census add questions to one or more of the surveys that Census administers in order to identify services importers. However, BEA currently faces legal restrictions in gaining access to data utilized by Census. Although federal laws allow such data sharing between Census and BEA, BEA is generally restricted from gaining access to federal tax information that Census obtains from the Internal Revenue Service. According to BEA officials, BEA is negotiating with Census and the Internal Revenue Service to gain access to sources of data to improve its mailing lists. The large difference between U.S. and Indian data on BPT services makes the analysis of the extent of offshoring more difficult. Some of this difference in data can be attributed to varying definitions of BPT services, but some also appears to be due to incomplete U.S. data. BEA has been seeking various ways to improve the overall quality of U.S. services trade data, but our test of whether BEA had identified likely importers of BPT services indicated that it was not identifying all U.S. importers of these services. Given the importance of this category of data in understanding the extent of offshoring of services, a subject of continuing public and congressional concern, we believe that additional efforts to strengthen the quality of U.S. services data are merited. We are recommending that the Secretary of Commerce direct BEA to systematically expand its sources of information for identifying firms to survey. BEA should consider ways to improve its identification of the appropriate survey forms to send to firms and the information requested about services imports, particularly with regard to affiliated imports. We also recommend that the Secretary direct BEA to pursue additional company information from previous Census surveys and consider requesting Census to add questions to future surveys to help identify services importers. The Department of Commerce provided written comments on the draft report, which are reproduced in appendix II. Commerce concurred with our recommendation that BEA should strive to improve its coverage of services imports. In particular, Commerce agreed that BEA should pursue additional company information from Census. Commerce also provided technical comments, which we incorporated into the report as appropriate.
Following the receipt of agency comments from Commerce, RBI publicly released a report outlining a new methodology to compile services export data in accordance with IMF standards. Although RBI's new survey methodology conforms more closely to IMF standards for defining international transactions in services, differences between U.S. and Indian data remain due to a variety of factors we discuss in this report. For example, the RBI report acknowledges that Indian data include not only exports of computer-related services but also exports of ITES. Since the primary objective of RBI's survey was to collect data on software exports in conformity with IMF's definition of computer services, RBI's survey data exclude data from companies exclusively exporting ITES, and include only data on computer services. However, RBI's report does not indicate that RBI's survey methodology addresses other factors contributing to the difference between U.S. and Indian data. For example, it appears that RBI's survey data include the earnings of temporary Indian workers employed abroad without taking into account their length of stay or intention to remain abroad. RBI estimated this on-site work to account for approximately 47 percent of India's total worldwide exports, although some portion of this total may include services provided by temporary Indian workers employed abroad for over 1 year. In addition, RBI's report does not indicate that sales of embedded software are excluded from RBI's survey data. We are providing copies of this report to interested congressional committees and the Secretary of Commerce. Copies will be available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. If you or your staff have any questions about this report, please contact Mr. Yager at (202) 512-4128. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contacts and staff acknowledgments are listed in appendix III. This report discusses (1) the extent of the difference between U.S. and Indian data on trade in business, professional, and technical (BPT) services, (2) the factors that explain the difference between U.S. data on imports of BPT services and India's data on exports of those same services, and (3) the challenges that the United States has faced in collecting services data. To obtain information on the extent of the difference between U.S. and Indian services trade data, we analyzed and compared U.S. and Indian data and interviewed U.S. and Indian government officials from the relevant agencies, including the U.S. Bureau of Economic Analysis (BEA) and the Reserve Bank of India (RBI). RBI relies on a trade association, the National Association of Software and Service Companies (NASSCOM), to collect data on these services. Although we reviewed NASSCOM's survey form and discussed with a NASSCOM official the collection of its statistics, NASSCOM did not provide us with its methodology for ensuring the reliability of its data. Therefore, we were not able to independently assess the quality and consistency of its data. However, for the purposes of this report, we found these data to be sufficiently reliable for reporting the difference in the official U.S. and Indian trade data in BPT services. To determine the factors that explain the difference in U.S.
and Indian trade data, we reviewed official methodologies, interviewed relevant officials, and conducted a search of available literature. We reviewed documentation and technical notes from BEA and RBI to determine the U.S. and Indian methodologies for collecting and reporting trade in services data and to assess the limitations and reliability of various data series. We discussed these topics with BEA officials. In addition, we traveled to India to interview RBI officials and NASSCOM representatives and to obtain documentation on the collection and limitations of Indian data. We also interviewed a range of U.S. and Indian businesses in India that supply trade data to the United States and India to determine how they report data. We performed a literature search and obtained information from the Brookings Institution, the Institute for International Economics, and the Organization for Economic Co-operation and Development (OECD). To determine the international standards for collecting and reporting trade-in-services data, we reviewed relevant documentation from international organizations, including the International Monetary Fund and the United Nations. In September 2005, as our report was being finalized, RBI released a report entitled "Computer Services Exports from India: 2002-03," which discusses the methodology and results of a comprehensive survey that RBI conducted to collect data on India's "computer services" exports for 2002 in conformity with the International Monetary Fund's Balance of Payments Manual, 5th edition (1993). The RBI report provides information about RBI's survey methodology, including the number and types of companies surveyed and the information sought through the survey. In addition, the report outlines recommendations for RBI to collect data on software and information technology exports through representative quarterly surveys and a comprehensive survey every 3 years. We incorporated this additional information from the RBI report where appropriate. To examine the coverage of BEA's surveys for collecting trade-in-services data, we supplied BEA with lists of U.S.-based companies we identified as likely importers of services from India to compare with its mailing lists. We developed two lists. The first list included the names and addresses of companies in the United States with affiliate offices in India that are likely importing BPT services from India through affiliates. The second list included the names and addresses of companies that are likely purchasers of services through unaffiliated parties in India. We identified these companies through publicly available sources, including public media, company filings with the Securities and Exchange Commission, annual reports of companies, the list of NASSCOM member companies, and lists of companies compiled by information technology interest groups. Our lists of firms are not necessarily representative of all U.S. firms importing from India, and we do not generalize our results. We asked BEA to compare these lists with its mailing lists for the affiliated and unaffiliated surveys to identify how many companies it was surveying. We requested that BEA provide us with the number of companies from both lists that BEA was able to identify and not identify on its corresponding mailing lists. For companies that received a survey, we asked BEA to identify the number of these companies that responded to the survey and provided information on purchases from India.
For companies that were not on any mailing list, BEA was asked to identify (1) whether the firms were excluded from its mailing list because they were assumed to be below exemption levels for the particular survey, (2) whether the firms are on BEA's current mailing list for the particular survey, and (3) whether the firms are listed on other BEA mailing lists. We discussed the results of this review with BEA officials. To assess the challenges the United States has faced in collecting and reporting data on trade in services, we reviewed relevant BEA documentation and interviewed BEA officials. We reviewed BEA documentation to determine BEA's data limitations and to assess the challenges BEA faces in collecting and reporting U.S. data on trade in services. To determine the challenges of expanding BEA's survey coverage through interagency data sharing we interviewed officials at BEA and the U.S. Census Bureau (Census), and we reviewed Census documentation. We also interviewed BEA officials to discuss these identified challenges and to determine the plans and actions BEA has taken to improve the quality of U.S. data. Finally, we interviewed Internal Revenue Service (IRS) officials to gain an understanding of IRS policy on restricting access to federal tax information that the IRS provides to Census. We performed our work from March 2005 through September 2005 in accordance with generally accepted government auditing standards. In addition to the person named above, Virginia Hughes, Bradley Hunt, Ernie Jackson, Sona Kalapura, Judith Knepper, Robert Parker, Cheryl Peterson, and Tim Wedding made major contributions to this report.
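The list comparison described above is, at bottom, a matter of checking one set of firms against another and computing simple coverage rates. The following Python sketch is purely illustrative: it reproduces that arithmetic using the counts reported earlier in this report, and the variable names are ours; no actual firm names or BEA mailing-list data are involved.

# Illustrative sketch only: coverage arithmetic from the list-comparison test,
# using the counts reported in this report. No real firm data are used.

firms_identified_by_gao = 104   # likely importers found in public sources
firms_on_bea_lists = 87         # of those, already on a BEA mailing list
firms_responding = 51           # responded to a quarterly survey
firms_reporting_india = 15      # reported BPT imports from India

firms_not_surveyed = firms_identified_by_gao - firms_on_bea_lists
coverage_rate = firms_on_bea_lists / firms_identified_by_gao
india_reporting_rate = firms_reporting_india / firms_identified_by_gao

print(f"Firms missing from BEA mailing lists: {firms_not_surveyed}")             # 17
print(f"Mailing-list coverage of GAO's list: {coverage_rate:.0%}")               # 84%
print(f"Identified firms reporting India imports: {india_reporting_rate:.0%}")   # 14%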
Trade in business, professional, and technical (BPT) services associated with offshoring needs to be accurately tracked, but a gap exists between U.S. and Indian data. The extent of and reasons for this gap are important to understand in order to address questions about the magnitude of offshoring and to analyze its future development. Under the authority of the Comptroller General of the United States, and as part of a body of GAO work on the issue of offshoring of services, this report (1) describes the extent of the gap between U.S. and Indian data, (2) identifies factors that contribute to the difference between the two countries' data, and (3) examines the challenges the United States has faced in collecting services trade data. GAO has addressed this report to the congressional committees of jurisdiction. The gap between U.S. and Indian data on trade in BPT services is significant. For example, data show that for 2003, the United States reported $420 million in unaffiliated imports of BPT services from India, while India reported approximately $8.7 billion in exports of affiliated and unaffiliated BPT services to the United States. At least five definitional and methodological factors contribute to the difference between U.S. and Indian data on BPT services. First, India and the United States follow different practices in accounting for the earnings of temporary Indian workers residing in the United States. Second, India defines certain services, such as software embedded on computer hardware, differently than the United States. Third, India and the United States follow different practices for counting sales by India to U.S.-owned firms located outside of the United States. The United States follows International Monetary Fund standards for each of these factors. Fourth, BEA does not report country-specific data for particular types of services due to concerns about the quality of responses it receives from firms when they allocate their affiliated imports to detailed types of services. As a result, U.S. data on BPT services include only unaffiliated imports from India, while Indian data include both affiliated and unaffiliated exports. Fifth, other differences, such as identifying all services importers, may also contribute to the data gap. The U.S. Bureau of Economic Analysis (BEA) has experienced challenges in identifying all U.S. services importers and obtaining quality survey data from importers. To test BEA's survey coverage, GAO provided BEA with lists of firms identified from public sources as likely importers of BPT services from India. The results of this test showed that some services importers were not included in BEA's mailing lists. However, BEA has taken action to address these challenges, including collaborating with other federal agencies, such as the U.S. Census Bureau and the Internal Revenue Service, to better identify firms to survey. However, data-sharing restrictions hamper BEA's efforts.
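To give a rough sense of scale for the affiliated-trade factor discussed above, the following illustrative Python sketch applies the economy-wide affiliated share of U.S. BPT services imports (about three-quarters) to U.S.-India trade. That extrapolation is an assumption this report raises as a possibility but does not confirm, so the figures below should be read as an illustration, not an estimate.

# Rough, illustrative arithmetic only; the 75 percent affiliated share is an
# economy-wide figure and may not hold for U.S.-India trade specifically.

unaffiliated_imports = 0.42   # $ billions, U.S.-reported imports from India, 2003
indian_exports = 8.7          # $ billions, India-reported exports to the U.S., 2003
affiliated_share = 0.75       # share of all U.S. BPT imports that is affiliated

implied_total = unaffiliated_imports / (1 - affiliated_share)
implied_affiliated = implied_total - unaffiliated_imports
unexplained_gap = indian_exports - implied_total

print(f"Implied total U.S. imports from India: ${implied_total:.2f} billion")        # ~1.68
print(f"Implied unreported affiliated imports: ${implied_affiliated:.2f} billion")   # ~1.26
print(f"Gap remaining relative to Indian data: ${unexplained_gap:.2f} billion")      # ~7.02

Even under this assumption, most of the gap relative to Indian data would remain unexplained, which is consistent with the finding that several definitional and methodological factors contribute to the difference.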
Recent advances in aircraft technology, including advanced collision avoidance and flight management systems, and new automated tools for air traffic controllers enable a shift from air traffic control to collaborative air traffic management. Free flight, a key component of air traffic management, will provide pilots with more flexibility, under certain conditions, to fly more direct routes from city to city. Currently, pilots primarily fly fixed routes--the aerial equivalent of the interstate highway system--that often are less direct because pilots are dependent on ground- based navigational aids. Through free flight, FAA hopes to increase the capacity, efficiency, and safety of our nation's airspace system to meet the growing demand for air transportation as well as enhance the controllers' productivity. The aviation industry, especially the airlines, is seeking to shorten flight times and reduce fuel consumption. According to FAA's preliminary estimates, the benefits to the flying public and the aviation industry could reach into the billions of dollars when the program is fully operational. In 1998, FAA and the aviation community agreed to a phased approach for implementing the free flight program, established a schedule for phase 1, and created a special program office to manage this phase. During phase 1, which FAA plans to complete by the end of calendar year 2002, the agency has been deploying five new technologies to a limited number of locations and measuring their benefits. Figure 1 shows how these five technologies--Surface Movement Advisor (SMA), User Request Evaluation Tool (URET), Traffic Management Advisor (TMA), Collaborative Decision Making (CDM), and passive Final Approach Spacing Tool (pFAST)--operate to help manage air traffic. According to FAA, SMA and CDM have been deployed at all phase 1 sites on or ahead of schedule. Table 1 shows FAA's actual and planned deployment dates for URET, TMA, and pFAST. To measure whether the free flight tools will increase system capacity and efficiency, in phase 1, FAA has been collecting data for the year prior to deployment and initially planned to collect this information for the year after deployment before making a decision about moving forward. In December 1999, at the urging of the aviation community, FAA accelerated its funding request to enable it to complete the next phase of the free flight program by 2005--2 years ahead of schedule. During this second phase, FAA plans to deploy some of the tools at additional locations and colocate some of them at selected facilities. FAA also plans to conduct research on enhancements to these tools and incorporate them when they are sufficiently mature. FAA plans to make an investment decision in March 2002 about whether to proceed to phase 2. However, by that date, the last site for URET will have been operational for only 1 month, thus not allowing the agency to collect data for 1 year after deployment for that site before deciding to move forward. (See table 1.) FAA officials told us that because the preliminary data showed that the benefits were occurring more rapidly than anticipated, they believe it is unnecessary to wait for the results from the evaluation plan to make a decision about moving forward. To help airports achieve their maximum capacity for arrivals through free flight, FAA's controllers will undergo a major cultural change in how they will manage the flow of air traffic over a fixed point (known as metering). 
Under the commonly used method, controllers use "distance" to meter aircraft. With the introduction of TMA, controllers will have to adapt to using "time" to meter aircraft. The major technical challenge with deploying the free flight tools is making URET work with FAA's other air traffic control systems. While FAA does not think this challenge is insurmountable, we believe it is important for FAA to resolve this issue to fully realize URET's benefit of increasing controller productivity. Initially, controllers had expressed concern about how often they could rely on TMA to provide the data needed to effectively manage the flow of traffic. However, according to FAA and subsequent conversations with controllers, this problem was corrected in May 2001 when the agency upgraded TMA software and deployed the new version to all sites. To FAA's credit, it has decided not to deploy pFAST to additional facilities in phase 2 because of technical difficulties associated with customizing the tool to meet the specific needs of each facility, designing other automated systems that are needed to make it work, and affordability considerations. Ensuring that URET is compatible with other major air traffic control systems is a crucial technical challenge because this requires FAA to integrate software changes among multiple systems. Among these systems are FAA's HOST, Display System Replacement, and local communications networks. Compounding this challenge, FAA has been simultaneously upgrading these systems' software to increase their capabilities. How well URET will work with these systems is unknown because FAA has yet to test this tool with them. FAA has developed the software needed for integration and has begun preliminary testing. Although problems have been uncovered during testing, FAA has indicated that these problems should not preclude URET's continued deployment. By the end of August 2001, FAA expects to complete testing of URET's initial software in conjunction with the agency's other major air traffic control systems. FAA acknowledges that further testing might uncover the need for additional software modifications, which could increase costs above FAA's current estimate for this tool's software development and could cause the agency to defer capabilities planned for phase 1. Ensuring URET's compatibility with other air traffic control systems is important to fully realize its benefits of increasing controllers' productivity. URET is used in facilities that control air traffic at high altitudes and will help associate and lead controllers work together to safely separate aircraft. Traditionally, an associate controller has used the data on aircraft positions provided by the HOST computer and displayed on the Display System Replacement workstation to assess whether a potential conflict between aircraft exists. If so, an associate controller would annotate the paper flight strips containing information on their flights and forward these paper flight strips to the lead controller who would use the Display System Replacement workstation to enter flight plan amendments into the HOST. URET users we spoke with said that this traditional approach is a labor-intensive process, requiring over 30 keystrokes. With URET, an associate controller can rely on this tool to automatically search for potential conflicts between aircraft, which are then displayed. 
URET also helps an associate controller resolve a potential conflict by automatically calculating the implications of any change prior to amending the flight plan directly into the HOST. According to the users we spoke with, these amendments require only three keystrokes with URET. FAA, controllers, maintenance technicians, the aviation community, and other stakeholders agree on the importance of using a phased approach to implementing the free flight program. This approach gives FAA the opportunity to gradually deploy the new technologies at selected facilities and gives users the chance to gain operational experience before the agency fully commits to the free flight tools. It basically follows the "build a little, test a little, field a little" approach that we have endorsed on numerous occasions. To FAA's credit, the agency has appropriately used this approach to determine that it will not deploy pFAST in phase 2. We also agree with major stakeholders that adapting to the program's tools poses the greatest operational challenge because they will change the roles and responsibilities of the controllers and others involved in air traffic services. However, the success of free flight will rely on agencywide cultural changes, especially with controllers, who trust their own judgment more than some of FAA's new technologies, particularly because the agency's prior efforts to deploy them have had significant problems. Without training in these new tools, air traffic controllers would be hampered in fulfilling their new roles and responsibilities. Another major challenge is effectively communicating TMA's capabilities to users. Because FAA has been deferring and changing capabilities, it has been difficult for controllers to know what to expect from this tool and when, and for FAA to ensure that it provides all the capabilities that had been agreed to when FAA approved the investment for phase 1. During our meetings with air traffic controllers and supervisors, their biggest concern was that the free flight tools would require cultural changes in the way they carry out their responsibilities. By increasing their dependence on automation for their decisionmaking, these tools are expected to help increase controllers' productivity. Moreover, the tools will require changes in commonly recognized and accepted methods for managing traffic. Controllers and supervisors emphasized that URET will increase the responsibilities of the associate controllers in two important ways. First, their role would no longer be focused primarily on separating traffic by reading information on aircraft routes and altitudes from paper flight strips, calculating potential conflicts, and manually reconfiguring the strips in a tray to convey this information to a lead controller. With the URET software that automatically identifies potential conflicts up to 20 minutes in advance, associate controllers can be more productive because they will no longer have to perform these manual tasks. Second, they can assume a more strategic outlook by becoming more focused on improving the use of the airspace. URET enables them to be more responsive to a pilot's request to amend a flight plan (such as to take advantage of favorable winds) because automation enables them to more quickly check for potential conflicts before granting a request. Although the controllers said they look forward to assuming this greater role and believe that URET will improve the operational efficiency of our nation's airspace, they have some reservations.
Achieving this operational efficiency comes with its own set of cultural and operational challenges. Culturally, controllers will have to reduce their dependency on paper flight strips as URET presents data electronically on a computer screen. According to the controllers we interviewed, this change will be very challenging, especially at facilities that handle large volumes of traffic, such as Chicago, because the two facilities that have received URET have taken several years to become proficient with it even though they have less traffic. Operationally, controllers said that URET's design must include some backup capability because they foresee the tool becoming a critical component in future operations. Moreover, as controllers become increasingly experienced and reliant on URET, they will be reluctant to return to the former manual way because those skills will have become less current. As new controllers join the workforce, an automated backup capability will become increasingly essential because they will not be familiar with controlling traffic manually with paper flight strips. Currently, FAA is not committed to providing a backup to URET in either phase because the tool is only a support tool, not a mission-critical tool that requires backup. However, the agency is taking preliminary steps to provide some additional space for new equipment in the event it decides to provide this backup. Depending on how the agency plans to address this issue, the cost increase will vary. For TMA, controllers emphasized during our discussions that using time rather than distance to meter properly separated aircraft represents a major cultural shift. While controllers can visually measure distance, they cannot do the same with time. As one controller in a discussion group commented, TMA "is going to be a strain, ... and I hate to use the word sell, but it will be a sell for the workforce to get this on the floor and turn it on and use it." Currently, controllers at most en route facilities use distance to meter aircraft as they begin their descent into an airport's terminal airspace. This method, which relies on the controllers' judgment, results in the less efficient use of this airspace because controllers often add distance between planes to increase the margin of safety. With TMA, controllers will rely on the computer's software to assign a certain time for aircraft to arrive at a predetermined point. Through continuous automatic updating of its calculations, TMA helps balance the flow of arriving flights into the congested terminal airspace by rapidly responding to changing conditions. The controllers at the first three of the en route centers that have transitioned to TMA easily accepted it because they had been using time to meter air traffic for 20 years. However, as other en route centers transition to TMA, the controllers' receptivity will be difficult because they have traditionally used distance to meter air traffic. FAA management realizes that the controllers' transition to metering based on time versus distance will be challenging and has allowed at least 1 full year for them to become proficient in using the tool and begin to reap its full benefits. As a result, the Free Flight Program Office has established a 1-year period for controllers to become trained and comfortable with using this tool. FAA is relying heavily on national user teams to help develop training for TMA and URET. 
However, because of a lack of training development expertise and other factors, their efforts to provide adequate training for TMA have been hampered. Controllers said that, while they have knowledge of TMA, they are not specialists in developing training and therefore need more assistance from the program office. Also, because only a few key controllers have experience in using TMA, the teams have had to rely on them to develop a standardized training program while working with local facilities to tailor it to their needs. Moreover, these controllers are being asked to troubleshoot technical problems. Finally, controllers said the computer-based training they have received to date has not been effective because it does not realistically simulate operational conditions. FAA is currently revising its computer-based training to provide more realistic simulations. Because using the free flight tools will require controllers to undergo a complex and time-consuming cultural change, developing a comprehensive training program would greatly help FAA's efforts to implement the new free flight technologies. Communicating to users how the new tools will benefit the organization and them will greatly enhance the agency's training strategy. While FAA's training plans for URET are preliminary because it is undergoing testing and is not scheduled for deployment until the latter part of 2001, we believe that providing adequate training in advance is essential for controllers to become proficient in using this tool. Our discussions with controllers and FAA's TMA contractor indicated that in order to address local needs and to fix technical problems with TMA, FAA deferred several aspects of the tool that had been established for earlier deployment in phase 1. FAA officials maintain that these capabilities will be deployed before the end of phase 1. However, if these capabilities are not implemented in phase 1, pushing them into phase 2 will likely increase costs and defer benefits. For example, TMA's full capability to process data from adjacent en route centers has been changed because FAA determined that providing the full capability was not cost effective. While controllers said that even without this full capability, TMA has provided some benefits, they said that deferring some aspects of the tool's capabilities has made it less useful than they expected. Moreover, controllers maintain that FAA has not clearly communicated the changes with the tool's capabilities to them. Without knowing how the tool's capabilities are being changed and when the changes will be incorporated, it is difficult for users to know what to expect and when and for FAA to evaluate the tool's cost, schedule, and ability to provide expected benefits. FAA has begun to measure capacity and efficiency gains from using the free flight tools and its preliminary data show that the tools provide benefits. FAA expects additional sites to show similar or greater benefits, thus providing data to support a decision to move to phase 2 by March 2002. Because the future demand for air traffic services is expected to outpace the tools' capacity increases, the collective length of delays during peak periods will continue to increase but not to the extent that they would have without them. When FAA, in collaboration with the aviation industry, instituted the phased approach to implement its free flight program in 1998, the agency established a qualitative goal for increasing capacity and efficiency. 
In May 2001, FAA announced quantifiable goals for each of the three tools. For URET, FAA established an efficiency goal to increase direct routings by 15 percent within the first year of being fully implemented. Achieving this goal translates into reduced flight times and fuel costs for the airlines. The capacity goals for TMA and pFAST are dependent upon whether they are used together (colocated) and whether any constraints at an airport prevent them from being used to their full potential to expand capacity. If they are used together (such as at Minneapolis), FAA expects capacity to increase by 3 percent in the first year of operations and by 5 percent in the following year. However, at Atlanta, which is constrained by a lack of runways, the goal is 3 percent when these tools are used together. If only one of these tools is deployed (such as at Miami), FAA expects a 3-percent increase in capacity. While FAA has established quantifiable goals for these tools, the agency has only recently begun to develop information to determine whether attaining its goals will result in a positive return on the investment. Making this determination is important to help ensure that the capacity and efficiency gains provided by these tools are worth the investment. As previously shown in table 1, the actual systems that will be deployed for TMA and pFAST have only recently been installed at several locations or are scheduled to be installed this winter. To date, prototypes of these tools have been colocated at one location, and the actual equipment has been colocated at three locations. TMA is in a stand-alone mode at two locations. FAA reported that TMA achieved its first-year goal of a 3-percent increase in capacity at Minneapolis, and the agency is collecting data to determine whether the tool is meeting its goals at the other locations. Most of FAA's data regarding the benefits provided by these tools are based on operations of their prototypes at Dallas-Fort Worth. These data show that TMA and pFAST achieved the 5-percent colocation goal. However, the data might not be indicative of the performance of the actual tools that will be deployed to other locations because Dallas-Fort Worth does not face the constraints affecting many other airports (such as a lack of runways). Because FAA does not plan to begin deploying the actual model of URET until November 2001, the agency's data on its benefits have been based only on a prototype. At the two facilities--Indianapolis and Memphis--where the prototype has been deployed since 1997, FAA reported that URET has increased the number of direct routings by over 17 percent as of April 2001. According to FAA's data, all flights through these two facilities were shortened by an average of one-half mile, which collectively saved the airlines approximately $1.5 million per month in operating costs. However, the benefits that FAA has documented for using URET reflect savings for just a segment of a flight--when an airplane is cruising through high-altitude airspace--not the entire flight from departure to arrival. Maintaining URET's benefits for an entire flight is partly dependent on using it in conjunction with TMA and pFAST. Although a researcher at the Massachusetts Institute of Technology, who is reviewing aspects of FAA's free flight program, recognizes URET's potential benefits, the researcher expressed concerns that its benefits could be lessened in the airspace around airports whose capacity is already constrained.
Likewise, in a study on free flight supported by the National Academy of Sciences and the Department of Transportation, the authors found that the savings attributed to using direct routings might "be lost as a large stack of rapidly arriving aircraft must now wait" in the terminal airspace at constrained airports. Although URET can get an airplane closer to its final destination faster, airport congestion will delay its landing. While TMA and pFAST are designed to help an airport handle arrivals more efficiently and effectively, they cannot increase the capacity of an airport's terminal airspace beyond the physical limitations imposed by such constraining factors as insufficient runways or gates. In contrast, FAA's Free Flight Program Office believes that the savings observed with the prototype of URET will accrue when the actual tool is used in conjunction with TMA and pFAST. FAA plans to have procedures in place by the time these three tools are used together so that URET's benefits will not be reduced. However, the colocation of these three tools is not expected to occur until February 2002, which is only 1 month before the agency plans to make an investment decision for phase 2. Thus, we believe that FAA will not have enough time to know whether URET's benefits would be reduced. During peak periods, the demand for air traffic currently exceeds capacity at some airports, causing delays. FAA expects this demand to grow, meaning that more aircraft will be delayed for longer periods. Free flight tools have the potential to allow the air traffic system to handle more aircraft (increase capacity) but not to keep up with the projected growth in demand. Thus, they can only slow the growth of future delays. They cannot fully eliminate future delays or reduce current delays unless demand remains constant or declines. FAA's model of aircraft arrivals at a hypothetical congested airport, depicted in figure 2, illustrates the projected impact of the tools. According to the model, if demand increases and the tools are not deployed (capacity remains constant), the collective delays for all arriving flights (not each one) will increase by about an hour during peak periods. But if the tools are deployed, these delays will increase by only about half an hour, even though the projected growth in demand still exceeds the capacity gains from the tools. While recognizing that the free flight tools will provide other benefits, FAA has not quantified them. According to FAA, although TMA and pFAST are designed to maximize an airport's arrival rates, they also can increase departure rates because of their ability to optimize the use of the airspace and infrastructure around an airport. Regarding URET, FAA maintains that by automating some of the functions that controllers had performed manually, such as manipulating paper flight strips, the tool allows controllers to be more productive. If FAA's data continue to show positive benefits, the agency should be in a position by March 2002 to make a decision to deploy TMA to additional sites. However, FAA might not be in a position to make an informed decision on URET because the schedule might not allow time to collect sufficient data to fully analyze the expected benefits from this tool during phase 1. Currently, operational issues present the greatest challenge because using the free flight tools will entail a major cultural shift for controllers as their roles and responsibilities and methods for managing air traffic will change.
While FAA management has recognized the cultural changes involved, they have not taken a leadership role in responding to the magnitude of the changes. In particular, while involving controllers in developing and delivering training on these new tools, FAA has not provided support to ensure that the training can be effectively developed and presented at local sites. Because the agency has been changing the capabilities of TMA from what had been originally planned but not systematically documenting and communicating these changes, FAA and the users of this tool lack a common framework for understanding what is to be accomplished and whether the agency has met its goals. While the free flight tools have demonstrated their potential to increase capacity and save the airlines money, only recently has FAA established quantifiable goals for each tool and begun to determine whether its goals are reasonable--that they will result in a positive return on investment. Because several factors influence the benefits expected from the tools, it is important for FAA to clearly articulate the expectations for each tool by specific location. To make the most informed decision about moving to phase 2 of the free flight program, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following actions: Collect and analyze sufficient data in phase 1 to ensure that URET can effectively work with other air traffic control systems. Improve the development and the provision of local training to enable field personnel to become proficient with the free flight tools. Determine that the goals established in phase 1 result in a positive return on investment and collect data to verify that the goals are being met at each location. Establish a detailed set of capabilities for each tool at each location for phase 2 and establish a process to systematically document and communicate changes to them in terms of cost, schedule, and expected benefits. We provided a draft of this report to the Department of Transportation and the National Aeronautics and Space Administration for their review and comment. We met with officials from the Office of the Secretary and FAA, including the Director and Deputy Director Free Flight Program Office, to obtain their comments on the draft report. These officials generally concurred with the recommendations in the draft report. They stated that, to date, FAA has completed deployment of the Surface Movement Advisor and the Collaborative Decision Making tools on, or ahead of, schedule at all phase 1 locations and plans to complete the deployment of the remaining free flight tools on schedule. FAA officials also stated that the agency is confident that it will be in position to make an informed decision, as scheduled in March 2002, about moving to the program's next phase, which includes the geographic expansion of TMA and URET. Furthermore, FAA stated that the free flight tools have already demonstrated positive benefits in an operational environment and that it expects these benefits will continue to be consistent with the program's goals as the tools are installed at additional sites. In addition, FAA officials provided technical clarifications, which we have incorporated in this report, as appropriate. We acknowledge that FAA has deployed the Surface Movement Advisor and the Collaborative Decision Making tools on schedule at various locations. 
Furthermore, the report acknowledges that the free flight tools have demonstrated benefits and that the agency should have the data on TMA to make a decision about moving forward to phase 2 by March 2002. However, as we note in the report, FAA faces a significant technical challenge in ensuring that URET works with other air traffic control systems. Moreover, the data on URET's benefits reflect those of the prototype system. FAA is scheduled to deploy the first actual system in November 2001 and the last in February 2002--just 1 month before it plans to make an investment decision. With this schedule, the actual system might not be operational long enough to gather sufficient data to measure its benefits. Furthermore, FAA has yet to overcome the operational challenge that is posed when controllers use TMA and must shift from the traditional distance-based method of metering air traffic to one based on time. If FAA can not satisfactorily resolve these issues, the free flight program might not continue to show positive benefits and could experience cost overruns, delays, and performance shortfalls. The National Aeronautics and Space Administration expressed two major concerns. First, it felt that the benefits provided from the TMA tool justified its further deployment. Our initial conclusion in the draft report, that FAA lacked sufficient data to support deploying this tool to additional sites, was based on FAA's initial evaluation plan, which required at least 1 year of operational data after each tool had been deployed. FAA officials now believe that waiting for full results from the evaluation plan before making a decision to move forward is no longer necessary because TMA's performance results are occurring more rapidly than anticipated. This report now acknowledges that the agency should have the data it needs to make a decision to move forward with this tool. Second, NASA felt that the report was unclear regarding the nature of our concerns about the reliability of TMA's data. The discussion in the draft report indicated that FAA lacked sufficient data to show that it had addressed our concerns with TMA. FAA officials provided this support, and this report has been revised accordingly. In addition, National Aeronautics and Space Administration officials provided technical clarifications, which we have incorporated into this report, as appropriate. (See appendix II for the National Aeronautics and Space Administration's comments.) As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested Members of Congress; the Secretary of Transportation; the Administrator, Federal Aviation Administration; and the Administrator, National Aeronautics and Space Administration. We will also make copies available to others upon request. If you have questions about this report, please contact me at (202) 512- 3650. Key contributors are listed in appendix III. Because of the importance of the free flight program to the future operation of our nation's aviation system and the upcoming decision about whether to proceed to the next phase, the Chairmen of the Senate Committee on Commerce, Science, and Transportation and the Subcommittee on Aviation asked us to provide information to help them determine whether the Federal Aviation Administration (FAA) will be in a position to decide on moving to the next phase. 
This report discusses (1) the significant technical and operational issues that could impair the ability of the free flight tools to achieve their full potential and (2) the extent to which these tools will increase efficiency and capacity while helping to minimize delays in our nation's airspace system. Our review focused on three free flight phase 1 tools--the User Request Evaluation Tool, the Traffic Management Advisor, and the passive Final Approach Spacing Tool--because they account for approximately 80 percent of FAA's $630 million estimated investment for phase 1 and approximately 80 percent of FAA's $717 million estimated investment for phase 2. We did not review the Surface Movement Advisor or the Collaborative Decision Making tools because generally they had been implemented at all phase 1 locations when we started this review and FAA does not intend to deploy their identical functionality in phase 2. To obtain users' insights into the technical and operational issues and the expected benefits from these tools, we held four formal discussion group meetings with nationwide user teams made up of controllers, technicians, and supervisors from all the facilities currently using or scheduled to receive the Traffic Management Advisor during phase 1. We also visited and/or held conference calls with controllers, technicians, and supervisors who used one or more of these tools in Dallas, Texas; southern California; Minneapolis, Minnesota; Memphis, Tennessee; Indianapolis, Indiana; and Kansas City, Kansas. Based on criteria for system development and acquisition, we interviewed FAA officials in the Free Flight Program Office, the Office of Air Traffic Planning and Procedures, and the Office of Independent Operational Test and Evaluation. To review test reports and other documentation highlighting technical and operational issues confronting these tools, we visited FAA's William J. Hughes Technical Center in Atlantic City, New Jersey, and FAA's prime contractors that are developing the three free flight tools. We also visited the National Aeronautics and Space Administration's Ames Research Center at Moffett Field, California, to understand how its early efforts to develop free flight tools are influencing FAA's current enhancement efforts. To determine the extent to which the free flight tools will increase capacity and efficiency while helping to minimize delays, we analyzed the relevant legislative requirements and Office of Management and Budget requirements that recognize the need for agencies to develop performance goals for their major programs and activities. We also interviewed FAA officials in the Free Flight Program Office and the Office of System Architecture and Investment for information on the performance goals of the free flight tools during phase 1. In addition, we held discussions with officials from RTCA, which provides a forum for government and industry officials to develop consensus-based recommendations. We also reviewed documentation explaining how the tools are expected to and actually have helped increase system capacity and efficiency, thereby helping to minimize delays. We conducted our review from October 2000 through July 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Nabajyoti Barkakati, Jean Brady, William R. Chatlos, Peter G. Maristch, Luann M. Moy, John T. Noto, and Madhav S. Panwar made key contributions to this report.
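FAA's model of arrivals at a hypothetical congested airport, discussed earlier in this report, can be illustrated with a simple deterministic queueing calculation. The Python sketch below is not FAA's model; the arrival rates, capacities, and peak-period length are hypothetical, and it shows only why a modest capacity gain slows, but does not stop, the growth of collective delay when demand keeps rising.

# Minimal deterministic-queue illustration (not FAA's actual model).
# All numbers are hypothetical. During a peak period, aircraft arrive faster
# than the airport can land them; the backlog that accumulates is the
# collective delay experienced by arriving flights.

def cumulative_delay(demand_per_hr, capacity_per_hr, peak_hours=3):
    """Total aircraft-hours of delay accumulated over the peak period."""
    delay = 0.0
    backlog = 0.0
    for _ in range(peak_hours):
        backlog = max(0.0, backlog + demand_per_hr - capacity_per_hr)
        delay += backlog  # each backlogged aircraft waits roughly another hour
    return delay

base_capacity = 60   # arrivals per hour without the tools (hypothetical)
tool_capacity = 63   # about 5 percent more, in line with the colocation goal
future_demand = 66   # projected peak demand, exceeding both (hypothetical)

print(f"Without tools: {cumulative_delay(future_demand, base_capacity):.0f} aircraft-hours of delay")
print(f"With tools:    {cumulative_delay(future_demand, tool_capacity):.0f} aircraft-hours of delay")

In this toy example, the added capacity roughly halves the growth in collective delay, echoing the hour-versus-half-hour comparison in FAA's model, but delay still grows because demand still exceeds capacity.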
This report reviews the Federal Aviation Administration's (FAA) progress on implementing the Free Flight Program, which would provide more flexibility in air traffic operations. This program would increase collaboration between FAA and the aviation community. By using a set of new automated technologies (tools) and procedures, free flight is intended to increase the capacity and efficiency of the nation's airspace system while helping to minimize delays. GAO found that the scheduled March 2002 date will be too early for FAA to make an informed investment decision about moving to phase 2 of its Free Flight Program because of significant technical and operational issues. Furthermore, FAA's schedule for deploying these tools will not allow enough time to collect enough data to fully analyze their expected benefits. Currently, FAA lacks enough data to demonstrate that these tools can be relied upon to provide accurate data.
DIA was built to replace Stapleton International Airport (SIA), which in 1994 was the eighth busiest airport in the world. A great deal of controversy was generated by DIA's construction. Proponents pointed to various inadequacies related to SIA's facilities, limits on expansion, and noise pollution. Opponents raised objections related to DIA's construction and operating costs, levels of future passenger demand, and long-term financial viability. The airport, which opened for business on February 28, 1995, experienced numerous construction delays and cost overruns. Allegations of inadequate disclosures in bond offerings to the public have resulted in an SEC investigation and several lawsuits. About 65 percent of DIA's revenues are collected from the airlines for space rental and landing fees. The remaining 35 percent of revenues come from concessions, passenger facility charges (PFCs), interest income, and other sources. To help ensure that revenues will cover costs, DIA has a rate maintenance covenant with bondholders. This covenant requires DIA to set annual rates and fees to result in an amount that, when combined with funds held in reserve in the coverage account, is equal to (1) all costs of operating the airport plus (2) 125 percent of the debt service requirements on senior bonds for that year. Senior bonds comprise about $3.5 billion of DIA's total $3.8 billion bond debt. DIA's revenue bonds were issued under the 1984 General Bond Ordinance, which promises bondholders that the rate maintenance covenant will be honored in setting billing rates for airlines. Under the airlines' use and lease agreements, each airline is required to pay rates and charges sufficient to meet the rate maintenance covenant after taking into consideration all airport revenues. Because there are no limits on costs built into the rate maintenance cost recovery model, DIA has agreed to share 80 percent of net receipts with airlines for 5 years from February 28, 1995, and lower percentages thereafter. After sharing net receipts with the airlines, DIA estimates that it will retain an estimated $6.3 million to $7.6 million a year for fiscal years 1996 through 2000, which will be transferred into the capital fund. Many airports calculate the airlines' cost per enplaned passenger as a benchmark. This cost is based on the airlines' share of airport costs, divided by the actual number of enplaned passengers. DIA's lease contract with United Airlines includes a provision for nullifying the contract if the cost per enplaned passenger rises beyond a predetermined level. To identify risks that could affect DIA's financial performance, we read and evaluated risk disclosures in DIA's Official Statements; interviewed DIA, Colorado Springs Airport, and SEC officials; obtained financial information on United Airlines; interviewed airline industry experts, including airline executives, aviation forecasters, and airline financial consultants; and obtained data from American Express on ticket prices at DIA. 
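The rate maintenance covenant arithmetic described above can be made concrete with a short sketch. The dollar figures below are illustrative placeholders rather than DIA budget amounts, and the function name is ours; only the "operating costs plus 125 percent of senior-bond debt service, offset by the coverage account and other airport revenues" logic comes from the report.

```python
# Hedged sketch of DIA's rate maintenance covenant arithmetic.
# All dollar figures are illustrative placeholders, not DIA's actual budget numbers.

def required_airline_rates_and_fees(operating_costs, senior_debt_service,
                                    coverage_account, other_revenues):
    """Annual airline rents and landing fees needed to satisfy the covenant.

    Revenues plus the coverage account must equal operating costs plus
    125 percent of the debt service on senior bonds for the year; airline
    charges make up whatever other airport revenues do not cover.
    """
    required_total = operating_costs + 1.25 * senior_debt_service
    return required_total - coverage_account - other_revenues

# Illustrative figures, in millions of dollars:
needed = required_airline_rates_and_fees(
    operating_costs=160.0,       # operations and maintenance
    senior_debt_service=280.0,   # senior-bond debt service for the year
    coverage_account=58.0,       # funds held in reserve in the coverage account
    other_revenues=175.0,        # concessions, PFCs, interest income, and other sources
)
print(f"Airlines must be billed about ${needed:.0f} million")  # about $277 million
```

Because the formula places no cap on costs, higher costs flow directly into higher airline charges, which is the context for the 80-percent net-receipts sharing arrangement noted above.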
To review DIA's revenues, we (1) sampled DIA's daily revenue transactions for March through May 1995 and examined supporting documentation regarding collections of airline rents and landing fees, (2) tested supporting documentation for revenues from concessions such as parking, fees from rental car companies, food and beverage concessions, and retailers, (3) extracted data from reports on City of Denver investment income and journal vouchers on receipts of passenger facility charges from airlines, (4) compared actual revenues for March through May 1995 to the monthly estimates of cash flows DIA prepared for 1995, (5) analyzed and studied for consistency DIA's long-term estimates covering 1996 through 2000 for revenue and other financial information, and (6) reviewed the terms of lease agreements with airlines and cargo carriers, obtained and analyzed passenger data from airline landing reports for March through August 1995, and became familiar with the rates and charges methodology DIA used to set rental rates and landing fees for airlines. To review DIA's debt service requirements, we examined DIA's plan of finance, which summarized details on all outstanding revenue bonds at DIA and contained detailed amortization schedules for paying off revenue bonds. We compared selected payments on this schedule to bond documents. We also inspected documentation for actual transfers of operating funds to DIA's bond fund for March through May 1995. To review DIA's operating costs, we obtained DIA's weekly cash flow statements for March through May 1995 and operating expense data files for that period and traced samples from those files to supporting documentation. We also reviewed DIA's operations and maintenance cost budgets by studying supporting documentation, such as contracts and other DIA budgetary analysis, for all budgetary line items exceeding $1 million. We compared DIA's budgets to those of other operating airports. Finally, we interviewed DIA and City of Denver officials to gain an understanding of the accounting system for DIA expenses and to obtain further information about transactions tested. We used information from our tests of revenues, bond debt, and expenses to prepare a statement of actual cash flows for March through May 1995. We also analyzed the cash balances the City of Denver maintained in DIA operating and cash reserve accounts. To review DIA's actual cash reserves and cash flows, we obtained cash reserve balances from City of Denver accounting records and reviewed the audit work papers of DIA's auditors, identified restrictions on the use of reserve funds, interviewed bond analysts, and performed detailed analyses of DIA documentation supporting cash receipts and disbursements. We also interviewed DIA managers and airline officials and reviewed testimony before a congressional subcommittee by proponents and opponents of DIA. We performed our work between March 1995 and November 1995 in accordance with generally accepted government auditing standards for performance audits. This report is not intended to be a financial projection under the American Institute of Certified Public Accountants' standards for such reporting. There are certain risks inherent in any projection of financial data to future periods. Specifically, differences between expected and actual results of operations may arise because events and circumstances frequently do not occur as expected, and those differences may be material. 
In addition, DIA's future financial performance could be threatened by a number of factors specific to the airport's operations, most notably the overall volatility of the airline industry in general and any future deterioration in the financial health of its major tenant, United Airlines. Also, because DIA's revenues are primarily driven by passenger volume, increased ticket prices may be a concern if they result in significant passenger declines. Other risks include the possibility of (1) unknown construction defects resulting in major unexpected costs or (2) adverse actions arising from a current Securities and Exchange Commission investigation and/or lawsuits filed by bondholders against DIA. The potential severity of the effect on DIA's future financial condition varies with each of these risk elements. Financial results of the airline industry, a key risk factor, have been volatile since deregulation in 1978. Most airlines have reported substantial net losses since 1990, with total losses of about $13 billion from 1990 through 1994. For example, one of the airlines that used DIA, MarkAir, filed for bankruptcy in April 1995 and went out of business in October 1995. In addition to the condition of the airline industry in general, an important factor affecting DIA's financial viability is the financial health of its major tenant, United Airlines. United accounted for over 70 percent of passenger enplanements during the first 4 months of 1995, as discussed later. Also, DIA has projected that 43.1 percent of enplanements for 1995 will be passenger transfers as a result of United's hubbing operation. United Airlines reported annual losses in 1993, 1992, and 1991 of $50 million, $957 million, and $332 million, respectively. United reported profits in 1994 for the first time since 1990, with net earnings of $51 million shown on its audited financial statements for the calendar year 1994. United Airlines is thinly capitalized, with net equity of about $76 million and debt of about $12 billion, reported as of March 31, 1995. In late October 1995, United announced record profits of $243 million for the quarter ended September 30, 1995. In commenting on a draft of this report, DIA's Director of Aviation acknowledged that there are risks inherent in any business venture and related financial projections and that the volatility of the airline industry could impact the financial performance of DIA. The comments point out DIA's view that the risks associated with the financial health of United Airlines are offset by several factors, including DIA's strong market in both origination and destination travel as well as regional connecting traffic. Risk to passenger volume is another key consideration in DIA's future financial health. One factor influencing passenger volume, in turn, is ticket prices. Ticket prices at DIA increased 20 percent to 38 percent compared to those charged a year earlier at SIA. American Express recently reported that the average fare paid at DIA for March 1995 was 20 percent higher than fares at SIA in March 1994, with an average fare of $290 at DIA compared to $241 at SIA. American Express also reported that the average fare nationally, based on 215 domestic city pairs, showed no change during that period. In addition, the American Express review for the second quarter of 1995 reported that the average fare paid at DIA in June 1995 was 38 percent higher than the average fare at SIA in June 1994, while the average fare was up 7 percent nationally during that period.
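As a quick check on the fare comparison just cited, the percentage increase can be recomputed from the American Express averages reported above; the arithmetic is the only thing added here.

```python
# Fare figures are the American Express averages cited above.
dia_fare_march_1995 = 290   # average fare paid at DIA, March 1995, in dollars
sia_fare_march_1994 = 241   # average fare paid at SIA, March 1994, in dollars

increase = (dia_fare_march_1995 - sia_fare_march_1994) / sia_fare_march_1994
print(f"DIA fares were about {increase:.0%} higher than SIA fares a year earlier")  # ~20%
```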
We also reviewed the Department of Transportation's (DOT) airfare statistics, which are based on a broader 10 percent sample of all domestic airline travel. DOT's data showed that the average fare for Denver travel for the second quarter of 1995--DIA's first full quarter of operation--was 9 percent higher than the SIA fare for the same period in 1994. According to DOT's statistics, the average fare nationwide for the second quarter of 1995 was 2.4 percent higher compared to the average fare 1 year earlier. According to airline industry representatives we interviewed, airport charges to airlines for rental costs and landing fees represent a small fraction of airlines' total costs, which also include, for example, aircraft fuel and maintenance costs and personnel and benefits expenses. Thus, the industry officials indicated that the lack of competition for United Airlines in the Denver market, rather than DIA airline charges, is probably the most important factor affecting the price of tickets. United Airlines dominates the market at DIA, carrying about 70 percent of all passengers enplaned in Denver during the first 4 months of 1995. Historically, Continental Airlines was United's major competition in the Denver market; however, as discussed later, Continental has eliminated its hubbing operation from Denver. Airlines that have a reputation for low fares, such as Southwest, have stated in media reports that they have chosen not to use DIA because its rates are too high. The airport at Colorado Springs, which is located about 70 miles south of Denver, has attracted a low fare airline, Western Pacific Airlines, that is offering competition to DIA. Colorado Springs expects to enplane 1.4 million passengers in 1995 compared to 791,000 in 1994, a 72-percent growth rate. Colorado Springs Airport officials told us that some of the growth is fueled by Denver passengers, although they have not performed any studies to verify this. Future growth at Colorado Springs, however, will be limited by its size; it is currently operating at full capacity with only about 7 percent of DIA's passenger volume. Our analysis of landing reports generated by the airlines for the first 6 months of operations at DIA showed that DIA enplaned 100.3 percent of forecasted passengers for March, April, and May 1995. However, volumes declined through the summer of 1995 as compared to forecasts, with 94.5 percent in June, 90.6 percent in July, and 89.0 percent in August. DIA officials attributed the decline in passenger volume in the summer of 1995 primarily to higher ticket prices, as well as to the loss of Continental's hubbing operation. Passenger volume has improved in recent months, with 90.3 percent of forecasted passengers enplaned in September, 94.8 percent in October, and 99.1 percent in November. Another critical risk factor that we identified is the many allegations that have been made about improper construction practices at DIA, involving the main terminal, concourses, and runways. Although investigations to date have not disclosed major deficiencies that would result in significant repair costs, if undisclosed defects are present that eventually cause expensive repairs, DIA's cost structure could be materially affected. It should be noted, however, that the City of Denver's contracts with its DIA building contractors included a standard "Latent Defect Clause." 
This clause states that any hidden defects that develop as a result of materials and equipment incorporated into the project will be remedied by the contractor at no extra cost to the city. The City of Denver has advised us that the Securities and Exchange Commission (SEC) is conducting a formal investigation regarding the adequacy of the city's disclosure of information in bond offering documents with respect to the automated baggage system and related delays in opening the airport. Current estimates of whether the city will be able to repay investors would not appear to be within the scope of that investigation. Generally, when the SEC finds a violation of federal security law, it has the discretion to pursue a range of enforcement mechanisms and penalties. The SEC may, for example, require correction of public filings, direct future compliance, or, in some circumstances, ask a court to impose monetary penalties. The City of Denver provided us with a copy of a letter dated October 11, 1995, in which SEC regional staff advised the city that as a result of its investigation, the staff planned to recommend that the Commission institute an administrative action, the next step in the SEC's enforcement process. The city was given an opportunity to submit a written statement (known as a "Wells Submission") to the SEC to counter the staff's recommendation. The city advised us that it issued its Wells Submission on December 7, 1995, and denied violating federal securities laws in connection with the financing of DIA. Also, in February 1995 and March 1995, four class action lawsuits were filed in United States District Court for the Colorado District by DIA bondholders seeking damages from the City and County of Denver. The four lawsuits allege that the city misrepresented the design and construction status of the automated baggage system and the opening date of DIA. In addition, two of the lawsuits make allegations that the city and other defendants engaged in a conspiracy to conceal adverse facts from the investing public in order to artificially inflate the market price of the bonds. On May 1, 1995, a class action complaint was filed in Denver District Court by the four plaintiffs in the federal court cases, making substantially similar allegations. An SEC determination resulting from its investigation that disclosures were not fair or complete could aid litigants claiming losses from improper disclosures. In its Official Statement published in June 1995 to promote bond sales, DIA noted several investment risk factors that could potentially affect the security of DIA bonds, including the ongoing SEC investigation and bondholder litigation discussed above. In addition, we have summarized the following risk factors from that statement as items that must be noted as part of any analysis of DIA's long-term financial condition. DIA estimates operating revenues of about $500 million per year for the period 1995 to 2000, and anticipates receiving federal grants in amounts adequate to retire $118 million in subordinate bonds over the 5-year period. Grants require congressional action that cannot be assured. Many of the airlines operating at DIA, including United, Continental, Delta, Northwest, TWA, and others, have sent letters objecting to various aspects of the rates and charges for the airport. DIA officials stated that only TWA has filed a complaint with DOT, and DOT resolved TWA's complaint in favor of the City of Denver. 
Other factors that will affect aviation activity at DIA include (1) the growth of the economy in the Denver metropolitan area, (2) airline service and route networks, (3) national and international economic and political conditions, (4) the price of aviation fuel, (5) levels of airfares, and (6) the capacity of the national air traffic control system. Based on our review of DIA's long-term budgets and the data available on actual operations from its opening on February 28, 1995, through August 31, 1995, we found no significant issues that would lead us to believe that DIA will be unable to meet its financial obligations. However, the risks we identified in the previous section must be carefully considered by users of our report. Passenger enplanements are a key measure primarily because United Airlines, which accounts for over 70 percent of DIA passengers, has an agreement with DIA that it will honor its lease as long as costs per enplaned passenger do not exceed a specified level. DIA's leases also include a rate maintenance agreement that allows it to charge rates and fees sufficient to cover DIA's debt service and operating costs. Thus, the effectiveness of this agreement in supporting DIA's ability to meet its obligations is based upon maintaining the level of enplanements and costs per enplaned passenger within limits specified by the United lease agreement. During its initial 6 months of operations, DIA's volume of enplaned passengers averaged 95 percent of estimates. Both DIA and the Federal Aviation Administration (FAA) expect enplanement levels to increase over the next 5 years. Although leasing activity was below anticipated levels due to Continental Airlines' removal of its hub from Denver and MarkAir's bankruptcy, DIA estimates that it will have positive net revenues of $19.5 million for 1995. Debt service requirements have been spread relatively evenly over the next 30 years. DIA's current budgeted operating costs were based on contractual agreements and detailed budgets. DIA expects these operating expenses to increase with the levels of inflation over the next 30 years. DIA posted positive cash flows during the period under review and has adequate cash reserves to draw on in case of emergency in the immediate future. DIA's ability to generate sufficient revenues to cover its operating costs and debt service requirements ultimately depends upon the number of passengers that choose to use the airport. Passenger volume dictates airline demand for space at DIA and is directly linked to the financial success or failure of DIA concessions. We analyzed airline landing reports for the first 6 months of operations at DIA and found that its volume of enplaned passengers was about 95 percent of its estimates. DIA and FAA both expect enplanement levels to increase in future years. Provided DIA does not suffer a significant decline in passenger levels, a risk we previously discussed, or incur unanticipated costs, it should be able to keep its cost per enplaned passenger within the limits specified by its lease agreement with United Airlines. In October 1995, DIA estimated that passenger enplanements for 1995 would be 15.9 million, while FAA estimated that they would be 15.1 million. Both estimated that enplanements would rise from 1995 to 2000, reaching 18.2 million in 2000. DIA estimated an annual growth rate of about 2.6 percent in passenger volume from 1995 through 2000, while FAA estimated an annual growth rate of about 4 percent from 1995 through 2010. 
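A short compound-growth check, using only the base-year enplanement estimates and growth rates reported above, shows that the two forecasts are roughly consistent with the 18.2 million enplanements both parties project for 2000. The function name is ours; the figures are from the report.

```python
# Rough consistency check of the DIA and FAA enplanement forecasts cited above.

def project(enplanements_1995_millions, annual_growth_rate, years=5):
    """Project enplanements forward from 1995 using compound annual growth."""
    return enplanements_1995_millions * (1 + annual_growth_rate) ** years

dia_2000 = project(15.9, 0.026)   # DIA: 15.9 million enplanements, ~2.6 percent per year
faa_2000 = project(15.1, 0.04)    # FAA: 15.1 million enplanements, ~4 percent per year

print(f"DIA projection for 2000: {dia_2000:.1f} million")   # about 18.1 million
print(f"FAA projection for 2000: {faa_2000:.1f} million")   # about 18.4 million
# Both results are close to the 18.2 million enplanements cited for 2000.
```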
United Airlines has an agreement with DIA that it will honor its 30-year lease as long as costs per enplaned passenger do not exceed $20, measured in 1990 dollars. In June 1995, DIA estimated that United's cost per enplaned passenger in 1995 would be $16.31 in 1990 dollars and, if enplanement levels approximate estimates and unanticipated costs are not incurred, would drop to $13.22 by the year 2000. In our October 1994 report, we estimated that, with all other factors remaining constant, passenger traffic would have to drop to between 12 million and 12.5 million enplaned passengers in 1995 to drive costs above $20 per enplaned passenger. DIA has three concourses containing a total of 90 jet gates; however, as of September 1, 1995, only 76 of the gates were being used by airlines, with 69 of them covered by lease agreements. DIA is operating substantially below capacity due to Continental Airlines' decision to remove its hub from Denver and, to a lesser extent, MarkAir's bankruptcy and failure. Although this reduced the level of operations, DIA's reports show that it has covered its costs and achieved positive cash flows for its first 6 months. Following DIA's April 1995 agreement allowing Continental to reduce its lease commitment from 20 gates to 10, DIA raised its rental rates to airlines, effective May 1, 1995, by 6.8 percent. Other airlines, primarily United, have increased passenger volume due to Continental's pullout. In addition, reported operating costs have been below budget. All these factors have contributed to DIA's positive financial results to date. Furthermore, because DIA is operating below capacity, it is positioned to meet the expected increase in passenger volumes in future years without constructing new facilities. DIA's 14 idle gates were all on concourse A, which was planned to support Continental Airlines' hubbing operation. Continental entered into an agreement with DIA in August 1992 to lease 20 of the 26 gates on concourse A but had eliminated most of its Denver operations by the time DIA opened in 1995. In April 1995, Continental's lease commitment was reduced to 10 gates for 5 years. Further, Continental was allowed to sublease up to 7 of these gates. As of September 1, 1995, Frontier was subleasing 4 gates and America West was subleasing 1 gate from Continental. Two other gates on concourse A were used by Mexicana Airlines and Martinair Holland. All 44 gates on concourse B were leased by United Airlines for 30 years. The 20 gates on concourse C were used by various airlines, with 13 gates leased as of September 1, 1995, generally under 5-year leases. The remaining seven gates were used by non-signatory airlines. Airlines operating on a non-signatory basis pay 20 percent higher rates for space rent and landing fees and do not share in the year-end dividend based on 80 percent of DIA's net receipts. Five of those unleased gates on concourse C were used by MarkAir, which filed for bankruptcy in April 1995. In October 1995, MarkAir went out of business, owing DIA about $2.9 million. DIA also hosts a substantial air cargo operation. It has lease agreements with several major cargo carriers, including Federal Express, United Parcel Service, and Emery Worldwide. According to DIA's estimate, which we reviewed and found reasonable, this operation was to produce $3.3 million in space rent plus about $5 million in landing fees for fiscal year 1995. Debt service requirements and operations and maintenance are DIA's two major cost components. 
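Before turning to DIA's cost structure, a hedged sketch of the lease test described above may help. The fixed airline cost share below is a hypothetical figure chosen only so that the 1995 enplanement estimate reproduces DIA's $16.31 figure; GAO's October 1994 analysis considered more factors than this simple inverse scaling.

```python
# Hedged sketch of the cost-per-enplaned-passenger test in United's lease.
# The airline cost share is hypothetical, not a figure from DIA's agreements.

LEASE_CEILING_1990_DOLLARS = 20.00                        # ceiling in the United lease
airline_cost_share_1990_dollars = 16.31 * 15_900_000      # ~$259 million, hypothetical

def cost_per_enplaned_passenger(enplanements):
    """Airlines' share of airport costs divided by enplaned passengers."""
    return airline_cost_share_1990_dollars / enplanements

for enplanements in (15_900_000, 12_500_000, 12_000_000):
    cost = cost_per_enplaned_passenger(enplanements)
    status = "within" if cost <= LEASE_CEILING_1990_DOLLARS else "above"
    print(f"{enplanements/1e6:.1f} million enplanements -> ${cost:.2f} ({status} the $20 ceiling)")

# Under this simplified scaling the cost crosses $20 at roughly 13 million
# enplanements; GAO's October 1994 estimate, which held other factors constant
# but modeled costs in more detail, put the break-even between 12 million and
# 12.5 million enplaned passengers.
```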
Debt service costs are expected to remain relatively stable over the next 30 years. Operating costs are expected to rise with inflation over that time frame. Debt service payments constitute over 60 percent of DIA's estimated annual costs. DIA's bonds are scheduled to be paid off in relatively equal installments over the next 30 years. After a bond sale in June 1995, DIA had bonds payable of about $3.8 billion. DIA's June 22, 1995, estimates included two future bond sales to finance capital improvements. The first of these sales, held on November 15, 1995, after the end of our review, yielded $107,585,000 in bond principal. The second sale was scheduled for January 1, 1997, for $40,835,000 in bond principal. Based on its current contractual agreements with bondholders and estimated servicing requirements on the two additional bond sales, DIA's cash requirements for servicing the debt on its bonds will be spread relatively evenly over the next 30 years. Annual bond payments will rise from about $288 million in fiscal year 1996 to about $327 million in fiscal year 2005. From fiscal years 2006 through 2024, the payments are to range from $307 million to $329 million, with a final bond payment in fiscal year 2025 totaling $267 million. In addition to debt service payments, operations and maintenance and other expenses of the Denver Airport System (including upkeep of Stapleton International Airport) comprise DIA's other major cost element. DIA estimated that these costs would be about $159 million in fiscal year 1996 and would increase by about 3 percent a year as a result of inflation. Table 1 lists DIA's estimated operations and maintenance costs for fiscal year 1996 by cost category. We reviewed DIA's budgets for operations and maintenance costs by category and found the estimated amounts to be reasonable and supported by adequate documentation. Many cost categories were supported by contracts for services, including cleaning services, parking system management, and operation and maintenance of the underground train. Other categories were based on detailed, documented budgets that were developed using data such as number of employees, utility costs per square foot of building space, and other standard estimating methods. Estimates beyond the current year are based on 1996 estimates that were adjusted for a reasonable inflation factor. Estimates and analyses of short- and long-term cash flows are valuable financial management tools, especially when cash flows are volatile or uncertain--for example, when an operation is just getting underway or during periods when significant construction and capital improvement programs are being carried out. Used in conjunction with an entity's other important financial reports, cash flow estimates and statements provide useful analytical information. For example, comparing cash flows with accrual-based accounting information can yield valuable management information. In response to our request, DIA prepared estimates of cash flows for fiscal years 1996 through 2000. In April 1995, DIA officials also provided estimates of cash flows by month for 1995. We compiled DIA's actual cash flows for March through May 1995 and found that DIA produced a positive cash flow of $1.5 million in its first 3 months of operations. In September 1995, DIA's finance office provided us with cash flow statements it prepared for March through August 1995. 
The statements showed a positive cash flow of $1.8 million for March through May, which approximates the results of our analysis, and $12.1 million for June through August 1995. We confirmed that the statement's $49.9 million ending cash balance as of August 31, 1995, matched the balance on DIA's general ledger. At the time of our review, DIA officials said they were not required to prepare long-term cash flow estimates or statements. DIA's Finance Director told us that DIA did not use long-term cash flow estimates and analysis to assist in managing DIA operations. She stated that financial information available on the accrual basis of accounting was not materially different from information available on the cash basis and, in DIA's view, was sufficient for long-term planning. Finally, she stated that DIA's rate maintenance covenant ensures that DIA will generate adequate receipts to cover all disbursements. We surveyed seven airports about their use of cash flow estimates as a management tool. Two of the seven stated that they use cash flow estimates. For example, an Atlanta airport official stated that cash flow estimates were particularly valuable in the airport's new concourse construction program. The five airports that did not use cash flow analyses had stable operations that experienced minimal fluctuations from year to year in receipts and disbursements. In commenting on a draft of this report, DIA's Director of Aviation reiterated DIA's position that cash flow estimates beyond the current fiscal year are not useful for several reasons and that the airport's 5-year feasibility study is an adequate long-term planning tool. We believe, however, that cash flow estimates would have been a valuable management tool during the period of our review as DIA completed construction. Also, in conjunction with DIA's other financial data, such estimates could continue to provide useful analytical data as the airport's operations stabilize during its initial years of operations. DIA's comments also stated that weekly cash flow estimates had been prepared since January 1994 and that weekly estimates were rolled up into monthly and quarterly reports. During the course of our work, we made repeated requests for such estimates, including a written request on January 27, 1995. In a letter dated February 2, 1995, DIA's Assistant Director of Aviation for Finance advised us that the monthly cash flow estimates for 1995 had not been completed. As stated earlier in this section, we did not receive DIA's estimates of cash flows for fiscal year 1995 by month until April 1995. As of September 25, 1995, the date of DIA's latest available reserve fund statement, DIA had an operating cash balance of $57 million and held $420 million in reserve funds. In the event of a temporary financial crisis, about $260 million of these reserve funds could be used, subject to certain restrictions. Table 2 presents DIA's reported reserve fund balances as of September 25, 1995. The following restrictions apply to the use of the reserve funds: Bond Reserve Fund. Under terms of the bond ordinance, money can be withdrawn from this fund only to meet debt service requirements. Withdrawn funds must be paid back at the rate of 1/60th of the amount owed each month. Our analysis showed that about $200 million could be withdrawn from this fund before the payback requirements would exceed the remaining balance. 
However, according to bond analysts to whom we spoke, drawing on this fund could have a negative effect on DIA's bond ratings if DIA seeks future bond financing. As previously discussed, only one additional bond sale is currently being planned. Capital Fund. This fund can be used without restriction to pay for capital improvement costs, extraordinary costs, or debt service requirements. DIA anticipates that in the ordinary course of business, it will draw upon this fund for capital improvements. Coverage Fund. DIA's rate maintenance covenant requires that net revenues of the airport, combined with the coverage fund, equal no less than 125 percent of the debt service requirement on senior bonds for the upcoming year. The coverage fund amount is calculated at the end of each year and must be fully funded at that time. In June 1995, DIA reported that the December 31, 1996, coverage fund requirement will be $58.4 million. Any amounts withdrawn from the coverage fund must be replenished by December 31 of each year, which effectively limits the use of this fund in a financial crisis. Operations and Maintenance Reserve Fund. This fund must be fully funded by January 1, 1997. Full funding requires that 2 months of operations and maintenance expenses be on deposit in the fund, a requirement of about $27 million. This fund can be used to cover operations and maintenance expenses if net cash from operations is inadequate. We requested written comments on a draft of this report from the Secretary of Transportation and the Director of Aviation, DIA, of the City of Denver. A representative of the Secretary advised us that the Department of Transportation had no comments on the report. DIA's Director of Aviation provided us with written comments, which are incorporated in the report as appropriate and reprinted in appendix I. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Transportation; the Director, Office of Management and Budget; officials of the City of Denver; and interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-9542 if you or your staff have any questions. Major contributors to this report are listed in appendix II. The following are GAO's comments on the letter from Denver International Airport's Director of Aviation dated January 22, 1996. 1. See the "Health of the Airline Industry and United Airlines" section of the report. Also, we did not reprint the referenced article. 2. See the "DIA Cash Flows" section of the report. Thomas H. Armstrong, Assistant General Counsel
Pursuant to a congressional request, GAO reviewed the Denver International Airport's (DIA) financial condition, focusing on DIA: (1) cash reserves and estimated cash flows; and (2) ability to meet its financial obligations. GAO found that: (1) predicting the future financial performance of DIA is difficult, since it has been operating for less than 1 year; (2) the difficulties in projecting DIA financial performance relate to the volatility of the airline industry, unexpected construction delays and costs, and the city of Denver's ability to repay airport investors; (3) the Securities and Exchange Commission is formally investigating the adequacy of the city's disclosure of information in bond documents with respect to delays in opening the airport; (4) there is no evidence that DIA will be unable to meet its financial obligations, since DIA has generated positive cash flows in its first 6 months of operation despite operating at well below capacity; (5) DIA debt service costs are expected to remain stable over the next 30 years, while operating and maintenance costs are expected to rise with inflation; and (6) as of September 1995, DIA had an operating cash balance of $57 million and held $420 million in reserve funds, of which $260 million could be used in the event of a financial crisis.
Before originating a residential mortgage loan, a lender assesses its risk through the underwriting process, in which the lender generally examines the borrower's credit history and capacity to repay the mortgage and obtains a valuation of the property that will be the loan's collateral. Lenders need to know the property's market value, or the probable price that the property should bring in a competitive and open market, in order to provide information for assessing their potential loss exposure if the borrower defaults. Real estate can be valued using a number of methods, including appraisals, broker price opinions (BPO), and automated valuation models (AVM). Appraisals are opinions of value based on market research and analysis as of a specific date. Appraisals are performed by state-licensed or -certified appraisers who are required to follow the Uniform Standards of Professional Appraisal Practice (USPAP). A BPO is an estimate of the probable selling price of a particular property prepared by a real estate broker, agent, or salesperson rather than by an appraiser. An AVM is a computerized model that estimates property values using public record data, such as tax records and information kept by county recorders, multiple listing services, and other real estate records. In 1986, the House Committee on Government Operations issued a report concluding that problematic appraisals were an important contributor to the losses that the federal government suffered during the savings and loan crisis. The report stated that hundreds of savings and loans chartered or insured by the federal government were severely weakened or declared insolvent because faulty and fraudulent real estate appraisals provided documentation for loans larger than what the collateral's real value justified. In response, Congress incorporated provisions in Title XI of FIRREA that were intended to ensure that appraisals performed for federally related transactions were done (1) in writing, in accordance with uniform professional standards, and (2) by individuals whose competency had been demonstrated and whose professional conduct was subject to effective supervision. Various private, state, and federal entities have roles in the Title XI regulatory structure: The Appraisal Foundation. The Appraisal Foundation is a private not-for-profit corporation composed of groups from the real estate industry that works to foster professionalism in appraising. The foundation sponsors two independent boards with responsibilities under Title XI. The first of these, the Appraisal Standards Board, sets rules for developing an appraisal and reporting its results through USPAP. The second board, the Appraiser Qualifications Board, establishes the minimum qualification criteria for state certification and licensing of real property appraisers. The foundation is funded in part through sales of publications but also receives an annual grant from ASC. Evaluations are estimates of market value that do not have to be performed by a state-licensed or -certified appraiser. The federal banking regulators permit evaluations to be performed (consistent with safe and sound lending practices) in certain circumstances, such as mortgage transactions of $250,000 or less that are conducted by regulated institutions. The federal banking regulators are also responsible for assessing the completeness, adequacy, and appropriateness of these institutions' appraisal and evaluation policies and procedures. Appraisal Subcommittee. ASC has responsibility for monitoring the implementation of Title XI by the private, state, and federal entities noted previously. 
Among other things, ASC is responsible for (1) monitoring and reviewing the practices, procedures, activities, and organizational structure of the Appraisal Foundation--including making grants to the Foundation in amounts that it deems appropriate to help defray costs associated with its Title XI activities; (2) monitoring the requirements that states and their appraiser regulatory agencies establish for the certification and licensing of appraisers; (3) monitoring the requirements established by the federal banking regulators regarding appraisal standards for federally related transactions and determinations of which federally related transactions will require the services of state-licensed or -certified appraisers; and (4) maintaining a national registry of state-licensed and -certified appraisers who can perform appraisals for federally related transactions. Among other responsibilities and authorities, the Dodd-Frank Act requires ASC to implement a national appraisal complaint hotline and provides ASC with limited rulemaking authority. To carry out these tasks, ASC has 7 board member positions and 10 staff headed by an Executive Director hired by the board. Five of the board members are designated by the federal agencies that are part of FFIEC--the Bureau of Consumer Financial Protection (also known as the Consumer Financial Protection Bureau or CFPB), FDIC, the Federal Reserve, NCUA, and OCC. The other two board members are designated by the U.S. Department of Housing and Urban Development (HUD)--which includes the Federal Housing Administration (FHA)--and FHFA. ASC is funded by appraiser registration fees that totaled $2.6 million in fiscal year 2011. Available data and interviews with lenders and other mortgage industry participants indicate that appraisals are the most frequently used valuation method for home purchase and refinance mortgage originations. Appraisals provide an opinion of market value at a point in time and reflect prevailing economic and housing market conditions. Data provided to us by the five largest lenders (measured by dollar volume of mortgage originations in 2010) show that, for the first-lien residential mortgages for which data were available, these lenders obtained appraisals for about 90 percent of the mortgages they made in 2009 and 2010, including 98 percent of home purchase mortgages. The data we obtained from lenders included mortgages sold to the enterprises and mortgages insured by FHA, which together accounted for the bulk of the mortgages originated in 2009 and 2010. The enterprises and FHA require appraisals to be performed for a large majority of the mortgages they purchase or insure. For mortgages for which an appraisal was not done, the lenders we spoke with reported that they generally relied on validation of the sales price (or loan amounts in the case of refinances) against an AVM-generated value, in accordance with enterprise policies that permit this practice for some mortgages that have characteristics associated with a lower default risk. The enterprises, FHA, and lenders require and obtain appraisals for most mortgages because mortgage industry participants consider appraising to be the most credible and reliable valuation method, for a number of reasons. Most notably, appraisals and appraisers are subject to specific requirements and standards. In particular, USPAP outlines the steps appraisers must take in developing appraisals and the information appraisal reports must contain. 
It also requires that appraisers follow standards for ethical conduct and have the competence needed for a particular assignment. Furthermore, state licensing and certification requirements for appraisers include minimum education and experience criteria, and standardized report forms provide a way to report relevant appraisal information in a consistent format. In contrast, other valuation methods such as BPOs and AVMs are not permitted for most purchase and refinance mortgage originations. The enterprises do not permit lenders to use BPOs for mortgage originations and permit lenders to use AVMs for only a modest percentage of mortgages they purchase. Additionally, the federal banking regulators' guidelines state that BPOs and AVMs cannot be used as the primary basis for determining property values for mortgages originated by regulated institutions. However, the enterprises and lenders use BPOs and AVMs in a number of circumstances other than purchase and refinance mortgage originations because these methods can provide a quicker, less expensive means of valuing properties in active markets. When performing appraisals, appraisers can use one or more of three approaches to value--sales comparison, cost, and income. The sales comparison approach compares and contrasts the property under appraisal with recent offerings and sales of similar properties. The cost approach is based on an estimate of the value of the land plus what it would cost to replace or reproduce the improvements minus depreciation. The income approach is an estimate of what a prudent investor would pay based upon the net income the property produces. USPAP requires appraisers to consider which approaches to value are applicable and necessary to perform a credible appraisal and provide an opinion of the market value of a particular property. Appraisers must then reconcile the values produced by the different approaches they use to reach a value conclusion. The enterprises and FHA require that, at a minimum, appraisers use the sales comparison approach for all appraisals because it is considered the most applicable for estimating market value in typical mortgage transactions. Consistent with these policies, our review of valuation data from a mortgage technology company--representing about 20 percent of mortgage originations in 2010--indicated that appraisers used the sales comparison approach for nearly all (more than 99 percent) of the mortgages covered by these data. The cost approach, which was generally used in conjunction with the sales comparison approach, was used somewhat less often--in approximately two-thirds of the transactions in 2009 and 2010, according to these data. The income approach was rarely used. Some mortgage industry stakeholders have argued that wider use of the cost approach in particular could help mitigate what they viewed as a limitation of the sales comparison approach. They told us that relying solely on the sales comparison approach could lead to market values rising to unsustainable levels and that using the cost approach as a check on the sales comparison approach could help lenders and appraisers identify when this is happening. For example, they pointed to a growing gap between average market values and average replacement costs of properties as the housing bubble developed in the early to mid-2000s. 
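To make the three valuation approaches concrete, here is a hedged sketch with hypothetical figures for a single property; the function names and numbers are ours, and real appraisals involve market research, adjustment grids, and reconciliation that this toy example omits. The income calculation uses direct capitalization, one common way of applying that approach.

```python
# Hedged sketch of the sales comparison, cost, and income approaches to value.
# All figures below are hypothetical.

def sales_comparison_value(comparable_sale_prices, adjustments):
    """Average of comparable sales after adjusting each for differences
    (size, condition, location) between the comparable and the subject."""
    adjusted = [price + adj for price, adj in zip(comparable_sale_prices, adjustments)]
    return sum(adjusted) / len(adjusted)

def cost_approach_value(land_value, replacement_cost, depreciation):
    """Land value plus the cost to replace the improvements, less depreciation."""
    return land_value + replacement_cost - depreciation

def income_approach_value(annual_net_income, capitalization_rate):
    """Value implied by the net income the property produces (direct capitalization)."""
    return annual_net_income / capitalization_rate

sales_value = sales_comparison_value([255_000, 248_000, 262_000], [-5_000, 4_000, -8_000])
cost_value = cost_approach_value(land_value=60_000, replacement_cost=210_000, depreciation=25_000)
income_value = income_approach_value(annual_net_income=18_000, capitalization_rate=0.075)

print(f"Sales comparison: ${sales_value:,.0f}")   # $252,000
print(f"Cost approach:    ${cost_value:,.0f}")    # $245,000
print(f"Income approach:  ${income_value:,.0f}")  # $240,000
# A large, persistent gap between the sales comparison and cost approach results
# is the kind of signal stakeholders describe as a check on unsustainable prices.
```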
However, other mortgage industry participants noted that a rigorous application of the cost approach might not generate values much different from those generated using the sales comparison approach. They indicated, for example, that components of the cost approach--such as land value or profit margins of real estate developers--could grow rapidly in housing markets where sales prices are increasing. The data we obtained did not allow us to analyze the differences between the values appraisers generated using the different approaches. Recently issued policies reinforce long-standing requirements and guidance designed to address conflicts of interest that may arise when direct or indirect personal interests interfere with appraisers' exercise of their independent professional judgment. In order to prevent appraisers from being pressured, the federal banking regulators, the enterprises, FHA, and other agencies have regulations and policies governing the selection of, communications with, and coercion of appraisers. Examples of recently issued policies that address appraiser independence include the now-defunct Home Valuation Code of Conduct (HVCC), which took effect in May 2009; the enterprises' new appraiser independence requirements that replaced HVCC in October 2010; provisions in the Dodd-Frank Act; and revised Interagency Appraisal and Evaluation Guidelines from the federal banking regulators that were issued in December 2010. Provisions of these and other policies address (1) prohibitions against the involvement of loan production staff in appraiser selection and supervision; (2) prohibitions against third parties with an interest in the mortgage transaction, such as real estate agents or mortgage brokers, selecting appraisers; (3) limits on communications with appraisers; and (4) prohibitions against coercive behaviors. According to mortgage industry participants, HVCC and other factors have contributed to changes in appraiser selection processes--in particular, to lenders' more frequent use of appraisal management companies (AMC) to select appraisers. AMCs are third parties that, among other things, select appraisers for appraisal assignments on behalf of lenders. Some appraisal industry participants said that HVCC, which required additional layers of separation between loan production staff and appraisers for mortgages sold to the enterprises, led some lenders to outsource appraisal functions to AMCs because they thought using AMCs would allow them to easily demonstrate compliance with these requirements. In addition, lenders and other mortgage industry participants told us that market conditions, including an increase in the number of mortgages originated during the mid-2000s and lenders' geographic expansion over the years, put pressure on lenders' capacity to manage appraisers and led to their reliance on AMCs. Greater use of AMCs has raised questions about oversight of these firms and their impact on appraisal quality. Direct federal oversight of AMCs is limited. Federal banking regulators' guidelines for lenders' own appraisal functions list standards for appraiser selection, appraisal review, and reviewer qualifications. The guidelines also require lenders to establish processes to help ensure that these standards are met when lenders outsource appraisal functions to third parties, such as AMCs. Officials from the federal banking regulators told us that they reviewed lenders' policies and controls for overseeing AMCs, including the due diligence performed when selecting AMCs. 
However, they told us that they generally did not review an AMC's operations directly unless they had serious concerns about it that the lender was unable to address. In addition, a number of states began regulating AMCs in 2009, but the regulatory requirements vary and provide somewhat differing levels of oversight, according to officials from several state appraiser regulatory boards. Some appraiser groups and other appraisal industry participants have expressed concern that existing oversight may not provide adequate assurance that AMCs are complying with industry standards. These participants suggested that the practices of some AMCs for selecting appraisers, reviewing appraisal reports, and establishing qualifications for appraisal reviewers--key areas addressed in federal guidelines for lenders' appraisal functions--may have led to a decline in appraisal quality. For example, appraiser groups said that some AMCs selected appraisers based on who would accept the lowest fee and complete the appraisal report the fastest rather than on who was the most qualified, had the appropriate experience, and was familiar with the relevant neighborhood. AMC officials we spoke with said that they had processes that addressed these areas of concern--for example, using an automated system that identified the most qualified appraiser based on the requirements for the assignment, proximity to the subject property, and performance metrics such as timeliness and appraisal quality. While the impact of the increased use of AMCs on appraisal quality is unclear, Congress recognized the importance of additional AMC oversight in enacting the Dodd-Frank Act by requiring state appraiser regulatory boards to supervise AMCs. The Dodd-Frank Act requires the federal banking regulators, CFPB, and FHFA to establish minimum standards for states to apply in registering AMCs, including requirements that appraisals coordinated by an AMC comply with USPAP and be conducted independently and free from inappropriate influence and coercion. This rulemaking provides a potential avenue for reinforcing existing federal requirements for key functions that may impact appraisal quality, such as selecting appraisers, reviewing appraisals, and establishing qualifications for appraisal reviewers. Such reinforcement could help to provide greater assurance to lenders, the enterprises, and federal agencies of the quality of the appraisals provided by AMCs. To help ensure more consistent and effective oversight of the appraisal industry, we recommended in our July 2011 report that the heads of the federal banking regulators, CFPB, and FHFA--as part of their joint rulemaking required under the Dodd-Frank Act--consider including criteria for the selection of appraisers for appraisal orders, review of completed appraisals, and qualifications for appraisal reviewers when developing minimum standards for state registration of AMCs. The federal banking regulators and FHFA agreed with or indicated that they would consider our recommendation but as of June 2012 had not issued a rule setting minimum standards for state registration of AMCs. ASC has been performing its monitoring role under Title XI, but several weaknesses have potentially limited its effectiveness. In particular, ASC has not fully developed appropriate policies and procedures for monitoring state appraiser regulatory agencies, the federal banking regulators, and the Appraisal Foundation. In addition, ASC faces potential challenges in implementing some Dodd-Frank Act provisions. 
ASC has issued policy statements that set out its expectations for state appraiser regulatory programs in areas that include the national registry of appraisers, license reciprocity (which enables an appraiser certified or licensed in one state to perform appraisals in other states), and programs for enforcing appraiser qualifications and standards. ASC primarily uses on-site reviews conducted by ASC staff to monitor states' compliance with the policy statements. ASC's routine compliance reviews examine each state every 2 years or annually if ASC determines that a state needs closer monitoring. These reviews are designed to encourage adherence to Title XI requirements by identifying any instances of noncompliance or "areas of concern" and recommending corrective actions. ASC conveys its findings and recommendations to states through written reports. In 2010, ASC reported 34 findings of noncompliance, the majority of which concerned weaknesses in state enforcement efforts, such as a lack of timeliness in resolving complaints about appraiser misconduct or wrongdoing. At the completion of each review, ASC executive staff and board members deliberate on the findings and place the state into one of three broad compliance categories: "in substantial compliance," "not in substantial compliance," and "not in compliance." According to ASC, in substantial compliance applies when there are no issues of noncompliance or no violations of Title XI; not in substantial compliance applies when there are one or more issues of noncompliance or violations of Title XI that do not rise to the level of not in compliance; and not in compliance applies when "the number, seriousness, and/or repetitiveness of the Title XI violations warrant this finding." We found that ASC had been using the three compliance categories in its reports to states and annual reports to Congress (which provide aggregate statistics on the number of states in each category). However, it had not included the definitions of the categories in these reports or in its compliance review manual or policy and procedures manual, and its definition of "not in compliance" was not clear or specific. As previously noted, the definition states only that the category is to be used "when the number, seriousness, and/or repetitiveness of the violations warrant this finding" and does not elaborate on how these factors are weighed or provide examples of situations that would meet this definition. These shortcomings are inconsistent with our internal control standards, which state that federal agencies should have appropriate policies and procedures for each of their activities. Without clear, disclosed definitions, ASC limits the transparency of the state compliance review process and the usefulness of information Congress receives to assess states' implementation of Title XI. Further, by not incorporating the definitions into its compliance review and policy and procedures manuals, ASC increases the risk that board members and staff may not interpret and apply the compliance categories in a consistent manner. To address these shortcomings, we recommended in our January 2012 report that ASC clarify the definitions it uses to categorize states' overall compliance with Title XI and include these definitions in ASC's compliance review and policy and procedures manuals, compliance review reports to states, and annual reports to Congress. In June 2012, ASC officials told us that they had developed a revised system for rating states that included five compliance categories (ranging from excellent to poor), each with specific criteria. 
They said that they would soon be publishing the compliance categories in the Federal Register to obtain public comments and would include the final categories in appropriate manuals and reports. In addition to this procedural weakness, ASC has functioned without regulations and enforcement tools that could be useful in promoting state compliance with Title XI. Prior to the Dodd-Frank Act, Title XI did not give ASC rulemaking authority and provided it with only one enforcement option--"derecognition" of a state's appraiser regulatory program. This action would prohibit all licensed or certified appraisers from that state from performing appraisals in conjunction with federally related transactions. ASC has never derecognized a state, and ASC officials told us that using this sanction would have a devastating effect on the real estate markets and financial institutions within the state. The Dodd-Frank Act provides ASC with limited rulemaking authority and authorizes ASC to impose (unspecified) interim actions and suspensions against a state agency as an alternative to, or in advance of, the derecognition of the agency. As of June 2012, ASC had not implemented this new enforcement authority. ASC officials said that determining the interim actions and suspensions they would take against state agencies would be done through future rulemaking. Although Title XI charges ASC with monitoring the appraisal requirements of the federal banking regulators, ASC has not developed policies and procedures for carrying out this responsibility. While ASC's policy manual provides detailed guidance on monitoring state appraiser regulatory programs, it does not mention any activities associated with monitoring the appraisal requirements of the federal banking regulators. Further, ASC officials acknowledged the absence of a formal monitoring process. The absence of policies and procedures specifying monitoring tasks and responsibilities limits accountability for this function and is inconsistent with federal internal control standards designed to help ensure effectiveness and efficiency in agency operations. According to ASC officials, ASC performs this monitoring function through informal means, primarily through its board members who are employed by the federal banking regulators. However, minutes from ASC's monthly board meetings and ASC's annual reports to Congress indicate that the monitoring activities of ASC as a whole have been limited. For example, our review of board-meeting minutes from 2003 through 2010 found no instances of the board discussing the appraisal requirements of the federal financial regulators. Additionally, evidence of this monitoring function in ASC's annual reports is limited to a summary of any new appraisal requirements issued by the federal financial regulators and HUD during the preceding year. Stakeholder views differ as to how to interpret the Title XI requirement that ASC monitor the requirements established by the federal banking regulators with respect to appraisal standards. Specifically, some ASC board members told us that they understand their monitoring role as maintaining an awareness of the federal financial regulators' appraisal requirements. 
Further, one ASC board member told us that ASC's monitoring of the federal financial regulators was more limited than its monitoring of states because (1) board members from the federal financial regulatory agencies are knowledgeable of the appraisal requirements of their agencies, (2) the federal regulators' interagency process for developing appraisal guidelines (in place since 1994) has reduced the need for monitoring the consistency of guidelines across agencies, and (3) monitoring the states' appraiser requirements requires in-depth review of state processes for licensing, certification, and enforcement. (Separately, ASC adopted some recommendations from an earlier review, such as creating a Deputy Executive Director position and allowing states to respond to preliminary compliance review findings prior to the issuance of final reports.) In contrast, some appraisal industry stakeholders interpreted ASC's monitoring role more broadly and noted that ASC's annual reports did not provide substantive analysis or critique of federal appraisal requirements. However, appraisal industry stakeholders also noted that implementing a more expansive interpretation of ASC's monitoring role would pose challenges. For example, existing ASC staff may not have the capacity to take on additional monitoring responsibilities. Even if ASC staff were able to independently analyze the federal regulators' appraisal requirements, the analysis would be subject to review by the ASC board, which, because of its composition, is not independent from the agencies that ASC is charged with monitoring. To better define the scope of its monitoring role and improve the transparency of its activities, we recommended in our January 2012 report that ASC develop specific policies and procedures for monitoring the appraisal requirements of the federal banking regulators. In June 2012, ASC officials told us that they recognized the need for ASC to perform this monitoring function, were deliberating on ways to carry it out, and expected to have policies and procedures in place later in the year. As previously noted, the Appraisal Foundation is a private not-for-profit corporation that sponsors independent boards that set standards for appraisals and minimum qualification criteria for appraisers. ASC approves an annual grant proposal and provides monthly grant reimbursements to the Appraisal Foundation to support the Title XI-related activities of the foundation and its Appraisal Standards Board and Appraiser Qualifications Board. The reimbursements cover the foundation's incurred costs for activities under the grant. From fiscal years 2000 through 2010, ASC provided the foundation over $11 million in grant reimbursements, or about 40 percent of ASC's expenditures over that period. Although ASC monitors the foundation in several ways, ASC lacks specific policies and procedures for determining whether grant activities are related to Title XI. ASC's policies and procedures manual does not address how ASC monitors the Appraisal Foundation. Instead, ASC uses monitoring procedures contained in a memorandum prepared by a former Executive Director. The memorandum describes how the Executive Director reviewed the foundation's grant activities but does not provide criteria for deciding what is Title XI-related. When we asked current ASC officials for the criteria they used, they indicated only that ASC staff "review submissions from the Foundation and supporting cost spreadsheets to determine that activities proposed in the annual grant request or the monthly reimbursement processes meet the requirements of Title XI."
They said that once staff determine whether or not a submission falls within these parameters, they make a recommendation to the ASC board. However, determinations about what activities are Title XI-related are not always clear-cut. For example, in 2003, the Executive Director at the time recommended that the foundation be reimbursed for certain legal expenses in connection with a complaint filed with the foundation's ethics committee. However, the ASC board rejected the reimbursement request because the expenses "were not sufficiently Title XI-related." ASC's records do not indicate what criteria either the Executive Director or the ASC board used as a basis for their decisions or why they disagreed. Similarly, our review of ASC documents for more recent grants found no supporting explanations for decisions about whether grant activities were Title XI-related. One ASC board member said the board had a common understanding of what activities were eligible for grants but acknowledged that the basis for funding decisions could be better documented. As previously noted, our internal control standards state that federal agencies should have appropriate policies for each of their activities. Without policies that contain specific criteria, ASC increases the risk that its grant decisions will be inconsistent, limits the transparency of its decisions, and lacks assurance that it is complying with federal internal control standards. To address this limitation, we recommended that ASC develop specific criteria for assessing whether the grant activities of the Appraisal Foundation were related to Title XI and include these criteria in ASC's policy and procedures manual. In June 2012, ASC officials told us that they had been developing these criteria and planned to finalize them by August 2012. The Dodd-Frank Act contains 14 provisions that give ASC a number of new responsibilities and authorities. Some of the tasks associated with these provisions are complex and challenging, especially for a small agency with limited resources. One of the more complex tasks for ASC is to establish a national appraisal complaint hotline and refer hotline complaints to appropriate governmental bodies for further action. Appraisal industry stakeholders we spoke with noted that creating and maintaining a hotline could be costly because it will likely require investments in staff and information technology to fully ensure that calls are properly received, screened, tracked, and referred. Stakeholders indicated that screening calls would be a critical and challenging job because frivolous complaints could overwhelm the system and identifying valid complaints would require knowledge of the Uniform Standards of Professional Appraisal Practice (USPAP). Another complex task for ASC is providing grants to state appraiser regulatory agencies to support these agencies' compliance with Title XI. Appraisal industry stakeholders cited challenges that ASC could face in designing the grant program and the decisions it will need to make. Some noted the challenge of designing grant eligibility and award criteria that (1) do not reward states that have weak appraiser regulatory programs because they use appraisal-related fee revenues (from state appraiser licensing and examination fees, for example) for purposes other than appraiser oversight and (2) will not create incentives for states to use less of their own resources for regulation of appraisers.
In addition, ASC officials said they were unsure whether a January 2012 increase in the national registry fee--from $25 to $40 per appraiser credential--would be adequate to fund the grants and oversee them, especially in light of recent declines in the number of appraisers. As of June 2012, ASC had not implemented either the national hotline or the state grant program but had completed some initial steps. For example, ASC officials told us that they had developed initial protocols for handling hotline complaints and had begun work on a complaint form, website, and call center. In addition, ASC is in the process of hiring a grants manager. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, this concludes my prepared statement. I am happy to respond to any questions you may have at this time. For further information on this testimony, please contact me at (202) 512- 8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Steve Westley (Assistant Director), Don Brown, Marquita Campbell, Emily Chalmers, Anar Ladhani, Yola Lewis, Alexandra Martin-Arseneau, John McGrail, Erika Navarro, Carl Ramirez, Kelly Rubin, Jerry Sandau, Jennifer Schwartz, Andrew Stavisky, and Jocelyn Yin. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Real estate valuations, which encompass appraisals and other estimation methods, have come under increased scrutiny in the wake of the recent mortgage crisis. The Dodd-Frank Act codified several independence requirements for appraisers and requires federal regulators to set standards for registering AMCs. Additionally, the act expanded the role of ASC, which oversees the appraisal regulatory structure established by Title XI of FIRREA. The act also directed GAO to conduct two studies on real estate appraisals. This testimony discusses information from those studies, including (1) the use of different real estate valuation methods, (2) policies on appraiser conflict-of-interest and selection and views on their impact, and (3) ASC's performance of its Title XI functions. To address these objectives, GAO analyzed government and industry data; reviewed academic and industry literature; examined policies, regulations, and professional standards; and interviewed industry participants and stakeholders. Data GAO obtained from Fannie Mae and Freddie Mac (the enterprises) and five of the largest mortgage lenders indicate that appraisals--which provide an estimate of market value at a point in time--are the most commonly used valuation method for first-lien residential mortgage originations. Other methods, such as broker price opinions and automated valuation models, are quicker and less costly but are viewed as less reliable. As a result, they generally are not used for most purchase and refinance mortgage originations. Although the enterprises and lenders GAO spoke with did not capture data on the prevalence of approaches used to perform appraisals, the sales comparison approach--in which the value is based on recent sales of similar properties--is required by the enterprises and the Federal Housing Administration. This approach is reportedly used in nearly all appraisals. Conflict-of-interest policies have changed appraiser selection processes and the appraisal industry more broadly, raising concerns about the oversight of appraisal management companies (AMC), which often manage appraisals for lenders. Recent policies, including provisions in the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), reinforce prior requirements and guidance that restrict who can select appraisers and prohibit coercion. In response to market changes and these requirements, some lenders have turned to AMCs. Greater use of AMCs has raised questions about oversight of these firms and their impact on appraisal quality. Federal regulators and the enterprises said they hold lenders responsible for ensuring that AMCs' policies and practices meet their requirements but that they generally do not directly examine AMCs' operations. Some industry participants voiced concerns that some AMCs may prioritize low costs and speed over quality and competence. The Dodd-Frank Act requires state appraiser licensing boards to supervise AMCs and requires the federal banking regulators, the Federal Housing Finance Agency, and the Bureau of Consumer Financial Protection to establish minimum standards for states to apply in registering them. Setting minimum standards that address key functions AMCs perform on behalf of lenders could provide greater assurance of the quality of the appraisals that AMCs provide. As of June 2012, federal regulators had not completed rulemaking to set state standards. 
The Appraisal Subcommittee (ASC) has been performing its monitoring role under Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), but several weaknesses have potentially limited its effectiveness. For example, ASC has not clearly defined the criteria it uses to assess states' overall compliance with Title XI. In addition, Title XI charges ASC with monitoring the appraisal requirements of the federal banking regulators, but ASC has not defined the scope of this function--for example, by developing policies and procedures--and its monitoring activities have been limited. ASC also lacks specific policies for determining whether activities of the Appraisal Foundation (a private nonprofit organization that sets criteria for appraisals and appraisers) that are funded by ASC grants are Title XI-related. Not having appropriate policies and procedures is inconsistent with federal internal control standards that are designed to promote the effectiveness and efficiency of federal activities. GAO previously recommended that federal regulators consider key AMC functions in rulemaking to set minimum standards for registering these firms. The regulators agreed with or said they would consider this recommendation. GAO also recommended that ASC clarify the criteria it uses to assess states' compliance with Title XI and develop specific policies and procedures for monitoring the federal banking regulators and the Appraisal Foundation. ASC is taking steps to implement these recommendations. See GAO-11-653 and GAO-12-147.
It is time for a fundamental rethinking of DOE's missions. Created predominantly to deal with the energy crisis of the 1970s, DOE has changed its mission and budget priorities dramatically over time. By the early 1980s, its nuclear weapons production had grown substantially; and following revelations about environmental mismanagement in the mid- to late-1980s, DOE's cleanup budget began to expand, and now the task overshadows other activities. With the Cold War's end, DOE has new or expanded missions in industrial competitiveness; science education; environment, safety, and health; and nuclear arms control and verification. Responding to changing missions and priorities with organizational structures, processes, and practices that had been established largely to build nuclear weapons has been a daunting task for DOE. For example, DOE's approach to contract management, first created during the World War II Manhattan Project, allowed private contractors to manage and operate billion-dollar facilities with minimal direct federal oversight yet reimbursed them for all of their costs regardless of their actual achievements; only now is DOE attempting to impose modern standards for accountability and performance. Also, weak management and information systems for evaluating programs' performance have long hindered DOE from exercising effective oversight. In addition, DOE's elaborate and highly decentralized field structure has been slow to respond to changing conditions and priorities, is fraught with communication problems, and is poorly positioned to tackle difficult issues requiring a high degree of cross-cutting coordination. Experts we consulted in a 1994 survey support the view that, at a minimum, a serious reevaluation of DOE's basic missions is needed. We surveyed nearly 40 former DOE executives and experts on energy policy about how the Department's missions relate to current and future national priorities. Our respondents included a former President, four former Energy Secretaries, former Deputy and Assistant Secretaries, and individuals with distinguished involvement in issues of national energy policy. Overwhelmingly, our respondents emphasized that DOE should focus on core missions. Many believed that DOE must concentrate its attention more on energy-related missions such as energy policy, energy information, and energy supply research and development. A majority favored moving many of the remaining missions from DOE to other agencies or entities.
For example, many respondents suggested moving basic research to the National Science Foundation, the Commerce or Interior departments, other federal agencies, or a new public-private entity; some multiprogram national laboratories to other federal agencies (or sharing their missions with other agencies); the management and disposal of civilian nuclear waste to a new public-private organization, a new government agency, or the Environmental Protection Agency; nuclear weapons production and waste cleanup to the Department of Defense (DOD) or a new government agency and waste cleanup to the Environmental Protection Agency; environment, safety, and health activities to the Environmental Protection Agency or other federal entities; arms control and verification to DOD, the State Department, the Arms Control and Disarmament Agency, or a new government nuclear agency; activities furthering industrial competitiveness to the Commerce Department or a public-private organization; and science education to the National Science Foundation or another federal agency. Recognizing the need to change, DOE has several efforts under way to strengthen its capacity to manage. For example, DOE's reform of its contracting practices aims to make them more business-like and results-oriented; decision-making processes have been opened up to the public in an attempt to further break down DOE's long-standing culture of secrecy, which has historically shielded the Department from outside scrutiny; and high-level task forces convened by DOE have made recommendations on laboratory and research management and on the Department's missions. DOE is also developing a strategic plan aiming to arrange its existing missions into key "business lines." While we have yet to evaluate how well DOE is reorganizing along these business lines, we did recently complete a review of DOE's Strategic Alignment and Downsizing Initiative, which arose from the plan. We found that DOE's planned budget savings are on target and that the Department is depending on process improvements and reengineering efforts to enable it to fulfill its missions under the reduced budgets called for by the Initiative. However, the cost-savings potential of DOE's efforts is uncertain because most of them are just beginning and some are not scheduled to be completed for several years. For example, of DOE's 45 implementation plans, 22 plans have milestones that delineate actions to be met after May 1996 and 5 of those plans have milestones that will not occur until the year 2000. Because these actions are in their early stages, it is not yet clear if they will reduce costs to the extent DOE envisioned. Although DOE's reforms are important and much needed, they are based on the assumption that existing missions are still valid in their present forms and that DOE is the best place to manage them. Along with many of the experts we surveyed, we think a more fundamental rethinking of missions is in order. As we explained in an August 1995 report, two fundamental questions are a good starting point for developing a framework for evaluating the future of DOE and its missions: Which missions should be eliminated because they are no longer valid governmental functions? For those missions that are governmental, what is the best organizational placement of the responsibilities? Once agreement is reached on the appropriate governmental missions, a practical set of criteria could be used to evaluate the best organizational structure for each mission. 
These criteria--originally used by an advisory panel for evaluating alternative approaches to managing DOE's civilian nuclear waste program--allow for rating each alternative structure on the basis of its ability to promote cost-effective practices, attract talented technical specialists, be flexible to changing conditions, and be accountable to stakeholders. Using these criteria could help identify more effective ways to implement missions, particularly those that could be privatized or reconfigured under alternative governmental forms. Appendix II summarizes these criteria. Our work and that of others has revealed the complex balancing of considerations involved in reevaluating missions. In general, deciding the best place to manage a specific mission involves assessing the advantages and disadvantages of each alternative institution for its potential to achieve that mission, produce integrated policy decisions, and improve efficiency. Potential efficiency gains (or losses) that might result from moving parts of DOE to other agencies need to be balanced against the policy reasons that first led to placing that mission in the Department. For example, transferring the nuclear weapons complex to DOD, as is proposed by some, would require carefully considering many policy and management issues. Because of the declining strategic role of nuclear weapons, some experts argue that DOD might be better able to balance resource allocations among nuclear and other types of weapons if the weapons complex were completely under its control. Others argue, however, that the need to maintain civilian control over nuclear weapons outweighs any other advantages and that little gain in efficiency would be achieved by employing DOD rather than DOE supervisors. Some experts we consulted advocated creating a new federal agency for weapons production. Similarly, moving the responsibility for cleaning up DOE's defense facilities to another agency or to a new institution, as proposed by some, requires close scrutiny. For example, a new agency concentrating its focus on cleanup exclusively would not have to allocate its resources among competing programs and could maximize research and development investments by achieving economies of scale in applying cleanup technology more broadly. On the other hand, separating cleanup responsibility from the agency that created the waste may limit incentives to reduce waste and to promote other environmentally sensitive approaches. In addition, considerable startup time and costs would accompany a new agency, at a time when the Congress is interested in downsizing the federal government. DOE's task force on the future of the national laboratories (the Galvin Task Force) has suggested creating private or federal-private corporations to manage most or all of the laboratories. Under this arrangement, nonprofit corporations would operate the laboratories under the direction of a board of trustees that would channel funding to various labs to meet the needs of both government and nongovernment entities. DOE would be a customer, rather than the direct manager, of the labs.
The proposal raises important issues for the Congress to consider, such as how to (1) monitor and oversee the expenditure of public funds by privately managed and operated entities; (2) continue the laboratories' significant responsibilities for addressing environmental, safety, and health problems at their facilities, some of which are governed by legal agreements between DOE, EPA, and the states; and (3) safeguard federal access to facilities so that national priorities, including national security missions, are met. Other alternatives for managing the national labs exist: each has advantages and disadvantages, and each needs to be evaluated in light of the laboratories' capabilities for designing nuclear weapons and pursuing other missions of national and strategic importance. Furthermore, the government may still need facilities dedicated to national and defense missions, a possibility that would heavily influence any future organizational decisions. Finally, another set of criteria, developed by the National Academy of Public Administration (NAPA) in another context, could be useful for determining whether DOE should remain a cabinet-level department. These criteria, which are summarized in appendix III, pose such questions as the following: "Is there a sufficiently broad national purpose for the Department? Are cabinet-level planning, executive attention, and strategic focus necessary to achieve the Department's mission goals? Is cabinet-level status needed to address significant issues that otherwise would not be given proper attention?" Although DOE's strategic plan and Strategic Alignment and Downsizing Initiative address internal activities, they assume the validity of the existing missions and their placement in the Department. But DOE alone cannot make these determinations--they require a cooperative effort among all stakeholders, with the Congress and the administration responsible for deciding which missions are needed and how best to implement them. The requirements of the Government Performance and Results Act (GPRA) reinforce this concept by providing a legislative vehicle for the Congress and agencies to use to improve the way government works. The act requires, among other things, strategic plans based on consultation with the Congress and other stakeholders. These discussions are an important opportunity for the Congress and the executive branch to jointly reassess and clarify the agencies' missions and desired outcomes. Our work has shown that to be effective, decisions about the structure and functions of the federal government should be made in a thorough manner with careful attention to the effects of changes in one agency on the workings of other agencies. Specifically, reorganization demands a coordinated approach, within and across agency lines, supported by a solid consensus for change; it should seek to achieve specific, identifiable goals; attention must be paid to how the federal government exercises its role; and sustained oversight by the Congress is needed to ensure effective implementation. Given both the current budgetary environment and other proposals to more extensively reorganize the executive branch, the Congress could judge the feasibility and desirability of assigning to some entity the responsibility of guiding reorganizations and downsizing.
Even though there has been little experience abolishing federal agencies, officials with the Office of Personnel Management (OPM) articulated to us some lessons learned from their experiences: Agencies are usually willing to accept functions, but they are not necessarily willing to accept the employees who performed those functions in the abolished agency--doing so may put the receiving agency's existing staff at increased risk of a reduction-in-force. Transferring functions that have an elaborate field structure can be very expensive. Transferred functions and staff may duplicate existing functions in the new agency, so staff may feel threatened, resulting in friction. Employees performing a function in the abolished agency may be at higher or lower grades than those performing the same function in the receiving agency. Terminating an agency places an enormous burden on that agency's personnel office--it will need outside help to handle the drastic increase in paperwork due to terminations, grievances, and appeals. Regardless of what the Congress decides on the future of the DOE, a number of critical policy and management issues will require close attention regardless of their placement in the federal government or outside it. These issues include contract reform, major systems acquisitions, and environmental cleanup and waste management. DOE has a long history of management problems. At the core of many of these problems is its weak oversight of more than 110,000 contractor employees, who perform nearly all of the Department's work. Historically, these contractors worked largely without any financial risk, they got paid even if they performed poorly, and DOE oversaw them under a policy of "least interference." DOE is now reforming its contracting practices to make them more business-like and results-oriented. While we believe that these reforms, which we are currently evaluating, are generally a step in the right direction, at this time we are unsure whether the Department is truly committed to fully implementing some of its own recommendations. For example, in May 1996, the Secretary announced the extension of the University of California's three laboratory contracts (currently valued at about $3 billion). DOE's decision to extend, rather than "compete" these enormous contracts--held by the University continuously for 50 years--violates two basic tenets of the Department's philosophy of contract reform. First, contracts will be competed except in unusual circumstances. Second, if current contracts are to be extended, the terms of the extended contracts will be negotiated before DOE makes its decision to extend them. DOE justified its decision on the basis of its long-term relationship with the University. However, the Secretary's Contract Reform team concluded that DOE's contracting suffered from a lack of competition, which was caused, in part, by several long-term relationships with particular contractors. DOE has historically been unsuccessful in managing its many large projects--those that cost $100 million or more and that are important to the success of its missions. Called "major acquisitions," these projects include accelerators for high-energy and nuclear physics, nuclear reactors, and technologies to process nuclear waste. Since 1980, DOE has been involved with more than 80 major acquisitions. We currently have work underway for the Senate Governmental Affairs Committee examining DOE's success with these acquisitions. 
Our work indicates that many more projects are terminated prior to completion than are actually completed. Many of these projects had large cost overruns and delays. This work will also address efforts to improve the acquisition process and contributing causes of these problems. The causes appear to include constantly changing missions, which makes maintaining support over the long term difficult; annual, incremental funding of projects that does not ensure that funds are available when needed to keep the projects on schedule; the flawed system of incentives that has sometimes rewarded contractors despite poor performance; and an inability to hire, train, and retain enough people with the proper skills. Another issue needing long-term attention is cleaning up the legacy of the nuclear age. This monumental task currently assigned to DOE includes both the environmental problems created by decades of nuclear weapons production and the management and disposition of highly radioactive waste generated by over 100 commercial nuclear power plants. Although the Department has made some progress on both fronts, major obstacles remain. One obstacle common to both efforts is the estimated total cost over the next half century. According to DOE, cleaning up its complex of nuclear weapons facilities could cost as much as $265 billion (in 1996 dollars) and disposing of highly radioactive waste from commercial nuclear power plants could cost another $30 billion (in 1994 dollars). Even though DOE received over $34 billion between 1990 and 1996 for environmental activities, it has made limited progress in addressing the wide range of environmental problems at its sites. In managing its wastes, DOE has encountered major delays in its high-level waste programs and has yet to develop adequate capacity for treating mixed waste (which includes both radioactive and hazardous components) at its major sites. Finally, DOE has begun deactivating only a handful of its thousands of inactive facilities. On the basis of our reviews over the last several years of DOE's efforts to clean up its nuclear weapons complex, we have identified many ways to potentially reduce the cost. These methods can be applied regardless of who has the responsibility for the cleanup. For example, DOE has usually assumed that all of its facilities will be cleaned up for subsequent unrestricted use; however, because many of these facilities are so contaminated, unrestricted use of them is unlikely, even after cleanup. By incorporating more realistic land-use assumptions into its decision-making, DOE could, by its own estimates, save from $200 million to $600 million annually. Also, to reduce costs, DOE is now preparing to privatize portions of the cleanup, most notably the vitrification of high-level waste in the tanks at its Hanford facility. But key issues need to be considered, including whether DOE has adequately demonstrated that privatization will reduce the total cost and whether DOE is adequately prepared to assume management and safety oversight responsibilities over the private firms. Moreover, DOE cannot permanently dispose of its inventory of highly radioactive waste from the Hanford tank farms and other facilities until it has developed a geologic repository for this waste generated by the commercial nuclear power industry and DOE. 
Utilities operating more than 100 nuclear power plants at about 70 locations have generated about 32,000 metric tons of highly radioactive waste in the form of spent (used) fuel and are expected to have produced about 85,000 metric tons of spent fuel by the time the last of these plants has been retired in around 30 years. Although an operational repository was originally anticipated as early as 1998, DOE now does not expect to determine whether the site at Yucca Mountain, Nevada, is suitable until 2001 and, if the site is suitable, does not expect to begin operating a repository there until at least 2010. Following a call from 39 Members of Congress for a presidential commission to review the nuclear waste program, legislation that includes reforms is pending this year in both the House and the Senate, and some experts, including DOE's own internal advisory panel, have called for moving the entire program to the private sector. Mr. Chairman, this concludes our prepared statement. We would be pleased to respond to any questions that you or other Members of the Committee may have. The following criteria, adapted from a former DOE advisory panel that examined the Department's civilian nuclear waste program, offer a useful framework for evaluating alternative ways to manage missions. These criteria were created to judge the potential value of several different organizational arrangements, which included an independent federal commission, a mixed government-private corporation, and a private corporation. Mission orientation and focus: Will the institution be able to focus on its mission(s), or will it be encumbered by other priorities? Which organizational structure will provide the greatest focus on its mission(s)? Credibility: Will the organizational structure be credible, thus gaining public support for its actions? Stability and continuity: Will the institution be able to plan for its own future without undue concern for its survival? Programmatic authority: Will the institution be free to exercise needed authority to accomplish its mission(s) without excessive oversight and control from external sources? Accessibility: Will stakeholders (both federal and state overseers as well as the public) have easy access to senior management? Responsiveness: Will the institution be structured to be responsive to all its stakeholders? Internal flexibility: Will the institution be able to change its internal systems, organization, and style to adapt to changing conditions? Political accountability: How accountable will the institution be to political sources, principally the Congress and the President? Immunity from political interference: Will the institution be sufficiently free from excessive and destructive political forces? Ability to stimulate cost-effectiveness: How well will the institution be able to encourage cost-effective solutions? Technical excellence: Will the institution attract highly competent people? Ease of transition: What will be the costs (both financial and psychological) of changing to a different institution? The following criteria were developed by the National Academy of Public Administration as an aid to deciding whether a government organization should be elevated to be a cabinet department. However, they raise issues that are relevant in judging cabinet-level status in general. 1. Does the agency or set of programs serve a broad national goal or purpose not exclusively identified with a single class, occupation, discipline, region, or sector of society? 2.
Are there significant issues in the subject area that (1) would be better assessed or met by elevating the agency to a department and (2) are not now adequately recognized or addressed by the existing organization, the President, or the Congress? 3. Is there evidence of impending changes in the type and number of pressures on the institution that would be better addressed if it were made a department? Are such changes expected to continue into the future? 4. Would a department increase the visibility and thereby substantially strengthen the active political and public support for actions and programs to enhance the existing agency's goals? 5. Is there evidence that becoming a department would provide better analysis, expression, and advocacy of the needs and programs that constitute the agency's responsibilities? 6. Is there evidence that elevation to a cabinet department would improve the accomplishment of the existing agency's goals? 7. Is a department required to better coordinate or consolidate programs and functions that are now scattered throughout other agencies in the executive branch of government? 8. Is there evidence that a department--with increased centralized political authority--would result in a more effective balance within the agency, between integrated central strategic planning and resource allocation and the direct participation in management decisions by the line officers who are responsible for directing and managing the agency's programs? 9. Is there evidence of significant structural, management, or operational weaknesses in the existing organization that could be better corrected by elevation to a department? 10. Is there evidence that there are external barriers and impediments to timely decision-making and executive action that could be detrimental to improving the efficiency of the existing agency's programs? Would elevation to a department remove or mitigate these impediments? 11. Would elevation to a department help recruit and retain better qualified leadership within the existing agency? 12. Would elevation to a department promote more uniform achievement of broad, cross-cutting national policy goals? 13. Would elevation to a department strengthen the Cabinet and the Executive Office of the President as policy and management aids for the President? 14. Would elevation to a department have a beneficial or detrimental effect upon the oversight and accountability of the agency to the President and the Congress? Department of Energy: A Framework For Restructuring DOE and Its Missions (GAO/RCED-95-197, Aug. 21, 1995). Department of Energy: Framework Is Needed to Reevaluate Its Role and Missions (GAO/T-RCED-95-232, June 21, 1995). Department of Energy: Alternatives for Clearer Missions and Better Management at the National Laboratories (GAO/T-RCED-95-128, Mar. 9, 1995). Nuclear Weapons Complex: Establishing a National Risk-Based Strategy for Cleanup (GAO/T-RCED-95-120, Mar. 6, 1995). Department of Energy: National Priorities Needed for Meeting Environmental Agreements (GAO/RCED-95-1, Mar. 3, 1995). Department of Energy: Research and Agency Missions Need Reevaluation (GAO/T-RCED-95-105, Feb. 13, 1995). Department of Energy: National Laboratories Need Clearer Missions and Better Management (GAO/RCED-95-10, Jan. 27, 1995). Department of Energy: Need to Reevaluate Its Role and Missions (GAO/T-RCED-95-85, Jan. 18, 1995). Nuclear Waste: Comprehensive Review of the Disposal Program Is Needed (GAO/RCED-94-299, Sept. 27, 1994). 
Energy Policy: Ranking Options to Improve the Readiness of and Expand the Strategic Petroleum Reserve (GAO/RCED-94-259, Aug. 18, 1994). Department of Energy: Management Changes Needed to Expand Use of Innovative Cleanup Technologies (GAO/RCED-94-205, Aug. 10, 1994). Department of Energy: Challenges to Implementing Contract Reform (GAO/RCED-94-150, Mar. 24, 1994). DOE's National Laboratories: Adopting New Missions and Managing Effectively Pose Significant Challenges (GAO/T-RCED-94-113, Feb. 3, 1994). Financial Management: Energy's Material Financial Management Weaknesses Require Corrective Action (GAO/AIMD-93-29, Sept. 30, 1993). Department of Energy: Management Problems Require a Long-Term Commitment to Change (GAO/RCED-93-72, Aug. 31, 1993). Energy Policy: Changes Needed to Make National Energy Planning More Useful (GAO/RCED-93-29, Apr. 27, 1993). Energy Management: High-Risk Area Requires Fundamental Change (GAO/T-RCED-93-7, Feb. 17, 1993). Nuclear Weapons Complex: Issues Surrounding Consolidating Los Alamos and Livermore National Laboratories (GAO/T-RCED-92-98, Sept. 24, 1992). Department of Energy: Better Information Resources Management Needed to Accomplish Missions (GAO/IMTEC-92-53, Sept. 29, 1992). Naval Petroleum Reserve: Limited Opportunities Exist to Increase Revenues From Oil Sales in California (GAO/RCED-94-126, May 5, 1994). High-Risk Series: Department of Energy Contract Management (GAO/HR-93-9, Dec. 1992). Comments on Proposed Legislation to Restructure DOE's Uranium Enrichment Program (GAO/T-RCED-92-14, Oct. 29, 1991). Nuclear Waste: Operation of Monitored Retrievable Storage Facility Is Unlikely by 1998 (GAO/RCED-91-194, Sept. 24, 1991).
GAO discussed the Department of Energy's (DOE) future, focusing on DOE efforts to restructure its missions and address policy and management issues. GAO noted that: (1) DOE is having a difficult time responding to its changing mission and organizational structure; (2) DOE is unable to evaluate its activities due to weak management and information systems; (3) DOE has a highly decentralized field structure that is unable to respond to changing conditions and priorities, fraught with communication problems, and ill-equipped to handle cross-cutting issues; (4) many former DOE officials and other experts believe that DOE should concentrate on several key issues such as energy policy, energy information, and energy supply research and development; (5) DOE is reforming its contracting practices to make them more business-like and results-oriented, opening up its decisionmaking processes to the public, and organizing high-level task forces on laboratory and research management; (6) DOE is on target with its planned budget savings under the Strategic Alignment and Downsizing Initiative and is depending on its process improvements and reengineering efforts to fulfill its mission under reduced budgets; (7) a governmentwide approach to restructuring DOE is desirable, since transferring any DOE mission will have a broad impact on other federal agencies; and (8) DOE will have to address contract reform, acquisitions, and environmental cleanup and waste management issues to effectively restructure its organization.
Passenger and freight rail services help move people and goods through the transportation system, contributing to the economic well-being of the United States. Passenger rail services can take many forms. Some mass transit agencies, which can be public or private entities, provide rail services, such as commuter rail and heavy rail (e.g., subway) in cities across the United States. Through these rail services, mass transit agencies serve a large part of the commuting population. For example, in the third quarter of 2003, commuter rail systems provided an average of 1.2 million passenger trips each weekday. The National Railroad Passenger Corporation (Amtrak) provides intercity passenger rail services in the United States. Amtrak operates a 22,000-mile network, primarily over freight railroad tracks, providing service to 46 states and the District of Columbia. In fiscal year 2002, Amtrak served 23.4 million passengers, or about 64,000 passengers per day. In 2001, the nation's freight rail network carried 42 percent of domestic intercity freight (measured by ton miles)--everything from lumber to vegetables, coal to orange juice, grain to automobiles, and chemicals to scrap iron. Prior to September 11, 2001, DOT--namely, the Federal Railroad Administration (FRA), Federal Transit Administration (FTA), and Research and Special Programs Administration (RSPA)--was the primary federal entity involved in passenger and freight rail security matters. However, in response to the attacks on September 11, Congress passed the Aviation and Transportation Security Act (ATSA), which created the Transportation Security Administration (TSA) within DOT and defined its primary responsibility as ensuring security in all modes of transportation. The act also gives TSA regulatory authority over all transportation modes. With the passage of the Homeland Security Act, TSA, along with over 20 other agencies, was transferred to the new Department of Homeland Security (DHS). Throughout the world, rail systems have been the target of terrorist attacks. For example, the first large-scale terrorist use of a chemical weapon occurred in 1995 on the Tokyo subway system. In this attack, a terrorist group released sarin gas on a subway train, killing 11 people and injuring about 5,500. In addition, according to the Mineta Institute, surface transportation systems were the target of more than 195 terrorist attacks from 1997 through 2000. (See fig. 1.) Passenger and freight rail providers face significant challenges in improving security. Some security challenges are common to passenger and freight rail systems; others are unique to the type of rail system. Common challenges include the funding of security improvements, the interconnectivity of the rail system, and the number of stakeholders involved in rail security. The unique challenges include the openness of mass transit systems and the transport of hazardous materials by freight railroads. A challenge that is common to both passenger and freight rail systems is the funding of security enhancements. Although some security improvements are inexpensive, such as removing trash cans from subway platforms, most require substantial funding. For example, as we reported in December 2002, one transit agency estimated that an intrusion alarm and closed circuit television system for only one of its portals would cost approximately $250,000--an amount equal to at least a quarter of the capital budgets of a majority of the transit agencies we surveyed.
The current economic environment makes this a difficult time for private industry or state and local governments to make additional security investments. As we noted in June 2003, the sluggish economy has further weakened the transportation industry's financial condition by decreasing ridership and revenues. Given the tight budget environment, state and local governments and transportation operators, such as transit agencies, must make difficult trade-offs between security investments and other needs, such as service expansion and equipment upgrades. Further exacerbating the problem of funding security improvements are the additional costs the passenger and freight rail providers incur when the federal government elevates the national threat condition. For example, Amtrak estimates that it spends an additional $500,000 per month for police overtime when the national threat condition is increased. Another common challenge for both passenger and freight rail systems is the interconnectivity within the rail system and between the transportation sector and nearly every other sector of the economy. The passenger and freight rail systems are part of an intermodal transportation system--that is, passengers and freight can use multiple modes of transportation to reach a destination. For example, from its point of origin to its destination, a piece of freight, such as a shipping container, can move from ship to train to truck. The interconnected nature of the transportation system creates several security challenges. First, the effects of events directed at one mode of transportation can ripple throughout the entire system. For example, when the port workers in California, Oregon, and Washington went on strike in 2002, the railroads saw their intermodal traffic decline by almost 30 percent during the first week of the strike, compared with the year before. Second, the interconnecting modes can contaminate each other--that is, if a particular mode experiences a security breach, the breach could affect other modes. An example of this would be if a shipping container that held a weapon of mass destruction arrived at a U.S. port where it was placed on a train. In this case, although the original security breach occurred in the port, the rail or trucking industry would be affected as well. Thus, even if operators within one mode established high levels of security, they could be affected by the security efforts, or lack thereof, in the other modes. Third, intermodal facilities where passenger and freight rail systems connect and interact with other transportation modes--such as ports--are potential targets for attack because of the presence of passengers, freight, employees, and equipment at these facilities. An additional common challenge for both passenger and freight rail systems is the number of stakeholders involved. Government agencies at the federal, state, and local levels and private companies share responsibility for rail security. For example, there were over 550 freight railroads operating in the United States in 2002. In addition, many passenger rail services, such as Amtrak and commuter rail, operate over tracks owned by freight railroads. For instance, over 95 percent of Amtrak's 22,000-mile network operates on freight railroad tracks. The number of stakeholders involved in transportation security can lead to communication challenges, duplication, and conflicting guidance.
As we have noted in past reports, coordination and consensus-building are critical to successful implementation of security efforts. Transportation stakeholders can have inconsistent goals or interests, which can make consensus-building challenging. For example, from a safety perspective, trains that carry hazardous materials should be required to have placards that identify the contents of a train so that emergency personnel know how best to respond to an incident. However, from a security perspective, identifying placards on vehicles that carry hazardous materials makes them a potential target for attack. In addition to the common security challenges that face both passenger and freight rail systems, there are some challenges that are unique to the type of rail system. In our past reports, we have discussed several of these unique challenges, including the openness of mass transit systems, the size of the freight rail network, and the diversity of freight hauled. According to mass transit officials and transit security experts, certain characteristics of mass transit systems make them inherently vulnerable to terrorist attacks and difficult to secure. By design, mass transit systems are open (i.e., have multiple access points and, in some cases, no barriers) so that they can move large numbers of people quickly. In contrast, the aviation system is housed in closed and controlled locations with few entry points. The openness of mass transit systems can leave them vulnerable because transit officials cannot monitor or control who enters or leaves the systems. In addition, other characteristics of some transit systems--high ridership, expensive infrastructure, economic importance, and location (e.g., large metropolitan areas or tourist destinations)--also make them attractive targets because of the potential for mass casualties and economic damage. Moreover, some of these same characteristics make mass transit systems difficult to secure. For example, the number of riders that pass through a mass transit system--especially during peak hours--makes some security measures, such as metal detectors, impractical. In addition, the multiple access points along extended routes make the costs of securing each location prohibitive. Further complicating transit security is the need for transit agencies to balance security concerns with accessibility, convenience, and affordability. Because transit riders often could choose another means of transportation, such as a personal automobile, transit agencies must compete for riders. To remain competitive, transit agencies must offer convenient, inexpensive, and quality service. Therefore, security measures that limit accessibility, cause delays, increase fares, or otherwise cause inconvenience could push people away from mass transit and back into their cars. The size and diversity of the freight rail system make it difficult to adequately secure. The freight rail system's extensive infrastructure crisscrosses the nation and extends beyond our borders to move millions of tons of freight each day (see fig. 2). There are over 100,000 miles of rail in the United States. The extensiveness of the infrastructure creates an almost limitless number of potential targets for terrorists. Protecting freight rail assets from attack is made more difficult because of the tremendous variety of freight hauled by railroads. For example, railroads carry freight as diverse as dry bulk (grain) and hazardous materials.
The transport of hazardous materials is of particular concern because serious incidents involving these materials have the potential to cause widespread disruption or injury. In 2001, over 83 million tons of hazardous materials were shipped by rail in the United States across the rail network, which extends through every major city as well as thousands of small communities. (Figure 3 is a photograph of a rail tanker car containing one of the many types of hazardous materials commonly transported by rail.) For our April 2003 report on rail security, we visited a number of local communities and interviewed federal and private sector hazardous materials transportation experts. A number of issues emerged from our work: the need for measures to better safeguard hazardous materials temporarily stored in rail cars while awaiting delivery to their ultimate destination--a practice commonly called "storage-in-transit," the advisability of requiring companies to notify local communities of the type and quantities of materials stored in transit, and the appropriate amount of information rail companies should be required to provide local officials regarding hazardous material shipments that pass through their communities. We recommended in April 2003 that DOT and DHS develop a plan that specifically addresses the security of the nation's freight rail infrastructure. This plan should build upon the rail industries' experience with rail infrastructure and the transportation of hazardous materials and establish time frames for implementing specific security actions necessary to protect hazardous material rail shipments. DHS has informed us that this plan is in progress. Since September 11, passenger and freight rail providers have been working to strengthen security. Although security was a priority before September 11, the terrorist attacks elevated the importance and urgency of transportation security for passenger and rail providers. According to representatives from the Association of American Railroads, Amtrak, and transit agencies, passenger and freight rail providers have implemented new security measures or increased the frequency or intensity of existing activities, including: Conducted vulnerability or risk assessments: Many passenger and freight rail providers conducted assessments of their systems to identify potential vulnerabilities, critical infrastructure or assets, and corrective actions or needed security improvements. For example, the railroad industry conducted a risk assessment that identified over 1,300 critical assets and served as a foundation for the industry's security plan. Increased emergency drills: Many passenger rail providers have increased the frequency of emergency drills. For example, as of June 2003, Amtrak had conducted two full-scale emergency drills in New York City. The purpose of emergency drilling is to test emergency plans, identify problems, and develop corrective actions. Figure 4 is a photograph from an annual emergency drill conducted by the Washington Metropolitan Area Transit Authority. Developed or revised security plans: Passenger and freight rail providers developed security plans or reviewed existing plans to determine what changes, if any, needed to be made. For example, the Association of American Railroads worked jointly with several chemical industry associations and consultants from a security firm to develop the rail industry's security management plan. 
The plan establishes four alert levels and describes a graduated series of actions, corresponding to each alert level, to prevent terrorist threats to railroad personnel and facilities. Provided additional training: Many transit agencies have either participated in or conducted additional training on security or antiterrorism. For example, many transit agencies attended seminars conducted by FTA or by the American Public Transportation Association. The federal government has also acted to enhance rail security. Prior to September 11, DOT modal administrations had primary responsibility for the security of the transportation system. In the wake of September 11, Congress created TSA and gave it responsibility for the security of all modes of transportation. In its first year of existence, TSA worked to establish its infrastructure and focused primarily on meeting the aviation security deadlines contained in ATSA. As TSA worked to establish itself and improve the security of the aviation system, DOT modal administrations, namely FRA, FTA, and RSPA, acted to enhance passenger and freight rail security (see table 1). For example, FTA launched a multipart initiative for mass transit agencies that provided grants for emergency drills, offered free security training, conducted security assessments at 36 transit agencies, provided technical assistance, and invested in research and development. With the immediate crisis of meeting many aviation security deadlines behind it, TSA has been able to focus more on the security of all modes of transportation, including rail security. We reported in June 2003 that TSA was moving forward with efforts to secure the entire transportation system, such as developing standardized criticality, threat, and vulnerability assessment tools and establishing security standards for all modes of transportation. Although steps have been taken to enhance passenger and freight rail security since September 11, the recent terrorist attack on a rail system in Spain naturally focuses our attention on what more could be done to secure the nation's rail systems. In our previous work on transportation security, we identified future actions that the federal government could take to enhance security of individual transportation modes as well as the entire transportation system. For example, in our December 2002 report on mass transit security, we recommended that the Secretary of Transportation seek a legislative change to give mass transit agencies more flexibility in using federal funds for security-related operating expenses, among other things. Two recurring themes cut across our previous work in transportation security--the need for the federal government to utilize a risk management approach and the need for the federal government to improve coordination of security efforts. Using risk management principles to guide decision-making is a good strategy, given the difficult trade-offs the federal government will likely have to make as it moves forward with its transportation security efforts. We have advocated using a risk management approach to guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. As figure 5 illustrates, the highest priorities emerge where threats, vulnerabilities, and criticality overlap. 
For example, rail infrastructure that is determined to be a critical asset, vulnerable to attack, and a likely target would be at greatest risk and therefore would be a higher priority for funding compared with infrastructure that was only vulnerable to attack. The federal government is likely to be viewed as a source of funding for at least some rail security enhancements. These enhancements will join the growing list of security initiatives competing for federal assistance. A risk management approach can help inform funding decisions for security improvements within the rail system and across modes. A risk management approach entails a continuous process of managing, through a series of mitigating actions, the likelihood that an adverse event will occur and the severity of its impact. Risk management encompasses "inherent" risk (i.e., risk that would exist absent any mitigating action), as well as "residual" risk (i.e., the risk that remains even after mitigating actions have been taken). Figure 6 depicts the risk management framework. Risk management principles acknowledge that while risk cannot be eliminated, enhancing protection from known or potential threats can help reduce it. (Appendix I provides a description of the key elements of the risk management approach.) We reported in June 2003 that TSA planned to adopt a risk management approach for its efforts to enhance the security of the nation's transportation system. According to TSA officials, risk management principles will drive all decisions--from standard-setting, to funding priorities, to staffing. Coordination is also a key action in meeting transportation security challenges. As we have noted in previous reports, coordination among all levels of government and private industry is critical to the success of security efforts. The lack of coordination can lead to such problems as duplication and/or conflicting efforts, gaps in preparedness, and confusion. Moreover, the lack of coordination can strain intergovernmental relationships, drain resources, and raise the potential for problems in responding to terrorism. The administration's National Strategy for Homeland Security and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets also emphasize the importance of and need for coordination in security efforts. In particular, the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets notes that protecting critical infrastructure, such as the transportation system, "requires a unifying organization, a clear purpose, a common understanding of roles and responsibilities, accountability, and a set of well-understood coordinating processes." We reported in June 2003 that the roles and responsibilities of TSA and DOT in transportation security, including rail security, have yet to be clearly delineated, which creates the potential for duplicating or conflicting efforts as both entities work to enhance security. Legislation has not defined TSA's role and responsibilities in securing all modes of transportation. ATSA does not specify TSA's role and responsibilities in securing the maritime and land transportation modes in detail as it does for aviation security. Instead, the act simply states that TSA is responsible for ensuring security in all modes of transportation. The act also did not eliminate DOT modal administrations' existing statutory responsibilities for securing the different transportation modes. 
Moreover, recent legislation indicates that DOT still has security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. To clarify the roles and responsibilities of TSA and DOT in transportation security matters, we recommended that the Secretary of Transportation and Secretary of Homeland Security use a mechanism, such as a memorandum of agreement, to clearly delineate their roles and responsibilities. The Department of Homeland Security (DHS) and DOT disagreed with our recommendation, noting that DHS had the lead for the Administration in transportation security matters and that DHS and DOT were committed to broad and routine consultations. We continue to believe our recommendation is valid. A mechanism, such as a memorandum of agreement, would serve to clarify, delineate, and document the roles and responsibilities of each entity. This is especially important considering that DOT's responsibilities for transportation safety overlap with DHS's role in securing the transportation system. Moreover, recent pieces of legislation give DOT transportation security responsibilities for some activities, including rail security. Consequently, the lack of clearly delineated roles and responsibilities could lead to duplication, confusion, and gaps in preparedness. A mechanism would also serve to hold each entity accountable for its transportation security responsibilities. Finally, it could serve as a vehicle to communicate the roles and responsibilities of each entity to transportation security stakeholders. Securing the nation's passenger and freight rail systems is a tremendous task. Many challenges must be overcome. Passenger and freight rail stakeholders have acted to enhance security, but more work is needed. As passenger and freight rail stakeholders, including the federal government, work to enhance security, it is important that efforts be coordinated. The lack of coordination could lead to duplication and confusion. More importantly, it could hamper the rail sector's ability to prepare for and respond to attacks. In addition, to ensure that finite resources are directed to the areas of highest priority, risk management principles should guide decision-making. Given budget pressures at all levels of government and the sluggish economy, difficult trade-offs will undoubtedly need to be made among competing claims for assistance. A risk management approach can help inform these difficult decisions. This concludes our prepared statement. We would be pleased to respond to any questions you or other Members of the Committee may have. For information about this testimony, please contact Peter Guerrero, Director, Physical Infrastructure Issues, at (202) 512-2834; or Norman Rabkin, Managing Director, Homeland Security and Justice Issues, at (202) 512-8777. Individuals making key contributions to this testimony included Nikki Clowers, Susan Fleming, Maria Santos, and Robert White. Threat assessment. Threat is defined as potential intent to cause harm or damage to an asset (e.g., natural environment, people, man-made infrastructures, and activities and operations). A threat assessment identifies adverse events that can affect an entity and may be present at the global, national, or local level. Criticality assessment. Criticality is defined as an asset's relative worth. 
A criticality assessment identifies and evaluates an entity's assets based on a variety of factors, including the importance of a function and the significance of a system in terms of national security, economic activity, or public safety. Criticality assessments help to provide a basis for prioritizing protection relative to limited resources. Vulnerability assessment. Vulnerability is defined as the inherent state or condition of an asset that can be exploited to cause harm. A vulnerability assessment identifies the extent to which these inherent states may be exploited, relative to countermeasures that have been or could be deployed. Risk assessment. Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. It may include scenarios under which two or more risks interact, creating greater or lesser impacts, as well as the ranking of risky events. Risk characterization. Risk characterization involves designating risk on a categorical scale (e.g., low, medium, and high). Risk characterization provides input for deciding which areas are most suited to mitigate risk. Mitigation evaluation. Mitigation evaluation is the identification of mitigation alternatives and the assessment of their effectiveness. The alternatives should be evaluated for their likely effect on risk and their cost. Mitigation selection. Mitigation selection involves a management decision on which mitigation alternatives should be implemented, taking into account risk, costs, and the effectiveness of the alternatives. Selection among mitigation alternatives should be based upon pre-considered criteria. There are as yet no clearly preferred selection criteria, although potential factors might include risk reduction, net benefits, equality of treatment, or other stated values. Mitigation selection does not necessarily involve directing all resources to the highest-risk area, but rather attempts to balance overall risk and available resources. Risk mitigation. Risk mitigation is the implementation of mitigating actions, depending upon an organization's chosen action posture (i.e., the decision on what to do about overall risk). Specifically, risk mitigation may involve risk acceptance (taking no action), risk avoidance (taking actions to avoid activities that involve risk), risk reduction (taking actions to reduce the likelihood and/or impact of risk), and risk sharing (taking actions to reduce risk by sharing risk with other entities). As shown in figure 6, risk mitigation is best framed within an integrated systems approach that encompasses action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. An integrated systems approach helps to ensure that taking action in one or more areas would not create unintended consequences in another area. Monitoring and evaluation. Monitoring and evaluation is a continuous, repetitive assessment process to keep risk management current and relevant. It should involve reassessing risk characterizations after mitigating efforts have been implemented. It also includes peer review, testing, and validation. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
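To make the risk management elements described in the appendix above more concrete, the following Python sketch shows one simple way that threat, vulnerability, and criticality assessment results could be combined into a risk characterization and used to rank assets for mitigation funding. The 1-to-3 scoring scale, the multiplicative combination, the categorical thresholds, and the example assets are illustrative assumptions only; they do not represent GAO, TSA, or DOT methodology.

# Illustrative sketch only: combines notional threat, vulnerability, and
# criticality scores into a risk characterization and ranks assets.
# The 1 (low) to 3 (high) scale and the example assets are hypothetical.

def characterize(score: int) -> str:
    """Map a combined risk score onto a categorical scale (low/medium/high)."""
    if score >= 18:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical assessment results for a handful of notional rail assets.
assets = {
    "major passenger terminal":       {"threat": 3, "vulnerability": 2, "criticality": 3},
    "hazmat storage-in-transit yard": {"threat": 2, "vulnerability": 3, "criticality": 3},
    "rural freight siding":           {"threat": 1, "vulnerability": 3, "criticality": 1},
}

# Highest priorities emerge where threat, vulnerability, and criticality overlap,
# so multiply the three scores to produce a combined risk score for ranking.
ranked = sorted(
    ((name, a["threat"] * a["vulnerability"] * a["criticality"]) for name, a in assets.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: score={score}, characterization={characterize(score)}")

Under these assumptions, the two assets where all three factors are elevated are characterized as high risk and float to the top of the funding queue, while the asset that is merely vulnerable ranks low, mirroring the prioritization logic described in the testimony.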
Passenger and freight rail services are important links in the nation's transportation system. Terrorist attacks on passenger and/or freight rail services have the potential to cause widespread injury, loss of life, and economic disruption. The recent terrorist attack in Spain illustrates that rail systems, like all modes of transportation, are targets for attacks. GAO was asked to summarize the results of its recent reports on transportation security that examined (1) challenges in securing passenger and freight rail systems, (2) actions rail stakeholders have taken to enhance the security of passenger and freight rail systems, and (3) future actions that could further enhance rail security. Securing the passenger and freight rail systems is fraught with challenges. Some of these challenges are common to passenger and freight rail systems, such as the funding of security improvements, the interconnectivity of the rail system, and the number of stakeholders involved in rail security. Other challenges are unique to the type of rail system. For example, the open access and high ridership of mass transit systems make them both vulnerable to attack and difficult to secure. Similarly, freight railroads transport millions of tons of hazardous materials each year across the United States, raising concerns about the vulnerability of these shipments to terrorist attack. Passenger and freight rail stakeholders have taken a number of steps to improve the security of the nation's rail system since September 11, 2001. Although security received attention before September 11, the terrorist attacks elevated the importance and urgency of transportation security for passenger and freight rail providers. Consequently, passenger and freight rail providers have implemented new security measures or increased the frequency or intensity of existing activities, including performing risk assessments, conducting emergency drills, and developing security plans. The federal government has also acted to enhance rail security. For example, the Federal Transit Administration has provided grants for emergency drills and conducted security assessments at the largest transit agencies, among other things. Implementation of risk management principles and improved coordination could help enhance rail security. Using risk management principles can help guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. In addition, improved coordination among federal entities could help enhance security efforts across all modes, including passenger and freight rail systems. We reported in June 2003 that the roles and responsibilities of the Transportation Security Administration (TSA) and the Department of Transportation (DOT) in transportation security, including rail security, have yet to be clearly delineated, which creates the potential for duplicating or conflicting efforts as both entities work to enhance security.
Although many aspects of an effective response to bioterrorism are the same as those for any form of terrorism, there are some unique features. For example, if a biological agent is released covertly, it may not be recognized for a week or more because symptoms may not appear for several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics to large segments of the population to prevent the spread of an infectious disease. However, some aspects of an effective response to bioterrorism are also important in responding to any type of large-scale disaster, such as providing emergency medical services, continuing health care services delivery, and, potentially, managing mass fatalities. The burden of responding to bioterrorist incidents falls initially on personnel in state and local emergency response agencies. These "first responders" include firefighters, emergency medical service personnel, law enforcement officers, public health officials, health care workers (including doctors, nurses, and other medical professionals), and public works personnel. If the emergency requires federal disaster assistance, federal departments and agencies will respond according to responsibilities outlined in the Federal Response Plan. Under the Federal Response Plan, CDC is the lead Department of Health and Human Services (HHS) agency providing assistance to state and local governments for five functions: (1) health surveillance, (2) worker health and safety, (3) radiological, chemical, and biological hazard consultation, (4) public health information, and (5) vector control. Each of these functions is described in table 1. HHS is currently leading an effort to work with governmental and nongovernmental partners to upgrade the nation's public health infrastructure and capacities to respond to bioterrorism. As part of this effort, several CDC centers, institutes, and offices work together in the agency's Bioterrorism Preparedness and Response Program. The principal priority of CDC's program is to upgrade infrastructure and capacity to respond to a large-scale epidemic, regardless of whether it is the result of a bioterrorist attack or a naturally occurring infectious disease outbreak. The program was started in fiscal year 1999 and was tasked with building and enhancing national, state, and local capacity; developing a national pharmaceutical stockpile; and conducting several independent studies on bioterrorism. CDC is conducting a variety of activities related to research on and preparedness for a bioterrorist attack. Since CDC's program began 3 years ago, funding for these activities has increased. Research activities focus on detection, treatment, vaccination, and emergency response equipment. Preparedness efforts include increasing state and local response capacity, increasing CDC's response capacity, preparedness and response planning, and building the National Pharmaceutical Stockpile Program. The funding for CDC's activities related to research on and preparedness for a bioterrorist attack has increased 61 percent over the past 2 years. See table 2 for reported funding for these activities. 
Funding for CDC's Bioterrorism Preparedness and Response Program grew approximately 43 percent in fiscal year 2000 and an additional 12 percent in fiscal year 2001. While the percentage increases are significant, they reflect only a $73 million increase because many of the programs initially received relatively small allocations. Approximately $45 million of the overall two-year increase was due to new research activities. Relative changes in funding for the various components of CDC's Bioterrorism Preparedness and Response Program are shown in Figure 1. Funding for research activities increased sharply from fiscal year 1999 to fiscal year 2000, and then dropped slightly in fiscal year 2001. The increase in fiscal year 2000 was largely due to a $40.5 million increase in research funding for studies on anthrax and smallpox. Funding for preparedness and response planning, upgrading CDC capacity, and upgrading state and local capacity was relatively constant between fiscal year 1999 and fiscal year 2000 and grew in fiscal year 2001. For example, funding increased to upgrade CDC capacity by 47 percent and to upgrade state and local capacity by 17 percent in fiscal year 2001. The National Pharmaceutical Stockpile Program experienced a slight increase in funding of 2 percent in fiscal year 2000 and a slight decrease in funding of 2 percent in fiscal year 2001. CDC's research activities focus on detection, treatment, vaccination, and emergency response equipment. In fiscal year 2001, CDC was allocated $18 million to continue research on an anthrax vaccine and associated issues, such as scheduling and dosage. The agency also received $22.4 million in fiscal year 2001 to conduct smallpox research. In addition, CDC oversees a number of independent studies, which fund specific universities and hospitals to do research and other work on bioterrorism. For example, funding in fiscal year 2001 included $941,000 to the University of Findlay in Findlay, Ohio, to develop training for health care providers and other hospital staff on how to handle victims who come to an emergency department during a bioterrorist incident. Another $750,000 was provided to the University of Texas Medical Branch in Galveston, Texas, to study various viruses in order to discover means to prevent or treat infections by these and other viruses (such as Rift Valley Fever and the smallpox virus). For worker safety, CDC's National Institute for Occupational Safety and Health is developing standards for respiratory protection equipment used against biological agents by firefighters, laboratory technicians, and other potentially affected workers. Most of CDC's activities to counter bioterrorism are focused on building and expanding public health infrastructure at the federal, state, and local levels. For example, CDC reported receiving funding to upgrade state and local capacity to detect and respond to a bioterrorist attack. CDC received additional funding for upgrading its own capacity in these areas, for preparedness and response planning, and for developing the National Pharmaceutical Stockpile Program. In addition to preparing for a bioterrorist attack, these activities also prepare the agency to respond to other challenges, such as identifying and containing a naturally occurring emerging infectious disease. CDC provides grants, technical support, and performance standards to support bioterrorism preparedness and response planning at the state and local levels. 
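As a rough arithmetic check of the growth figures cited above, compounding an approximately 43 percent increase in fiscal year 2000 with an approximately 12 percent increase in fiscal year 2001 yields roughly a 60 percent increase over the two years, consistent with the 61 percent figure reported earlier. A minimal Python illustration follows; the rounded percentages are taken from the text above, not from CDC budget documents.

fy2000_increase = 0.43  # approximate FY 2000 growth in program funding, as cited above
fy2001_increase = 0.12  # approximate FY 2001 growth, as cited above
two_year_growth = (1 + fy2000_increase) * (1 + fy2001_increase) - 1
print(f"Compounded two-year increase: {two_year_growth:.0%}")  # prints roughly 60%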
In fiscal year 2000, CDC funded 50 states and four major metropolitan health departments for preparedness and response activities. CDC is developing planning guidance for state public health officials to upgrade state and local public health departments' preparedness and response capabilities. In addition, CDC has worked with the Department of Justice to complete a public health assessment tool, which is being used to determine the ability of state and local public health agencies to respond to release of biological and chemical agents, as well as other public health emergencies. Ten states (Florida, Hawaii, Maine, Michigan, Minnesota, Pennsylvania, Rhode Island, South Carolina, Utah, and Wisconsin) have completed the assessment, and others are currently completing it. States have received funding from CDC to increase staff, enhance capacity to detect the release of a biological agent or an emerging infectious disease, and improve communications infrastructure. In fiscal year 1999, for example, a total of $7.8 million was awarded to 41 state and local health agencies to improve their ability to link different sources of data, such as sales of certain pharmaceuticals, which could be helpful in detecting a covert bioterrorist event. Rapid identification and confirmatory diagnosis of biological agents are critical to ensuring that prevention and treatment measures can be implemented quickly. CDC was allocated $13 million in fiscal year 1999 to enhance state and local laboratory capacity. CDC has established a Laboratory Response Network of federal, state, and local laboratories that maintain state-of-the-art capabilities for biological agent identification and characterization of human clinical samples such as blood. CDC has provided technical assistance and training in identification techniques to state and local public health laboratories. In addition, five state health departments received awards totaling $3 million to enhance chemical laboratory capabilities from the fiscal year 2000 funds. The states used these funds to purchase equipment and provide training. CDC is working with state and local health agencies to improve electronic infrastructure for public health communications for the collection and transmission of information related to a bioterrorism incident as well as other events. For example, $21 million was awarded to states in fiscal year 1999 to begin implementation of the Health Alert Network, which will support the exchange of key information over the Internet and provide a means to conduct distance training that could potentially reach a large segment of the public health community. Currently, 13 states are connected to all of their local jurisdictions. CDC is also directly connected to groups such as the American Medical Association to reach healthcare providers. CDC has described the Health Alert Network as a "highway" on which programs, such as the National Electronic Disease Surveillance System (NEDSS) and the Epidemic Information Exchange (Epi-X), will run. NEDSS is designed to facilitate the development of an integrated, coherent national system for public health surveillance. Ultimately, it is meant to support the automated collection, transmission, and monitoring of disease data from multiple sources (for example, clinician's offices and laboratories) from local to state health departments to CDC. This year, a total of $10.9 million will go to 36 jurisdictions for new or continuing NEDSS activities. 
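The intended NEDSS data flow described above (automated collection and transmission of disease reports from sources such as clinicians' offices and laboratories, through local and state health departments, to CDC) can be illustrated with a minimal, purely hypothetical Python sketch. The record fields, the example report, and the three-hop forwarding chain are assumptions for illustration only; they do not reflect actual NEDSS data standards or architecture.

from dataclasses import dataclass

@dataclass
class CaseReport:
    # Hypothetical fields for illustration; actual NEDSS data standards differ.
    condition: str         # reportable disease or condition
    onset_date: str        # date of symptom onset (ISO format for simplicity)
    reporting_source: str  # e.g., "clinician office" or "laboratory"
    jurisdiction: str      # originating local jurisdiction

def forward(report: CaseReport,
            chain=("local health department", "state health department", "CDC")):
    """Walk the local-to-state-to-CDC transmission path described in the text."""
    for hop in chain:
        print(f"{hop} received a report of {report.condition} "
              f"({report.reporting_source}, {report.jurisdiction})")

# Example: a single hypothetical report moving through the chain.
forward(CaseReport("suspected anthrax", "2001-10-15", "laboratory", "New York City"))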
Epi-X is a secure, Web-based exchange for public health officials to rapidly report and discuss disease outbreaks and other health events potentially related to bioterrorism as they are identified and investigated. CDC is upgrading its own epidemiologic and disease surveillance capacity. It has deployed, and is continuing to enhance, a surveillance system to increase surveillance and epidemiological capacities before, during, and after special events (such as the 1999 World Trade Organization meeting in Seattle). Besides improving emergency response at the special events, the agency gains valuable experience in developing and practicing plans to combat terrorism. In addition, CDC monitors unusual clusters of illnesses, such as influenza cases in June. Although unusual clusters are not always a cause for concern, they can indicate a potential problem. The agency is also increasing its surveillance of disease outbreaks in animals. CDC has strengthened its own laboratory capacity. For example, it is developing and validating new diagnostic tests as well as creating agent-specific detection protocols. In collaboration with the Association of Public Health Laboratories and the Department of Defense, CDC has started a secure Web-based network that allows state, local, and other public health laboratories access to guidelines for analyzing biological agents. The site also allows authenticated users to order critical reagents needed in performing laboratory analysis of samples. The agency has also opened a Rapid Response and Advanced Technology Laboratory, which screens samples for the presence of suspicious biological agents and evaluates new technology and protocols for the detection of biological agents. These technology assessments and protocols, as well as reagents and reference samples, are being shared with state and local public health laboratories. One activity CDC has undertaken is the implementation of a national bioterrorism response training plan. This plan focuses on preparing CDC officials to respond to bioterrorism and includes the development of exercises to assess progress in achieving bioterrorism preparedness at the federal, state, and local levels. The agency is also developing a crisis communications/media response curriculum for bioterrorism, as well as core capabilities guidelines to assist states and localities in their efforts to build comprehensive anti-bioterrorism programs. CDC has developed a bioterrorism information Web site. This site provides emergency contact information for state and local officials in the event of possible bioterrorism incidents, a list of critical biological and chemical agents, summaries of state and local bioterrorism projects, general information about CDC's bioterrorism initiative, and links to documents on bioterrorism preparedness and response. The National Pharmaceutical Stockpile Program maintains a repository of life-saving pharmaceuticals, antidotes, and medical supplies, known as 12-Hour Push Packages, that could be used in an emergency, including a bioterrorist attack. The packages can be delivered to the site of a biological (or chemical) attack within 12 hours of deployment for the treatment of civilians. The first emergency use of the National Pharmaceutical Stockpile occurred on September 11, 2001, when, in response to the terrorist attack on the World Trade Center, CDC released one of the eight Push Packages. 
The National Pharmaceutical Stockpile also includes additional antibiotics, antidotes, other drugs, medical equipment, and supplies, known as the Vendor Managed Inventory, that can be delivered within 24 to 36 hours after the appropriate vendors are notified. Deliveries from the Vendor Managed Inventory can be tailored to an individual incident. The program received $51.0 million in fiscal year 1999, $51.8 million in fiscal year 2000, and $51.0 million in fiscal year 2001. CDC and the Office of Emergency Preparedness (another agency in HHS that also maintains a stockpile of medical supplies) have encouraged state and local representatives to consider stockpile assets in their emergency planning for a biological attack and have trained representatives from state and local authorities in using the stockpile. The stockpile program also provides technical advisers in response to an event to ensure the appropriate and timely transfer of stockpile contents to authorized state representatives. Recently, individuals who may have been exposed to anthrax through the mail have been given antibiotics from the Vendor Managed Inventory. While CDC has funded research and preparedness programs for bioterrorism, a great deal of work remains to be done. CDC and HHS have identified gaps in bioterrorism research and preparedness that need to be addressed. In addition, some of our work on naturally occurring diseases also indicates gaps in preparedness that would be important in the event of a bioterrorist attack. Gaps in research activities center on vaccines and field testing for infectious agents. CDC has reported that it needs to continue the smallpox vaccine development and production contract begun in fiscal year 2000. This includes clinical testing of the vaccine and submitting a licensing application to the Food and Drug Administration for the prevention of smallpox in adults and children. CDC also plans to conduct further studies of the anthrax vaccine. This research will include studies to better understand the immunological response that correlates with protection against inhalation anthrax and the risk factors for adverse events, as well as investigation of modified vaccination schedules that could maintain protection and result in fewer adverse reactions. The agency has also indicated that it needs to continue research in the area of rapid assay tests to allow field diagnosis of a biological or chemical agent. Gaps remain in all of the areas of preparedness activities under CDC's program. In particular, there are many unmet needs in upgrading state and local capacity to respond to a bioterrorist attack. There are also further needs in upgrading CDC's capacity, preparedness and response planning, and building the National Pharmaceutical Stockpile. Health officials at many levels have called for CDC to support bioterrorism planning efforts at the state and local level. In a series of regional meetings from May through September 2000 to discuss issues associated with developing comprehensive bioterrorism response plans, state and local officials identified a need for additional federal support of their planning efforts. This includes federal efforts to develop effective written planning guidance for state and local health agencies and to provide on-site assistance that will ensure optimal preparedness and response. HHS has noted that surveillance capabilities need to be increased. 
In addition to enhancing traditional state and local capabilities for infectious disease surveillance, HHS has recognized the need to expand surveillance beyond the boundaries of the public health departments. In the department's FY 2002-FY 2006 Plan for Combating Bioterrorism, HHS notes that potential sources for data on morbidity trends include 911 emergency calls, reasons for emergency department visits, hospital bed usage, and the purchase of specific products at pharmacies. Improved monitoring of food is also necessary to reduce its vulnerability as an avenue of infection and of terrorism. Other sources beyond public health departments can provide critical information for detection and identification of an outbreak. For example, the 1999 West Nile virus outbreak showed the importance of links with veterinary surveillance. Initially, there were two separate investigations: one of sick people, the other of dying birds. Once the two investigations converged, the link was made, and the virus was correctly identified. HHS has found that state and local laboratories need to continue to upgrade their facilities and equipment. The department has stated that it would be beneficial if research, hospital, and commercial laboratories that have state-of-the-art equipment and well-trained staff were added to the National Laboratory Response Network. Currently, there are 104 laboratories in the network that can provide testing of biological samples for detection and confirmation of biological agents. Based on the 2000 regional meetings, CDC concluded that it needs to continue to support the laboratory network and identify opportunities to include more clinical laboratories to provide additional surge capacity. CDC also concluded from the 2000 regional meetings that, although it has begun to develop information systems, it needs to continue to enhance these systems to detect and respond to biological and chemical terrorism. HHS has stated that the work that has begun on the Health Alert Network, NEDSS, and Epi-X needs to continue. One aspect of this work is developing, testing, and implementing standards that will permit surveillance data from different systems to be easily shared. During the West Nile virus outbreak, while a secure electronic communication network was in place at the time of the initial outbreak, not all involved agencies and officials were capable of using it at the same time. For example, because CDC's laboratory was not linked to the New York State network, the New York State Department of Health had to act as an intermediary in sharing CDC's laboratory test results with local health departments. CDC and the New York State Department of Health laboratory databases were not linked to the database in New York City, and laboratory results consequently had to be manually entered there. These problems slowed the investigation of the outbreak. Moreover, we have testified that there is also a notable lack of training focused on detecting and responding to bioterrorist threats. Most physicians and nurses have never seen cases of certain diseases, such as smallpox or plague, and some biological agents initially produce symptoms that can be easily confused with influenza or other, less virulent illnesses, leading to a delay in diagnosis or identification. Medical laboratory personnel require training because they also lack experience in identifying biological agents such as anthrax. HHS has stated that epidemiologic capacity at CDC also needs to be improved. 
A standard system of disease reporting would better enable CDC to monitor disease, track trends, and intervene at the earliest sign of unusual or unexplained illness. HHS has noted that CDC needs to enhance its in-house laboratory capabilities to deal with likely terrorist agents. CDC plans to develop agent-specific detection and identification protocols for use by the laboratory response network, a research agenda, and guidelines for laboratory management and quality assurance. CDC also plans further development of its Rapid Response and Advanced Technology Laboratory. As we reported in September 2000, even the West Nile virus outbreak, which was relatively small and occurred in an area with one of the nation's largest local public health agencies, taxed the federal, state, and local laboratory resources. Both the New York State and the CDC laboratories were quickly inundated with requests for tests during the West Nile virus outbreak, and because of the limited capacity at the New York laboratories, the CDC laboratory handled the bulk of the testing. Officials indicated that the CDC laboratory would have been unable to respond to another outbreak, had one occurred at the same time. CDC plans to work with other agencies in HHS to develop guidance to facilitate preparedness planning and associated investments by local-level medical and public health systems. The department has stated that, to the extent the guidance can help foster uniformity across local efforts with respect to preparedness concepts and structural and operational strategies, it would enable government units to work more effectively together than if each local approach were essentially unique. More generally, CDC has found a need to implement a national strategy for public health preparedness for bioterrorism, and to work with federal, state, and local partners to ensure communication and teamwork in response to a potential bioterrorist incident. Planning needs to continue for potential naturally occurring epidemics as well. In October 2000, we reported that federal and state influenza pandemic plans are in various stages of completion and do not completely or consistently address key issues surrounding the purchase, distribution, and administration of vaccines and antiviral drugs. At the time of our report, 10 states either had developed or were developing plans using general guidance from CDC, and 19 more states had plans under development. Outstanding issues remained, however, because certain key federal decisions had not been made. For example, HHS had not determined the proportion of vaccines and antiviral drugs to be purchased, distributed, and administered by the public and private sectors or established priorities for which population groups should receive vaccines and antiviral drugs first when supplies are limited. As of July 2001, HHS continued to work on a national plan. As a result, policies may differ among states and between states and the federal government, and in the event of a pandemic, these inconsistencies could contribute to public confusion and weaken the effectiveness of the public health response. The recent anthrax incidents have focused a great deal of attention on the national pharmaceutical stockpile. Prior to this, in its FY 2002-FY 2006 Plan for Combating Bioterrorism, HHS had indicated what actions would be necessary regarding the stockpile over the next several years. 
These included purchasing additional products so that pharmaceuticals were available for treating additional biological agents in fiscal year 2002, and conducting a demonstration project that incorporates the National Guard in planning for receipt, transport, organization, distribution, and dissemination of stockpile supplies in fiscal year 2003. CDC also proposed providing grants to cities in fiscal year 2004 to hire a stockpile program coordinator to help the community develop a comprehensive plan for handling the stockpile and organizing volunteers trained to manage the stockpile during a chemical or biological event. Clearly, these longer-range plans are changing, but the need for these activities remains. For further information about this statement, please contact me at (202) 512-7118. Robert Copeland, Marcia Crosse, Greg Ferrante, David Gootnick, Deborah Miller, and Roseanne Price also made key contributions to this statement. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts (GAO-02-208T, Oct. 31, 2001). Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers (GAO-02-199T, Oct. 24, 2001). Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers (GAO-02-175T, Oct. 24, 2001). Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness (GAO-02-162T, Oct. 17, 2001). Homeland Security: Need to Consider VA's Role in Strengthening Federal Preparedness (GAO-02-145T, Oct. 15, 2001). Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, Oct. 12, 2001). Bioterrorism: Review of Public Health Preparedness Programs (GAO-02-149T, Oct. 10, 2001). Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, Oct. 9, 2001). Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001). Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001). Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001). Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Terrorism Preparedness (GAO-01-555T, May 9, 2001). Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-666T, May 1, 2001). Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Federal research and preparedness activities related to bioterrorism center on detection; the development of vaccines, antibiotics, and antivirals; and the development of performance standards for emergency response equipment. Preparedness activities include (1) increasing federal, state, and local response capabilities; (2) developing response teams; (3) increasing the availability of medical treatments; (4) participating in and sponsoring exercises; (5) aiding victims; and (6) providing support at special events, such as presidential inaugurations and Olympic games. To coordinate their efforts to combat terrorism, federal agencies are developing interagency response plans, participating in various interagency work groups, and entering into formal agreements with other agencies to share resources and capabilities. However, coordination of federal terrorism research, preparedness, and response programs is fragmented, raising concerns about the ability of states and localities to respond to a bioterrorist attack. These concerns include insufficient state and local planning and a lack of hospital participation in training on terrorism and emergency response planning. This testimony summarizes a September 2001 report (GAO-01-915).
The President has established, and DOD operates, geographic combatant commands to perform military missions around the world. Geographic combatant commands are each assigned an area of responsibility in which to conduct their missions and activities (see fig. 1 below). Combatant commands are responsible for a variety of functions, including tasks such as (1) deploying forces as necessary to carry out the missions assigned to the command; (2) coordinating and approving those aspects of administration, support (including control of resources and equipment, internal organization, and training), and discipline necessary to carry out missions assigned to the command; and (3) assigning command functions to subordinate commanders. Combatant commands are supported by Service component commands (Army, Navy, Marine Corps, and Air Force) and Special Operations Command. Each of these component commands has a significant role in planning and supporting operations. On February 6, 2007, the President directed the Secretary of Defense to establish a new geographic combatant command to consolidate the responsibility for DOD activities in Africa that have been shared by U.S. Central Command, U.S. Pacific Command, and U.S. European Command. AFRICOM was officially established on October 1, 2007, with a goal to reach full operational capability as a separate, independent geographic combatant command by September 30, 2008. Full operational capability was defined as the point at which the AFRICOM commander will accept responsibility for executing all U.S. military activities in Africa currently being conducted by the U.S. European, Central, and Pacific commands; have the capability to plan and conduct new operations; and have the capability to develop new initiatives. AFRICOM's mission statement, which was approved by the Secretary of Defense in May 2008, is to act in concert with other U.S. government agencies and international partners to conduct sustained security engagement through military-to-military programs, military-sponsored activities, and other military operations as directed to promote a stable and secure African environment in support of U.S. foreign policy. Since the President announced the establishment of AFRICOM, DOD has focused on building the capabilities necessary for AFRICOM to systematically assume responsibility for all existing military missions, activities, programs, and exercises in the area of responsibility it is inheriting from the U.S. European, Central, and Pacific commands. From the outset, AFRICOM has sought to assume responsibility for these existing activities seamlessly, without disrupting them or other U.S. government and international efforts in Africa. To accomplish this task, AFRICOM officials created a formal process to manage the transfer of activities it initially identified as ongoing within AFRICOM's area of responsibility. These range from activities to combat HIV/AIDS to programs that provide training opportunities for foreign military personnel and include the two largest U.S. military activities in Africa, the Combined Joint Task Force-Horn of Africa and Operation Enduring Freedom-Trans Sahara. DOD plans to transfer most activities to the new command by September 30, 2008. The areas of responsibility and examples of activities being transferred to AFRICOM from the U.S. European, Central, and Pacific commands are presented in figure 2. 
In cases involving State Department-led activities where DOD plays a primary role in their execution, such as the International Military Education and Training program, AFRICOM is assuming only the execution of the program from other combatant commands--the State Department still maintains overall authority and responsibility for the program. Since the initial establishment of the command in October 2007, AFRICOM has also sought to staff its headquarters, which will include DOD military personnel, DOD civilian personnel, and interagency personnel. Officials explained that staffing the command's positions is the most critical and limiting factor in the process for assuming responsibility for activities in Africa because activities cannot be transferred without personnel in place to execute them. DOD has approved 1,304 positions (military and DOD civilian) for the command's headquarters, of which about 270 military positions are being transferred from other commands. By September 30, 2008, DOD plans to have filled 75 percent, or 980, of these positions. In addition, DOD plans to have 13 command positions filled by representatives from non-DOD agencies. As a result, on September 30, 2008, 1 percent of AFRICOM headquarters positions will be filled by representatives from non-DOD organizations (see fig. 3). At this point, the number of interagency representatives in AFRICOM headquarters will be only slightly more than the number of representatives in other geographic commands, but AFRICOM has been designed to embed these interagency personnel at all levels in the command, including in leadership and management roles. While AFRICOM expects to fill 622 (97 percent) of its military personnel positions by September 30, 2008, it expects to fill only 358 (54 percent) of its DOD civilian positions, and 13 out of 52 (25 percent) targeted interagency positions by this time. DOD officials explained that, unlike filling military positions, hiring civilians may involve conducting security clearance investigations, overcoming the logistics necessary to physically relocate civilians overseas, and meeting other administrative requirements. Figure 4 compares the positions DOD has approved for AFRICOM, the targeted interagency positions, the command's progress in filling them as of July 2008, and the progress it expects to make by October 1, 2008. In order to meet infrastructure needs, AFRICOM is renovating existing facilities in Stuttgart, Germany, to establish an interim headquarters at a projected cost of approximately $40 million. DOD also projects an investment of approximately $43 million in command, control, communications, and computer systems infrastructure to enable AFRICOM to monitor and manage the vast array of DOD activities in Africa. Decisions related to the location of AFRICOM's permanent headquarters and the overall command presence in Africa will be made at a future date; therefore, DOD expects the command will operate from the interim headquarters in Germany for the foreseeable future. In total, DOD budgeted approximately $125 million to support the establishment of AFRICOM during fiscal years 2007 and 2008 and has requested nearly $390 million more for fiscal year 2009. This does not reflect the full cost of establishing the command over the next several years, a cost that is projected to be substantial and could range in the billions of dollars. 
For example, although DOD has not fully estimated the additional costs of establishing and operating the command, AFRICOM officials said that as the command is further developed and decisions are made on its permanent headquarters, it will need both to construct enduring facilities and to meet other operational support requirements. DOD's preliminary estimates for the command's future infrastructure and equipping costs over the next several years exceed several billion dollars, excluding the cost of activities AFRICOM will be performing. The progress AFRICOM intends to make in establishing the command by September 30, 2008, will provide a foundation for working toward DOD's goal to promote whole-of-government approaches to building the capacity of partner nations. However, AFRICOM officials recognize the command will need to continue to develop after its September 30, 2008, milestone to move beyond episodic security cooperation events to more strategic, sustained efforts. The AFRICOM commander has described the command as a "...listening, growing, and developing organization." In addition, senior DOD officials told us that on September 30, 2008, DOD does not anticipate that AFRICOM will have the desired interagency skill sets, the ability to strategically engage with African countries beyond the established level, or the capacity to take on new initiatives. In addition to DOD's efforts to establish the combatant command, the military services and Special Operations Command are also working to establish component commands that will be subordinate to AFRICOM. They are in the process of developing organizational structures and determining facilities, personnel, and other requirements, such as operational support aircraft, that have yet to be fully defined, but could be challenging for the services to meet. For example, personnel requirements for each component command range from approximately 100 personnel to more than 400, and Army officials said they will likely face difficulties in filling positions because many of the positions require a certain level of rank or experience that is in high demand. At the time that AFRICOM is estimated to reach full operational capability (September 30, 2008), only two component commands (Navy, Marine Corps) are expected to be fully operational. The Army, Air Force, and Special Operations component commands are expected to reach full operational capability by October 1, 2009. DOD's first challenge to achieving its vision for AFRICOM is in integrating personnel from civilian agencies into AFRICOM's command and staff structure. According to AFRICOM, strategic success in Africa depends on a whole-of-government approach to stability and security. A whole-of-government approach necessitates collaboration among federal agencies to ensure their activities are synchronized and integrated in pursuit of a common goal. Integrating personnel from federal civilian agencies is intended to facilitate collaboration among agencies, but AFRICOM has had difficulties in filling its interagency positions. Unlike liaison positions in other combatant commands, AFRICOM has been designed to embed personnel from non-DOD agencies in leadership, management, and staff positions at all levels in the command. For example, AFRICOM's Deputy to the Commander for Civil-Military Activities, one of two co-equal Deputies to the Commander, is a senior Foreign Service officer from the Department of State. 
By bringing knowledge of their home agencies, personnel from other agencies, such as USAID and the departments of Treasury and Commerce, are expected to improve the planning and execution of AFRICOM's plans, programs, and activities and to stimulate collaboration among U.S. government agencies. Initially, DOD established a notional goal that 25 percent of AFRICOM's headquarters staff would be provided by non-DOD agencies. According to State officials, however, this goal was not vetted through civilian agencies and was not realistic because of the resource limitations in civilian agencies. Subsequently, AFRICOM reduced its interagency representation to 52 notional interagency positions, which, as displayed in figure 5, would be approximately 4 percent of the AFRICOM staff. As previously discussed, however, DOD officials have indicated that the target of 52 interagency positions for the command will continue to evolve as AFRICOM receives input from other agencies. Even with a reduction in the number of interagency positions, according to DOD officials, some civilian agencies have limited personnel resources and incompatible personnel systems that have not easily accommodated DOD's intent to place interagency personnel in the command. AFRICOM is looking to civilian agencies for skill sets that it does not have internally, but many of the personnel who have these skill sets and experience outside of DOD are in high demand. Officials at the State Department, in particular, noted their concern about the ability to fill positions left vacant by personnel being detailed to AFRICOM since it takes a long time to develop Foreign Service officers with the requisite expertise and experience. In fact, according to State Department officials, some U.S. embassies in Africa are already experiencing shortfalls in personnel, especially at the mid-level. DOD officials also said that personnel systems among federal agencies are incompatible and do not readily facilitate integrating personnel into other agencies, particularly into non-liaison roles. In addition, many non-DOD agencies have missions that are domestically focused and therefore will need time to determine how best to provide personnel support to AFRICOM. To encourage agencies to provide personnel to fill positions in AFRICOM, DOD will pay the salaries and expenses for these personnel. As previously discussed, while DOD has focused initially on establishing AFRICOM's headquarters, the services and Special Operations Command are also working to establish component commands to support AFRICOM, but the extent of interagency participation at these commands has not been fully defined. Neither OSD nor AFRICOM has provided guidance on whether AFRICOM's component commands should integrate interagency representatives, and among the services, plans for embedded interagency personnel varied. The Army has proposed including four interagency positions in AFRICOM's Army service component command, U.S. Army, Africa. Officials from the Office of the Secretary of Defense, the Joint Forces Command, the Marine Corps, and the Air Force stated that component commands would receive interagency input from AFRICOM headquarters and embassy country teams. One OSD official added that the level of interagency input at the headquarters was sufficient because component commands are responsible for executing plans developed by the combatant command headquarters where interagency personnel would be involved in the planning process. 
In the 2006 Quadrennial Defense Review Execution Roadmap, Building Partnership Capacity, DOD recognized the importance of a seamless integration of U.S. government capabilities by calling for strategies, plans, and operations to be coordinated with civilian agencies. One of AFRICOM's guiding principles is to collaborate with U.S. government agencies, host nations, international partners, and nongovernmental organizations. AFRICOM officials told us that they had not yet developed the mechanisms or structures to ensure that their activities were synchronized or integrated with those of civilian agencies in a mutually supportive and sustainable effort, but would turn their attention to this synchronization after October 2008. Barriers to interagency collaboration, however, could arise as AFRICOM develops mechanisms, processes, and structures to facilitate interagency collaboration, since both AFRICOM and the agencies will likely encounter additional challenges that are outside their control, such as different planning processes, authorities, and diverse institutional cultures. For example, according to State and DOD officials, the State Department is focused on bilateral relationships with foreign governments through its embassies overseas, while the Defense Department is focused regionally through its geographic combatant commands. With relatively few interagency personnel on the AFRICOM staff, such coordination mechanisms could be critical for the command to achieve its vision. DOD's second challenge to achieving its vision for AFRICOM is overcoming stakeholder concerns about the command's mission, which, if not addressed, could limit its ability to develop key partnerships. Since its establishment was announced in early 2007, AFRICOM has encountered concerns from U.S. civilian agencies, nongovernmental organizations, and African partners about what AFRICOM is and what it hopes to accomplish in Africa. Many of the concerns from U.S. government agencies, nongovernmental organizations, and African partners stem from their interpretations of AFRICOM's intended mission and goals. Although DOD has often stated that AFRICOM is intended to support, not lead, U.S. diplomatic and development efforts in Africa, State Department officials expressed concern that AFRICOM would become the lead for all U.S. government activities in Africa, even though the U.S. embassy leads decision-making on U.S. government non-combat activities conducted in a given country. Other State and USAID officials noted that the creation of AFRICOM could blur traditional boundaries among diplomacy, development, and defense, thereby militarizing U.S. foreign policy. An organization that represents U.S.-based international nongovernmental organizations told us that many nongovernmental organizations shared the perception that AFRICOM would militarize U.S. foreign aid and lead to greater U.S. military involvement in humanitarian assistance. Nongovernmental organizations are concerned that this would put their aid workers at greater risk if their activities are confused with or associated with U.S. military activities. Among African countries, there is apprehension that AFRICOM will be used as an opportunity to increase the number of U.S. troops and military bases in Africa. African leaders also expressed concerns to DOD that U.S. priorities in Africa may not be shared by their governments.
For example, at a DOD-sponsored roundtable, a group of U.S.-based African attachés identified their most pressing security issues as poverty, food shortages, inadequate educational opportunities, displaced persons, and HIV/AIDS, while they perceived that U.S. priorities were focused on combating terrorism and weakened states. One factor contributing to persistent concerns among U.S. government agencies, nongovernmental organizations, and African partners is the evolution of how DOD has characterized AFRICOM's unique mission and goals. Between February 2007 and May 2008, AFRICOM's mission statement went through several iterations that ranged in emphasis from humanitarian-oriented activities to more traditional military programs. According to an official from an organization representing nongovernmental organizations, the emphasis on humanitarian assistance as part of AFRICOM's mission early on contributed to their fears that AFRICOM would be engaged in activities that are traditionally the mission of civilian agencies and organizations. Additionally, the discussion of AFRICOM's mission evolved from highlighting its whole-of-government approach to referring to it as a bureaucratic reorganization within DOD. When articulating its vision for AFRICOM, DOD also used language that did not translate well to African partners and civilian agency stakeholders. For civilian agencies, use of the words "integrating U.S. government activities" led to concerns over AFRICOM's assuming leadership in directing all U.S. government efforts. Likewise, DOD's use of the term "combatant command" led some African partners to question whether AFRICOM was truly focused on non-warfighting activities. State Department officials said that they had difficulty in responding to African concerns because of their own confusion over AFRICOM's intended mission and goals. Another factor contributing to concerns over AFRICOM's mission and goals is unclear roles and responsibilities. Although DOD has long been involved in humanitarian and stability-related activities, AFRICOM's emphasis on programs that prevent conflict in order to foster dialogue and development has put a spotlight on an ongoing debate over the appropriate role of the U.S. military in non-combat activities. Consequently, civilian agencies are concerned about the overlap of DOD missions with their own and what impact DOD's role may have on theirs. DOD is currently conducting a mission analysis to help define roles and responsibilities between AFRICOM and civilian agencies operating in Africa, but broader governmentwide consensus on these issues has not been reached. An additional factor contributing to U.S. government perceptions that AFRICOM could militarize U.S. foreign policy is DOD's vast resources and capacity compared with those of civilian agencies. Civilian agencies and some African partners are concerned that the strategic focus AFRICOM could bring to the continent would result in AFRICOM supplanting civilian planning and activities. One USAID official told us that an increase in funding executed by AFRICOM could change the dynamic in relationships among U.S. federal agencies and in relationships between individual U.S. agencies and African partners. DOD has not yet reached agreement with the State Department and potential host nations on the structure and location of AFRICOM's presence in Africa.
Initially, an important goal of AFRICOM was to establish a command presence in Africa that would provide a regional approach to African security and complement DOD's representation in U.S. embassies. AFRICOM is planning to increase its representation in 11 U.S. embassies by establishing new offices to strengthen bilateral military-to-military relationships. It is also planning to establish regional offices in five locations on the continent that would align with the five regional economic communities in Africa. DOD, however, has faced difficulty reaching agreement with the State Department on AFRICOM's future presence on the continent. Therefore, AFRICOM will be based in Stuttgart, Germany, for the foreseeable future and plans to focus on increasing its representation in embassies until decisions on the structure and location of AFRICOM's presence are made. In testimony to the Congress in March of this year, the AFRICOM Commander stated that he considers command presence in Africa an important issue, but not a matter of urgency. DOD officials have previously indicated that the structure and location of AFRICOM's presence in Africa are important for several reasons. First, being located in Africa would provide AFRICOM staff with a more comprehensive understanding of the regional environment and African needs. Second, having staff located in Africa would help the command build relationships and partnerships with African nations and the regional economic communities and associated regional standby forces. Enduring relationships are an important aspect of building African partner security capacity and of successfully planning and executing programs and activities. Third, regional offices are intended to promote a regional dimension to U.S. security assistance through their coordination with DOD representatives who manage these programs in multiple U.S. embassies. As DOD continues to evolve its plans for a presence in Africa and decisions involving presence are delayed, DOD officials have indicated that other coordinating mechanisms may be established as a substitute for a physical presence on the continent. In addition, senior DOD officials have stated that preparing budget estimates for future fiscal years is difficult without an agreed-upon AFRICOM presence on the continent. For example, although DOD requested $20 million in fiscal year 2009 to begin establishing the presence in Africa, AFRICOM has not been able to identify total funding requirements for headquarters infrastructure and operations in Africa. Furthermore, a senior official from the Office of the Secretary of Defense for Program Analysis and Evaluation stated that AFRICOM's future presence in Africa was one of the most important policy decisions that could affect the ability of the department to estimate future costs for the command. For example, in developing the fiscal year 2009 budget request, DOD estimated that the costs to operate the interim headquarters in Stuttgart, Germany, were approximately $183 million, but these costs could change significantly, according to DOD officials, if the headquarters were located in an African country with more limited infrastructure than is currently available in Stuttgart. Therefore, without an agreed-upon U.S.
government strategy for establishing AFRICOM's presence on the continent of Africa that is negotiated with and supported by potential host nations, the potentially significant fiscal implications of AFRICOM's presence, and its impact on the command's ability to develop relationships and partnerships at the regional and local levels, will remain unclear. As AFRICOM nears the October 2008 date slated for reaching full operational capability, DOD is working to shape expectations for the emergent command--both inside and outside the United States. Confronted by concerns from other U.S. agencies and African partners, AFRICOM is focused on assuming existing military missions while building capacity for the future. The ultimate role of AFRICOM in promoting a whole-of-government approach to stability and security on the continent is still uncertain, but initial expectations that the command would represent a dramatic shift in the U.S. approach to security in Africa are being scaled back. Two key precepts of the command--that it would have significant interagency participation and would be physically located in Africa to engage partners there--will not be realized in the near term. Looking to the future, the difficulties encountered in staffing the command, sorting out the military's role in policy, and establishing a presence in Africa are emblematic of deeper cultural and structural issues within the U.S. government. Having such a command will likely help DOD focus military efforts on the African continent, but the extent to which an integrated approach is feasible remains unclear. Over the next few years, DOD intends to invest billions in this new command--including devoting hundreds of staff--and sustained attention will be needed to ensure that this substantial investment pays off over time. Mr. Chairman, this concludes my prepared statement. We would be happy to answer any questions you may have. For questions regarding this testimony, please call John Pendleton at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this statement were Robert L. Repasky, Tim Burke, Leigh Caraher, Grace Colemen, Taylor Matheson, Lonnie McAllister, and Amber Simco. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In February 2007, the President announced the U.S. Africa Command (AFRICOM), a Department of Defense (DOD) geographic combatant command with a focus on strengthening U.S. security cooperation with Africa, creating opportunities to bolster the capabilities of African partners, and enhancing peace and security efforts on the continent through activities such as military training and support to other U.S. government agencies' efforts. DOD officials have emphasized that AFRICOM is designed to integrate DOD and non-DOD personnel into the command to stimulate greater coordination among U.S. government agencies to achieve a more whole-of-government approach. This testimony is based on the preliminary results of work GAO is conducting for the Subcommittee on the establishment of AFRICOM. GAO analyzed relevant documentation and obtained perspectives from the combatant commands, military services, Joint Staff, Department of State, USAID, and nongovernmental organizations. GAO plans to provide the Subcommittee with a report later this year that will include recommendations as appropriate. This testimony addresses (1) the status of DOD's efforts to establish and fund AFRICOM and (2) challenges that may hinder the command's ability to achieve interagency participation and a more integrated, whole-of-government approach to DOD activities in Africa. The Department of Defense has made progress in transferring activities, staffing the command, and establishing an interim headquarters for AFRICOM, but has not yet fully estimated the additional costs of establishing and operating the command. To date, AFRICOM's primary focus has been on assuming responsibility for existing DOD activities such as military exercises and humanitarian assistance programs, and DOD plans to have most of these activities transferred by October 1, 2008. DOD has approved 1,304 positions for the command's headquarters, and by October 1, 2008, plans to have filled about 75 percent, or 980 positions. Also, DOD plans to have 13 other positions filled by representatives from non-DOD organizations, such as the State Department. DOD is renovating facilities in Stuttgart, Germany, for an interim headquarters and plans to use these facilities for the foreseeable future until decisions are made regarding the permanent AFRICOM headquarters location. The initial concept for AFRICOM, designed and developed by DOD, met resistance from within the U.S. government and African countries and contributed to several implementation challenges. First, DOD has had difficulties integrating interagency personnel in the command, which is critical to synchronizing DOD efforts with other U.S. government agencies. DOD continues to lower its estimate of the ultimate level of interagency participation in the command. According to DOD, other agencies have limited resources and personnel systems that have not easily accommodated DOD's intent to place interagency personnel in the command. Second, DOD has encountered concerns from civilian agencies and other stakeholders over the command's mission and goals. For example, State Department and U.S. Agency for International Development officials have expressed concerns that AFRICOM will become the lead for all U.S. efforts in Africa, rather than just DOD activities. If not addressed, these concerns could limit the command's ability to develop key partnerships. Third, DOD has not yet reached agreement with the State Department and potential host nations on the structure and location of the command's presence in Africa.
Uncertainties related to AFRICOM's presence hinder DOD's ability to estimate future funding requirements for AFRICOM and raise questions about whether DOD's concept for developing enduring relationships on the continent can be achieved.
Our investigation of DCAA hotline allegations and our DCAA-wide follow-up audit document systemic weaknesses in DCAA's management environment and structure for assuring audit quality. Last year, our investigation of hotline allegations substantiated auditor concerns made on all 14 audits we reviewed at two locations and 62 forward pricing reports we investigated at a third location. We found that (1) workpapers did not support reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate audit evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. In addition, we found that contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions, and opinions of some audits--a serious independence issue. This year, our follow-on audit found DCAA-wide audit quality problems similar to those identified in our investigation, including compromise of auditor independence, insufficient audit testing to support conclusions and opinions, and inadequate planning and supervision. For example, of the 69 audits and cost-related assignments we reviewed, 65 exhibited serious GAGAS and other deficiencies that rendered them unreliable for decisions on contract awards and contract management and oversight. Although not as serious, the remaining four audits also had GAGAS compliance problems. Of the 69 audits and cost-related assignments, 37 covered key contractor business systems and related controls, including cost accounting, estimating, and billing systems. Contracting officers rely on the results of these audits for 3 or more years to make decisions on pricing, contract awards, and payments. In addition, while DCAA did not consider 26 of the 32 cost-related assignments we reviewed to be GAGAS audits, it did not perform sufficient testing to support the reported conclusions on that work related to contractor billings. DCAA has rescinded 81 audit reports in response to our work and the DOD Inspector General's (IG) follow-up audit because the audit evidence was outdated, insufficient, or inconsistent with reported conclusions and opinions, and because reliance on these reports for contracting decisions could pose a problem. About one-third of the rescinded reports relate to unsupported opinions on contractor internal controls and were used as the basis for risk assessments and planning on subsequent internal control and cost-related audits. Other rescinded reports relate to CAS compliance and contract pricing decisions. Because the conclusions and opinions in the rescinded reports were used to assess risk in planning subsequent audits, they affect the reliability of hundreds of other audits and contracting decisions covering billions of dollars in DOD expenditures. Our hotline investigation found numerous examples where DCAA failed to comply with GAGAS. For example, contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions and opinions, and reporting in three cases we investigated--a serious independence issue. For 14 audits at two DCAA locations, we found that (1) audit documentation did not support the reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions.
We also substantiated allegations that forward pricing audit reports at a third DCAA location were issued before supervisors completed their review of the audit documentation because of the 20- to 30-day time frames required to support contract negotiations. Throughout our investigation, auditors at each of the three locations addressed in the hotline allegations told us that the limited number of hours approved for their audits directly affected the sufficiency of audit testing. Deficient audits do not provide assurance that billions of dollars in annual payments to these contractors complied with the FAR, CAS, or contract terms. We also found that DCAA managers took actions against staff at two locations, attempting to intimidate auditors, prevent them from speaking with investigators, and create a generally abusive work environment. The following discussion highlights some of the examples from our investigation. In planning an estimating system audit of a major aerospace company, DCAA made an up-front agreement with the contractor to limit the scope of work and basis for the audit opinion. The contractor was unable to develop compliant estimates, leading to a draft audit opinion of "inadequate-in-part." The contractor objected to the draft findings, and DCAA management assigned a new supervisory auditor. DCAA management then threatened the senior auditor with personnel action if he did not delete the findings from the report and change the draft audit opinion to "adequate." Another audit of the above contractor related to a revised proposal that was submitted after DCAA had reported an "adverse" (inadequate) opinion on the contractor's 2005 proposal to provide commercial satellite launch capability. At the beginning of the audit, the buying command and contractor officials met with a DCAA regional audit manager to determine how to resolve CAS compliance issues and obtain a favorable audit opinion. Although the contractor failed to provide all cost information requested for the audit, the DCAA regional audit manager (RAM) instructed the auditors that they could not base an "adverse" opinion on the lack of information to audit certain costs. The manager directed the auditors to exclude any reference to CAS noncompliance in the audit documentation and to change the audit opinion to "inadequate-in-part." Based on the more favorable audit opinion, the buying command negotiated a $967 million contract, which has since grown to over $1.6 billion through fiscal year 2009. The Defense Criminal Investigative Service is completing a criminal investigation conducted in response to our findings. The DOD IG performed a follow-up audit and confirmed our findings that DCAA's audit was impaired because of a lack of independence; the audit working papers did not support the reported opinions in the May 8, 2006, proposal audit report; and the draft audit opinion was changed without sufficient documentation. In addition, the DOD IG concluded that the DCAA RAM failed to exercise objective and impartial judgment on significant issues associated with conducting the audit and reporting on the work--a significant independence impairment--and that the RAM did not protect the interests of the government as required by DCAA policy. The DOD IG also concluded that the contractor's unabsorbed Program Management and Hardware Support (PM&HS) costs represented losses incurred on other contracts and prior accounting periods, including commercial losses--a CAS noncompliance.
The DOD IG recommended that the Air Force buying command withhold the balance of $271 million for unabsorbed PM&HS costs (of which $101 million had already been paid) and that the Air Force cease negotiations with the launch services contractor on a $114 million proposal for unabsorbed costs. DCAA is currently performing CAS compliance audits on the commercial satellite launch contract costs. If DCAA determines that the contractor's costs did not comply with CAS related to unallowable costs, the cost accounting period, and the allocation of direct and indirect costs, and with the FAR related to losses on other contracts, DCAA's findings should provide the basis for recovering amounts already paid. For a billing system audit of a contractor with $168 million in annual billings to the government, the field office manager allowed the original auditor to work on the audit after being assured that the auditors would help the contractor correct billing system deficiencies during the performance of the audit. After the original auditor identified 10 significant billing system deficiencies, the manager removed her from the audit and assigned a second auditor, who then dropped 8 of the 10 significant deficiencies and reported one significant deficiency and one suggestion to improve the system. The final opinion was "inadequate-in-part." However, the DCAA field office retained the contractor's direct billing privileges--a status conveyed to a contractor based on the strength of its billing system controls whereby invoices are submitted directly to the government paying office without prior review. After we brought this to the attention of DCAA western region officials, the field office rescinded the contractor's direct billing status. Our follow-up audit found that a management environment and agency culture that focused on facilitating the award of contracts, along with an ineffective audit quality assurance structure, are at the root of the DCAA-wide audit failures that we identified for the 69 audits and cost-related assignments that we reviewed. DCAA's focus on a production-oriented mission led DCAA management to establish policies, procedures, and training that emphasized performing a large quantity of audits to support contracting decisions and gave inadequate attention to performing quality audits. An ineffective quality assurance structure, whereby DCAA gave passing scores to deficient audits, compounded this problem. Although the reports for all 37 audits of contractor internal controls that we reviewed stated that the audits were performed in accordance with GAGAS, we found GAGAS compliance issues with all of these audits. The issues or themes are consistent with those identified in our prior investigation. Lack of independence. In seven audits, independence was compromised because auditors provided material nonaudit services to a contractor they later audited; experienced access to records problems that were not fully resolved; and significantly delayed report issuance, which allowed the contractors to resolve cited deficiencies so that they were not included in the audit reports. GAGAS state that auditors should be free from influences that restrict access to records or that improperly modify audit scope. Insufficient testing. Thirty-three of 37 internal control audits did not include sufficient testing of internal controls to support auditor conclusions and opinions.
GAGAS for examination-level attestation engagements require that sufficient evidence be obtained to provide a reasonable basis for the conclusion that is expressed in the report. For internal control audits, which are relied on for 2 to 4 years and sometimes longer, the auditors would be expected to test a representative selection of transactions across the year and not transactions for just 1 day, 1 month, or a couple of months. However, we found that for many controls, the procedures performed consisted of documenting the auditors' understanding of controls, and the auditors did not test the effectiveness of the implementation and operation of controls at all. Unsupported opinions. The lack of sufficient support for the audit opinions on 33 of the 37 internal control audits we reviewed rendered them unreliable for decision making on contract awards, direct-billing privileges, the reliability of cost estimates, and reported direct cost and indirect cost rates. Similarly, the 32 cost-related assignments we reviewed did not contain sufficient testing to provide reasonable assurance that overpayments and billing errors that might have occurred were identified. As a result, there is limited assurance that any such errors, if they occurred, were corrected and that related improper contract payments, if any, were refunded or credited to the government. Contractors are responsible for ensuring that their billings reflect fair and reasonable prices and contain only allowable costs, and taxpayers expect DCAA to review these billings to provide reasonable assurance that the government is not paying more than it should for goods and services. Based on our findings that sufficient voucher testing was not performed to support decisions to approve contractors for direct-billing privileges, DCAA recently removed over 200 contractors from the direct-bill program. Production environment and audit quality issues. DCAA's mission statement, strategic plan, and metrics all focused on producing a large number of audit reports and provided little focus on assuring quality audits that protect taxpayer interest. For example, DCAA's current approach of performing 30,000 or more audits annually and issuing over 22,000 audit reports with 3,600 auditors substantially contributed to the widespread audit quality problems we identified. Within this environment, DCAA's audit quality assurance program was not properly implemented, resulting in an ineffective quality control process that accepted audits with significant deficiencies and noncompliance with GAGAS and DCAA policy. Moreover, even when DCAA's quality assurance documentation showed evidence of serious deficiencies within individual offices, those offices were given satisfactory ratings. Considering the large number of DCAA audit reports issued annually and the reliance the contracting and finance communities have placed on DCAA audit conclusions and opinions, an effective quality assurance program is key to protecting the public interest. Such a program would report review findings along with recommendations for any needed corrective actions; provide training and additional policy guidance, as appropriate; and perform follow-up reviews to assure that corrective actions are taken. 
GAGAS require that each audit organization performing audits and attestation engagements in accordance with GAGAS have a system of quality control that is designed to provide the audit organization with reasonable assurance that the organization and its personnel comply with professional standards and applicable legal and regulatory requirements, and have an external peer review at least once every 3 years. On September 1, 2009, the DCAA Director advised us that DCAA needs up to 2 years to revise its current audit approach and establish an adequate audit quality control system before undergoing another peer review. For fiscal year 2008, DOD reported that it obligated over $380 billion for payments to federal contractors, more than double the amount it obligated for fiscal year 2002. With hundreds of billions in taxpayer dollars at stake, the government needs strong controls to provide reasonable assurance that these contract funds are not being lost to fraud, waste, abuse, and mismanagement. Moreover, effective contract audit capacity is particularly important as DOD continues its use of high-risk contracting strategies. For example, we have found numerous issues with DOD's use of time-and-materials contracts, which are used to purchase billions of dollars of services across the government. Under these types of contracts, payments to contractors are based on the number of labor hours billed at a fixed hourly rate--which includes wages, overhead, and profit--and the cost of any materials. These contracts are considered high risk for the government because the contractor's profit is tied to the number of hours worked. Because the government bears the responsibility for managing contract costs, it is essential that the government be assured, using DCAA as needed, that the contractor has a good system in place to keep an accurate accounting of the number of hours billed and materials acquired and used. In addition, we have said that DOD needs to improve its management and oversight of undefinitized contract actions, under which DOD can authorize contractors to begin work and incur costs before reaching a final agreement on contract terms and conditions, including price. These contracts are high risk because the contractor has little incentive to control costs while the contract remains undefinitized. In one case, we found that the lack of timely negotiations on a task order issued to restore Iraq's oil infrastructure increased the government's risk when DOD paid the contractor nearly all of the $221 million in costs questioned by DCAA. More timely negotiations, including involvement by DCAA, could have reduced the risk to the government of possible overpayment. DCAA initiated a number of actions to address findings in our July 2008 report as well as findings from DOD follow-up efforts, including the DOD Comptroller/Chief Financial Officer (CFO) August 2008 "tiger team" review and the Defense Business Board study, which was officially released in January 2009. Examples of recent DCAA and DOD actions include the following: eliminating production metrics and implementing new metrics intended to focus on achieving quality audits; establishing an anonymous Web site to address management and hotline issues (in addition, DCAA's Assistant Director for Operations has been proactive in handling internal DCAA Web site hotline complaints); and revising policy guidance to address auditor independence, assure management involvement in key decisions, and address audit quality issues.
DCAA also took action to halt auditor participation in nonaudit services that posed independence concerns. DCAA also has enlisted assistance from other agencies to develop a human capital strategic plan, assist in cultural transformation, and conduct a staffing study. Further, in March 2009, the new DOD Comptroller/CFO established a DCAA Oversight Committee to monitor and advise on DCAA corrective actions. While these are positive steps, much more needs to be done to address fundamental weaknesses in DCAA's mission, strategic plan, metrics, audit approach, and human capital practices that have resulted in widespread audit quality problems. DCAA's production-oriented culture is deeply embedded and will likely take several years to change. DCAA's mission focused primarily on producing reports to support procurement and contracting community decisions, with no mention of quality audits that serve the taxpayer interest. Further, DCAA's culture has focused on hiring at the entry level and promoting from within the agency, and most training has been conducted by agency staff, which has led to an insular culture with limited perspectives on how to make effective organizational changes. To address these issues, our September 2009 report contained 15 recommendations to improve the quality of DCAA's audits and strengthen auditor effectiveness and independence. Key GAO recommendations relate to the need for DCAA to develop a risk-based audit approach and develop a staffing plan in order to match audit priorities to available resources. To develop an effective risk-based audit approach, DCAA will need to work with key DOD stakeholders to determine the appropriate mix of audit and nonaudit services it should perform and determine what, if any, of these responsibilities should be transferred or reassigned to another DOD agency or terminated in order for DCAA to comply with GAGAS requirements. We also made recommendations for DCAA to establish in-house expertise or obtain outside expertise on auditing standards to (1) assist in revising contract audit policy, (2) provide guidance on sampling and testing, and (3) develop training on professional auditing standards. In addition, we recommended that DOD conduct an independent review of DCAA's revised audit quality assurance program and follow up to assure that appropriate corrective actions are taken. Mr. Chairman and Members of the Panel, this concludes my statement. We would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Major contributors to our testimony include William T. Woods, Director, Acquisition and Sourcing Management; F. Abe Dymond, Assistant General Counsel; Gayle L. Fischer, Assistant Director, Financial Management and Assurance; Richard Cambosos; Jeremiah Cockrum; Shawnda Lindsey; Andrew McIntosh; Lerone Reid; and Angela Thomas. DOD's High-Risk Areas: Actions Needed to Reduce Vulnerabilities and Improve Business Outcome, GAO-09-460T, Washington, D.C.: March 12, 2009. High-Risk Series: An Update, GAO-09-271, Washington, D.C.: January 2009. DCAA Audits: Widespread Problems with Audit Quality Require Significant Reform, GAO-09-468, Washington, D.C.: September 23, 2009. DCAA Audits: Widespread Problems with Audit Quality Require Significant Reform, GAO-09-1009T, Washington, D.C.: September 23, 2009.
DCAA Audits: Allegations That Certain Audits at Three Locations Did Not Meet Professional Standards Were Substantiated, GAO-08-993T, Washington, D.C.: September 10, 2008. DCAA Audits: Allegations That Certain Audits at Three Locations Did Not Meet Professional Standards Were Substantiated, GAO-08-857, Washington, D.C.: July 22, 2008. Contract Management: Minimal Compliance with New Safeguards for Time-and-Materials Contracts for Commercial Services and Safeguards Have Not Been Applied to GSA Schedules Program, GAO-09-579, Washington, D.C.: June 24, 2009. Defense Acquisitions: Charting a Course for Lasting Reform, GAO-09-663T, Washington, D.C.: April 30, 2009. Defense Management: Actions Needed to Overcome Long-standing Challenges with Weapon Systems Acquisition and Service Contract Management, GAO-09-362T, Washington, D.C.: February 11, 2009. Defense Acquisitions: Perspectives on Potential Changes to Department of Defense Acquisition Management Framework, GAO-09-295R, Washington, D.C.: February 27, 2009. Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges, GAO-08-1039, Washington, D.C.: September 26, 2008. Defense Contracting: Post-Government Employment of Former DOD Officials Needs Greater Transparency, GAO-08-485, Washington, D.C.: May 21, 2008. Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists, GAO-08-360, Washington, D.C.: March 26, 2008. Defense Contracting: Additional Personal Conflict of Interest Safeguards Needed for Certain DOD Contractor Employees, GAO-08-169, Washington, D.C.: March 7, 2008. Defense Contract Management: DOD's Lack of Adherence to Key Contracting Principles on Iraq Oil Contract Put Government Interests at Risk, GAO-07-839, Washington, D.C.: July 31, 2007. Defense Contracting: Improved Insight and Controls Needed over DOD's Time-and-Materials Contracts, GAO-07-273, Washington, D.C.: June 29, 2007. Defense Contracting: Use of Undefinitized Contract Actions Understated and Definitization Time Frames Often Not Met, GAO-07-559, Washington, D.C.: June 19, 2007. Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD's Acquisition of Services, GAO-07-832T, Washington, D.C.: May 10, 2007. Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes, GAO-07-20, Washington, D.C.: November 9, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2008, the Department of Defense (DOD) obligated over $380 billion to federal contractors, more than doubling the amount it obligated in fiscal year 2002. With hundreds of billions of taxpayer dollars at stake, the government needs strong controls to provide reasonable assurance that contract funds are not being lost to fraud, waste, abuse, and mismanagement. The Defense Contract Audit Agency (DCAA) is charged with a critical role in contractor oversight by providing auditing, accounting, and financial advisory services in connection with DOD and other federal agency contracts and subcontracts. However, last year GAO found numerous problems with DCAA audit quality at three locations in California, including the failure to meet professional auditing standards. In a follow-up audit issued this September, GAO found that these problems existed agencywide. Today's testimony describes widespread audit quality problems at DCAA and provides information about continuing contract management challenges at DOD, which underscore the importance of DCAA audits that meet professional standards. It also discusses some of the corrective actions taken by DCAA and DOD and key GAO recommendations to improve DCAA audit quality. In preparing this testimony, GAO drew from issued reports and testimonies. These products contained statements regarding the scope and methodology GAO used. GAO found substantial evidence of widespread audit quality problems at DCAA. In the face of this evidence, DOD, Congress, and American taxpayers lack reasonable assurance that billions of dollars in federal contract payments are being appropriately scrutinized for fraud, waste, abuse, and mismanagement. An initial investigation of hotline allegations at three DCAA field office locations in California revealed that all 14 audits and 62 forward pricing reports GAO examined were not performed in accordance with professional auditing standards. For example, while auditing the satellite launch proposal for a major U.S. defense contractor, a DCAA manager experienced pressure from the contractor and the DOD buying command to drop adverse findings. The manager directed his auditors to drop the findings, and DCAA issued a more favorable opinion, allowing the contractor to win a contract that improperly compensated the contractor for hundreds of millions of dollars in commercial business losses. Specifically, of $271 million in unallowable costs related to commercial losses, the contractor has already been paid $101 million. This incident is under criminal investigation by the DOD Inspector General (IG). In September of this year, GAO followed up on its initial investigation and identified audit quality problems agencywide at DCAA. Audit quality problems included insufficient audit testing, inadequate planning and supervision, and the compromise of auditor independence. For example, of the 69 audits and cost-related assignments GAO reviewed, 65 exhibited serious deficiencies that rendered them unreliable for decisions on contract awards, management, and oversight. DCAA has rescinded 81 audit reports to date as a result of GAO's and DOD IG's work. Because the rescinded reports were used to assess risk in planning subsequent audits, they affect the reliability of hundreds of other audits and contracting decisions covering billions of dollars in DOD contract expenditures. 
GAO determined that quality problems are widespread because DCAA's management environment and quality assurance structure were based on a production-oriented mission that prevented DCAA from protecting the public interest while also facilitating DOD contracting. GAO has designated both contract management and weapon systems acquisition as high-risk areas since the early 1990s. DOD acquisition and contract management weaknesses create vulnerabilities to fraud, waste, abuse, and mismanagement that leave hundreds of billions of taxpayer dollars at risk, and underscore the importance of a strong contract audit function. In response to GAO's findings and recommendations, DCAA has taken several steps to improve metrics, policies, and processes, and the DOD Comptroller has established a DCAA oversight committee. To ensure quality audits for contractor oversight and accountability, DOD and DCAA will also need to address the fundamental weaknesses in DCAA's mission, strategic plan, metrics, audit approach, and human capital practices that have had a detrimental effect on audit quality.
We substantiated the allegations and auditor concerns made on each of the 13 cases we investigated, involving 14 audits at two locations and forward pricing audit issues at a third location. The 13 cases related to seven contractors. In the 12 cases at locations 1 and 2, we substantiated the allegations and auditor concerns that (1) workpapers did not support reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate audit evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. We also found that contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions, and opinions of some audits--a serious independence issue. We also substantiated allegations of problems with the audit environment and inadequate supervision of certain forward pricing audits at location 3. Moreover, during our investigation, DCAA managers took actions against their staff at two locations that served to intimidate auditors and create an abusive work environment. DCAA states that its audits are performed according to professional standards (GAGAS). However, in substantiating the allegations, we found numerous failures to comply with these standards in all 13 cases we investigated. The working papers did not adequately support the final conclusion and opinion for any of the 14 audits we investigated. In many cases, supervisors changed audit opinions to indicate contractor controls or compliance with CAS was adequate when workpaper evidence indicated that significant deficiencies existed. We also found that in some cases, DCAA auditors did not perform sufficient work to support draft audit conclusions and their supervisors did not instruct or allow them to perform additional work before issuing final reports that concluded contractor controls or compliance with CAS were adequate. At location 1, we also found undue contractor influence that impaired auditor independence. At location 2, two supervisors were responsible for the 12 audits we investigated, and 11 of these audits involved insufficient work to support the reported opinions. At location 3, we substantiated allegations about inadequate supervision of trainees, reports being issued without final supervisory review, and contracting officer pressure to issue reports before audit work was completed in order to meet contract negotiation time frames--a serious independence issue. Noncompliance with GAGAS in the cases we investigated has had an unknown financial effect on the government. Because DCAA auditors' limited work identified potential significant deficiencies in contractor systems and accounting practices that were not analyzed in sufficient detail to support reportable findings and recommendations for corrective action, reliance on data and information generated by the audited systems could put users and decision makers at risk. Tables summarizing our findings for all the audits can be found in appendixes I and II. The following examples illustrate problems we found at two DCAA locations: In conducting a 2002 audit related to a contractor estimating system, DCAA auditors reviewed draft basis of estimates (BOE) prepared by the contractor and advised the contractor on how to correct significant deficiencies. BOEs are the means for providing government contract officials with information critical to making contract pricing decisions. 
This process resulted from an up-front agreement between the DCAA resident auditor and the contractor--one of the top five government contractors based on contract dollar value--that limited the scope of work and established the basis for the audit opinion. According to the agreement, the contractor knew which BOEs would be selected for audit and the audit opinion would be based on the final, corrected BOEs after several DCAA reviews. Even with this BOE review effort, the auditors found that the contractor still could not produce compliant BOEs and labeled the estimating system "inadequate in part." We found that enough evidence had been collected by the original supervisory auditor and senior auditor to support this opinion. However, after the contractor objected to draft findings and conclusions presented at the audit exit conference, the DCAA resident auditor replaced the original supervisory auditor assigned to this audit and threatened the senior auditor with personnel action if he did not change the summary workpaper and draft audit opinion. The second supervisory auditor issued the final report with an "adequate" opinion without documenting adequate support for the changes. This audit did not meet GAGAS for auditor objectivity and independence because of the up-front agreement, and it did not meet standards related to adequate support for audit opinions. The draft report for a 2005 billing system audit identified six significant deficiencies, one of which allowed the contractor to overbill the government by $246,000 and another that may have led to $3.5 million in overbillings. DCAA managers replaced the supervisory auditor and auditor, and the new staff worked together to modify working papers and change the draft audit opinion from "inadequate," to "inadequate in part," and, finally, to "adequate." Sufficient testing was not documented to support this opinion. The DOD IG concluded that DCAA should rescind the final report for this audit, but DCAA did not do so. Billing system audits are conducted to assess contractor controls for assuring that charges to the government are appropriate and compliant and to support decisions on whether to approve contractors for direct billing. As a result of the 2005 audit, DCAA authorized this contractor for direct billing of its invoices without prior government review, thereby providing quicker payments and improved cash flow to the contractor. On June 20, 2008, when we briefed DOD on the results of our investigation, DCAA advised us that a DCAA Western Region review of this audit in 2008 concluded that the $3.5 million finding was based on a flawed audit procedure. As a result, it rescinded the audit report on May 22, 2008. However, DCAA officials said that they did not remove the contractor's direct-billing privileges because other audits did not identify billing problems. The draft report for a 2005 CAS 403 compliance audit requested by a Department of Energy administrative contracting officer (ACO) identified four deficiencies related to corporate cost allocations to government business segments. However, a DCAA supervisory auditor directed a member of her staff to write a "clean opinion" report in 1 day using "boilerplate" language and without reviewing the existing set of working papers developed by the original auditor. The supervisory auditor appropriately dropped two significant deficiencies from the draft report, but did not adequately document the changes in the workpapers. 
In addition, the supervisory auditor improperly referred two other significant deficiencies to another DCAA office that does not have audit jurisdiction and, therefore, did not audit the contractor's corporate costs or CAS 403 compliance. The final opinion was later contradicted by a September 21, 2007, DCAA report that determined that this contractor was in fact not in compliance with CAS 403 during the period of this audit. We also substantiated allegations that there were problems with the audit environment at a third DCAA location--a resident office responsible for auditing another of the five largest government contractors. For example, the two supervisors, who approved and signed 62 of the 113 audit reports performed at the resident office location during fiscal years 2004 through 2006, said that trainees were assigned to complex forward pricing audits as their first assignments even though they had no institutional knowledge about the type of materials at risk of overcharges, how to look at related sources of information for cost comparisons, or how to complete the analysis of complex cost data required by the FAR. The supervisors, who did not always have the benefit of experienced auditors to assist them in supervising the trainees, admitted that they generally did not review workpapers in final form until after reports were issued. Moreover, because the trainee auditors did not have an adequate understanding of DCAA's electronic workpaper filing system, they did not always enter completed workpapers in the system, resulting in a loss of control over official workpapers. In addition, one of the two supervisory auditors told us that contracting officers would sometimes tell auditors to issue proposal audit reports in as few as 20 days with whatever information the auditor had at that time and not to cite a scope limitation in the audit reports, so that they could begin contract negotiations. If the available information was insufficient, GAGAS would have required the auditors to report a scope limitation. Where scope limitations existed but were not reported, the contracting officers could have negotiated contracts with insufficient information. Moreover, a 2006 DCAA Western Region quality review reported 28 systemic deficiencies on 9 of 11 forward pricing audits reviewed, including a lack of supervisory review of the audits. The problems at this location call into question the reliability of the 62 forward pricing audit reports issued by the two supervisors responsible for forward pricing audits at the resident office location from fiscal years 2004 through 2006, which were connected with over $6.4 billion in government contract negotiations. Throughout our investigation, auditors at each of the three DCAA locations told us that the limited number of hours approved for their audits directly affected the sufficiency of audit testing. At the third DCAA location we investigated, two former supervisory auditors told us that the volume of requests for the audits, short time frames demanded by customers for issuing reports to support contract negotiations (e.g., 20 to 30 days), and limited audit resources affected their ability to comply with GAGAS. Our review of DCAA performance data showed that DCAA measures audit efficiency and productivity as contract dollars audited divided by audit hours.
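To make the efficiency measure just described concrete, the sketch below expresses it as contract dollars audited divided by audit hours. The dollar and hour figures are hypothetical placeholders rather than DCAA data; the point is simply that a dollars-per-hour measure rises when fewer hours are spent on the same dollar volume, which rewards speed and volume rather than the sufficiency of audit testing.

    # Hypothetical illustration of a dollars-audited-per-hour efficiency measure.
    # The inputs below are placeholders, not actual DCAA figures.
    def audit_efficiency(contract_dollars_audited: float, audit_hours: float) -> float:
        """Return contract dollars audited per audit hour."""
        return contract_dollars_audited / audit_hours

    # Spending fewer hours on the same dollar volume raises the metric,
    # even though less audit testing was performed.
    print(audit_efficiency(50_000_000, 2_000))  # 25000.0
    print(audit_efficiency(50_000_000, 500))    # 100000.0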
In addition, because customer-requested assignments--such as forward pricing audits requested by contracting officers--which are referred to as demand work by DCAA, take priority, other work, such as internal control and CAS compliance audits, is often performed late in the year. Auditors told us that there is significant management pressure to complete these nondemand audits by the end of the fiscal year to meet field audit office (FAO) performance plans. During the DOD IG and GAO investigations, we identified a pattern of frequent management actions that served to intimidate the auditors and create an abusive environment at two of the three locations covered in our investigation. In this environment, some auditors were hesitant to speak to us even on a confidential basis. For example, supervisory auditors and the branch manager at one DCAA location we visited pressured auditors, including trainees who were in probationary status, to disclose to them what they told our investigators. Some probationary trainees told us this questioning made them feel pressured or uncomfortable. Further, we learned of verbal admonishments, reassignments, and threats of disciplinary action against auditors who raised questions about management guidance to omit their audit findings and change draft opinions or who spoke with or contacted our investigators, DOD investigators, or DOD contracting officials. We briefed cognizant DCAA region and headquarters officials on the results of our investigation in February 2008 and reviewed additional documentation they provided. We briefed DOD and DCAA officials on the results of our investigation on June 20 and 25, 2008. We summarized DCAA's comments on our corrective action briefing in our investigative report, and we included relevant details of DCAA's comments at the end of our case discussions. In response to our investigation, DCAA rescinded two audit reports and removed a contractor's direct billing authorization related to a third audit. DCAA also performed subsequent audits related to three additional cases that resulted in audit opinions that contradicted previously reported adequate ("clean") opinions and included numerous significant deficiencies. For other cases, DCAA officials told us that although workpaper documentation could have been better, on the basis of other audits DCAA performed, they do not believe the reported opinions were incorrect or misleading. In the cases we investigated, pressure from the contracting community and buying commands for favorable opinions to support contract negotiations impaired the independence of three audits involving two of the five largest government contractors. In addition, DCAA management pressure to (1) complete audit work on time in order to meet performance metrics and (2) report favorable opinions so that work could be reduced on future audits and contractors could be approved for direct billing privileges led the three DCAA FAOs to take inappropriate shortcuts--ultimately resulting in noncompliance with GAGAS and internal DCAA CAM guidance. Although it is important for DCAA to issue products in a timely manner, the only way for auditors to determine whether "prices paid by the government for needed goods and services are fair and reasonable" is by performing sufficient audit work to determine the adequacy of contractor systems and related controls, and their compliance with laws, regulations, cost accounting standards, and contract terms.
Further, it is important that managers and supervisory auditors at the three locations we investigated work with their audit staff to foster a productive, professional relationship and assure that auditors have the appropriate training, knowledge, and experience. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact me at 202-512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Major contributors to this testimony include Gayle L. Fischer, Assistant Director; Andrew O'Connell, Assistant Director and Supervisory Special Agent; F. Abe Dymond, Assistant General Counsel; Richard T. Cambosos; Jeremiah F. Cockrum; Andrew J. McIntosh; and Ramon J. Rodriguez, Senior Special Agent.

The following summarizes what we found in the individual cases covered in our investigation:

The DCAA resident office and contractor made an up-front agreement on audit scope, which had the effect of predetermining an "adequate" audit opinion.
On the basis of pressure from contractor and buying command to resolve CAS compliance issues and issue a favorable opinion, a DCAA region official directed the auditors not to include CAS compliance problems in the audit workpapers.
Branch manager and supervisory auditor terminated audit work and issued opinions without sufficient documentation based on their view that defective pricing did not exist on the related contracts.
Supervisory auditor dropped preliminary findings based on a flawed audit procedure instead of requiring auditors to perform sufficient testing to conclude on the adequacy of billing system controls.
Auditor was excluded from the exit conference, findings were dropped without adequate support, and supervisor made contradictory statements on her review of the audit.
Dropped findings on corporate accounting were referred to another field audit office (FAO), which does not review corporate costs.
Supervisor prepared and approved key working papers herself, without required supervisory review.
Supervisor directed another auditor to write a clean opinion report without reviewing the working papers. Supervisor then changed the working papers without support and referred two dropped findings to another FAO, which does not review corporate overhead allocations.
Inexperienced trainees assigned to complex forward pricing audits without proper supervision. Reports issued with unqualified opinions before supervisory review was completed due to pressure from contracting officers.
Significant deficiency and FAR noncompliance related to the lack of contractor job descriptions for executives not reported.
Significant deficiency related to subcontract management not reported.
Second auditor and supervisor dropped 6 of 10 significant deficiencies without adequate documentation to show that identified weaknesses were resolved.
Supervisor identified problems with test methodology but dropped findings instead of requiring tests to be reperformed.
Second auditor and supervisor deleted most audit steps and performed limited follow-up work that did not support the reported opinion of overall compliance with CAS.

Purpose of audit was to review the corrective action plan (CAP) developed by Contractor A in response to prior findings of inadequate basis of estimates (BOE) related to labor hours.
In the face of pressure from DOD's contracting community to approve Contractor A's estimating system, we found evidence there was an up-front agreement between DCAA and Contractor A to limit the scope of work and basis of the audit opinion (a significant impairment of auditor independence). Auditors found significant deficiencies with the CAP implementation plan, that is, the contractor could not develop compliant BOEs without DCAA's assistance at the initial, intermediate, and final stages of the estimates. Original supervisory auditor was reassigned; the resident auditor and new supervisory auditor directed the draft opinion be changed from "inadequate in part" to "adequate" after the contractor objected to DCAA draft findings and opinion. The working papers did not contain audit evidence to support the change in opinion. Field office management threatened the senior auditor with personnel action if he did not change the draft audit opinion to "adequate."

Audit related to a revised proposal submitted after DCAA reported an adverse (inadequate) opinion on Contractor A's 2005 proposal. At the beginning of the audit, buying command and Contractor A officials met with a DCAA regional audit manager to determine how to resolve cost accounting standard (CAS) compliance issues and obtain a favorable audit opinion. Contractor A did not provide all cost information requested for audit. Contrary to DCAA Contract Audit Manual guidance, the regional audit manager instructed auditors that they could not base an "adverse" (inadequate) audit opinion on the lack of information to audit certain costs. On the basis of an "inadequate in part" opinion reported in May 2006, the buying command negotiated a $937 million contract, which has grown to $1.2 billion.

Branch manager and supervisory auditor predetermined that there was no defective pricing; however, the auditor concluded that Contractor B's practice potentially constituted defective pricing and obtained technical guidance that specific contracts would need to be analyzed to make a determination. The branch manager disagreed. Supervisory auditor and branch manager subsequently issued three reports stating that Contractor B's practice at three divisions did not constitute defective pricing. Insufficient work was performed on these audits to come to any conclusion about defective pricing, and as a result, the final opinions on all three audit reports are not supported. Absent DCAA audit support for defective pricing, the contracting officer pursued a CAS 405 noncompliance at 3 contractor divisions and recovered $71,000. On July 17, 2008, Contractor B settled on a Defense Criminal Investigative Service defective pricing case for $620,900.

Draft audit report identified six significant deficiencies, one of which led Contractor C to overbill the government by $246,000 and another which potentially led to $3.5 million in overbillings, but audit work was incomplete. The contractor had refunded the $246,000. The original auditor reported that the $3.5 million was for subcontractor costs improperly billed to the government. The supervisor deleted the finding based on a flawed audit procedure, but did not require additional testing. First supervisory auditor and auditor were replaced after draft audit was completed. New auditor and supervisory auditor worked together to modify working papers and alter draft audit opinion from "inadequate" to "inadequate in part" and, finally, to "adequate."
Sufficient testing was not performed to determine if the contractor had systemic weaknesses or to support an opinion that contractor billing system controls were adequate. On the basis of the "adequate" opinion, the field audit office (FAO) approved the contractor for direct billing. DOD IG recommended that DCAA rescind the final report for this audit, but DCAA did not do so. Following the briefing on our investigation, the DCAA Western Region rescinded the audit report on May 22, 2008.

Auditor identified five deficiencies and concluded the contractor's system was "inadequate in part." Auditor did not perform sufficient work to support some findings, but supervisory auditor did not direct the auditor to gather additional evidence. After consulting with the branch manager, the supervisory auditor modified documents and eliminated significant deficiencies, changing the draft audit opinion from "inadequate in part" to "adequate." Working papers did not properly document the reason for the change in opinion and therefore do not support the final opinion. DOD IG recommended that DCAA rescind the final report for this audit, but DCAA did not do so. On June 27, 2008, the DCAA Western Region informed us that it was rescinding this audit report.

Auditor believed audit evidence related to a 24 percent error rate in a small sample of cost pools supported an "inadequate in part" opinion and suggested testing be expanded, but supervisory auditor disagreed. Auditor and supervisory auditor documented their disagreement in the working papers. Supervisory auditor subsequently modified documents to change the draft audit opinion from "inadequate in part" to "adequate" before issuing the final report. Certain final working papers were prepared and approved by the supervisory auditor, without proper supervision. Branch manager and supervisory auditor determined that findings of corporate accounting problems should be referred to another FAO for future audit. However, the other FAO does not audit corporate costs. Working papers do not support the final opinion.

Auditor identified four potential instances of noncompliance with CAS 403. Auditor was transferred to a different team before supervisory review of her working papers. Three months later, the supervisory auditor requested that another auditor write a "clean" ("adequate") opinion report. Second auditor used "boilerplate" (i.e., standardized) language to write the final report and never reviewed the working papers. The supervisor correctly deleted two findings and referred two findings of corporate-level noncompliances to another FAO for future audit. The other FAO does not audit corporate-level costs. Working papers do not support the final "clean" opinion, which was later contradicted by a September 21, 2007, DCAA report that determined Contractor D was in fact not in compliance with CAS 403 during the period of this audit.

Two location 3 supervisors issued 62 forward pricing audits related to Contractor E between 2004 and 2006. Supervisors responsible for the 62 forward pricing audits admitted to us that they did not have time to review working papers before report issuance. According to the DCAA region, inexperienced trainee auditors were assigned to 18 of the 62 audits without proper supervision. However, the region did not provide assignment documentation for the 62 audits.
An internal DCAA Region audit quality review found audits where the audit working papers did not support the final audit report, working paper files were lost, and working paper files were not archived in the DCAA-required time period. The 62 forward pricing audits were connected with over $6.4 billion in government contract negotiations.

Three different auditors worked on this audit. Original auditor did not follow DCAA guidance when developing the audit plan and was reassigned after audit work began. Second auditor lacked experience with compensation system audits and noted in her working papers that she was "floundering" and could not finish the audit by the September 30, 2005, deadline. Third auditor was assigned 10 calendar days before the audit was due to be completed. Although audit was issued with an "adequate" opinion, insufficient work was performed on this audit and, therefore, working papers do not support the final opinion. Significant system deficiencies noted in the working papers were not reported. The DOD Office of Inspector General recommended that DCAA rescind the final report for this audit, but DCAA did not do so. Instead, DCAA initiated another audit during 2007. DCAA agreed with our finding that this audit did not include sufficient testing of executive compensation. In June 2008, the branch office issued a new audit report on Contractor D's compensation system, which identified seven significant deficiencies and included an "inadequate in part" opinion. DCAA stated that it is currently assessing the impact of these deficiencies on current incurred cost audits.

Auditor found that the contractor was not fulfilling its FAR-related obligations to ensure that subcontractors' cost claims were audited. This issue was not reported as a significant deficiency in the contractor's purchasing system. The opinion on the system was "adequate." The working papers did not include sufficient evidence to support the final opinion. DCAA relied on a 2004 Defense Contract Management Agency (DCMA) review in which the conclusions were based word-for-word on the contractor's response to a questionnaire without independent testing of controls. DCAA stated that the overall opinion was not based on DCMA's review. However, DCAA stated that it will address the issue of the contractor's procedures for ensuring subcontract audits are performed during the next purchasing system audit, which is expected to be completed by December 30, 2008.

The branch manager allowed the original auditor to work on this audit after being assured that the auditors would help the contractor correct any billing system deficiencies during the performance of the audit. After the original auditor identified 10 significant billing system deficiencies, the branch manager removed her from the audit and assigned a second auditor to the audit. With approval by the FAO and region management, the second auditor dropped 8 of the 10 significant deficiencies and reported 1 significant deficiency and 1 suggestion to improve the system. The final opinion was "inadequate in part." Six of the findings were dropped without adequate support, including a finding that certain contract terms were violated and a finding that the contractor did not audit subcontract costs. Despite issuing an "inadequate in part" opinion, the FAO decided to retain the contractor's direct-billing privileges. After we brought this to the attention of region officials, the FAO rescinded the contractor's direct billing status in March 2008.
DCAA did not agree with our finding that the working papers did not contain adequate support for dropping six draft findings of significant deficiencies.

Auditor performed sampling to determine whether sufficient controls over employee timecards existed. Although the work was based on a limited judgmental sample, the auditor found 3 errors out of 18 employee timecards tested and concluded that controls over timecards were inadequate. Supervisory auditor initially agreed with the findings, but later modified working papers to change the draft audit conclusion from "certain labor practices require corrective actions" to "no significant deficiencies." Working papers did not properly document the reason for the change in conclusion and, therefore, do not support the final audit conclusion. Supervisory auditor later stated that the initial sampling plan was flawed, but eliminated the deficiency finding rather than asking the auditor to redo the work. On April 9, 2008, DCAA issued a new audit report that identified 8 significant deficiencies and concluded that corrective actions were needed on the contractor's labor accounting system.

After original auditor was transferred to another audit, a second auditor significantly limited the scope of the audit with supervisory approval, deleting most of the standard audit steps. Second auditor performed very limited testing and relied on contractor assertions with little or no independent verification. Supervisory auditor approved issuance of the final audit with an opinion that the contractor complied with CAS 418 in all material respects. Insufficient work was performed on this audit and, therefore, the scope of work and the working paper documentation do not support the opinion. Region officials acknowledged that work was insufficient and stated that another CAS 418 audit was initiated; however, DCAA did not rescind the misleading report. On June 25, 2008, DCAA officials told us that the new CAS 418 audit was completed with an "adequate" opinion.

Location 2 is a DCAA branch office.
The Defense Contract Audit Agency (DCAA) under the Department of Defense (DOD) Comptroller plays a critical role in contractor oversight by providing auditing, accounting, and financial advisory services in connection with DOD and other federal agency contracts and subcontracts. DCAA has elected to follow generally accepted government auditing standards (GAGAS). These standards provide guidelines to help government auditors maintain competence, integrity, objectivity, and independence in their work. GAO investigated hotline complaints it received related to alleged failures to comply with GAGAS on 14 DCAA audits. Specifically, it was alleged that (1) working papers did not support reported opinions, (2) supervisors dropped findings and changed audit opinions without adequate evidence, and (3) sufficient work was not performed to support audit conclusions and opinions. GAO also investigated issues related to the quality of certain forward pricing audit reports. GAO investigators interviewed over 50 individuals, reviewed working papers and related documents for 14 audits issued from 2003 through 2007 by two DCAA offices, and reviewed documentation on audit issues at a third DCAA office. GAO did not reperform the audits to validate the completeness and accuracy of DCAA's findings. DCAA did not agree with the "totality" of GAO's findings, but it did acknowledge shortcomings with some audits and agreed to take certain corrective actions. GAO substantiated the allegations. Although DCAA policy states that its audits are performed according to GAGAS, GAO found numerous examples where DCAA failed to comply with GAGAS in all 13 cases. For example, contractor officials and the DOD contracting community improperly influenced the audit scope, conclusions, and opinions on three cases--a serious independence issue. At two DCAA locations, GAO found evidence that (1) working papers did not support reported opinions, (2) DCAA supervisors dropped findings and changed audit opinions without adequate evidence for their changes, and (3) sufficient audit work was not performed to support audit opinions and conclusions. GAO also substantiated allegations of inadequate supervision of certain audits at a third DCAA location. The table below contains selected details about three cases GAO investigated. Throughout GAO's investigation, auditors at each of the three DCAA locations told us that the limited number of hours approved for their audits directly affected the sufficiency of audit testing. Moreover, GAO's investigation identified a pattern of frequent management actions at two locations that served to intimidate auditors, discourage them from speaking with investigators, and create a generally abusive work environment.
On November 25, 2002, the President signed into law the Homeland Security Act, which created the new federal Department of Homeland Security, and the Maritime Transportation Security Act, which created a consistent security program specifically for the nation's seaports. Since that time, and in keeping with the provisions of these new laws, the federal government has been developing a variety of new national policies and procedures for improving the nation's response to domestic emergencies. These policies and procedures are designed to work together to provide a cohesive framework for preparing for, responding to, and recovering from domestic incidents. A key element of this new response framework is the use of exercises to test and evaluate federal agencies' policies and procedures, response capabilities, and skill levels. The Coast Guard has primary responsibility for such testing and evaluation in the nation's ports and waterways, and as part of its response, it has added multiagency and multicontingency terrorism exercises to its training program. These exercises vary in size and scope and are designed to test specific aspects of the Coast Guard's terrorism response plans, such as communicating with state and local responders, raising maritime security levels, or responding to incidents within the port. For each exercise the Coast Guard conducts, an after-action report detailing the objectives, participants, and lessons learned must be produced within 60 days.

The framework under which federal agencies would coordinate with state and local entities to manage a port-terrorism incident is still evolving. As directed by Homeland Security Presidential Directive/HSPD-5, issued in February 2003, this framework is designed to address all types of responses to national emergencies, not just port-related events. Key elements of the framework have been released over the past 2 years. For example, the Department of Homeland Security released the Interim National Response Plan in September 2003 and was in the final approval stage for a more comprehensive National Response Plan in November 2004, as our work was drawing to a close. DHS announced the completion of the National Response Plan on January 6, 2005, too late for a substantive review to be included in this report. However, the finalized plan is designed to be the primary operational guidance for incident management and, when fully implemented, will incorporate or supersede existing federal interagency response plans. According to the updated implementation schedule in the National Response Plan, federal agencies will have up to 120 days to bring their existing plans, protocols, and training into accordance with the new plan. In March 2004, the department also put in place a system, called the National Incident Management System, which requires common principles, structures, and terminology for incident management and multiagency coordination. Although the framework that will be brought about by the final plan, the management system, and other actions is still in the implementation phase, some of the protocols and procedures contained in this framework were already evident at the port exercises we observed. However, it is still too early to determine how well the complete framework will function in coordinating an effective response to a port-related threat or incident.
Port security exercises have identified relatively few issues related to federal agencies' legal authority, and none of these issues were statutory problems according to exercise participants and agency officials. Our review of fiscal year 2004 after-action reports and observation of specific exercises showed that exercise participants encountered seven legal issues, but exercise participants and agency officials we interviewed did not recommend statutory changes to address these issues. In three instances, exercise participants made nonstatutory recommendations (such as policy clarifications) to assist agencies in better exercising their authority, but did not question the adequacy of that authority. In the other four instances, no recommendations were made either because statutory authority was deemed sufficient or, in one case, because the issue involved a constitutional restraint (i.e., under the Fourth Amendment, police are prohibited from detaining passengers not suspected of terrorism). While the exercises were conducted to examine a wide range of issues and not specifically to identify gaps in agencies' legal authority, the results of the exercises are consistent with the information provided by agency officials we interviewed, who indicated that sufficient statutory authority exists to respond to a terrorist attack at a seaport. Moreover, when Department of Homeland Security officials reviewed the issue of statutory authority, as required by Homeland Security Presidential Directive/HSPD-5, they concluded that federal agencies had sufficient authority to implement the National Response Plan and that any implementation issues could be addressed by nonstatutory means, such as better coordination mechanisms.

Most of the issues identified in port security exercises have been operational rather than legal in nature. Such issues appeared in most after-action reports we reviewed and in all four of the exercises we observed. While such issues are indications that improvements are needed, it should be pointed out that the primary purpose of the exercises is to identify matters that need attention and that surfacing problems is therefore a desirable outcome, not an undesirable one. The operational issues can be divided into four main categories, listed in descending order of frequency with which they were reported:

Communication--59 percent of the exercises raised communication issues, including problems with interoperable radio communications among first responders, failure to adequately share information across agency lines, and difficulties in accessing classified information when needed.
Adequacy or coordination of resources--54 percent of the exercises raised concerns with the adequacy or coordination of resources, including inadequate facilities or equipment, differing response procedures or levels of acceptable risk exposure, and the need for additional training in joint agency response.
Ability of participants to coordinate effectively in a command and control environment--41 percent of the exercises raised concerns related to command and control, most notably a lack of knowledge or training in the incident command structure.
Lack of knowledge about who has jurisdictional or decision-making authority--28 percent of the exercises raised concerns with participants' knowledge about who has jurisdiction or decision-making authority. For example, agency personnel were sometimes unclear about who had the proper authority to raise security levels, board vessels, or detain passengers.
Our review of the Coast Guard's fiscal year 2004 after-action reports from port terrorism exercises identified problems with timeliness in completing the reports and limitations in the information they contained. Specifically:

Timeliness: Coast Guard guidance states that after-action reports are an extremely important part of the exercise program, and the guidance requires that such reports be submitted to the after-action report database (Contingency Preparedness System) within 60 days of completing the exercise. However, current practice falls short: 61 percent of the 85 after-action reports were not submitted within this 60-day time frame. Late reports were submitted, on average, 61 days past the due date. Exercises with late reports include large full-scale exercises designed to identify major interagency coordination and response capabilities. Not meeting the 60-day requirement can lessen the usefulness of these reports. Coast Guard guidance notes, and officials confirm, that exercise planners should regularly review past after-action reports when planning and designing future exercises, and to the extent that reports are unavailable, such review cannot be done. In previous reviews of exercises conducted by the Coast Guard and others, we found that timely after-action reports were necessary to help ensure that potential lessons can be learned and applied after each counterterrorism exercise. The main problem in producing reports on a more timely basis appeared to be one of competing priorities: Coast Guard field personnel indicated that other workload priorities were an impediment to completing reports, but most of them also said 60 days is a sufficient amount of time to develop and submit an after-action report. Officials cited the development of the Contingency Preparedness System, which is the program for managing exercises and after-action reports, as a step allowing for a renewed emphasis on timeliness. Headquarters planning staff are able to run reports using this system and regularly notify key Coast Guard officials of overdue after-action reports. However, this system was implemented more than 1 year ago, in August 2003, and was, therefore, in place during the period in which we found a majority of after-action reports were late. We did not compare our results with timeliness figures for earlier periods, and we, therefore, do not know the extent to which the system may have helped reduce the number of reports that are submitted late. Even if the new system has produced improvement, however, the overall record is still not in keeping with the Coast Guard's 60-day requirement.

Content and quality: Coast Guard guidance also contains criteria for the information that should be included in an after-action report. These criteria, which are consistent with standards identified in our prior work, include listing each exercise objective and providing an assessment of how well each objective was met. However, 18 percent of the after-action reports we reviewed either did not provide such an objective-by-objective assessment or identified no issues that emerged from the exercise. While the scope of each exercise may contribute to a limited number of issues being raised, our past reviews found that after-action reports need to accurately capture all exercise results and lessons learned; otherwise, agencies may not be benefiting fully from exercises in which they participate.
Similarly, officials at the Department of Defense, which like the Coast Guard conducts a variety of exercises as part of its training, said that if their after-action reports lack sufficient fundamental content, they cannot be used effectively to plan exercises and make necessary revisions to programs and protocols. Our review indicated that, in addition to the pressure of other workload demands, two additional factors may be contributing to limitations in report content and quality--current review procedures and a lack of training for planners. Headquarters planning officials noted that local commands have primary responsibility for reviewing after-action reports and that limited criteria exist at headquarters for evaluating the content of reports submitted by these commands. At the field level, many planners with whom we spoke said they were unaware of any written documentation or exercise-planning guidance they could refer to when developing an after-action report. The Coast Guard has cited several planned actions that may allow for improved content and quality in after-action reports. These actions include updating exercise management guidance and promulgating new instructions related to preparing after-action reports and collecting lessons learned. While these initiatives may address issues of content and quality in after-action reports, they are currently still in the development phase.

A successful response to a terrorist threat or incident in a seaport environment clearly requires the effective cooperation and coordination of numerous federal, state, local, and private entities--issues that exercises and after-action reports are intended to identify. Complete and timely analyses of these exercises represent an important opportunity to identify and correct barriers to a successful response. The Coast Guard's inability to consistently report on these exercises in a timely and complete manner represents a lost opportunity to share potentially valuable information across the organization. The Coast Guard's existing requirements, which include submitting these reports within 60 days and assessing how well each objective has been met, appear reasonable but are not being consistently met. Coast Guard officials cited a new management system as their main effort to make reports more timely, but this system has been in place for more than a year, and timeliness remains a problem. It is important for Coast Guard officials to examine this situation to determine if more needs to be done to meet the standard. The Coast Guard has several other steps under development to address issues of report content and completeness, and it is too early to assess the effect these actions will have. For this set of actions, it will be important for the Coast Guard to monitor the situation to help ensure that exercises can achieve their full and stated purpose.

To help ensure that reports on terrorism-related exercises are submitted in a timely manner that complies with all Coast Guard requirements, we are making one recommendation: that the Commandant of the Coast Guard review the Coast Guard's actions for ensuring timeliness and determine if further actions are needed.

We provided DHS, DOJ, and DOD with a draft of this report for review and comment. The Coast Guard generally concurred with our findings and recommendation and did not provide any formal comments for inclusion in the final report. DOJ and DOD also did not have any official comments.
DOD provided two technical clarifications, which we have incorporated to ensure the accuracy of our report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Commandant of the Coast Guard, appropriate congressional committees, and other interested parties. The report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (415) 904-2200 or by email at [email protected], or Steve Caldwell, Assistant Director, at (202) 512-9610 or by email at [email protected], or Steve Calvo, Assistant Director, at (206) 287-4839 or by email at [email protected]. Other key contributors to this report were Christine Davis, Wesley Dunn, Michele Fejfar, Lynn Gibson, Dawn Hoff, David Hudson, Dan Klabunde, Ryan Lambert, and Stan Stenersen.

Guidance and experience stress producing AARs that fully assess training objectives and document deficiencies.
Coast Guard guidance: calls for exercises to be designed to expose weaknesses in plans and procedures and highlight resource and training deficiencies. Minimum requirements for AARs include documentation of each supporting objective and an assessment of how well each objective was met.
Past GAO work: when AARs do not accurately capture exercise results and lessons learned, agencies may not be benefiting fully from exercises in which they participate.
DOD perspective: DOD officials said AARs that did not provide fundamental content cannot be used effectively to plan exercises and make necessary revisions to programs and protocols. They also noted that new operational missions may require an additional emphasis on exercise planning and after-action reporting.
Assessment of exercises may not be sufficient: 18 percent of AARs we reviewed identified no issues or did not provide adequate assessment of training objectives.
Review procedures and training for planners may be insufficient in this area. Headquarters planning officials noted that the primary review of all AARs resides solely at the local command level. Although all submitted AARs are reviewed "for general approval" by headquarters officials, they said that this review uses limited criteria (grounds for rejection include use of inappropriate language or participants' names). Many Coast Guard field personnel we interviewed said they were unaware of any written documentation or exercise planning guidance they could refer to when developing an AAR.
Some efforts to address timeliness are under way, but effects to date are limited. Coast Guard officials said the Contingency Preparedness System (CPS), the program for managing exercises and AARs, has allowed for a renewed emphasis on report timeliness. Headquarters planning staff currently use this system to notify each area of overdue AARs. However, CPS has been in place since August 2003, and timeliness remains a concern. Officials have also discussed the possibility of reducing the AAR submission deadline (to as few as 15 days), but efforts are still ongoing due to "pushback from the field." They also said that the formal Coast Guard training courses emphasize that AAR development be incorporated into the planning process and exercise timeline.
Senior exercise management officials said they are also updating an instruction related to collecting AARs and lessons learned. They expect it to be promulgated to the field in 1-6 months. Officials noted the following efforts to improve content and quality of AARs:

Formal training courses that encourage documenting exercise information quickly to capture relevant information and lessons learned before recall is diminished or competing priorities take over.
Updated instruction on AARs and lessons learned collection (currently in development).
Increased functionality of CPS has been proposed, which may offer additional incentives for planners to utilize the system.

Key elements of the national response framework are evolving: release of the National Incident Management System and the draft National Response Plan, and a transitional period for agencies to revise their plans (once the final NRP is released, agencies will have up to 180 days to revise their plans to align with the NRP).
Few legal issues surfaced in port exercises or after-action reports. None of these issues were statutory problems according to exercise participants and agency officials.
Exercises and after-action reports identified operational issues to varying degrees. Key issues included communication, incident command, and resource coordination concerns.
Many after-action reports are not submitted timely, and the content and quality of some do not meet Coast Guard requirements. Actions taken by the Coast Guard to address these problems have had limited effect thus far.

The objectives of this report were to (1) describe the emerging framework under which the federal government coordinates with state and local entities to address a terrorist incident in a U.S. port; (2) identify the issues, if any, regarding federal agencies' legal authority that have emerged from port security exercises and what statutory actions might address them; (3) describe the types of operational issues being identified through these exercises; and (4) identify any management issues related to Coast Guard-developed after-action reports. To address these objectives, we reviewed relevant legislation, regulations, directives, and plans, analyzed agency operational guidance and Coast Guard after-action reports (AARs), interviewed a variety of federal officials, and observed several port security exercises.

To identify the emerging framework to address a terrorist incident in a U.S. port, we reviewed relevant statutes such as the Homeland Security Act of 2002 and the Maritime Transportation Security Act of 2002 and implementing maritime regulations at 33 CFR, parts 101 to 106. We also reviewed Homeland Security Presidential Directive/HSPD-5 and Presidential Decision Directive 39. Operational plans that were included in our analysis included the Initial National Response Plan, the Interagency Domestic Terrorism Concept of Operations plan (CONPLAN), Interim Federal Response Plan, and the National Response Plan "Final Draft." We also reviewed agency guidance related to exercise planning and evaluation such as the Coast Guard Exercise Planning Manual and Contingency Preparedness Planning Manual, as well as the Department of Homeland Security/Office of Disaster Preparedness' Exercise and Evaluation Program. Findings were supplemented with interviews of key officials in federal agencies, including the Coast Guard (CG), the Department of Homeland Security (DHS), Department of Defense (DOD), Department of Justice (DOJ), and related federal maritime entities such as Project Seahawk.
To provide a framework for evaluating agencies' legal authority in responding to a terrorist incident in a U.S. port, we adopted a case study methodology because it afforded a factual context for the emergence of legal issues that could confront agencies in the exercise of their authority. Our efforts included attending four U.S. port-based terrorism exercises (Los Angeles, Calif.; Hampton Roads, Va.; Charleston, S.C.; Philadelphia, Pa.), reviewing CG AARs for fiscal year 2004, and conducting in-person and telephone interviews with DHS, CG, DOJ, DOD, and Project Seahawk. The port exercises we selected to visit were geographically diverse and each was conducted in either August or September of fiscal year 2004. Additional criteria for exercise selection included the strategic importance of the port (as defined by the Maritime Administration), the variety of terrorism scenarios to be exercised, and the federal, state, and local players involved.

The AARs we reviewed were based on a list of all fiscal year 2004 exercises provided to us by the CG. We focused on any contingency that included terrorism and then requested AARs for those completed exercises from the CG. According to CG guidance, AARs are required to be submitted within 60 days of exercise completion. To ascertain compliance with this guidance, CG personnel provided us with the dates that AARs for terrorism-related exercises were received at headquarters. We used this information, in conjunction with the exercise start and stop dates, to determine which reports were on time, which were late, and the average time late reports were submitted beyond the 60-day requirement. While issues of a legal nature did surface during our observation of exercises and analysis of AARs, exercise participants and agency officials did not recommend statutory changes for these issues. We generally relied upon the agency's position as to whether legislation was necessary and did not independently assess the need for legislation by auditing the specific issues identified in the exercises.

To identify operational issues that occurred during port terrorism exercises, we relied extensively on perspectives gained through our observations at the four port terrorism exercises as well as a comprehensive review of the available AARs for operational issues based on criteria we developed. In order to determine the frequency of various operational issues identified in the CG's AARs, we noted each instance in which a subcategory within a major category appeared. These categories and subcategories were chosen through exercise observation and an initial review of available AARs by two independent analysts. This allowed us to identify operational issues that were consistent across the terrorism exercises. We used the following major categories and subcategories (which appear in parentheses):

Communication (communication interoperability issues, communication policy or protocols between or within agencies, information sharing between agencies);
Command and Control/Incident Command Structure (NIMS/ICS training, UC/IC information flow);
Unclear Decision Making/Jurisdictional Knowledge (unclear decision making authority, unclear lead authority, unclear authorities/jurisdictions of other agencies); and
Resource Coordination/Capabilities (response capabilities, response coordination/joint tactics).

To analyze the reports, two GAO analysts independently reviewed each report and coded operational issues based on the above subcategories.
The results of each analysis were then compared and any discrepancies were resolved. Overall percentages for the major categories were determined based on whether any of the issues were identified under the respective subcategories. The maximum number of observations for any major category was equal to one, regardless of the number of times a subcategory was recorded. To identify management concerns regarding the CG's AARs, we reviewed our previous studies on this issue as well as CG and DHS issued guidance on exercise management, such as the Coast Guard's Exercise Planning Manual and Contingency Preparedness Planning Manual Volume III. Our analysis also included in-person interviews with CG exercise management officials from headquarters and CG planners in the field to gain additional information on how terrorism exercises are planned and evaluated as well as how lessons learned are cataloged and disseminated. To ascertain the effect of untimely CG AARs (CG AARs are required to be completed within 60 days of exercise completion), we also interviewed exercise management experts from DOD. We conducted a content analysis of the available AARs to determine the weaknesses in the reports and where deviations from CG protocol were taking place. We conducted our work from June to December 2004 in accordance with generally accepted government auditing standards.
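The two computations described in this methodology--determining which AARs were on time, which were late, and the average days late, and rolling dual-coded subcategory observations up to major-category percentages with at most one observation per report--can be illustrated with a short sketch. The code below is illustrative only; the report dates, subcategory codings, and field names are hypothetical assumptions, not the Coast Guard's or GAO's actual data.

```python
# Illustrative sketch of the timeliness and content-analysis calculations
# described in the methodology. All dates and codings are hypothetical.

from datetime import date

DEADLINE_DAYS = 60  # Coast Guard requirement: AAR due within 60 days of exercise completion

# (exercise id, exercise completion date, date AAR received at headquarters)
aars = [
    ("EX-01", date(2004, 8, 15), date(2004, 9, 30)),   # on time
    ("EX-02", date(2004, 8, 20), date(2004, 12, 20)),  # late
    ("EX-03", date(2004, 9, 10), date(2005, 1, 5)),    # late
]

days_past_due = []
for ex_id, completed, received in aars:
    days_to_submit = (received - completed).days
    if days_to_submit > DEADLINE_DAYS:
        days_past_due.append(days_to_submit - DEADLINE_DAYS)

print(f"{len(days_past_due)} of {len(aars)} AARs were late "
      f"({100 * len(days_past_due) / len(aars):.0f} percent)")
if days_past_due:
    print(f"Late reports averaged {sum(days_past_due) / len(days_past_due):.0f} "
          "days past the due date")

# Major-category percentages: a report counts at most once for a major
# category if ANY of its subcategories was coded, regardless of how many
# times a subcategory was recorded.
subcategories = {
    "Communication": ["interoperability", "policy/protocols", "information sharing"],
    "Command and control": ["NIMS/ICS training", "UC/IC information flow"],
}

# One reconciled coding per report (after the two analysts resolved discrepancies).
codings = {
    "EX-01": {"interoperability": 2, "information sharing": 1},
    "EX-02": {"NIMS/ICS training": 1},
    "EX-03": {},
}

for category, subs in subcategories.items():
    hits = sum(1 for coded in codings.values()
               if any(coded.get(s, 0) > 0 for s in subs))
    print(f"{category}: raised in {100 * hits / len(codings):.0f} percent of reports")
```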
Seaports are a critical vulnerability in the nation's defense against terrorism. They are potential entry points for bombs or other devices smuggled into cargo ships, and ports' often-sprawling nature presents many potential targets for attack. To assess the response procedures that would be implemented in an attack or security incident, officials conduct port-specific exercises. Many federal, state, and local agencies may potentially be involved. The Coast Guard has primary responsibility for coordinating these exercises and analyzing the results. GAO examined (1) the emerging framework for coordinating entities involved in security responses, (2) legal and operational issues emerging from exercises conducted to date, and (3) Coast Guard management of reports analyzing exercises. GAO reviewed reports on 82 exercises from fiscal year 2004 and observed 4 exercises as they were being conducted.

The framework under which federal agencies would manage a port-terrorism incident is still evolving. The primary guidance for response, the National Response Plan, is in the final stages of approval, and the National Incident Management System, the structure for multiagency coordination, is still being put in place. As a result, it is too early to determine how well the complete framework will function in an actual incident. GAO's review of fiscal year 2004 terrorism-related reports and exercises identified relatively few legal issues, and none of these issues produced recommendations for statutory changes. Most issues have instead been operational in nature and have surfaced in nearly every exercise. They are of four main types: difficulties in sharing or accessing information, inadequate coordination of resources, difficulties in coordinating effectively in a command and control environment, and lack of knowledge about who has jurisdictional or decision-making authority.

Reports on the exercises often do not meet the Coast Guard's standards for timeliness or completeness. Sixty-one percent of the reports were not submitted within 60 days of completing the exercise--the Coast Guard standard. The Coast Guard has implemented a new system for tracking the reports, but after a year of use, timeliness remains a concern. The Coast Guard has requirements for what the reports should contain, but 18 percent of the reports did not meet the requirement to assess each objective of the exercise. The Coast Guard has cited several planned actions that may allow for improving completeness. These actions are still in development, and it is too early to determine how much they will help.
A paid preparer is simply anyone who is paid to prepare, assist in preparing, or review a taxpayer's tax return. In this statement, we refer to two categories of paid preparers--tax practitioners and unenrolled preparers. CPAs, attorneys, and enrolled agents are tax practitioners. Tax practitioners differ from unenrolled preparers in that they can practice before IRS, which includes the right to represent a taxpayer before IRS, prepare and file documents with IRS for the taxpayer, and correspond and communicate with IRS. We use the term unenrolled preparer to describe the remainder of the paid preparer population. In most states, anyone can be an unenrolled preparer regardless of education, experience, or other standards. Tax practitioners are subject to standards of practice under the Department of Treasury Circular No. 230. Enrolled agents are generally required to pass a three-part examination and complete annual continuing education, while attorneys and CPAs are licensed by states but are still subject to Circular 230 standards of practice if they practice before IRS. Generally, unenrolled preparers are not subject to these requirements.

In April 2006, we made a recommendation to IRS to conduct research on the extent to which paid preparers meet their responsibility to file accurate and complete tax returns (GAO-06-563T). IRS subsequently conducted a study of the quality of paid preparers and issued a report recommending increased oversight of paid preparers. Recommendations included (1) mandatory registration, (2) competency testing and continuing education, and (3) holding all paid preparers--including unenrolled preparers--to Circular 230 standards of practice. IRS implemented each recommendation through regulations issued in September 2010 and June 2011. The June 2011 regulations amended Circular 230 and established a new class of practitioners called "registered tax return preparers." IRS intended for these new requirements to support tax professionals, increase confidence in the tax system, and increase taxpayer compliance. According to IRS officials, approximately 84,148 competency exams were taken prior to the District Court's decision, which barred IRS from enforcing the new testing and continuing professional education requirements. IRS appealed the order, but it was affirmed in February 2014 by the U.S. Court of Appeals for the District of Columbia Circuit. Figure 1 provides a summary timeline of IRS's implementation of paid preparer requirements and legal proceedings. The President's Fiscal Year 2015 budget, released in March 2014, included a proposal to explicitly provide the Secretary of the Treasury and IRS with the authority to regulate all paid preparers.

Although the District Court determined that IRS does not have the authority to regulate unenrolled preparers, the decision did not affect the requirement that all paid preparers obtain a Preparer Tax Identification Number (PTIN) and renew their PTIN annually. As of March 16, 2014, approximately 676,000 paid preparers have registered or renewed their PTINs. As shown in figure 2, the two largest categories of PTIN registrations and renewals are unenrolled preparers--55 percent--and CPAs--31 percent.

Currently, Oregon, Maryland, California, and New York regulate paid preparers. Both Oregon and California began to regulate paid preparers in the 1970s, while Maryland and New York's programs were implemented more recently. Further, the programs themselves involve different types of requirements for paid preparers as illustrated in table 1.
In August 2008--prior to Maryland and New York implementing paid preparer requirements--we reported on state-level paid preparer requirements in California and Oregon. Specifically, we reported that both California and Oregon have requirements that paid preparers must meet before preparing returns; of the two states, Oregon has more stringent requirements. According to our analysis of IRS tax year 2001 NRP data, Oregon returns were more likely to be accurate, while California returns were less likely to be accurate, compared to the rest of the country after controlling for other factors likely to affect accuracy. Specifically, in August 2008, we found that the odds that a return filed by an Oregon paid preparer was accurate were 72 percent higher than the odds for a comparable return filed by a paid preparer in the rest of the country.

According to IRS's SOI data, an estimated 81.2 million or 56 percent of approximately 145 million individual tax returns filed for tax year 2011 were completed by a paid preparer. Estimated use of paid preparers was fairly evenly distributed across income levels, and as table 2 shows, taxpayers with more complex returns used preparers the most. For example, preparers were more commonly used by taxpayers who filed the Form 1040 as opposed to the 1040EZ or 1040A and those claiming itemized deductions or the Earned Income Tax Credit (EITC). Across all income levels, taxpayers who used paid preparers had a higher median refund than those who prepared their own returns at statistically significant levels, as shown in table 3. Specifically, individual taxpayers who used a paid preparer had an estimated median tax refund across all adjusted gross income levels that was 36 percent greater than taxpayers who prepared their own return.

Taxpayers rely on paid preparers to provide them with accurate, complete, and fully compliant tax returns; however, tax returns prepared for us in the course of our investigation often varied widely from what we determined the returns should and should not include, sometimes with significant consequences. Many of the problems we identified would put preparers, taxpayers, or both at risk of IRS enforcement actions. The NRP's review of tax returns from 2006 through 2009 also found many errors on returns prepared by paid preparers, and some of those errors were more common on paid prepared returns than on self-prepared returns. Nearly all of the returns prepared for our undercover investigators were incorrect to some degree, and several of the preparers gave us incorrect tax advice, particularly when it came to reporting non-Form W-2 income and the EITC. Only 2 of 19 tax returns showed the correct refund amount. While some errors had fairly small tax consequences, others had very large consequences resulting in the overstatement of refunds from $654 to $3,718.

Our undercover investigators visited 19 randomly selected tax preparer offices--a nongeneralizable sample--to have taxes prepared. We developed two taxpayer scenarios based on common tax issues that we refer to as our "Waitress Scenario" and our "Mechanic Scenario." Key characteristics of each scenario are summarized in table 4. Refund amounts derived by the 19 preparers who prepared tax returns based on our two scenarios varied greatly. For our waitress scenario, the correct refund amount was $3,804; however, refund amounts on returns prepared for our undercover investigators ranged from $3,752 to $7,522.
Similarly, the correct refund amount for the mechanic scenario was $2,351; however, refunds ranged from $2,351 to $5,632. Paid preparer errors generated during our 19 nongeneralizable visits resulted in refund amounts that varied from giving the taxpayer $52 less to $3,718 more than the correct amount. Of the 19 paid preparers we visited, 2 determined the correct refund amount: one correct tax return was prepared for the waitress scenario and one for the mechanic scenario. An additional 4 paid preparers calculated tax returns within $52 of the correct refund amount. On the remaining 13 tax returns--7 for the waitress scenario and 6 for the mechanic scenario--preparers overestimated the total refund by $100 or more. Figure 3 shows the amount of the refund over and under the correct refund amount. In some instances, paid preparers made similar errors across multiple site visits. For example, on the waitress return, paid preparers made two of the same errors: (1) not claiming the unreported cash tips and (2) claiming both children as eligible to receive the EITC. These errors resulted in clusters of overstated refunds. In four site visits, preparers' failure to claim unreported cash tips resulted in a refund amount overstated by $654. In three site visits, paid preparers made both errors, which resulted in a refund amount overstated by $3,718. In the mechanic scenario, returns on which paid preparers did not include side income showed refunds that ranged from $2,677 to $3,281 above the correct refund amount.

A majority of the 19 paid preparers we visited made errors on common tax return issues; on some lines of the tax return most paid preparers were correct. Some of the most significant errors involved paid preparers (1) not reporting non-Form W-2 income, such as unreported cash tips, in 12 of 19 site visits; (2) claiming an ineligible child for the EITC in 3 of 10 site visits; and (3) not asking the required eligibility questions for the American Opportunity Tax Credit. Such errors could lead taxpayers to underpay their taxes and may expose them to IRS enforcement actions. By contrast, in some instances the majority of preparers took the right course of action. For example, 17 of 19 paid preparers completed the correct type of tax return and 18 of 19 preparers correctly determined whether to itemize or claim the standard deduction. Our results are summarized in figure 4.

Type of tax return. Paid preparers completed the correct type of tax return--the Form 1040--for 17 of 19 site visits. Two paid preparers incorrectly completed the Form 1040A for the waitress scenario. The Form 1040A should not have been used because the waitress received tip income that was not reported to her employer.

Dividend and capital gains income. Preparers recorded the income correctly on 8 of 9 returns. The mechanic received qualified and ordinary dividends, and capital gains from a mutual fund that were reinvested into the fund. This income was documented on a third-party reporting form, the Form 1099-DIV. According to IRS guidance, a Form 1099-DIV must be filed for any person who receives dividends of $10 or more, including for funds that are reinvested.

Mechanic Scenario, Site Visit #1: One paid preparer who did not accurately record the investment income said that it was not necessary to include income that was reinvested in a mutual fund.

Total income. Of the 10 waitress returns prepared for us, 3 included the unreported cash tip income.
However, only one of the three returns included the correct amount of tip income. Total income for the waitress scenario should include income documented on the Form W-2, as well as the amount of unreported cash tip income that our investigator disclosed to the paid preparer during the site visit. The other two returns included lesser amounts of tip income. Waitress Scenario, Site Visit #5: In response to the investigator's mentioning her unreported cash tip income, one paid preparer told her that tips not included on the Form W-2 do not need to be reported. Total income for the mechanic return should include non-Form W-2 business income--resulting from mechanic work and babysitting conducted outside of a formal employment arrangement--and income from ordinary dividends and capital gains. Of the 9 mechanic returns prepared for us, 4 returns included both the business income and the investment income. However, only 3 returns included the correct amounts of business and investment income. Incorrectly reporting income often resulted in cascading errors on other lines of the tax return. Tax returns that did not include side income had errors in credits that are calculated based on income. For example, if a paid preparer did not report side income in the mechanic scenario, the resulting understated total income would make the mechanic appear eligible for the EITC when he otherwise would not be. Similarly, because two paid preparers incorrectly chose not to include unreported tip income for the waitress, they selected the wrong type of tax return, the Form 1040A. Mechanic Scenario, Site Visits #3 and #9: Two paid preparers demonstrated what the refund amount would be if the side income were reported compared with what it would be if it were not. Neither preparer recorded the side income. Itemized or standard deduction. All but one of the 19 returns correctly recorded the most advantageous deduction for the two scenarios. According to IRS guidance, taxpayers should itemize deductions when the amount of their deductible expenses is greater than the standard deduction amount. For the waitress scenario, the most advantageous choice was the standard deduction for head of household; for the mechanic scenario, itemizing deductions was more advantageous. One paid preparer chose to use the standard deduction for the mechanic, even though it was approximately $3,000 less than the total amount of the itemized deductions we included in the scenario. Child-care expenses. None of the 19 paid preparers recorded child-care expenses, because neither the waitress nor the mechanic was eligible to receive the credit. While none of the paid preparers recorded the credit, the reasons the preparers cited were often incorrect. According to IRS guidance, a taxpayer must attempt to collect the Social Security number of his or her child-care provider, but if unsuccessful, can report that fact to IRS and still claim the credit. For the waitress scenario, the reason that she was ineligible to claim the child-care expenses was that she did not attempt to get her child-care provider's Social Security number. Upon learning that she did not have the Social Security number of the provider, several of the paid preparers did not enter her child-care expenses on her return. IRS guidance states that qualified child-care expenses include only amounts paid while the taxpayer worked or looked for work.
The mechanic and his wife were not eligible for the credit because the child-care expenses were incurred for running errands and not so that either parent could work. Again, many tax preparers said that the reason the credit could not be claimed was that the mechanic did not have the child-care provider's Social Security number, not because he was otherwise ineligible. Student loan interest. Eight of 10 paid preparers correctly included the deduction for student loan interest. The waitress's Form 1098-E shows the interest the lender received from the taxpayer on qualified student loans. A taxpayer receives a Form 1098-E if student loan interest of $600 or more is paid during the year. Sales tax deduction. Seven of 9 preparers recorded sales tax as a deduction on the mechanic's tax return; however, not all chose the most advantageous amount. According to IRS guidance, taxpayers who itemize deductions can choose whether to deduct local income taxes or sales taxes. Because the mechanic lived in a state that did not have an income tax, sales tax should have been deducted. Of the 7 paid preparers who deducted sales taxes, only 2 recorded the amount that was most advantageous to the taxpayer. IRS provides an online calculator to help taxpayers estimate the amount of sales taxes they likely paid in a year. To determine this estimate, taxpayers input basic information such as ZIP code and annual income into the calculator. Five preparers chose amounts that were lower than the amount the calculator estimated. Social Security and Medicare tax on unreported tips. Because the waitress received unreported cash tips, the amount of taxes owed on the unreported cash tip income should be calculated using the Form 4137. Two of 10 paid preparers completed the Form 4137 and reported the amount of taxes owed on the tip income; however, one of them included a lesser amount of tip income when performing the calculation, resulting in a smaller amount of taxes owed. Another preparer reported the tip income by incorrectly completing a Schedule C, Profit or Loss from Business, and a Schedule SE for self-employment taxes. Earned Income Tax Credit. The EITC on line 64a was another area where paid preparers made mistakes that resulted in a significant overstatement of the refund. Of the 10 returns prepared for the waitress, 3 reported two children on the Schedule EIC, instead of the one child who lived with the taxpayer in 2013 and was eligible for the EITC. Waitress Scenario, Site Visit #4: One paid preparer questioned the investigator on the amount of time her older child lived with her. The investigator responded that the older child stayed with her on weekends. The paid preparer discussed the investigator's response with the office manager and then stated that she could claim the child for the EITC if no one else did, which was incorrect. American Opportunity Tax Credit. All 9 paid preparers correctly chose the American Opportunity Tax Credit for the mechanic scenario. The mechanic had a 20-year-old son attending a community college and paid for both his tuition and books. According to IRS guidance, to be eligible for this credit, a student must meet certain requirements, including being enrolled at least half-time and having no felony drug offense convictions. Although we instructed the investigator to respond to paid preparer inquiries such that his son met these requirements, some paid preparers did not ask the required questions to determine eligibility.
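Stepping back from the line-by-line results, the refund deviations summarized in figure 3 amount to simple arithmetic against the correct refund for each scenario. The short sketch below (Python) tallies a few illustrative visits using deviation amounts cited in the text (the $52 understatement and the $654 and $3,718 overstatements for the waitress, and the $5,632 high end for the mechanic); it is not a reproduction of all 19 actual returns.

    # Illustrative sketch only: a handful of example visits reconstructed from amounts
    # cited in the text, not the full set of 19 prepared returns.

    CORRECT_REFUND = {"waitress": 3_804, "mechanic": 2_351}

    sample_visits = [
        ("waitress", 3_752),   # $52 under the correct amount
        ("waitress", 4_458),   # unreported tips omitted: overstated by $654
        ("waitress", 7_522),   # tips omitted and ineligible child claimed: overstated by $3,718
        ("mechanic", 2_351),   # correct refund
        ("mechanic", 5_632),   # high end of the mechanic range: overstated by $3,281
    ]

    for scenario, refund in sample_visits:
        delta = refund - CORRECT_REFUND[scenario]
        if delta == 0:
            print(f"{scenario:9s} refund ${refund:,}: correct")
        else:
            direction = "overstated" if delta > 0 else "understated"
            print(f"{scenario:9s} refund ${refund:,}: {direction} by ${abs(delta):,}")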
All paid preparers are subject to certain requirements in the Internal Revenue Code (IRC) and may be subject to penalties for noncompliance. For example, the IRC imposes monetary penalties on paid preparers who understate a taxpayer's tax liability due to willful or reckless conduct. As shown in figure 5, in 12 of 19 cases, paid preparers did not record additional side income not reported on Forms W-2 and may be subject to this penalty. The IRC also requires that paid preparers sign the tax return and furnish an identifying number. In 3 of 19 cases, preparers did not meet the signature requirement. In addition, 3 preparers used a preparer tax identification number (PTIN) that did not belong to them, and one used a fake PTIN. Additionally, 3 of 10 preparers in our study may be subject to a penalty for not meeting due diligence requirements when determining whether both of the waitress's children qualified for the EITC. When considering the EITC, paid preparers must meet four due diligence requirements. Generally, if paid preparers file EITC claims, they must (1) ask all the questions to get the information required on Form 8867, Paid Preparers' Earned Income Credit Checklist; (2) compute the amount of the credit using the EITC worksheet from the Form 1040 instructions or a similar document; (3) ask additional questions when the information the client gives the preparer seems incorrect, inconsistent, or incomplete; and (4) keep a copy of Form 8867, the EITC worksheets, and other records used to compute the credit. Because the returns we had prepared were not real returns and were not filed, penalties would not apply. However, we plan to refer the matters we encountered to IRS so that any appropriate follow-up actions can be taken. The fees charged for tax preparation services varied widely across the 19 visits, sometimes between offices affiliated with the same chain. Often, paid preparers either did not provide an upfront estimate of the fees or provided an estimate that was less than the actual fees charged. In several instances, upon completion of the tax return, the preparer initially charged one fee and then offered a reduced amount. Figure 6 shows the fees charged by each of the 19 paid preparers we visited for each scenario. For the waitress scenario, the final fees charged for tax preparation ranged from $160 to $408. For the mechanic scenario, the final fees charged for tax preparation ranged from $300 to $587. For the two correct tax returns that were prepared, the final fee charged was $260 for the waitress scenario and $311 for the mechanic scenario. Some paid preparers provided receipts that listed total charges that were higher than the "discounted" amount ultimately charged. For example, one preparer estimated the cost of services to be $794 but then charged the taxpayer $300. Paid preparers provided various reasons for the amount of the tax preparation fee, including that (1) the EITC form is the most expensive form to file, (2) pricing and fees are at their peak from mid-January through February and then go down, and (3) the price differs depending on whether the tax return is completed in the morning or the evening. Consistent with our limited investigation, our estimates from NRP data suggest that tax returns prepared by paid preparers contained a significant number of errors. As shown in table 5, returns prepared by a paid preparer showed a higher estimated error rate--60 percent--than returns prepared by the taxpayer--50 percent. Errors in this context changed either the tax due or the amount to be refunded.
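For readers interested in how a comparison such as the 60 percent versus 50 percent error rates is typically judged for statistical significance, the sketch below (Python) shows a standard two-proportion z-test. The sample sizes of 5,000 returns per group are assumptions made for illustration only; the NRP sample sizes are not reported here, and this sketch is not the method used to produce the NRP estimates.

    # Illustrative sketch only: assumed sample sizes, not the NRP methodology.
    from math import sqrt

    def two_proportion_z(p1, n1, p2, n2):
        """Two-sample z statistic for the difference between two proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # 60 percent of paid-preparer returns vs. 50 percent of self-prepared returns
    # in error, with assumed samples of 5,000 returns per group.
    z = two_proportion_z(0.60, 5_000, 0.50, 5_000)
    print(f"z = {z:.1f}; significant at the 5 percent level: {abs(z) > 1.96}")

With these assumed sample sizes the 10-percentage-point gap is overwhelmingly significant; with much smaller samples the same gap could fail to reach significance, which is why the underlying sample size matters.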
As noted before, it is important to remember that paid preparers are used more often on more complicated returns than on simpler ones, although we were unable to gauge the full extent of this difference. Furthermore, errors on a return prepared by a paid preparer do not necessarily mean the errors were the preparer's fault; the taxpayer may be to blame. Preparers depend upon the information provided by the taxpayer. In addition to the different error rates on paid-preparer-filed returns and self-prepared returns, the amount taxpayers owed IRS also differed. Specifically, the estimated median amount owed to IRS was higher for paid-preparer-filed returns. For instance, as shown in table 6, it is estimated that taxpayers using a paid preparer owed a median of $354 to IRS, compared with $169 for taxpayers preparing their own return. NRP estimates show that both individuals and paid preparers make errors on specific forms and lines of Form 1040, some of which we also encountered in our undercover visits. Table 7 shows that in many instances, returns completed by a paid preparer are estimated to have a greater percentage of errors compared with self-prepared returns. For example, of returns prepared by a paid preparer, 51 percent have an error on the EITC line, compared with 44 percent of self-prepared tax returns. In total, for five of the line items we analyzed, the percentage of errors on returns prepared by a paid preparer was statistically significantly greater than the percentage of errors on self-prepared returns. These line items include (1) the itemized or standard deduction, (2) business income, (3) total income, (4) the EITC, and (5) the refund amount. Differences in the percentage of returns with errors on the student loan interest deduction line, the unreported Social Security and Medicare tax on tips line, and the education credit line were not statistically significant when comparing returns prepared by a paid preparer with those that were self-prepared. Over half of all taxpayers rely on the expertise of a paid preparer to provide advice and help them meet their tax obligations. IRS regards paid preparers as a critical link between taxpayers and the government. Consequently, paid preparers are in a position to have a significant impact on the federal government's ability to collect revenue and minimize the estimated $385 billion tax gap. As of March 2014, 55 percent of paid tax preparers were unenrolled preparers not regulated by IRS. Undoubtedly, many paid preparers do their best to provide their clients with tax returns that are fully compliant with the tax law and that cause them neither to overpay nor to underpay their federal income taxes. However, IRS data, which more broadly track compliance, show that preparers made serious errors, consistent with the findings from our site visits. The higher level of accuracy of Oregon's tax returns compared with the rest of the country suggests that a robust regulatory regime involving paid preparer registration, qualifying education, testing, and continuing education may help facilitate improved tax compliance. The courts determined that IRS does not have sufficient authority to regulate unenrolled preparers. In March 2014, the administration proposed that the Treasury and IRS be granted the explicit authority to regulate all paid preparers.
Providing IRS with the necessary authority for increased oversight of the paid preparer community will help promote high-quality services from paid preparers, will improve voluntary compliance, and will foster taxpayer confidence in the fairness of the tax system. If Congress agrees that significant paid preparer errors exist, it should consider legislation granting IRS the authority to regulate paid tax preparers. Chairman Wyden, Ranking Member Hatch, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact James R. McTigue, Jr. at (202) 512-9110 ([email protected]). Individuals making key contributions to this testimony include: Wayne A. McElrath, Director; Libby Mixon, Assistant Director; Gary Bianchi, Assistant Director; Amy Bowser; Sara Daleski; Mary Diop; Rob Graves; Barbara Lewis; Steven Putansu; Ramon Rodriguez; Erinn L. Sauer; and Julie L. Spetz. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For tax year 2011, an estimated 56 percent of about 145 million individual tax returns were completed by a paid preparer. IRS has long recognized that preparers' actions have an enormous effect on its ability to administer tax laws effectively and collect revenue that funds the government. Likewise, many taxpayers rely on preparers to provide them with accurate, complete, and fully compliant tax returns. GAO was asked to review the oversight and quality of paid preparers. This testimony examines (1) how preparers are regulated by IRS and (2) the characteristics of tax returns completed by preparers, based on products GAO issued from April 2006 through August 2008 and work conducted from November 2013 to April 2014. GAO reviewed laws, regulations, and other guidance and interviewed IRS officials. GAO analyzed IRS Statistics of Income data from tax year 2011, the most recent data available, and the NRP database, which broadly tracks compliance. To gain insight into the quality of service provided, GAO conducted 19 undercover site visits to commercial preparers in a metropolitan area. Criteria to select the metropolitan area included whether the state regulates preparers and levies an income tax. The Internal Revenue Service's (IRS) authority to regulate the practice of representatives before IRS is limited to certain preparers, such as attorneys and certified public accountants. Unenrolled preparers--those generally not subject to IRS regulation--accounted for 55 percent of all preparers as of March 2014. In 2010, IRS initiated steps to regulate unenrolled preparers through testing and education requirements; however, the courts ruled that IRS lacked the authority. GAO found significant preparer errors during undercover site visits to 19 randomly selected preparers--a sample that cannot be generalized. Refund errors in the site visits varied from giving the taxpayer $52 less to $3,718 more than the correct refund amount. Only 2 of 19 preparers calculated the correct refund amount. The quality and accuracy of tax preparation varied. Seventeen of 19 preparers completed the correct type of tax return. However, common errors included not reporting non-Form W-2 income (e.g., cash tips) in 12 of 19 site visits; claiming an ineligible child for the Earned Income Tax Credit in 3 of 10 site visits where applicable; not asking the required eligibility questions for the American Opportunity Tax Credit; and not providing an accurate preparer tax identification number. These findings are consistent with the results of GAO's analysis of IRS's National Research Program (NRP) database. GAO analysis of NRP data from tax years 2006 through 2009 showed that both individuals and preparers make errors on tax returns. Errors are estimated based on a sample of returns, which IRS audits to identify misreporting on tax returns. Tax returns prepared by preparers had a higher estimated percentage of errors--60 percent--than self-prepared returns--50 percent. Errors refer to changes either to the tax due or the refund amount. If Congress agrees that significant preparer errors exist, it should consider legislation granting IRS the authority to regulate paid tax preparers. Technical comments from IRS were incorporated into this report.
The Federal Family Education Loan Program (FFELP) is the largest source of federal financial assistance to students attending postsecondary institutions. In fiscal year 1994, students received about $23 billion in FFELP loan commitments, including about $14.8 billion in subsidized Stafford loans. The Department of Education pays interest to lenders on behalf of subsidized Stafford loan borrowers while they are in school and during a subsequent 6-month grace period. This interest benefit is not available to borrowers for other FFELP loans. The private lenders that provide these loans may not discriminate on the basis of race, national origin, religion, sex, marital status, age, or handicapped status but, according to a Department policy official, may deny loans to eligible borrowers who do not meet their lending standards. Lenders may, for example, deny loans to students attending proprietary (for-profit, typically trade and vocational) institutions or schools with high loan default rates. They may also withdraw from the program. Guaranty agencies, designated state or private not-for-profit entities, help administer FFELP by, for example, reimbursing lenders if borrowers fail to repay their loans. If an eligible borrower experiences difficulty obtaining a subsidized Stafford loan, guaranty agencies are required to provide one. The agencies may do so either directly or through a lender authorized to make lender-of-last-resort (LLR) loans. Guaranty agencies must provide subsidized Stafford LLR loans to eligible students who have been denied a loan by two or more participating lenders. This requirement does not apply to unsubsidized Stafford loans. Several major changes to the subsidized loan program may influence the availability of loans. The 1992 amendments, for example, reduced the interest revenue lenders can receive from subsidized loans, and the 1993 Student Loan Reform Act reduced the rate at which guaranty agencies generally reimburse lenders if borrowers fail to repay their loans. In addition, the 1993 act established the Federal Direct Student Loan Program (FDSLP) to provide loans to students from the Department of Education rather than from private lenders. This program is expected to provide at least 60 percent of federal student loans by the 1998-99 academic year. Such reductions in student loan revenue and competition from the direct student loan program could reduce the profitability of student loans and reduce lenders' willingness to offer new loans to students. In response to our questionnaire and in discussions with us, participants in the subsidized Stafford loan program expressed differing views on the risk that eligible students could be denied loans through the end of fiscal year 1995. Most but not all guaranty agencies have arrangements in place to provide loans to students who have difficulty obtaining loans. The Department has several options for ensuring access if guaranty agencies are not able to do so without assistance. As some lenders become selective in making Stafford loans or stop participating in the program, many lenders and guaranty agencies expect some eligible subsidized Stafford loan borrowers to be denied loans by one or more lenders. We asked program participants to describe the risk that 5 percent or more of eligible borrowers would be refused a subsidized Stafford loan by one or more lenders through the end of fiscal year 1995. Department officials with whom we spoke foresaw little or no risk that lender refusals to make loans would be widespread. Sallie Mae officials also doubted that as many as 5 percent of eligible borrowers would be denied a loan.
The President of the Consumer Bankers Association said that there is "some" risk that 5 percent or more would be denied a loan. The guaranty agencies that responded to our questionnaire had a wide range of views on this question. (See fig. 1.) Thirteen of these agencies rated the risk "moderate" (4 agencies), "great" (4), or "very great" (5), while 16 agencies said that there is "little or no risk." The remaining 13 agencies indicated "some risk." One responded that it did not know. Concerns that some students will have difficulty obtaining loans stem from lenders' decisions to leave the program or to become selective in making student loans. Additional departures of lenders from FFELP would represent a continuation of a trend begun in the mid-1980s. For example, during fiscal years 1984-1986, between 11,000 and 12,000 lenders participated in FFELP. The number of participating lenders has declined each year since, in part reflecting the general trend of mergers and consolidations in the financial community. By fiscal year 1993, the Department counted fewer than 7,500 active lenders. In response to our questionnaire, 28 agencies said that one or more of their lenders--lenders whose loans they guarantee--had indicated that they planned to stop making subsidized Stafford loans sometime in the future. Six agencies said that this included one of their five largest loan-volume lenders. Three of these agencies referred to the same lender. In addition to lenders that may stop making loans, concerns about loan access may arise if lenders choose to become more selective about making loans. Twenty agencies responded that one or more of their lenders planned to stop making loans to students attending institutions with student loan default rates that they--the lenders--consider too high. Most of these agencies said that 5 or fewer lenders would stop making loans, but one agency said that more than 200 lenders would stop. Most guaranty agencies--40 of the 43 respondents--had arrangements to provide LLR loans to eligible students. These arrangements included agreements with state secondary markets or other participating lenders to provide loans. Through September 30, 1993, the volume of loans provided through these arrangements had been small. More than half of the agencies said that they did not guarantee any LLR loans in fiscal years 1992 or 1993. The 16 agencies that provided data on LLR loans they made in fiscal year 1993 had an aggregate LLR loan volume of $32 million--about 0.3 percent of the $12.5 billion of subsidized Stafford loans made in fiscal year 1993. Twenty-six of the responding guaranty agencies answered our question concerning the estimated capacity of their LLR arrangements. Twenty-two agencies estimated that they could have provided about $1.8 billion in LLR loans in fiscal year 1994. This amount is more than 50 times the total LLR loan volume for fiscal year 1993 and about one-eighth of total subsidized Stafford loan volume in fiscal year 1994. Three agencies cited "unlimited" LLR capacity. The largest guaranty agency, United Student Aid Funds, Inc., said that it has no set maximum on its LLR capacity. Nearly all of the agencies indicated they had LLR arrangements, and two-thirds had plans that the Department had approved. Department officials said that six agencies had not submitted plans for approval.
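Before turning to the status of the remaining agencies' plans, the capacity figures above can be checked with back-of-the-envelope arithmetic. The sketch below (Python) simply recomputes the ratios stated in the text from the dollar figures already given ($32 million, $12.5 billion, $1.8 billion, and the $14.8 billion in fiscal year 1994 subsidized Stafford loans noted earlier); no other data are assumed.

    # Back-of-the-envelope check of the ratios cited in the text; all dollar figures
    # are taken from this report, and the percentages are simply recomputed.

    llr_volume_fy1993 = 32e6       # aggregate LLR loans guaranteed in FY 1993
    subsidized_fy1993 = 12.5e9     # subsidized Stafford loans made in FY 1993
    estimated_capacity = 1.8e9     # estimated FY 1994 LLR capacity (22 agencies)
    subsidized_fy1994 = 14.8e9     # subsidized Stafford loans in FY 1994

    print(f"FY 1993 LLR share: {llr_volume_fy1993 / subsidized_fy1993:.1%}")                    # ~0.3%
    print(f"Capacity vs. FY 1993 LLR volume: {estimated_capacity / llr_volume_fy1993:.0f}x")    # ~56x, i.e., more than 50 times
    print(f"Capacity vs. FY 1994 subsidized volume: {estimated_capacity / subsidized_fy1994:.2f}")  # ~0.12, about one-eighth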
Plans from the remaining agencies--those that had submitted plans that were not yet approved--were either pending approval or had been denied approval and not resubmitted. (See table 1.) Thirty-one guaranty agencies responded that they had agreements with lenders to provide LLR loans, but only 20 agencies had such agreements in writing. All LLR agreements but one either allow lenders to withdraw from their LLR commitments at any time or do not specify withdrawal terms. Four agreements specified that the arrangements applied for a specific time period, ranging from 12 to 18 months. Department officials told us they have several tools to help ensure that eligible borrowers have access to guaranteed student loans. They can assist guaranty agencies in recruiting lenders to provide LLR loans, direct Sallie Mae to make the loans, provide federal advances (interest-free loans) to guaranty agencies to enable them to make LLR loans, or make loans through the direct loan program. The Department is also developing a data reporting mechanism that, according to Department officials, will improve its monitoring of guaranty agencies' financial posture. It has proposed requiring each agency to submit annual 5-year financial projections. The Department recognizes that with the implementation of FDSLP, FFELP will require fewer guaranty agencies as the number of direct loans increases in relation to the number of guaranteed loans. Therefore, the Department is encouraging--and plans to continue to encourage--consolidation among guaranty agencies through mergers and takeovers, in the belief that greater efficiency can be achieved through economies of scale. During this consolidation, lenders could be left without available guarantee services. In anticipation that such a condition might materialize, in 1994 the Department contracted with the private, nonprofit Transitional Guaranty Agency to provide loan guarantee functions as the Department determines necessary. For those guaranty agencies having difficulty getting lenders to make student loans, particularly LLR loans, Department officials told us they can assist the agencies in recruiting lenders or in seeking commitments from current LLR lenders to make more LLR loans. As of November 1, 1994, the Department had assisted one agency. According to Department and agency officials, it helped the California Student Aid Commission identify lenders to provide LLR loans to eligible borrowers at certain schools. The Department and Sallie Mae signed an agreement through which Sallie Mae could provide up to $200 million of LLR loans through fiscal year 1995. This amount can be increased by mutual written agreement between the Department and Sallie Mae. As of December 6, 1994, Sallie Mae had made 149 unsubsidized Stafford LLR loans and 62 subsidized Stafford loans that were guaranteed by the Texas guaranty agency. Under the Higher Education Act of 1965, as amended, the Department can make federal advances to guaranty agencies to provide the loan capital needed to make LLR loans. The statute also provides authority for Sallie Mae to make advances to guaranty agencies to enable them to make LLR loans. In addition, with the implementation of FDSLP, the Department has the option of making direct loans to students if guaranteed loans are unavailable. Many uncertainties make predictions about the availability of loans in future years very difficult.
For example, it is unclear whether guaranty agencies' LLR arrangements will ensure access because many agencies' LLR agreements allow lenders to withdraw at any time. It is also unclear to what extent postsecondary institutions will increase their participation in FDSLP. As institutions elect to participate in FDSLP, the demand for FFELP loans will decline, which may in turn encourage additional lenders to withdraw from the program or become more selective in making loans. On the other hand, the demand for LLR loans may decline if schools whose students are obtaining LLR loans switch to FDSLP. It is also uncertain how the actions of the 104th Congress, whose leadership has pledged to constrain federal spending, might affect federal student loan programs and the Department's ability to ensure access. Generally, FFELP administrators foresaw little or no risk of widespread loan access problems through fiscal year 1995, the period covered by our review. However, several respondents to our questionnaire foresaw more risk. Guaranty agencies have arrangements to provide LLR loans to eligible students who encounter difficulties in obtaining a loan, although most of these arrangements allow lenders to discontinue their commitments with little or no advance notice. However, if such arrangements prove inadequate, the Department has several options to ensure students' access to subsidized loans; these options have proved adequate in the few instances in which they were used. It is too early to know with certainty whether lenders will continue to provide subsidized loans to eligible borrowers, and this issue may need to be reevaluated in the future. We did our review from March 1994 through January 1995 in accordance with generally accepted government auditing standards. As arranged with your offices, we did not obtain agency comments on this report, although we did discuss its contents with Department program officials. These officials generally agreed with the information presented in the report. They did offer some technical suggestions, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions regarding this report. Major contributors include Joseph J. Eglin, Jr., Assistant Director, (202) 512-7009; Charles M. Novak; Benjamin P. Pfeiffer; Dianne L. Whitman; and Aaron C. Chin. The U.S. General Accounting Office (GAO) is conducting a congressionally requested study on the availability of guaranteed student loans to borrowers. As part of this study, we are asking all guaranty agencies to complete this questionnaire. Specifically, we are asking your agency to provide information about the extent of lenders' willingness to continue providing subsidized Stafford student loans and your agency's lender-of-last-resort (LLR) program. Please provide the following information about the person responsible for completing this questionnaire, so that we will know who to call to clarify information, if necessary. This questionnaire should be completed by the person who is most knowledgeable about lender participation and lender-of-last-resort programs. If this person is unable to respond to all of the questions, he or she may wish to seek the help of others in completing this questionnaire. This questionnaire asks for information related only to subsidized Stafford loans and by federal fiscal year (FFY).
Please include all guarantee activity for these loans by your agency, except guarantees for which your agency provides guarantee services on behalf of another agency. If you have any questions, please feel free to call collect either Dianne Whitman at (206) 287-4822 or Ben Pfeiffer at (206) 287-4832. Please return your completed questionnaire within 5 days of receipt, in the enclosed preaddressed business reply envelope or by fax. If the envelope is misplaced, please send your questionnaire to: Dianne Whitman, U.S. General Accounting Office, Jackson Federal Building, Room 1992, 915 Second Avenue, Seattle, WA 98174.

1. Consider all of your agency's guarantee activity, except guarantees for which your agency provides guarantee services on behalf of another agency. In total, about how many lenders either originated or purchased subsidized Stafford loans guaranteed by your agency during federal fiscal year (FFY) 1993 (October 1, 1992, through September 30, 1993)? (ENTER NUMBER)

2. About how many of these lenders, if any, have informed your agency that they will stop providing subsidized Stafford loans sometime in the future? (ENTER NUMBER; IF NONE, ENTER '0')
_____ lenders (n=42; range: 0-500; mean=19; median=2)

3. Have any lenders informed your agency that, by the end of FFY 1995 (September 30, 1995), they will no longer be providing subsidized Stafford loans to students attending post-secondary institutions with default rates that the lenders regard as too high? (CHECK ONE; IF YES, ENTER NUMBER)

4. In your opinion, what is the risk that 5 percent or more of eligible borrowers will be refused a subsidized Stafford loan by one or more lenders through the end of FFY 1995? (CHECK ONE)
1. [ ] Little or no risk
2. [ ] Some risk
3. [ ] Moderate risk
4. [ ] Great risk
5. [ ] Very great risk
6. [ ] Don't know

5. How many lenders? (ENTER NUMBER)
_____ lenders (n=18; range: 1-200; mean=21; median=4)

6. In your opinion, is each of the following factors listed below a major reason, minor reason, or not a reason why your lenders may either stop providing or provide fewer subsidized Stafford loans? (CHECK ONE FOR EACH FACTOR)
- Extent of change in the program
- Increased complexity of the program
- Dissatisfaction with the Department of Education's management of the program
- Reduced interest rate and special allowance payments from the Department of Education
- Reduced interest rate paid by new borrowers
- The 0.50 percent loan fee paid by lenders
- Reduction in claims reimbursement rate from 100 to 98 percent (except for LLR and exceptional performance loans)
- Concern about implications of the Federal Trade Commission (FTC) holder rule
- Expectations that lenders' market share will decline due to direct lending
- Concern about "windfall" profits provision
- Concern about audits of lenders and resulting liabilities
- Other (PLEASE SPECIFY)

7. Regardless of whether or not the Department of Education has approved your LLR plan, what arrangements, if any, does your agency currently have in place for ensuring that eligible borrowers who have been denied a subsidized Stafford loan will receive a loan? (CHECK ALL THAT APPLY)
[ ] We make these loans and hold them as lender-of-last-resort.
[ ] We make these loans as a lender-of-last-resort and sell them to the state secondary market.
[ ] We make these loans and sell them to a secondary market other than the state secondary market.
[ ] We have arrangement(s) with the state or a state secondary market which makes these loans.
[ ] We have arrangement(s) with lenders other than those mentioned above who make these loans. --> How many lenders? _____ lenders (n=17; range: 1-14; mean=3; median=3)
[ ] We refer borrowers to lenders willing to make the loans without a lender-of-last-resort designation.
[ ] We have other arrangements. (PLEASE SPECIFY)
[ ] We currently have no arrangements in place.

8. Does your agency plan to change its arrangements for insuring access to loans? (CHECK ONE)
1. [ ] Yes (CONTINUE)
2. [ ] No (GO TO QUESTION 10)

9. Please indicate if your agency plans to make each of the following changes to its arrangements for insuring access to loans. (CHECK YES OR NO FOR EACH) The guaranty agency plans to ...
1. solicit additional lenders currently not participating in the LLR program.
2. arrange for commitment by lender(s) currently participating in the LLR program to increase the amount of LLR loans it is (they are) willing to make.
3. increase the capacity of the guaranty agency to make LLR loans.
4. turn our LLR responsibilities over to the Department of Education or another entity.
5. do something else. (PLEASE SPECIFY)

10. Did your agency provide guarantees for any lender-of-last-resort loans originated during either FFY 1992 or FFY 1993? (CHECK ONE)
1. [ ] Both FFY 1992 and FFY 1993 (CONTINUE)
2. [ ] FFY 1992 only
3. [ ] FFY 1993 only
4. [ ] Neither FFY 1992 nor FFY 1993 (GO TO QUESTION 12)

11. What was the original gross principal dollar amount of lender-of-last-resort loans that your agency guaranteed during FFY 1992 and during FFY 1993? If you cannot provide the data by federal fiscal year, please enter the dollar amount and the annual time period for which you do have information. (ENTER DOLLAR AMOUNT; IF NONE, ENTER '0')
FFY 1993: $_____ (n=16; range: $0-$14,026,992; mean=$1,763,254; median=$172,847; sum=$31,738,576)
[ ] Can only provide for a different time period, which is _____
[ ] Data not available for any time period

12. What is the projected gross dollar amount of your agency's subsidized Stafford loan guarantees for FFY 1994? (ENTER DOLLAR AMOUNT)

13. Consider your agency's projected FFY 1994 dollar amount for subsidized Stafford loans. What is the maximum amount that could be handled through your agency's current LLR loan arrangements? (ENTER DOLLAR AMOUNT; IF NONE, ENTER '0')
$_____ (n=26; range: $120,000-$1,200,000,000; mean=$83,260,670; median=$20,500,000; sum=$1,831,734,729; in addition, 3 agencies indicated "unlimited" and one indicated "no set maximum")
[ ] Don't know

14. If the dollar amount of LLR loans were to become greater than could be handled through your agency's current arrangements, please indicate if your agency would take each of the actions below and, if yes, how likely it is, or not, that this action would succeed in increasing access to loans. (IF YES, CHECK ONE FOR EACH ACTION)
- Solicit additional lenders not currently in the LLR program to provide loans
- Seek additional guaranty agency funding from non-federal sources to make loans directly
- Request that the state secondary market seek additional funding to enable it to either make LLR loans or purchase them from the guaranty agency
- Ask the Department of Education to advance funds to enable the guaranty agency to make these loans
- Ask the Department of Education to request that Sallie Mae make these loans
- Ask the Department of Education to make the loans directly
- Ask the Department of Education for other forms of assistance (PLEASE SPECIFY)
- Make other arrangement(s) (PLEASE SPECIFY)

15. Does your agency currently have any verbal (informal) or written (either informal or formal) agreements for LLR loans with participating lenders? (CHECK ONE)
[ ] Verbal only (GO TO QUESTION 21)
[ ] Written only (CONTINUE)
[ ] Written and verbal (CONTINUE)
[ ] Neither (GO TO QUESTION 21)

16. With how many participating lenders does your agency have written agreements for lender-of-last-resort loans? (ENTER NUMBER)
_____ lenders (n=19; range: 1-14; mean=2; median=1)
[ ] Check here if this is an estimate
[ ] Data not available

17. Do the terms of any of these written agreements allow the lenders to withdraw from the agreements at any time? (CHECK ONE)
[ ] Yes, in all cases
[ ] Yes, in some cases
[ ] No, may not withdraw
[ ] Withdrawal terms not specified

18. Do any of these written agreements specify a length of time to which the terms apply? (CHECK ONE)
[ ] Yes (CONTINUE)
[ ] No (GO TO QUESTION 20)

19. What proportion specify a length of time? (ENTER PERCENTAGE) For what length of time do most of these written agreements apply? (ENTER NUMBER OF YEARS)

20. In any of these written agreements, can your lenders refuse to make a lender-of-last-resort loan to an eligible borrower: (CHECK ONE FOR EACH; IF NOT SPECIFIED, CHECK 'N/S')
1. ... when the loan amount is below a minimum level?
2. ... when the loan would cause the lender to exceed a limit on the maximum number of loans?
3. ... when the loan would cause the lender to exceed the maximum amount of lender-of-last-resort loans it will make?
4. ... under another condition? (PLEASE SPECIFY)

21. Please provide below any comments that you have about this study, this questionnaire, or the LLR program.

THANK YOU FOR YOUR HELP!
Pursuant to a congressional request, GAO reviewed how recent legislative changes have affected the availability of federally subsidized Stafford student loans, focusing on: (1) the arrangements guaranty agencies have to provide loans to eligible borrowers; and (2) Department of Education efforts to ensure continued student access to subsidized Stafford loans. GAO found that: (1) some eligible students may have difficulty obtaining subsidized loans due to recent changes to the Federal Family Education Loan Program (FFELP) and the introduction of the Federal Direct Student Loan Program; (2) most guaranty agencies have made arrangements to provide loans to students who have difficulty obtaining loans; (3) the Department of Education has made arrangements with Sallie Mae and a new guaranty agency to make loans if guaranty agencies are unable to provide guarantees to lenders; (4) although it is difficult to predict how these arrangements will affect loan access after fiscal year (FY) 1995, FFELP administrators believe that there is little or no risk of widespread loan access problems through FY 1995; (5) Education has several options to ensure students' access to subsidized loans if guaranty agencies' arrangements prove inadequate; and (6) the issue of student loan access may need to be reevaluated in the future, since it is too early to know whether lenders will continue to provide subsidized loans to eligible borrowers.
Except for summer employees and some contractors, the scope of DEA background investigations was designed to assess whether individuals met the requirements to receive a "top-secret" clearance. DEA used the results of these background investigations to (1) help determine whether individuals were suitable for employment and (2) provide a basis for granting a security clearance. Employees with top-secret clearances can have access to information classified up to and including the top-secret level. The unauthorized disclosure of classified information can cause irreparable damage to the national interest and loss of human life. Unless otherwise provided by law, the investigation of a person entering or employed by the federal government in the competitive service, or by career appointment in the Senior Executive Service, is the responsibility of OPM. Agencies may request delegated authority from OPM to conduct or contract out investigations of their own employees and applicants. DEA obtained this authority from OPM in the early 1980s. The two agencies executed a Memorandum of Understanding and Agreement, which transferred authority to DEA and set forth the general requirements that DEA must follow. The memorandum has been renewed periodically, but the most recent one expired in September 1998. Nevertheless, OPM and DEA have continued to follow it, according to officials from both agencies. The Memorandum of Understanding and Agreement between OPM and DEA required DEA to follow the background investigation standards used by OPM. These standards held that background investigations needed to provide employees a top-secret clearance must meet investigation requirements established by Executive Order 12968, "Access to Classified Information." This executive order directed the President's Security Policy Board to develop a common set of investigative standards to be used by executive agencies for determining eligibility for access to classified information. The President approved the standards that the Board developed in March 1997. DEA's background investigations were part of its personnel security program. DEA's Office of Security Programs was responsible for operating the program and, in connection with that responsibility, was to provide policy guidance and management of background investigations. This office was responsible for ensuring that appropriate investigations were completed on applicants and employees as well as providing security adjudication services for DEA. As part of these adjudication services, this office used the results of background investigations to determine whether individuals were suitable for employment and whether a security clearance should be granted. In addition to DEA, OPM and DOJ both had responsibility for overseeing the program and DEA's background investigations. DEA employees' security clearances had to be renewed every 5 years; background investigations made for these renewals were referred to as reinvestigations. In fiscal year 1998, an estimated 5,583 background investigations were conducted of DEA applicants and employees. Of that number, about 3,401 were initial background investigations and another 2,182 were reinvestigations. Most of the investigations (about 74 percent) and all of the reinvestigations in 1998 were done by one contractor.
However, DEA Special Agents conducted the background investigations of persons who applied for Special Agent positions, which accounted for about 26 percent of all initial background investigations. Based on investigative standards implementing Executive Order 12968, a typical background investigation for a top-secret clearance would include major investigative components such as proof of birth and citizenship for subjects and their immediate family; a search of investigative files and other records held by federal agencies, including the FBI and CIA (referred to as a national agency check); financial review, including a credit bureau check; review of state and local law enforcement and court records (referred to as a local agency check); verification of recent education; record checks and personal testimony at places of employment; interviews of references including coworkers, employers, friends, educators, neighbors, and other individuals such as an ex-spouse; and a personal interview with the applicant. To identify and describe the circumstances that led DEA to consider relinquishing its delegated authority to conduct personnel background investigations, we interviewed cognizant officials of DEA, DOJ, and OPM. We obtained and reviewed the Memorandum of Understanding and Agreement between DEA and OPM regarding this authority. We obtained and reviewed all appraisals of DEA's personnel security program and/or the quality of background investigations done by OPM and DOJ since 1992, when DEA was first appraised by OPM as a separate DOJ component. We did not review individual background investigations or DEA's personnel security program. We also did not determine whether any employee who received a security clearance based on a deficient background investigation would have been denied clearance if the investigation had been performed according to required standards. We obtained and reviewed an internal DEA assessment of its personnel security program. We also obtained and reviewed relevant correspondence between DEA, DOJ, and OPM related to DEA's security program and its background investigations. To assess whether OPM acted in an independent and objective manner in choosing to review DEA's background investigations and security program, we applied three criteria posed in the following questions: What was OPM's responsibility for reviewing background investigations performed by DEA and/or its contractors? Did the frequency of OPM's reviews seem reasonable given the state of DEA's background investigations and program? Was the frequency of OPM's oversight activities at other agencies with delegated authority similar or dissimilar to the frequency of OPM's oversight at DEA? For this second objective, we reviewed Executive Order 10450, "Security Requirements for Government Employment," which among other things specified OPM's responsibilities for reviewing federal agencies' personnel security programs. We also identified all agencies, in addition to DEA, that had received delegated authority from OPM to perform background investigations. We compared OPM's oversight activities at those agencies--the frequency of reviews and the results--with its oversight activities at DEA. We requested comments on a draft of this report from the Attorney General of the United States on behalf of DOJ and DEA. We also requested comments from the Director, OPM. OPM's comments are discussed near the end of this letter and are reprinted in appendix I.
DOJ orally provided technical and clarifying comments, which we incorporated into this report. We did our work in Washington, D.C., from May through July 1999 in accordance with generally accepted government auditing standards. As of July 1999, DEA was considering whether to relinquish its personnel security background investigation authority to OPM. It had been brought to this point by the deficiencies OPM found over much of the decade and by an assessment DOJ made in 1997. DOJ initiated discussions with DEA in late 1998 about relinquishing its authority. Partially in response to this initiative, DEA conducted an assessment and concluded that it lacked the expertise and resources to capably perform or oversee all of its background investigations. Through a Memorandum of Understanding and Agreement with OPM, DEA was required to forward all background investigation reports to OPM when they were completed. OPM was required to review samples of reports to determine whether investigative requirements called for by the agreement were met. In addition to reviewing completed investigation reports, OPM was required to assess DEA's overall personnel security program under which background investigations were conducted. OPM's reviews of background investigation reports submitted by DEA consistently found the investigations deficient. Between 1996 and 1998, OPM reviewed a total of 265 background investigations conducted by DEA and its contractors. OPM found all but one investigation deficient (i.e., all but one failed to fully comply with the OPM investigative requirements that DEA had agreed to follow). Some of these background investigations contained a single deficiency, while others contained more than one. There was no readily available tabulation of the deficiencies in all 264 investigations found deficient or of the nature of those deficiencies. However, some information was available. The 49 DEA investigative reports that OPM found deficient in 1998 contained 221 deficiencies. Six reports contained one deficiency, and the remaining 43 reports contained multiple deficiencies. The types of deficiencies OPM identified included not determining the nature and extent of contact between a personal source and the subject of the investigation; gaps in coverage of the verification, through personal sources, of all of the subject's major activities, unemployment, and means of support; lack of or inadequate follow-up of issues admitted during the personal interview or disclosed on the Questionnaire for National Security Positions; failure to search Central Intelligence Agency files related to a subject's foreign-born status or foreign travel; failure to provide complete information from public sources, such as bankruptcies, financial matters, and divorce; neglecting to supply verification of a subject's citizenship through Immigration and Naturalization Service searches; and failure to obtain appropriate verification of an individual's name, date of birth, and place of birth through state and local bureaus of vital statistics. There is no general standard for stating how serious a deficiency might be or which type is the most serious, because the deficiencies generally are errors of omission, such as failing to check a law enforcement record. Ultimately, a deficiency's seriousness depends on what type of activity might have been found if the appropriate search had been conducted or if a particular investigative technique had been used.
Also, a seemingly less serious deficiency may conceal an investigative lead that, if pursued, could uncover activity that might compromise the nation's security interests. OPM returned the reports that it found deficient to DEA for further work and correction. However, in 1998, when OPM followed up on the deficient reports that it identified in 1996 and 1997, OPM generally found that DEA had not corrected the deficiencies. OPM also found that even though the background investigations were deficient, DEA still granted security clearances. In addition to its periodic review of investigations, OPM also reviewed DEA's overall personnel security program in 1992 and again 6 years later in 1998. OPM found numerous deficiencies in 1992, and it found that DEA still had not corrected most of those deficiencies in 1998. The OPM findings included the following: The reinvestigation program did not effectively identify employees who were subject to routine reinvestigations. At DEA, employees were required to have their security clearances renewed every 5 years. Many employees in "Critical Sensitive/Top-Secret" positions were overdue for reinvestigation. DEA's Planning and Inspection Manual provisions were insufficient because they did not include pertinent OPM and DOJ regulatory guidelines. The manual, among other deficiencies, failed to incorporate administrative due process guidelines for applicants, employees, and contract employees to appeal the denial or revocation of a security clearance. Physical security safeguards for the storage and protection of investigative files were insufficient. Personnel security adjudicators, whose job was to decide who would be granted security clearances, needed additional training and oversight. DEA's Background Investigation Handbook did not include mandatory OPM investigative requirements. DEA did not forward copies of all its completed background investigations to OPM, as required by the conditions of its delegated authority. In addition to OPM reviews, DEA's security program was subject to compliance reviews by DOJ, which was responsible for the development, supervision, and administration of security programs within the department. In 1997, DOJ audited the DEA program and reported the results to DEA. Based on the results of this review and OPM's reviews, DOJ initiated discussions with DEA in 1998 on relinquishing its background investigation authority to OPM. DOJ's audit identified deficiencies that were similar to those that OPM identified in its review of DEA's security program in 1992. OPM also found the same sort of deficiencies in 1998 after the DOJ audit. The DOJ findings identified issues and deficiencies in (1) periodic reinvestigations; (2) background investigations; (3) due process procedures; (4) resources for monitoring, tracking, and controlling the investigation process; (5) adjudication (process for deciding whether security clearances should be granted); and (6) staff competence. DOJ referred to its findings as critical security issues and deficiencies. In October 1998, the Assistant Attorney General for Administration wrote to the DEA Administrator expressing his belief that DEA's investigative function should be relinquished to OPM, but he also said that he would like to hear the DEA Administrator's comments. The memorandum was based on the DOJ audit and on the recurring findings of OPM. In that memorandum, DOJ's Assistant Attorney General also expressed concern about what DOJ saw as DEA's inability to maintain an effective overall personnel security program.
This inability came about, the memorandum stated, because resources were consumed in doing certain functions--checking federal records and performing quality control--that OPM performed when doing background investigations for other agencies. OPM checked the files of various federal agencies, such as the investigative and criminal history files of the FBI, by computer. Unlike OPM, DEA lacked the extensive computer links to federal files and did many file checks manually. Checks of federal files were referred to in background investigations as National Agency Checks. In the spring of 1999, DEA assessed its personnel security program, concentrating on background investigations. This assessment, according to DEA officials, was done in response to the Assistant Attorney General for Administration's October memorandum, subsequent meetings with DOJ officials, and DEA's own awareness of the condition of its personnel security program. The assessment covered areas such as the (1) results of reviews performed by OPM and DOJ, (2) requirements of the Memorandum of Understanding and Agreement with OPM, (3) efforts to correct deficiencies with the security program and background investigations, (4) contract with the company that currently did background investigations for DEA, and (5) other management issues related to background investigations. Although its assessment noted efforts to resolve concerns raised by OPM and DOJ, DEA identified several issues that led to the conclusion that it did not have the capability to effectively perform or oversee background investigations. It also concluded that some security clearances were granted based on deficient background investigations. As of July 1999, DEA was considering whether to relinquish its background investigation authority to OPM. Following are some of the issues that the assessment identified, which led to DEA's conclusion that it had not effectively performed or overseen background investigations. DEA had historically failed to capably perform or oversee its background investigations. DEA found that the majority of people working in its personnel security unit had not been adequately trained regarding the laws, regulations, executive orders, policies, and technical practices central to initiating, performing, and overseeing background investigations, as well as to providing personnel security adjudicative services to DEA. DEA had not ensured, as required by the conditions of its delegated authority, that each investigator performing investigations under its delegation had been screened by an investigation that met no less than OPM's top-secret clearance requirements. DEA did not comply with this requirement for its current contractor because DEA did not have funds to finance such investigations. DEA had not developed or implemented an integrity follow-up program to monitor contract investigators, as required under its delegated authority. DEA concluded that, under current circumstances and without the relief that OPM could provide, DEA would likely remain in violation of the integrity follow-up program requirement. DEA personnel performed National Agency Checks, a requirement of each background investigation. DEA's costs for performing these checks were driven largely by its need to conduct many of them manually. In its self-assessment, DEA stated that OPM, however, had sophisticated computer facilities that permitted it to conduct required National Agency Checks through direct-access computer links with all the relevant agencies. 
DEA concluded that it saw no advantage to duplicating a capability that already existed in OPM. DEA bears ultimate responsibility for ensuring that background investigations performed under its delegation from OPM conform to mandated investigative criteria. DEA had been heavily criticized for its performance in this regard. DEA concluded that OPM has a fully qualified and experienced quality-control staff and that it was not reasonable for DEA to continue to attempt to duplicate this capability. As of July 1999, DEA had not made a final decision on relinquishing its background investigation authority. From what DEA officials told us, it was considering retaining the authority to investigate individuals who apply for DEA Special Agent positions but relinquishing the authority to do all other background investigations, including periodic reinvestigations of Special Agents. In his October 1998 memorandum, the Assistant Attorney General for Administration said that he believed that DEA should relinquish all authority, including the authority to investigate the backgrounds of Special Agent applicants. According to DEA, relinquishing all other background investigation authority would allow DEA to redirect resources into the investigative process for Special Agent applicants. The redirected resources would go into increased training, policy guidance, and oversight. DEA said it believed that it would be unwise to segregate the background investigation from the overall Special Agent applicant selection process by having it conducted by an independent entity not familiar with DEA's unique requirements for Special Agents. Special Agents did the background investigations of applicants and would continue to do these investigations if that authority was retained, according to DEA. DEA would not be the first agency to relinquish background investigation authority to OPM. According to an OPM official, five agencies have done so: (1) the Federal Emergency Management Agency in 1991, (2) the Department of Commerce in 1994, (3) the National Aeronautics and Space Administration Office of Inspector General in 1994, (4) the U.S. Soldiers' and Airmen's Home in 1994, and (5) the Department of Education Office of Inspector General in 1998. As previously mentioned, OPM had a sole-source contract with USIS, a firm that OPM was instrumental in creating, to do all background investigations except those done by agencies under delegation agreements. If DEA were to relinquish its background investigation authority to OPM, the contract between OPM and USIS would require OPM to order this investigative work from USIS until the contract expired. Because of the relationship between OPM and USIS, we reviewed whether OPM acted in an objective and independent manner in choosing to review DEA's background investigation reports and personnel security program. To gauge whether OPM acted objectively and independently, we considered OPM's responsibilities toward the security program and the program's background investigations and whether OPM's treatment of DEA differed from its treatment of other agencies. OPM appeared to have acted in an objective and independent manner. OPM was required to review DEA's personnel security program and background investigations. This requirement was contained in the Memorandum of Understanding and Agreement between OPM and DEA, which provided that OPM would monitor the agreement as part of its security program appraisal process. 
In addition, Executive Order 10450, "Security Requirements for Government Employment," required OPM to make a continuing study of the order's implementation. The purpose of this continuing study is to determine whether deficiencies exist in security programs that could harm the national interest and weaken national security. As already noted, OPM repeatedly found deficiencies in both the security program and the background investigations, which DEA usually did not correct, and DEA concluded that it could not capably perform or oversee background investigations. Given DEA's history of noncompliance, we believe that it was reasonable for OPM to review DEA's investigations. The frequency with which OPM reviewed DEA's investigation program appeared to be generally in line with the frequency with which OPM reviewed the programs of other agencies. In addition to DEA, three other agencies--the U.S. Marshals Service, the Small Business Administration, and the U.S. Customs Service--possessed authority delegated from OPM to conduct background investigations in fiscal year 1999. OPM reviewed the security program of the U.S. Marshals Service in 1989 and 1999 (in progress as of July 1999), the Customs Service in 1989 and 1994, and the Small Business Administration in 1983 and 1992. In comparison, it reviewed the DEA program in 1992 and followed up in 1998. OPM reviewed a sample of the background investigation reports of the U.S. Marshals Service and the Small Business Administration from July 1996 through April 1999, as it did for DEA. According to an OPM official, OPM did not routinely review the background investigation reports of the U.S. Customs Service because the Memorandum of Understanding and Agreement delegating the investigative authority to Customs did not include this requirement. However, one OPM review of 89 Customs investigations, completed in 1993, found 46 percent to be deficient. OPM was critical in its assessment of other agencies, as it was with DEA. For the aggregate samples of background investigation reports that OPM reviewed from July 1996 through April 1999, the rate of deficiency for the Small Business Administration was 75 percent. It was 93 percent for those from the U.S. Marshals Service. In comparison, the rate of deficiency for background investigation reports from DEA, which DEA and two contractors prepared over several years (1996 to 1999), was 98 percent. OPM computed these percentages by dividing the number of reports it found deficient by the total number of reports it reviewed (a simple illustration of this computation follows this discussion). Rather than raising a question regarding OPM's independence and objectivity in choosing to review background investigations performed by DEA and its contractors, the evidence raises the question of why OPM did not act to rescind DEA's delegated authority. According to OPM, the Administration announced in late 1994 that OPM's Investigative Unit was to be privatized. The privatization occurred in July 1996. During that period, two private investigative firms sued OPM. According to OPM, these firms believed that OPM was going to take work away from them to support its privatized contractor. The suits were settled when OPM agreed, among other things, that it would not rescind delegations of authority, such as the DEA delegation, except for unsatisfactory performance. Also during this period, a former director of OPM testified before Congress on its privatization plans and emphasized that OPM did not intend to rescind any delegated authorities in order to give new business to the privatized company. 
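For illustration only, and not as part of GAO's or OPM's methodology, the deficiency-rate computation described above can be sketched in a few lines of Python; the function name and the code itself are illustrative assumptions. The 1996-1998 DEA counts (265 reports reviewed, 264 found deficient) come from this report, while the 75, 93, and 98 percent figures cited above are based on July 1996-April 1999 samples whose exact counts are not stated here and so are not recomputed.

# Minimal sketch: a deficiency rate is the number of reports found
# deficient divided by the total number of reports reviewed.
def deficiency_rate(deficient: int, reviewed: int) -> float:
    """Return the deficiency rate as a percentage."""
    return 100.0 * deficient / reviewed

# Counts stated in this report for OPM's 1996-1998 reviews of DEA
# investigations: 265 reviewed, all but one (264) found deficient.
print(f"DEA, 1996-1998: {deficiency_rate(264, 265):.1f} percent")  # about 99.6 percent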
According to OPM, the agency has been sensitive to these commitments as well as to the potential perceptions of OPM's motivation for rescinding any such delegation. We have not evaluated OPM's explanation of this situation. However, at your request we are separately reviewing related issues concerning OPM's oversight function regarding background investigations. DEA had a long history of deficiencies in its personnel security program, including background investigations done by both contractor and agency employees that did not meet federal standards. Federal agency security programs are aimed at protecting national security interests and are predicated on thoroughly reviewing the backgrounds of federal job applicants and employees to ensure their suitability for employment and/or access to national security information. Given DEA's difficulties in ensuring the quality of its personnel background investigations and its conclusion that it is not able to capably perform or oversee background investigations, its consideration of relinquishing its delegated authority is not unreasonable. Nor do OPM's periodic appraisals of DEA background investigations for adherence to prescribed standards appear unreasonable. OPM has a mandated responsibility to oversee agency security programs, including background investigations, and appeared not to have treated DEA significantly differently, in terms of oversight, from other agencies with delegated authority. We received written comments on a draft of this report from the Director of OPM and oral comments on August 17, 1999, from the Director, Audit Liaison Office, DOJ. The OPM Director said that she was pleased that we concluded that OPM was objective and independent in its oversight of the DEA personnel security program. Regarding the report's statement that the evidence raises a question of why OPM did not rescind DEA's delegated authority, the Director said that OPM had worked with DEA over several years to help it correct deficiencies that OPM had identified and that several factors militated against the rescission of DEA's authority. In addition to the factors cited on page 13 of this report, OPM said that it continued to work with DEA and DOJ to resolve the continuing personnel security problems and that OPM had let a reasonable amount of time elapse for DOJ, which is responsible for all of the department's security programs, to take the necessary action. In October 1998, DOJ advised DEA to relinquish its authority. OPM's complete comments are reprinted in appendix I. The DOJ Audit Liaison Director orally provided technical and clarifying comments, which we incorporated into this report. The Audit Liaison Director said that DOJ had no other comments. We are sending copies of this report to Senators Daniel K. Akaka, Robert C. Byrd, Ben Nighthorse Campbell, Thad Cochran, Susan M. Collins, Byron L. Dorgan, Richard J. Durbin, Judd Gregg, Orrin G. Hatch, Ernest F. Hollings, Patrick J. Leahy, Carl Levin, Joseph I. Lieberman, Charles E. Schumer, Ted Stevens, Fred Thompson, Strom Thurmond, and George V. Voinovich, and Representatives Dan Burton, John Conyers, Jr., Elijah Cummings, Jim Kolbe, Steny H. Hoyer, Henry J. Hyde, Bill McCollum, John L. Mica, Patsy T. Mink, David Obey, Harold Rogers, Joe Scarborough, Robert C. Scott, Jose E. Serrano, Henry A. Waxman, and C. W. Bill Young in their capacities as Chair or Ranking Minority Members of Senate and House Committees and Subcommittees. 
We will also send copies to the Honorable Janet Reno, Attorney General of the United States, Department of Justice; the Honorable Janice R. Lachance, Director, Office of Personnel Management; Mr. Donnie R. Marshall, Acting Administrator, Drug Enforcement Administration, Department of Justice; and other interested parties. We will make copies of this report available to others on request. If you have any questions regarding this report, please contact me or Richard W. Caradine at (202) 512-8676. Key contributors to this assignment were John Ripper and Anthony Assia. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; by faxing (202) 512-6061; or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on background investigations conducted by the Drug Enforcement Administration (DEA), focusing on: (1) the circumstances that led DEA to consider relinquishing its authority to conduct personnel background investigations; and (2) whether the Office of Personnel Management (OPM) acted in an independent and objective manner in choosing to review DEA and its background investigations. GAO noted that: (1) a series of evaluations in the 1990s critical of DEA's background investigations and personnel security program caused DEA to consider relinquishing its background investigation authority; (2) the findings of OPM's assessments over much of the 1990s, an assessment by the Department of Justice (DOJ) in 1997, and its own assessment in 1999 triggered DEA's consideration of this issue; (3) DEA's relinquishment of investigation authority would be consequential because DEA and its contractor performed an estimated 5,600 background investigations in 1998; (4) during the late 1990s, OPM reviewed a sample of 265 background investigation reports prepared by DEA and its contractors and determined that all but 1 investigation was deficient in meeting the investigative requirements that DEA had agreed to follow; (5) DOJ audited DEA's personnel security program in 1997 and found deficiencies similar to what OPM had found in 1992 and again in 1998; (6) based on the DOJ audit and the recurring findings of OPM, DOJ's Assistant Attorney General for Administration told the DEA Administrator in October 1998 that he believed that DEA should relinquish all of its background investigation authority to OPM; (7) in early 1999, DEA conducted its own examination of the personnel security program, focusing on background investigations, and concluded that DEA was not able to capably perform or oversee background investigations; (8) this lack of capability allowed security clearances to be granted, regardless of whether the related background investigations were adequate; (9) DEA had allowed contract investigators to perform background investigations, even though the investigators had not gone through required background investigations, because DEA did not have funds to finance such investigations; (10) as of July 1999, subsequent to its examination of the personnel security program, DEA was considering relinquishing its authority for background investigations to OPM, except for the authority to investigate the backgrounds of applicants for DEA Special Agent positions; (11) DEA believed that it would be unwise to separate the background investigation from the overall applicant selection process by having it conducted by an independent entity not familiar with DEA's unique requirements for Special Agents; and (12) OPM appeared to have been objective and independent in choosing to review DEA's personnel security program and background investigations.
The current model for regulation and oversight of the accounting profession involves federal and state regulators and a complex system of self-regulation by the accounting profession. The functions of the model are interrelated, and their effectiveness is ultimately dependent upon each component working well. Basically, the current model includes: licensing members of the accounting profession to practice within the jurisdiction of a state, as well as issuing rules and regulations governing member conduct, which is done by the various state boards of accountancy; setting accounting and auditing standards, which is done by the Financial Accounting Standards Board (FASB) and the Auditing Standards Board (ASB), respectively, through acceptance of the standards by the Securities and Exchange Commission (SEC); setting auditor independence rules, which, within their various areas of responsibility, have been issued by the American Institute of Certified Public Accountants (AICPA), the SEC, and GAO; and oversight and discipline, which is done through a variety of self-regulatory and public regulatory systems (e.g., the AICPA, the SEC, and various state boards of accountancy). Enron's failure and a variety of other recent events have brought a direct focus on how well the current systems of regulation and oversight of the accounting profession are working in achieving their ultimate objective: that the opinions of independent auditors on the fair presentation of financial statements can be relied upon by investors, creditors, and the various other users of financial reports. The issues currently being raised about the effectiveness of the accounting profession's self-regulatory system are not unique to the collapse of Enron. Other business failures, restatements of financial statements, and the proliferation of pro forma earnings assertions over the past several years have called into question the effectiveness of the current system. A continuing message is that the current self-regulatory system is fragmented, is not well coordinated, and has a disciplinary function that is neither timely nor backed by effective sanctions, all of which create a public image of ineffectiveness. In addressing these issues, proposals should consider whether overall the system creates the right incentives, transparency, and accountability, and operates proactively to protect the public interest. Also, the links within the self-regulatory system and with the SEC and the various state boards of accountancy (the public regulatory systems) should be considered, as these systems are interrelated and weaknesses in one component can put strain on the other components of the overall system. I would now like to address some of the more specific areas of the accounting profession's self-regulatory system that should be considered in forming and evaluating proposals to reshape or overhaul the current system. The accounting profession's current self-regulatory system for public company audits relies heavily on the AICPA and is largely composed of volunteers from the accounting profession. This system is used to set auditing standards and auditor independence rules, monitor member public accounting firms for compliance with professional standards, and discipline members who violate auditing standards or independence rules. AICPA staff support the volunteers in conducting their responsibilities. 
In 1977, the AICPA, in conjunction with the SEC, administratively created the Public Oversight Board (POB) to oversee the peer review system established to monitor member public accounting firms for compliance with professional standards. In 2001, the oversight authority of the POB was expanded to include oversight of the ASB. The POB had five public members and professional staff, and received its funding from the AICPA. On January 17, 2002, the SEC Chairman outlined a proposed new self-regulatory structure to oversee the accounting profession. The SEC's proposal provided for creating an oversight body that would include monitoring and discipline functions, have a majority of public members, and be funded through private sources, although no further details were announced. The POB's Chairman and members were critical of the SEC's proposal and expressed concern that the Board was not consulted about the proposal. On January 20, 2002, the POB passed a resolution of intent to terminate its existence no later than March 31, 2002, leaving a critical oversight function in the current self-regulatory system unfilled. However, the POB's Chairman has stated that the Board will work to assist in transitioning the functions of the Board to whatever new regulatory body is established. In that respect, the SEC announced on March 19, 2002, that a Transition Oversight Staff, led by the POB's executive director, will carry out oversight functions of the POB. However, on April 2, 2002, the POB members voted to extend the POB through April 30, 2002, to provide additional time solely to finalize certain POB administrative matters and to facilitate a more orderly transition of oversight activities. The issues of fragmentation, ineffective communication, and limitations on discipline surrounding the accounting profession's self-regulatory system strongly suggest that the current self-regulatory system is not adequate to protect the public's interest effectively. We believe these are structural weaknesses that require congressional action. Specifically, we believe that the Congress should create an independent statutory federal government body to oversee financial audits of public companies. The functions of the new independent body should include: establishing professional standards (auditing standards, including standards for attestation and review engagements; independence standards; and quality control standards) for public accounting firms and their key members who audit public companies; inspecting public accounting firms for compliance with applicable professional standards; and investigating and disciplining public accounting firms and/or individual auditors of public accounting firms who do not comply with applicable professional standards. As discussed later, this new body should be independent from, but closely coordinated with, the SEC in connection with matters of mutual interest. In addition, we believe that the issues concerning accounting standard-setting can best be addressed by the SEC working more closely with the FASB rather than putting that function under the new body. 
The powers/authority of the new body should include: requiring all public accounting firms and audit partners that audit financial statements, reports, or other documents of public companies that are required to be filed with the SEC to register with the new body; issuing professional standards (e.g., independence) along with the authority to adopt or rely on existing auditing standards, including standards for attestation and review engagements, issued by other professional bodies (e.g., the ASB); enforcing compliance with professional standards, including appropriate investigative authority (e.g., subpoena power and the right to maintain the confidentiality of certain records) and disciplinary powers (e.g., authority to impose fines, penalties, and other sanctions, including suspending or revoking registrations of public accounting firms and individual auditors to perform audits of public companies); requiring the new body to coordinate its compliance activities with the SEC and state boards of accountancy; requiring auditor reporting on the effectiveness of internal control over financial reporting; requiring the new body to promulgate various auditor rotation requirements for key public company audit engagement personnel (i.e., primary and second partners, and engagement managers); requiring the new body to study and report to the Congress on the pros and cons of any mandatory rotation of accounting firms that audit public companies, and take appropriate action; establishing annual registration fees and possibly inspection fees necessary to fund the activities of the new body on an independent and self-sustaining basis; and establishing rules for the operation of the new body. The new body should be created by statute as an independent federal government body. To facilitate operating independently, the new body's board members should be highly qualified and independent from the accounting profession, its funding sources should not be dependent on voluntary contributions from the accounting profession, and it should have final approval for setting professional standards and its operating rules. In that respect, the new body would have decisionmaking authority independent of the SEC. It would approve professional standards, set sanctions resulting from disciplinary actions, and establish its operating rules. At the same time, it should coordinate and communicate its activities with the SEC and the various state boards of accountancy. The new body should set its own human resource and other administrative requirements and should be given appropriate flexibility to operate as an independent entity and to provide compensation that is competitive enough to attract highly competent board members and supporting staff. The new body should also have adequate staff to effectively discharge its responsibilities. Candidates for board membership could be identified through a nominating committee that could include the Chairman of the Federal Reserve, the Chairman of the SEC, the Secretary of the Treasury, and the Comptroller General of the United States. The board could have 5 or 7 members serving stated terms, such as 5 years with a limited renewal option, and the members' initial terms should be staggered to ensure some continuity. The members of the board should be appointed by the President and confirmed by the U.S. Senate. At a minimum, the chair and vice-chair should serve on a full-time basis. Importantly, board members should be independent of the accounting profession. 
In that regard, board members should not be active accounting profession practitioners, and a majority of board members must not have been accounting profession practitioners within the recent past (e.g., 3 years). The new body should have sources of funding independent of the accounting profession. The new body could have authority to set annual registration fees for public companies. It could also have authority to set fees for services, such as inspections of public accounting firms, and authority to charge for copies of publications, such as professional standards and related guidance. The above fees and charges should be set to recover costs and sustain the operations of the new body. For accountability, we believe the new body should report annually to the Congress and the public on the full range of its activities, including setting professional standards, inspections of public accounting firms, and related disciplinary activities. Such reporting also provides the opportunity for the Congress to conduct oversight of the performance of the new body. The Congress also may wish to have GAO review and report on the performance of the new body after the first year of its operations and periodically thereafter. Accordingly, we suggest that the Congress provide GAO access not only to the records of the new body, but also to the records of accounting firms and other professional organizations that may be needed for GAO to assess the performance of the new body. For over 70 years, the public accounting profession, through its independent audit function, has played a critical role in enhancing a financial reporting process that has supported the effective functioning of our domestic capital markets, which are widely viewed as the best in the world. The public's confidence in the reliability of issuers' financial statements, which relies in large part on the role of independent auditors, serves to encourage investment in securities issued by public companies. This sense of confidence depends on reasonable investors perceiving auditors as independent expert professionals who have neither mutual interests nor conflicts of interest in connection with the entities they are auditing. Accordingly, investors and other users expect auditors to bring to the financial reporting process integrity, independence, objectivity, and technical competence, and to prevent the issuance of misleading financial statements. Enron's failure and certain other recent events have raised questions concerning whether auditors are living up to the expectations of the investing public; however, similar questions have been raised over a number of years due to significant restatements of financial statements and certain unexpected and costly business failures, such as the savings and loan crisis. Issues debated over the years continue to focus on auditor independence concerns and the auditor's role and responsibilities. Public accounting firms' provision of nonaudit services to their audit clients is one of the issues that has resurfaced with Enron's failure and the large amount of annual fees collected by Enron's independent auditor for nonaudit services. Auditors have the capability of performing a range of valuable services for their clients, and providing certain nonaudit services can ultimately be beneficial to investors and other interested parties. However, in some circumstances, it is not appropriate for auditors to perform both audit and certain nonaudit services for the same client. 
In these circumstances, the auditor, the client, or both will have to make a choice as to which of these services the auditor will provide. These concepts, which I strongly believe are in the public's interest, are reflected in the revisions to auditor independence requirements for government audits, which GAO recently issued as part of Government Auditing Standards. The new independence standard has gone through an extensive deliberative process over several years, including extensive public comments and input from my Advisory Council on Government Auditing Standards. The standard, among other things, toughens the rules associated with providing nonaudit services and includes a principle-based approach to addressing this issue, supplemented with certain safeguards. The two overarching principles in the standard for nonaudit services are that: auditors should not perform management functions or make management decisions; and auditors should not audit their own work or provide nonaudit services in situations where the amounts or services involved are significant or material to the subject matter of the audit. Both of the above principles should be applied using a substance-over-form doctrine. Under the revised standard, auditors are allowed to perform certain nonaudit services provided the services do not violate the above principles; however, in most circumstances certain additional safeguards would have to be met. For example, (1) personnel who perform allowable nonaudit services would be precluded from performing any related audit work, (2) the auditor's work could not be reduced beyond the level that would be appropriate if the nonaudit work were performed by another unrelated party, and (3) certain documentation and quality assurance requirements must be met. The new standard includes an express prohibition regarding auditors providing certain bookkeeping or record keeping services and limits payroll processing and certain other services, all of which are presently permitted under current independence rules of the AICPA. However, our new standard allows the auditor to provide routine advice and technical assistance on an ongoing basis without being subject to the additional safeguards. The focus of these changes to the government auditing standards is to better serve the public interest and to maintain a high degree of integrity, objectivity, and independence for audits of government entities and entities that receive federal funding. However, these standards apply only to audits of federal entities and those organizations receiving federal funds, and not to audits of public companies. In the transmittal letter issuing the new independence standard, we expressed our hope that the AICPA would raise its independence standards to those contained in this new standard in order to eliminate any inconsistency between this standard and its current standards. The AICPA's recent statement before another congressional committee that it will not oppose prohibitions on auditors providing certain nonaudit services seems to be a step in the right direction. The independence of public accountants is crucial to the credibility of financial reporting and, in turn, to the capital formation process. Auditor independence standards require that the audit organization and the auditor be independent both in fact and in appearance. 
These standards place responsibility on the auditor and the audit organization to maintain independence so that opinions, conclusions, judgments, and recommendations will be impartial and will be viewed as being impartial by knowledgeable third parties. Because independence standards are fundamental to the independent audit function, the new independent and statutorily created government body, which I previously discussed, should, as part of its mission, be responsible for setting independence standards for audits of public companies, as well as have the authority to discipline members of the accounting profession who violate such standards. First, I want to underscore that serving on the board of directors of a public company is an important and difficult responsibility. That responsibility is especially challenging in the current environment, in which increased globalization and rapidly evolving technologies must be addressed while at the same time meeting quarterly earnings projections in order to maintain or raise the market value of the company's stock. These pressures and related executive compensation arrangements unfortunately often translate into a focus on short-term business results. This can create perverse incentives, such as attempts to manage earnings to report favorable short-term financial results, and/or a failure to provide adequate transparency in financial reporting that disguises risks, uncertainties, and/or commitments of the reporting entity. On balance, though, the difficulty of serving on a public company's board of directors is not a valid reason for not doing the job right, which means being knowledgeable of the company's business, asking the right questions, and doing the right thing to protect not only shareholders, but also the public's interest. At the same time, it is important to strike a reasonable balance between the responsibilities, risks, and rewards of board and key committee members. To do otherwise would serve to discourage highly qualified persons from serving in these key capacities. Board members need to have a clear understanding of who the client being served is. Namely, their client should be the shareholders of the company, and all their actions should be geared accordingly. They should, however, also be aware of the key role that they play in maintaining public confidence in our capital markets system. Audit committees have a particularly important role to play in assuring fair presentation and appropriate accountability of management in connection with financial reporting, internal control, compliance, and related matters. Furthermore, boards and audit committees should have a mutuality of interest with the external auditor to assure that the interests of shareholders are adequately protected. There are a number of steps that can be taken to enhance the independence of audit committees and their working relationship with the independent auditor to further enhance the effectiveness of the audit in protecting the public's interest. We believe that the SEC, in conjunction with the stock exchanges, should initially explore such actions. 
Therefore, any legislative reform could include a requirement for the SEC to work with the stock exchanges to enhance listing requirements for public companies to improve the effectiveness of audit committees and public company auditors, including considering whether and to what extent: audit committee members should be both independent of the company and top management and qualified in the areas related to their responsibilities, such as accounting, auditing, finance, and SEC reporting requirements; audit committees should have access to independent legal counsel and other areas of expertise, such as risk management and financial instruments; audit committees should hire the independent auditors and work directly with them to ensure the appropriate scope of the audit, resolution of key audit issues, compliance with applicable independence standards, and the reasonableness and appropriateness of audit fees. In this regard, audit committees must realize that any attempts to treat audit fees on a commodity basis can serve to increase the risk and reduce the value of the audit to all parties; audit committees should pre-approve all significant nonaudit services; audit committees should pre-approve the hiring of the public company's key financial management officials (such as the chief financial officer or controller) or the providing of financial management services if, within the previous 5 years, those individuals had any responsibility for auditing the public company's financial statements, reports, or other documents required by the SEC; and audit committees should report to the SEC and the public on their membership, qualifications, and execution of their duties and responsibilities. We also believe that the effectiveness of boards of directors and committees, including their working relationship with management of public companies, can be enhanced by the SEC working with the stock exchanges to enhance certain other listing requirements for public companies. In that respect, the SEC could be directed to work with the stock exchanges to consider whether and to what extent: audit committees, nominating committees, and compensation committees are qualified, independent, and adequately resourced to perform their responsibilities; boards of directors should approve management's code of conduct and any waivers from the code of conduct, and whether any waivers should be reported to the stock exchanges and the SEC; boards of directors should approve the hiring of key financial management officials who within the last 2 years had any responsibility for auditing the public company's financial statements, reports, or other documents required by the SEC; and CEOs should serve as the chairman of public company boards. Also, to further protect shareholders and the public interest, the SEC could be directed to report (1) within 180 days from enactment of legislation on other actions it is taking to enhance the overall effectiveness of the current corporate governance structure, and (2) periodically on best practices and recommendations for enhancing the effectiveness of corporate governance to protect both shareholders and the public's interest. 
We believe that the issues raised by Enron's sudden failure and bankruptcy regarding whether analysts' independence from issuers of stock is affecting their buy and sell recommendations can be addressed by requiring the SEC to work with the National Association of Securities Dealers (NASD) in connection with certain requirements. Accordingly, the SEC could be directed to work with the NASD to consider whether and to what extent: the firewalls between analysts and the business end of their firms should be widened to enhance analyst independence, and whether to report to the Congress on the effectiveness of the regulations; disclosure of (1) whether the analyst's firm does investment banking, and (2) whether there is a relationship with the company in question should be improved, and whether to report to the Congress on the effectiveness of the requirements; and implementing regulations to be enforced through an effective examination program should be required. The Congress may wish to have GAO evaluate and report to it one year after enactment of legislation and periodically thereafter on the (1) results of the SEC's working relationship with the stock exchanges to strengthen corporate governance requirements, and (2) results of the SEC's working relationship with the NASD in developing independence and conflict-of-interest requirements for analysts. Accordingly, we suggest that the Congress provide GAO access to the records of the securities self-regulatory organizations, such as the New York Stock Exchange and the NASD, that may be needed for GAO to evaluate the SEC's working relationships with these organizations. Business financial reporting is critical in promoting an effective allocation of capital among companies. Financial statements, which are at the center of present-day business reporting, must be timely, relevant, and reliable to be useful for decision-making. In our 1996 report on the accounting profession, we reported that the current financial reporting model does not fully meet users' needs. More recently, we have noted that the current reporting model is not well suited to identifying and reporting on key value and risk elements inherent in our 21st century knowledge-based economy. The SEC is the primary federal agency currently involved in accounting and auditing requirements for publicly traded companies but has traditionally relied on the private sector for setting standards for financial reporting and independent audits, retaining a largely oversight role. Accordingly, the SEC has accepted rules set by the FASB--generally accepted accounting principles (GAAP)--as the primary standard for preparation of financial statements in the private sector. We found that despite the continuing efforts of the FASB and the SEC to enhance financial reporting, changes in the business environment, such as the growth in information technology, new types of relationships between companies, and the increasing use of complex business transactions and financial instruments, constantly threaten the relevance of financial statements and pose a formidable challenge for standard setters. A basic limitation of the model is that financial statements present the business entity's financial position and the results of its operations largely on the basis of historical costs, which do not fully meet the broad range of user needs for financial information. 
Enron's failure and the inquiries that have followed have raised many of the same issues about the adequacy of the current financial reporting model, such as the need for additional transparency, clarity, more timely information, and risk-oriented financial reporting. Among other actions to address the Enron-specific accounting issues, the SEC has requested that the FASB address the specific accounting rules related to Enron's special purpose entities and related party disclosures. In addition, the SEC Chief Accountant has raised concerns that the current standard-setting process is too cumbersome and slow and that much of the FASB's guidance is rule-based and too complex. He believes that (1) principle-based standards will yield a less complex financial reporting paradigm that is more responsive to emerging issues, (2) the FASB needs to be more responsive to accounting standards problems identified by the SEC, and (3) the SEC needs to give the FASB freedom to address the problems, but the SEC needs to monitor projects on an ongoing basis and, if they are languishing, determine why. We generally agree with the SEC Chief Accountant's assessment. We also believe that the issues surrounding the financial reporting model can be effectively addressed by the SEC, in conjunction with the FASB, without statutorily changing the standard-setting process. However, we do believe that a more active and ongoing interaction between the SEC and the FASB is needed to facilitate a mutual understanding of priorities for standard-setting, realistic goals for achieving expectations, and timely actions to address issues that arise when expectations are not likely to be met. In that regard, the SEC could be directed to: reach agreement with the FASB on its standard-setting agenda, approach to resolving accounting issues, and timing for completion of projects; monitor the FASB's progress on projects, including taking appropriate actions to resolve issues when projects are not meeting expectations; and report annually to the Congress on the FASB's progress in setting standards, along with any recommendations, and the FASB's response to the SEC's recommendations. The Congress may wish to have GAO evaluate and report to it one year after enactment of legislation and periodically thereafter on the SEC's performance in working with the FASB to improve the timeliness and effectiveness of the accounting standard-setting process. Accordingly, we suggest that the Congress provide GAO access to the records of the FASB that may be needed for GAO to evaluate the SEC's performance in working with the FASB. The FASB receives about two-thirds of its funding from the sale of publications, with the remainder coming from the accounting profession, industry sources, and others. One of the responsibilities of the FASB's parent organization, the Financial Accounting Foundation, is to raise funds for the FASB and its standard-setting process to supplement the funding that comes from the FASB's sale of publications. Some have questioned whether this is the best arrangement to ensure the independence of the standard-setting process. This issue has been raised in connection with the appropriateness of certain accounting standards related to consolidations--standards that the FASB has been working on for some time and that are applicable to Enron's restatement of its financial statements, as reported to the SEC in Enron's November 8, 2001, Form 8-K filing. 
However, the issue has previously been raised when the FASB has addressed other controversial accounting issues, such as accounting for stock options. Therefore, the Congress may wish to task the SEC with studying this issue and identifying alternative sources of funding to supplement the FASB's sale of publications, including the possibility of imposing fees on registrants and/or firms, and to report to the Congress on its findings and actions taken to address the funding issue. Over the last decade, securities markets have experienced unprecedented growth and change. Moreover, technology has fundamentally changed the way markets operate and how investors access markets. These changes have made the markets more complex. In addition, the markets have become more international, and legislative changes have resulted in a regulatory framework that requires increased coordination among financial regulators and requires that the SEC regulate a greater range of products. Moreover, as I have discussed, the collapse of Enron and other corporate failures have stimulated an intense debate on the need for broad-based reform in such areas as oversight of the accounting profession, accounting standards, corporate governance, and analyst conflict-of-interest issues, all of which could have significant repercussions on the SEC's role and oversight challenges. At the same time, the SEC has been faced with an ever-increasing workload and ongoing human capital challenges, most notably high staff turnover and numerous staff vacancies. Our recent report discusses these issues and the need for the SEC to improve its strategic planning to more effectively manage its operations and limited resources, and also shows that the growth of SEC resources has not kept pace with the growth in the SEC's workload (such as filings, complaints, inquiries, investigations, examinations, and inspections). We believe that the SEC should be provided with the necessary resources to effectively discharge its current responsibilities and any increased responsibilities the Congress may give it. And finally, we believe that the SEC should be directed to report annually to the Congress on (1) its strategic plan for carrying out its mission, (2) the adequacy of its resources and how it is effectively managing resources through a risk-oriented approach and prioritization of risks, including effective use of information technology, and (3) any unmet needs, including required funding and human resources. The United States has the largest and most respected capital markets in the world. Our capital markets have long enjoyed a reputation of integrity that promotes investor confidence. This is critical to our economy and the economies of other nations given the globalization of commerce. However, this long-standing reputation is now being challenged by some parties. The effectiveness of systems relating to independent audits, financial reporting, and corporate governance, which represent key underpinnings of capital markets and are critical to protecting the public's interest, has been called into question by the failure of Enron and certain other events and practices. Although the human element can override any system of controls, it is clear that there is a range of actions critical to the effective functioning of the system underlying capital markets that require attention by a range of key players. 
In addition, a strong enforcement function with appropriate civil and criminal sanctions is needed to ensure effective accountability when key players fail to properly perform their duties and responsibilities.
In the wake of the Enron collapse and the proliferation of earnings restatements and pro forma earnings assertions by other companies, questions are being raised about the soundness of private sector financial reporting, auditor independence, and corporate governance. In addressing these issues, the government's role could range from direct intervention to encouraging non-governmental and private-sector entities to adopt practices that would strengthen public confidence. GAO believes that Congress should consider a holistic approach that takes into account the many players and interrelated issues that brought about the Enron situation.
Since December 5, 1989, DOE has not produced War Reserve pits for the nuclear stockpile. On that date, the production of pits at Rocky Flats, which was DOE's only large-scale pit-manufacturing facility, was suspended because of environmental and regulatory concerns. At that time, it was envisioned that production operations would eventually resume at the plant, but this never occurred. In 1992, DOE closed its pit-manufacturing operations at Rocky Flats without establishing a replacement location. In 1995, DOE began work on its Stockpile Stewardship and Management Programmatic Environmental Impact Statement, which analyzed alternatives for future DOE nuclear weapons work, including the production of pits. In December 1996, Los Alamos was designated as the site for reestablishing the manufacturing of pits. DOE is now reestablishing its capability to produce War Reserve pits there so that pits removed from the existing stockpile for testing or other reasons can be replaced with new ones. Reestablishing the manufacturing of pits will be very challenging because DOE's current efforts face new constraints that did not exist previously. For example, engineering and physics tests were used in the past for pits produced at Rocky Flats to ensure that those pits met the required specifications. Nuclear tests were used to ensure that those pits and other components would perform as required. While engineering and physics tests will still be utilized for Los Alamos's pits, the safety and reliability of today's nuclear stockpile, including newly manufactured pits, must be maintained without the benefit of underground nuclear testing. The United States declared a moratorium on such testing in 1992. President Clinton extended this moratorium in 1996 by signing the Comprehensive Test Ban Treaty, through which the United States forwent underground testing indefinitely. In addition, to meet regulatory and environmental standards that did not exist when pits were produced at Rocky Flats, new pit-production processes are being developed at Los Alamos. DOD is responsible for implementing the U.S. nuclear deterrent strategy, which includes establishing the military requirements associated with planning for the stockpile. The Nuclear Weapons Council is responsible for preparing the annual Nuclear Weapons Stockpile Memorandum, which specifies how many warheads of each type will be in the stockpile. Those weapons types expected to be retained in the stockpile for the foreseeable future are referred to as the enduring stockpile. DOE is responsible for managing the nation's stockpile of nuclear weapons. Accordingly, DOE certifies the safety and reliability of the stockpile and determines the requirements for the number of weapons components, including pits, needed to support the stockpile. DOE has made important changes in the plans for its pit-manufacturing mission. Additionally, some specific goals associated with these plans are still evolving. In December 1996, DOE's goals for the mission were to (1) reestablish the Department's capability to produce War Reserve pits for one weapons system by fiscal year 2001 and to demonstrate the capability to produce all pit types for the enduring stockpile, (2) establish a manufacturing capacity of 10 pits per year by fiscal year 2001 and expand to a capacity of up to 50 pits per year by fiscal 2005, and (3) develop a contingency plan for the large-scale manufacturing of pits at some other DOE site or sites. 
In regard to the first goal, DOE and Los Alamos produced a pit prototype in early 1998 and believe they are on target to produce a War Reserve pit for one weapons system by fiscal year 2001. In regard to the second goal, DOE has made important changes. Most notably, DOE's capacity plans have changed from a goal of 50 pits per year in fiscal year 2005 to 20 pits per year in fiscal 2007. What the final production capacity at Los Alamos will be is uncertain. Finally, DOE's efforts to develop a contingency plan for large-scale production have been limited and when such a plan will be in place is not clear. To meet the first goal of reestablishing its capability to produce a War Reserve pit for a particular weapons system by fiscal year 2001, DOE has an ambitious schedule. This schedule is ambitious because several technical, human resource, and regulatory challenges must be overcome. Approximately 100 distinct steps or processes are utilized in fabricating a pit suitable for use in the stockpile. Some of the steps in manufacturing pits at Los Alamos will be new and were not used at Rocky Flats. Each of these manufacturing processes must be tested and approved to ensure that War Reserve quality requirements are achieved. The end result of achieving this first goal is the ability to produce pits that meet precise War Reserve specifications necessary for certification as acceptable for use in the stockpile. Skilled technicians must also be trained in the techniques associated with the pit-manufacturing processes. Currently, according to DOE and Los Alamos officials, several key areas remain understaffed. According to a Los Alamos official, the laboratory is actively seeking individuals to fill these positions; however, the number of qualified personnel who can perform this type of work and have the appropriate security clearances is limited. Finally, according to DOE and Los Alamos officials, the production of pits at Los Alamos will be taking place in a regulatory environment that is more stringent than that which existed previously at Rocky Flats. As a result, new processes are being developed, and different materials are being utilized so that the amount and types of waste can be reduced. Los Alamos achieved a major milestone related to its first goal when it produced a pit prototype on schedule in early 1998. DOE and Los Alamos officials believe they are on schedule to produce a War Reserve pit for one weapons system by fiscal year 2001. DOE plans to demonstrate the capability to produce pits for other weapons systems but does not plan to produce War Reserve pits for these systems until sometime after fiscal year 2007. Furthermore, DOE's Record of Decision stated that Los Alamos would reestablish the capability to manufacture pits for all of the weapons found in the enduring stockpile. Currently, however, according to DOE officials, DOE does not plan to reestablish the capability to produce pits for one of the weapons in the enduring stockpile until such time as the need for this type of pit becomes apparent. Once Los Alamos demonstrates the capability to produce War Reserve pits, it plans on establishing a limited manufacturing capacity. Originally, in late 1996, DOE wanted to have a manufacturing capacity of 10 pits per year by fiscal year 2001 and planned to expand this capacity to 50 pits per year by fiscal 2005. In order to achieve a 10-pits-per-year manufacturing capacity by fiscal year 2001, DOE was going to supplement existing equipment and staff in the PF-4 building at Los Alamos. 
To achieve a capacity of 50 pits per year by fiscal year 2005, DOE planned a 3-year suspension of production in PF-4 starting in fiscal year 2002. During this time, PF-4 would be reconfigured to accommodate the larger capacity. Also, some activities would be permanently moved to other buildings at Los Alamos to make room for the 50-pits-per-year production capacity. For example, a number of activities from the PF-4 facility would be transferred to the Chemistry and Metallurgy Research building. Once PF-4 was upgraded, it would be brought back on-line with a production capacity of 50 pits per year. In December 1997, DOE's new plan changed the Department's goal for implementing the limited manufacturing capacity. DOE still plans to have a 10-pits-per-year capacity by fiscal year 2001. However, DOE now plans to increase the capacity to 20 pits per year by fiscal year 2007. If DOE decides to increase production to 50 pits per year, it would be achieved sometime after fiscal year 2007. As with the original plan, in order to achieve a 50-pits-per-year capacity, space for manufacturing pits in PF-4, which is now shared with other activities, would have to be completely dedicated to the manufacturing of pits. DOE officials gave us a number of reasons for these changes. First, because the original plan required a 3-year shutdown of production in PF-4, DOE was concerned that there would not be enough pits during the shutdown to support the stockpile requirement, considering that pits would have been destructively examined under the stockpile surveillance program. Under the new plan, annual production will continue except for 3- or 4-month work stoppages during some years to allow for facility improvements and maintenance. Second, DOE was concerned that pits produced after the originally planned 3-year shutdown might need to be recertified. Third, DOE wanted to decouple the construction activities at the Chemistry and Metallurgy Research building from planned construction at PF-4 because linking construction projects at these two facilities might adversely affect the pit-manufacturing mission's schedule. DOE's 1996 plan called for developing a contingency plan to establish a large-scale (150-500 pits per year) pit-manufacturing capacity within 5 years, if a major problem were found in the stockpile. DOE has done little to pursue this goal. It has performed only a preliminary evaluation of possible sites. DOE has not developed a detailed contingency plan, selected a site, or established a time frame by which a plan should be completed. According to DOE officials, they will not pursue contingency planning for large-scale manufacturing until fiscal year 2000 or later. The purpose of the contingency plan was to lay out a framework by which DOE could establish a production capacity of 150 to 500 pits per year within a 5-year time frame. Such a capacity would be necessary if a systemwide problem were identified with pits in the stockpile. This issue may become more important in the future, as existing nuclear weapons and their pits are retained in the stockpile beyond their originally planned lifetime. Research is being conducted on the specific effects of aging on plutonium in pits. A DOE study found that Los Alamos is not an option for large-scale pit manufacturing because of space limitations that exist at PF-4. As a result, large-scale operations would most likely be established at some other DOE nuclear site(s) where space is adequate and where some of the necessary nuclear infrastructure exists.
DOE has not specified a date by which the plan will be completed, and, according to DOE officials, the contingency plan has not been a high priority within DOE for fiscal years 1998-99. According to DOE officials, they may fund approximately $100,000 for a study of manufacturing and assembly processes for large-scale manufacturing in fiscal year 1999. In addition, according to DOE officials, DOE has not pursued contingency planning for large-scale manufacturing more aggressively because the Department would like more work to be done at PF-4 prior to initiating this effort. In this regard, the officials stated that the development of a contingency plan requires more complete knowledge of the processes, tooling, and technical skills still being put in place at Los Alamos. This knowledge will serve as a template for large-scale manufacturing. DOE believes that this knowledge should be well defined by fiscal year 2000. According to information from DOE, the total cost for establishing and operating the pit-manufacturing mission under its new plan will be over $1.1 billion from fiscal year 1996 through fiscal 2007. This estimate includes funds for numerous mission elements needed to achieve DOE's goals. This estimate does not include over $490 million in costs for other activities that are not directly attributable to pit production but are needed to support a wide variety of activities, including the pit-manufacturing mission. Some key controls related to the mission are either in the formative stages of development or do not cover the mission in its entirety. DOE provided us with data reflecting the total estimated costs of its new plans and schedules. These data were developed for the first time during our audit. DOE emphasized that these costs should be treated as draft estimates instead of approved numbers. On the basis of this information, the costs for establishing and operating the pit-manufacturing mission were estimated to total over $1.1 billion from fiscal year 1996 through fiscal 2007. Table 1 shows the total estimated costs related to the various elements of the mission. At the time of our review, DOE estimated that by the end of fiscal year 1998, it would have spent $69 million on the mission. Other activities are needed to support a wide variety of efforts, including the pit-manufacturing mission but are not directly attributable to pit production. These include construction-related activities at various Los Alamos nuclear facilities. For example, one activity is the construction upgrades at the Chemistry and Metallurgy Research building. DOE and Los Alamos officials stated that the costs of these activities would have been incurred whether or not Los Alamos was selected for the pit-manufacturing mission. However, unless these activities are carried out, DOE and Los Alamos officials believe that it will be difficult for them to achieve the mission's goals. Table 2 shows the total estimated costs of these other supporting activities. The success of DOE's pit-manufacturing mission at Los Alamos requires the use of effective cost and managerial controls for ensuring that the mission's goals are achieved within cost and on time. An effective cost and managerial control system should have (1) an integrated cost and schedule control system, (2) independent cost estimates, and (3) periodic technical/management reviews. DOE and Los Alamos have taken actions to institute these cost and managerial controls related to the pit mission. 
However, some of these controls are either in the formative stages of development or are limited to addressing only certain elements of the mission instead of the entire mission. An integrated cost and schedule control system would allow managers to measure costs against stages of completion for the pit-manufacturing mission's overall plan. For example, at any given time, the plan might identify a certain percentage of the mission's resources that were to be spent within established limits. If variances from the plan were to exceed those limits, corrective actions could be taken. DOE and Los Alamos have in place, or are in the process of developing, (1) an integrated planning and scheduling system for the pit-manufacturing mission and (2) a separate financial management information system for monitoring costs. Los Alamos's planning and scheduling system for the pit-manufacturing mission will eventually track, in an integrated fashion, all key planning and scheduling milestones. This system will enable managers to have timely and integrated information regarding the mission's progress. Currently, individual managers are tracking their own progress toward important milestones but do not have integrated mission information. If their individual milestones slip, managers can take corrective actions. The integrated planning and scheduling system will enable managers to have information regarding the mission's progress as a whole. According to a Los Alamos official, the planning and scheduling system will be completed in December 1998. Los Alamos's financial management information system, through which mission-related costs can be monitored, provides managers with information that enables them to track expenditures and available funds. Eventually, this system will be interfaced with the pit-manufacturing mission's integrated planning and scheduling system. However, according to a Los Alamos official, this may take several years. Independent cost estimates are important, according to DOE, because they serve as analytical tools to validate, cross-check, or analyze estimates developed by proponents of a project. DOE's guidance states that accurate and timely cost estimates are integral to the effective and efficient management of DOE's projects and programs. According to DOE and Los Alamos officials, independent cost estimates are required by DOE's guidance for individual construction projects but are not required for other elements of the pit-manufacturing mission. DOE has two construction projects directly related to the pit mission and five others that indirectly support it. The Capability Maintenance and Improvements Project and the Transition Manufacturing and Safety Equipment project are directly related to the pit-manufacturing mission. The Nuclear Materials Storage Facility Renovation, the Chemistry and Metallurgy Research Building Upgrades Project, the Nuclear Materials Safeguards and Security Upgrades Project, the Nonnuclear Reconfiguration Project, and the Fire Water Loop Replacement Project indirectly support the mission as well as other activities at Los Alamos. DOE plans to eventually make an independent cost estimate for most of these construction projects. According to a DOE official, independent cost estimates have been completed for the Nuclear Materials Storage Facility Renovation, the Nonnuclear Reconfiguration Project, and the Fire Water Loop Project. Independent cost estimates have been performed for portions of the Chemistry and Metallurgy Research Building Upgrades Project. 
Additionally, a preliminary independent cost estimate was performed for the Capability Maintenance and Improvements Project prior to major changes in the project. DOE officials plan to complete independent cost estimates for the Nuclear Materials Safeguards and Security Upgrades Project, the revised Capability Maintenance and Improvements Project, and portions of the Transition Manufacturing and Safety Equipment project, depending upon their complexity. Because the bulk of mission-related costs are not construction costs, these other funds will not have the benefit of independent cost estimates. The mission's elements associated with these funds include activities concerning War Reserve pit-manufacturing capability, pit-manufacturing operations, and certification. Moreover, according to DOE and Los Alamos officials, no independent cost estimate has been prepared for the mission as a whole, and none is planned. According to these officials, this effort is not planned because of the complexity of the mission and because it is difficult to identify an external party with the requisite knowledge to accomplish this task. It is important to note, however, that these types of studies have been done by DOE. In fact, DOE has developed its own independent cost-estimating capability, which is separate and distinct from DOE's program offices, to perform such estimates. Technical/management reviews can be useful in identifying early problems that could result in cost overruns or delay the pit-manufacturing mission. DOE and Los Alamos have taken a number of actions to review particular cost and management issues. These include (1) a "Change Control Board" for the entire mission, (2) a technical advisory group on the management and technical issues related to the production of pits, (3) peer reviews by Lawrence Livermore National Laboratory on pit-certification issues, and (4) annual mission reviews. The Change Control Board consists of 14 DOE, Los Alamos, and Lawrence Livermore staff who worked on the development of the mission's integrated plan. The Board was formed in March 1998 to act as a reviewing body for costs and management issues related to the mission. This group will meet quarterly or more regularly, as needed, to resolve cost or schedule problems. The group's initial efforts have focused on addressing unresolved issues in the integrated plan. For example, the group has merged data from Lawrence Livermore National Laboratory and Los Alamos into the integrated plan and is updating a key document associated with the mission's master schedule. Since July 1997, Los Alamos has been using a technical advisory group composed of nuclear experts external to Los Alamos and DOE. This group, paid by Los Alamos, provides independent advice and consultation on management and technical issues related to pit manufacturing and other related construction projects. The specific issues for assessment are selected either by the group or upon the request of Los Alamos's management. According to the group's chairman, Los Alamos has historically had problems with project management, and the group's work has focused on efforts to strengthen this aspect of the pit-manufacturing mission. For example, the group has identified the need for and provided advice on the development of key planning documents. This group meets at Los Alamos on a monthly basis. Los Alamos plans specific peer reviews by Lawrence Livermore to independently assess the processes and tests related to the certification of pits. 
Los Alamos's use of these peer reviews is an effort to provide an independent reviewing authority because Los Alamos is responsible for both manufacturing the pits and approving their certification. An initial planning session for this effort is scheduled for the fall of 1998. DOE and Los Alamos officials conducted a review of the pit-manufacturing mission in September 1997. The purpose of this review was to brief DOE management on the progress and status of various elements associated with the mission. As a result of the 1997 review, DOE and Los Alamos began developing an integrated plan that brings together the various elements of the mission. According to Los Alamos officials, such reviews will be held annually. DOD is responsible for implementing the U.S. nuclear deterrent strategy. According to officials from various DOD organizations, DOE's pit-manufacturing mission is critical in supporting DOD's needs. As a result, representatives from both Departments have conferred on and continue to discuss plans for the mission. Two important issues remain unresolved. First, officials from various DOD organizations have concerns about changes in the manufacturing processes that will be used to produce pits at Los Alamos. Second, on the basis of preliminary analyses by various DOD organizations, some representatives of these organizations are not satisfied that DOE's planned capacity will meet the anticipated stockpile needs. DOE is responsible for ensuring that the stockpile is safe and reliable. The safety and reliability of the pits produced at Rocky Flats were proven through nuclear test detonations. Officials from various DOD organizations are concerned that Los Alamos's pits will be fabricated by some processes that are different from those employed previously at Rocky Flats. Furthermore, pits made with these new processes will not have the benefit of being tested in a nuclear detonation to ensure that they perform as desired. As a result, officials from various DOD organizations want assurance that Los Alamos's pits are equivalent to those produced at Rocky Flats in all engineering and physics specifications. To accomplish this, DOE and Los Alamos plan to have Lawrence Livermore conduct peer reviews. These peer reviews will focus on the certification activities related to the first type of pit to be produced. This will help verify that the necessary standards have been met. According to representatives from both Departments, they will continue to actively consult on these issues. The other unresolved issue between DOD and DOE is DOE's planned pit-manufacturing capacity. Several efforts are currently under way within various DOD organizations to determine the stockpile's needs and the associated requirements for pits. DOD has not established a date for providing DOE with this information. Nevertheless, on the basis of the preliminary analyses performed by various DOD organizations, many DOD officials believe that DOE's capacity plans will not meet their stockpile needs. According to these officials, their requirements will be higher than the production capacity planned at Los Alamos. As a result, these officials do not support DOE's stated goal of developing a contingency plan for a large-scale manufacturing capacity sometime in the future. Rather, these officials told us that they want DOE to establish a large-scale manufacturing capacity as part of its current efforts. 
However, DOD officials said that they will be unable to give detailed pit-manufacturing requirements until the lifetime of pits is specified more clearly through DOE's ongoing research on how long a pit can be expected to function after its initial manufacture. According to DOE officials, they believe that the planned capacity is sufficient to support the current needs of the nuclear weapons stockpile. Furthermore, no requirement has been established for a larger manufacturing capacity beyond that which is planned for Los Alamos. DOE officials told us that they are discussing capacity issues with DOD and are seeking to have joint agreement on the required capacity. However, no date has been established for reaching an agreement on this issue. DOE plans to spend over $1.1 billion through fiscal year 2007 to establish a 20-pits-per-year capacity. This capacity may be expanded to 50 pits per year sometime after fiscal year 2007. Various DOD organizations have performed preliminary analyses of the capacity needed to support the stockpile. These analyses indicate that neither the 20-pits-per-year capacity nor the 50-pits-per-year capacity will be sufficient to meet the needs of the stockpile. As a result, officials from organizations within DOD oppose DOE's plan for not developing a large-scale manufacturing capacity now but rather planning for it as a future contingency. Once the various DOD organizations have completed their stockpile capacity analyses, DOD can then let DOE know its position on the needs of the nuclear stockpile. DOE will then be faced with the challenge of deciding how it should respond. A decision to pursue a production capacity larger than that planned by DOE at Los Alamos will be a major undertaking. Because of the cost and critical nature of the pit-manufacturing mission, DOE needs to ensure that effective cost and managerial controls are in place and operating. DOE and Los Alamos have not fully developed some of the cost and managerial control measures that could help keep them within budget and on schedule. An integrated cost and schedule control system is not in place even though millions of dollars have been spent on the mission. Furthermore, only a small portion of the costs associated with the mission has had the benefit of independent cost estimates. Without fully developed effective cost and managerial controls, the mission could be prone to cost overruns and delays. In order for DOE to have the necessary information for making pit-production capacity decisions, we recommend that the Secretary of Defense do the following: Provide DOE with DOD's views on the pit-manufacturing capacity needed to maintain the stockpile. This should be done so that DOE can use this information as part of its reevaluation of the stockpile's long-term capacity needs. While we understand that DOD cannot yet provide detailed requirements, DOE can be provided with the findings of the preliminary analyses of various DOD organizations. In order to ensure that the pit-manufacturing mission at Los Alamos supports the nuclear stockpile in a cost-effective and timely manner, we recommend that the Secretary of Energy take the following measures: Reevaluate existing plans for the pit-manufacturing mission in light of the issues raised by DOD officials regarding the capacity planned by DOE. Expedite the development of the integrated cost and schedule control system at Los Alamos. This needs to be done as soon as possible to help ensure that the mission is achieved within cost and on time. 
Conduct independent cost estimates for the entire pit-manufacturing mission. This can be done either for the mission as a whole or for those individual mission elements that have not had independent estimates. We provided DOE and DOD with a draft of this report for review and comment. DOE concurred with all but one recommendation in the report. That recommendation was that the Secretary of Energy "establish a separate line item budget category for the pit-manufacturing mission at Los Alamos." In its comments, DOE emphasized that its current budgeting and accounting practices related to pit production are consistent with appropriation guidelines, are consistent with budgeting and accounting standards, and are responsive to the Government Performance and Results Act. DOE also stated that it plans to keep congressional staff informed of the mission's progress through quarterly updates. These updates will be initiated following the approval of the budget for fiscal year 1999. In a subsequent discussion, DOE's Laboratory Team Leader in the Office of Site Operation said that these updates will include information on the mission's cost and milestones. He noted that the cost information provided could be as detailed as congressional staff require. Our recommendation was aimed at getting DOE to identify the total estimated costs associated with the pit-manufacturing mission in a clear and comprehensive manner to the Congress. The clear identification of total estimated costs is important because the pit-manufacturing mission is critical to national security interests and represents a significant financial investment for the future. Since DOE prepared a cost estimate covering the total pit mission during our audit, a baseline has been established. We believe that DOE's planned quarterly updates will be an appropriate means of updating this cost information for the Congress. As a result, we have deleted this recommendation from our final report. DOE also provided several clarifications to the report, and the report has been revised where appropriate. DOE's comments are provided in appendix II. DOD agreed with the information presented in our draft report and provided us with technical clarifications, which we incorporated as appropriate. DOD did not agree with our recommendation that the Secretary of Defense clearly articulate DOD's views on the pit-manufacturing capacity needed to maintain the stockpile. DOD was concerned that the aging of pits was not clearly identified in our report as a driving force of pit-production requirements. DOD said that it could not give detailed pit-manufacturing requirements until the lifetime of pits is specified more clearly by DOE. We have modified our report and the recommendation to recognize that DOD believes that it cannot provide DOE with detailed pit-manufacturing capacity requirements until more is known about the aging of pits. However, we believe that there are merits in DOD's sharing of the information from the preliminary analyses of various DOD organizations with DOE. This information would be useful for DOE in its long-term planning efforts, especially those related to contingency planning. DOD's comments are included in appendix III. To address our objectives, we interviewed officials and obtained documents from DOD, DOE, Los Alamos, and the Nuclear Weapons Council. We did not independently verify the reliability of the estimated cost data that DOE provided.
According to DOE, these data represent its best estimates of future mission costs but are likely to change as the mission progresses and should not be viewed as final. Our scope and methodology are discussed in detail in appendix I. We performed our review from October 1997 through August 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Secretary of Energy; the Secretary of Defense; the Director, Office of Management and Budget; and appropriate congressional committees. We will also make copies available to others on request. To obtain information about the Department of Energy's (DOE) plans and schedules for reestablishing the manufacturing of pits, we gathered and analyzed various documents, including DOE's (1) Record of Decision for the Stockpile Stewardship and Management Programmatic Environmental Impact Statement, (2) guidance for stockpile management and the pit-manufacturing mission, and (3) the draft Integrated Plan for pit manufacturing and certification. We discussed with DOE and Los Alamos National Laboratory officials the basis for the mission's plans and schedules. These officials also discussed why changes were made to these plans and schedules in December 1997. DOE and Los Alamos officials discussed with us their progress in meeting milestones, which we compared with the established major milestones for the mission. In order to have a better understanding of the efforts taking place at Los Alamos, we also met with DOE and contractor employees at Rocky Flats who were formerly involved with the production of pits at that site. These individuals discussed the pit production issues and challenges that they faced at Rocky Flats. Cost information associated with the pit-manufacturing mission was obtained primarily from DOE's Albuquerque Operations Office. This information was compiled by DOE with the assistance of Los Alamos officials. These costs were only recently prepared by DOE and Los Alamos. According to a DOE official, this effort took several months partly because of changes in DOE's mission plans. These costs were provided to us in current-year dollars. As such, we did not adjust them to constant-year dollars. Additionally, we did not independently verify the accuracy of the cost data. These data were in draft form during our review and not considered approved by DOE. We interviewed both DOE and Los Alamos officials regarding the methodology that was used to develop the cost data. In addition, we discussed with DOE and Los Alamos officials cost and managerial controls related to the mission and reviewed pertinent documents on this subject. To understand unresolved issues between the Department of Defense (DOD) and DOE regarding the manufacturing of pits, we spoke with representatives from DOD, DOE, and Los Alamos. DOD officials with whom we spoke included representatives from the Joint Chiefs of Staff, Nuclear and Chemical and Biological Defense Programs, Army, Air Force, Navy, and Strategic Command. We also met with a representative of the Nuclear Weapons Council.
Our work was conducted in Golden, Colorado; Germantown, Maryland; Albuquerque, New Mexico; Los Alamos, New Mexico; Alexandria, Virginia; and Washington, D.C., from October 1997 through August 1998 in accordance with generally accepted government auditing standards.
Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) efforts to manufacture war reserve nuclear weapon triggers, or pits, at its Los Alamos National Laboratory, focusing on: (1) DOE's plans and schedules for reestablishing the manufacturing of pits at Los Alamos; (2) the costs associated with these efforts; and (3) unresolved issues regarding the manufacturing of pits between the Department of Defense (DOD) and DOE. GAO noted that: (1) DOE's plans for reestablishing the production of pits at Los Alamos National Laboratory have changed and are still evolving; (2) DOE expects to have only a limited capacity online by fiscal year (FY) 2007; (3) specifically, DOE plans to reestablish its capability to produce war reserve pits for one weapons system by FY 2001 and plans to have an interim capacity of 20 pits per year online by FY 2007; (4) this planned capacity differs from the goal that DOE established in FY 1996 to produce up to 50 pits per year by fiscal 2005; (5) DOE has not decided what the final production capacity at Los Alamos will be; (6) DOE has done little to develop a contingency plan for the large-scale manufacturing of pits (150-500 pits per year); (7) large-scale manufacturing would be necessary if a systemwide problem were identified with pits in the stockpile; (8) the current estimated costs for establishing and operating DOE's pit-manufacturing mission total over $1.1 billion from FY 1996 through fiscal 2007; (9) this estimate does not include over $490 million in costs for other activities that are not directly attributable to the mission but are needed to support a wide variety of defense-related activities; (10) GAO also noted that some key cost and managerial controls related to DOE's pit-manufacturing mission are either in the formative stages of development or do not cover the mission in its entirety; (11) DOD and DOE have discussed, but not resolved, important issues regarding: (a) changes in the manufacturing processes that will be used to produce pits at Los Alamos; and (b) the pit-manufacturing capacity planned by DOE; (12) officials from various DOD organizations have expressed concerns about the equivalence of Los Alamos's pits to the pits previously manufactured at Rocky Flats because some manufacturing processes will be new at Los Alamos and are different from those previously used by Rocky Flats; (13) also, officials from various DOD organizations are not satisfied that DOE's current or future capacity plans will be sufficient to meet the stockpile's needs; (14) various DOD organizations have performed preliminary analyses of the capacity needed to support the stockpile; (15) on the basis of these analyses, some of these officials believe that the stockpile's needs exceed the 20-pits-per-year capacity that DOE may establish in the future; (16) however, DOD officials said that they will be unable to give detailed pit-manufacturing requirements until the lifetime of pits is more clearly specified by DOE; and (17) DOE is currently studying this issue.
Congress established the trade advisory committee system in Section 135 of the Trade Act of 1974 as a way to institutionalize domestic input into U.S. trade negotiations from interested parties outside the federal government. This system was considered necessary because of complaints from some in the business community about their limited and ad hoc role in previous negotiations. The 1974 law created a system of committees through which such advice, along with advice from labor and consumer groups, was to be sought. The system was originally intended to provide private sector input to global trade negotiations occurring at that time (the Tokyo Round). Since then, the original legislation has been amended to expand the scope of topics on which the President is required to seek information and advice from "negotiating objectives and bargaining positions before entering into a trade agreement" to the "operation of any trade agreement, once entered into," and on other matters regarding administration of U.S. trade policy. The legislation has also been amended to include additional interests within the advisory committee structure, such as those represented by the services sector and state and local governments. Finally, the amended legislation requires the executive branch to inform the committees of "significant departures" from their advice. The Trade Act of 1974 required the President to seek information and advice from the trade advisory committees for trade agreements pursued and submitted for approval under the authority granted by the Bipartisan Trade Promotion Authority Act of 2002. The Trade Act of 1974 also required the trade advisory committees to provide a report on the trade agreements pursued under the Bipartisan Trade Promotion Authority Act of 2002 to the President, Congress, and USTR. This requirement lapsed with TPA on June 30, 2007. The trade advisory committees are subject to the requirements of the Federal Advisory Committee Act (FACA), with limited exceptions pertaining to holding public meetings and public availability of documents. One of FACA's requirements is that advisory committees be fairly balanced in terms of points of view represented and the functions the committees perform. FACA covers most federal advisory committees and includes a number of administrative requirements, such as requiring rechartering of committees upon renewal of the committee. Four agencies, led by USTR, administer the three-tiered trade advisory committee system. USTR directly administers the first tier overall policy committee, the President's Advisory Committee for Trade Policy and Negotiations (ACTPN), and three of the second tier general policy committees, the Trade Advisory Committee on Africa (TACA), the Intergovernmental Policy Advisory Committee (IGPAC), and the Trade and Environment Policy Advisory Committee (TEPAC), for which the Environmental Protection Agency also plays a supporting role. The Department of Labor co-administers the second tier Labor Advisory Committee (LAC) and the Department of Agriculture co-administers the second tier Agricultural Policy Advisory Committee (APAC). The Department of Agriculture also co-administers the third tier Agricultural Technical Advisory Committees (ATACs), while the Department of Commerce co-administers the third tier Industry Trade Advisory Committees (ITACs). Ultimately, member appointments to the committees have to be cleared by both the Secretary of the managing agency and the U.S. 
Trade Representative, as they are the appointing officials. Figure 1 illustrates the committee structure. Our 2002 survey of trade advisory committee members found high levels of satisfaction with many aspects of committee operations and effectiveness, yet more than a quarter of respondents indicated that the system had not realized its potential to contribute to U.S. trade policy. In particular, we received comments about the timeliness, quality, and accountability of consultations. For example, the law requires the executive branch to inform committees of "significant departures" from committee advice. However, many committee members reported that agency officials informed committees less than half of the time when their agencies pursued strategies that differed from committee input. As a result, we made a series of recommendations to USTR and the other agencies to improve those aspects of the consultation process. Specifically, we recommended the agencies adopt or amend guidelines and procedures to ensure that (1) advisory committee input is sought on a continual and timely basis, (2) consultations are meaningful, and (3) committee advice is considered and committees receive substantive feedback on how agencies respond to their advice. In response to those recommendations, USTR and the other agencies made a series of improvements. For example, to improve consultations between the committee and the agencies, including member input, USTR and TEPAC members established a communications taskforce in 2004. As a result of the taskforce, USTR and EPA changed the format of principals' meetings to allow more discussion between the members and senior U.S. government officials, and they increased the frequency of liaison meetings. In addition, USTR instituted a monthly conference call with the chairs of all committees, and now holds periodic plenary sessions for ATAC and ITAC members. Furthermore, the agencies created a new secure Web site to allow all cleared advisors better access to important trade documents. When we interviewed private sector advisory committee chairs again in 2007, they were generally pleased with the numerous changes made to the committee system in response to our 2002 report. In particular, they found the secure Web site very useful. Reviews of the monthly chair conference call and plenary sessions were mixed, however. Chairs told us that their out-of-town members might find the plenaries a helpful way to gain an overall perspective and to hear cabinet-level speakers to whom they would not routinely have access, whereas others found them less valuable, largely due to the perceived lack of new or detailed information. The chairs also said that USTR and the relevant executive branch agencies consulted with the committees on a fairly regular basis, although overall views on the opportunity to provide meaningful input varied. For example, we heard from committee chairs who felt the administration took consultations seriously, while other chairs felt the administration told them what had already been decided upon instead of soliciting their advice. USTR officials told us that the fact that the advice of any particular advisory committee may not be reflected in a trade agreement does not mean that the advice was not carefully considered. In 2002, we found that slow administrative procedures disrupted committee operations, and the resources devoted to committee management were out of step with required tasks. 
In several instances, for example, committees ceased to meet and thus could not provide advice, in part because the agencies had not appointed members. However, the length of time required to obtain a security clearance contributed to delays in member appointment. To address these concerns, we recommended the agencies upgrade system management; and in response, they began to grant new advisors interim security clearances so that they could actively participate in the committee while the full clearance is conducted. Despite these actions, however, trade advisory committee chairs we contacted in 2007 told us certain logistics such as delays in rechartering committees and appointment of members still made it difficult for some committees to function effectively. We found several committees had not been able to meet for periods of time, either because agencies allowed their charters to lapse or had not started the process of soliciting and appointing members soon enough to ensure committees could meet once they were rechartered. The Labor Advisory Committee, for example, did not meet for over 2 years from September 2003 until November 2005 due in part to delays in the member appointment process. These types of process delays further reduced a committee's ability to give timely, official advice before the committee was terminated, and the rechartering process had to begin again. This was particularly true in the case of the Labor Advisory Committee, which, at the time of our 2007 report, still had a 2-year charter. To address these concerns, we recommended that USTR and other agencies start the rechartering and member appointment processes with sufficient time to avoid any lapse in the ability to hold committee meetings and that they notify Congress if a committee is unable to meet for more than 3 months due to an expired charter or delay in member appointments. Furthermore, we recommended that USTR work with the Department of Labor to extend the Labor Advisory Committee's charter from 2 years to 4 years, to be in alignment with the rest of the trade advisory committee system. USTR and the other agencies have taken some steps to address these recommendations. In May 2008, for example, the Labor Advisory Committee's charter was extended to 4 years. Not enough time has passed, however, to assess whether steps taken fully address the problems associated with rechartering and member appointment, since at present all committees have current charters and members appointed. Furthermore, even though committees are now chartered and populated, some of them have not met for over 3 years, despite ongoing negotiations of the Doha Round of the World Trade Organization (WTO), including the July 2008 ministerial meeting in Geneva. For example, although the ATAC charters were renewed in May 2007 and members appointed in January 2008, the FACA database shows that no ATAC has held a meeting since fiscal year 2006. In addition, although USTR held multiple teleconferences for all first and second tier advisors in fiscal year 2008, LAC and APAC members did not participate. It is unclear, therefore, whether the administration received official advice from all trade advisory committees for the Doha negotiations. In addition to the need to improve certain committee logistics, we also found that representation of stakeholders is a key component of the trade advisory committee system that warrants consideration in any review of the system. In particular, as the U.S.
economy and trade policy have shifted, the trade advisory committee system has needed adjustments to remain in alignment with them, including both a revision of committee coverage as well as committee composition. In our 2002 report, we found that the structure and composition of the committee system had not been fully updated to reflect changes in the U.S. economy and U.S. trade policy. For example, representation of the services sector had not kept pace with its growing importance to U.S. output and trade. Certain manufacturing sectors, such as electronics, had fewer members than their sizable trade would indicate. In general, the system's committee structure was largely the same as it was in 1980, even though the focus of U.S. trade policy had shifted from border taxes (tariffs) toward other complex trade issues, such as protection of intellectual property rights and food safety requirements. As a result, the system had gaps in its coverage of industry sectors, trade issues, and stakeholders. For example, some negotiators reported that some key issues such as investment were not adequately covered. In addition, nonbusiness stakeholders such as environment and labor reported feeling marginalized because they have been selected to relatively few committees. The chemicals committee, representing what at the time was one of the leading U.S. export sectors, had been unable to meet due to litigation over whether the apparent denial of requests by environmental representatives for membership on the committee was consistent with FACA's fair balance requirements. In 2007, several committee chairs we interviewed also expressed the perception that the composition of their committees was not optimal, either favoring one type of industry or group over another or industry over nonbusiness interests. Furthermore, some members were the sole representative of a nonbusiness interest on their committee, and those we spoke with told us that although their interest was now represented, they still felt isolated within their own committee. The result was the perception that their minority perspective was not influential. At the same time, while Congress mandates that the advisory committee system is to involve representative segments of the private sector (e.g., industry, agriculture, and labor and environmental groups), adherence to these statutory requirements has been deemed non-justiciable. For example, although the Departments of Agriculture and Commerce solicit new members for their committees through Federal Register notices which stipulate members' qualifications, including that they must have expertise and knowledge of trade issues relevant to the particular committees, neither the notices nor the committee charters explained how the agencies would or have determined which representatives they placed on committees. Without reporting such an explanation, it was not transparent how agencies made decisions on member selection or met statutory representation requirements. As a result, we made a series of recommendations suggesting that USTR work with the other agencies to update the system to make it more relevant to the current U.S. economy and trade policy needs. We also suggested that they seek to better incorporate new trade issues and interests. Furthermore, we recommended they annually report publicly on how they meet statutory representation requirements, including clarifying which interest members represent and explaining how they determined which representatives they placed on committees. 
In response, USTR and the other agencies more closely aligned the system's structure and composition with the current economy and increased the system's ability to meet negotiator needs more reliably. For example, the Department of Agriculture created a new ATAC for processed foods because exports of high-value products have increased. USTR and Commerce also split the service industry into several committees to better meet negotiator needs. Furthermore, USTR and the Department of Agriculture now list which interest members represent on the public FACA database, as the Department of Commerce has been doing for years. USTR's 2009 Trade Policy Agenda and 2008 Annual Report also includes descriptions of the committees and their composition. It does not, however, explain how USTR and the agencies determined that the particular membership appointed to each committee represents a fair balance of interests in terms of the points of view represented and the committee's functions. Mr. Chairman, we appreciate the opportunity to summarize our work related to the Trade Advisory System. Based on the recommendations we have made in the areas of quality and timeliness of consultations, logistical issues, and representation of key stakeholders, we believe that USTR and other managing agencies have strengthened the Trade Advisory System. However, we support the Committee's oversight and the ongoing policy review of the system to ensure that it works smoothly and the input received from business and non-business stakeholders is sufficient, fairly considered, and representative.
This testimony provides a summary of key findings from the comprehensive report on the trade advisory system that we provided to the Congress in 2002, as well as from our more recent report in 2007 on the Congressional and private sector consultations under Trade Promotion Authority. In particular, this testimony highlights our recommendations in three key areas--committee consultations, logistics, and overall system structure--as well as the changes that have been made by the U.S. agencies since those reports were published. Our 2002 survey of trade advisory committee members found high levels of satisfaction with many aspects of committee operations and effectiveness, yet more than a quarter of respondents indicated that the system had not realized its potential to contribute to U.S. trade policy. In particular, we received comments about the timeliness, quality, and accountability of consultations. For example, the law requires the executive branch to inform committees of "significant departures" from committee advice. However, many committee members reported that agency officials informed committees less than half of the time when their agencies pursued strategies that differed from committee input. In 2002, we found that slow administrative procedures disrupted committee operations, and the resources devoted to committee management were out of step with required tasks. In several instances, for example, committees ceased to meet and thus could not provide advice, in part because the agencies had not appointed members. However, the length of time required to obtain a security clearance contributed to delays in member appointment. To address these concerns, we recommended the agencies upgrade system management; and in response, they began to grant new advisors interim security clearances so that they could actively participate in the committee while the full clearance is conducted. Despite these actions, however, trade advisory committee chairs we contacted in 2007 told us certain logistics such as delays in rechartering committees and appointment of members still made it difficult for some committees to function effectively. We found several committees had not been able to meet for periods of time, either because agencies allowed their charters to lapse or had not started the process of soliciting and appointing members soon enough to ensure committees could meet once they were rechartered. The Labor Advisory Committee, for example, did not meet for over 2 years from September 2003 until November 2005 due in part to delays in the member appointment process. These types of process delays further reduced a committee's ability to give timely, official advice before the committee was terminated, and the rechartering process had to begin again. This was particularly true in the case of the Labor Advisory Committee, which, at the time of our 2007 report, still had a 2-year charter. In addition to the need to improve certain committee logistics, we also found that representation of stakeholders is a key component of the trade advisory committee system that warrants consideration in any review of the system. In particular, as the U.S. economy and trade policy have shifted, the trade advisory committee system has needed adjustments to remain in alignment with them, including both a revision of committee coverage as well as committee composition.
Medicare beneficiaries receive a wide range of services in hospital outpatient departments, such as emergency room and clinic visits, diagnostic services such as x-rays, and surgical procedures. To receive Medicare payment, hospitals report the services they provided to a beneficiary on a claim form they submit to CMS along with their charge for each service. For Medicare payment purposes, an outpatient service consists of a primary service and packaged services, the additional services or items associated with that primary service. CMS assigns each primary service to an APC, which may include other similar primary services, and pays the hospital at the designated APC payment rate, adjusted for variation in local wages. A hospital can receive multiple APC payments for a single outpatient visit if more than one primary service is delivered during that visit. On outpatient claims, hospitals identify the primary services they provided using a Healthcare Common Procedure Coding System (HCPCS) code, while they identify packaged services by either specific HCPCS codes or revenue codes that represent general hospital departments or centers, such as "pharmacy," "observation room," or "medical social services." In addition to claims, hospitals submit annual cost reports to CMS that state their total charges and costs for the year and the individual hospital department charges and costs. As a first step in calculating the OPPS payment rate for each APC, CMS obtains hospital charge data on each outpatient service from the latest available year of outpatient claims. It calculates each hospital's cost for each service by multiplying the charge by a cost-to-charge ratio that is computed from the hospital's most recent cost report, generally on an outpatient department-specific basis. In those instances when a cost-to-charge ratio does not exist for an outpatient department in a given hospital, CMS uses one from a related outpatient department or the hospital's overall cost-to-charge ratio for outpatient department services. The cost of each primary service is then combined with the costs of the related packaged services to calculate a total cost for that primary service. On single-service claims, claims with one primary service, CMS can associate packaged services with the primary service and calculate a total cost for the service (see fig. 1). However, in the case of multiple-service claims, claims with more than one primary service, packaged services and their costs listed on the claim cannot be associated with particular primary services, as the costs of a packaged service may be associated with one or a combination of primary services (see fig. 2). For this reason, CMS excluded all multiple-service claims from rate setting prior to 2003. Beginning with the 2003 payment rates, CMS identified several methods that allowed it to convert some multiple-service claims into single-service claims, and therefore include them in its rate-setting calculations. After calculating the cost of each primary service assigned to an APC for each hospital claim, CMS arrays the costs for all claims and determines the median cost. To calculate the APC's weight relative to other APCs, CMS compares the median cost of each APC to the median cost of APC 0601, a mid-level clinic visit, which is assigned a relative weight of 1.00. For example, if the median cost of APC 0601 is $100 and the median cost of "APC A" is $50, CMS assigns APC A a relative weight of 0.50.
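To make the arithmetic concrete, the following is a minimal Python sketch of the cost and relative-weight calculations just described. The claim records, charge amounts, and cost-to-charge ratios are hypothetical, and the sketch assumes simplified single-service claims; it is not CMS's actual rate-setting software.

```python
from statistics import median

# Hypothetical single-service claims: each record carries the APC of its
# primary service, the billed charges for the primary and packaged services,
# and the department cost-to-charge ratio (CCR) from the hospital's cost report.
claims = [
    {"apc": "A",    "primary_charge": 80.0,  "packaged_charges": [20.0], "ccr": 0.50},
    {"apc": "A",    "primary_charge": 90.0,  "packaged_charges": [30.0], "ccr": 0.45},
    {"apc": "0601", "primary_charge": 150.0, "packaged_charges": [50.0], "ccr": 0.50},
    {"apc": "0601", "primary_charge": 160.0, "packaged_charges": [40.0], "ccr": 0.50},
]

def service_cost(claim):
    # Convert charges to an estimated cost with the department CCR, combining
    # the primary service's charge with the charges of its packaged services.
    total_charge = claim["primary_charge"] + sum(claim["packaged_charges"])
    return total_charge * claim["ccr"]

def median_cost_by_apc(claims):
    costs = {}
    for claim in claims:
        costs.setdefault(claim["apc"], []).append(service_cost(claim))
    return {apc: median(values) for apc, values in costs.items()}

def relative_weights(claims, reference_apc="0601"):
    # An APC's weight is its median cost divided by the median cost of the
    # reference APC (the mid-level clinic visit), which is defined as 1.00.
    medians = median_cost_by_apc(claims)
    return {apc: cost / medians[reference_apc] for apc, cost in medians.items()}

print(relative_weights(claims))  # {'A': 0.52, '0601': 1.0}
```

As in the $100/$50 example above, the reference APC receives a weight of 1.00 and every other APC's weight is proportional to its median cost.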
To obtain a payment rate for each APC, CMS multiplies the relative weight by a factor that converts it to a dollar amount. In addition, CMS annually reviews and revises the services assigned to a particular APC and uses the new APC assignments and the charges from the latest available outpatient hospital claims to recalibrate the relative weights, and therefore the payment rates. New drugs and devices are eligible to receive temporary pass-through payments for 2 to 3 years, depending on when each drug and device's eligibility began. January 1, 2003 was the first time that pass-through eligibility expired for any drugs or devices. Once pass-through eligibility for these items expires, CMS determines whether they will be considered a primary service and assigned to a separate APC or a packaged service and included with the primary services with which they are associated on a claim. On January 1, 2003, 236 drugs and on January 1, 2004, 7 drugs expired from pass-through eligibility. For those drugs expiring in 2003, CMS designated any drug with a median cost exceeding $150 (115 drugs) as a primary service, and each was assigned to its own, separately paid APC. The remaining drugs (121 drugs), those with a median cost less than $150, were designated as packaged services, that is, their costs were included with the costs of the primary service they were associated with on the claim. CMS stated that many of these latter drugs were likely present on claims with a primary service of drug administration and were therefore packaged with the services assigned to the six drug administration APCs, that is, the three chemotherapy administration and three drug injection and infusion APCs. For these packaged drugs, although hospitals had previously received two payments, one for the administration of the drug or other primary service and an additional pass-through payment for the drug itself, when eligibility expires, hospitals receive only one payment for both the administration or other primary service and the packaged drug. In 2004, all 7 drugs for which pass-through eligibility expired were designated as primary services and assigned to their own, separately paid APCs. On January 1, 2003, the devices in 95 device categories, and on January 1, 2004, the devices in 2 device categories, expired from pass-through eligibility; in both years, the devices in all device categories were designated as packaged services and their costs were included with the costs of the primary service they were associated with on the claim. Although hospitals had previously received two payments, one for the procedure associated with the device and an additional pass-through payment for the device, hospitals then received only one payment for both the procedure and its associated device. The OPPS payment rates of former pass-through, separately paid drugs were generally lower than the pass-through payment rate, but the payment rates of former pass-through drugs and devices that were packaged cannot be evaluated, as these items are not assigned a distinct payment rate. In 2003, the payment rates for the 115 of 236 former pass-through drugs that were designated as separately paid drugs almost universally decreased from the pass-through payment rates. In 2004, all 7 former pass-through drugs were designated as separately paid drugs, and the payment rates for all 7 decreased.
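The disposition rule CMS applied to the drugs whose pass-through eligibility expired in 2003 reduces to a median-cost threshold, restated below as a brief Python sketch. The function name and example costs are hypothetical, and the treatment of a drug whose median cost is exactly $150 is an assumption, since the text describes only costs above and below that amount.

```python
SEPARATE_APC_THRESHOLD = 150.0  # 2003 median-cost cutoff described above

def disposition_2003(drug_median_cost):
    """Classify an expired pass-through drug under the 2003 rule: above the
    threshold it becomes a primary service paid through its own APC; otherwise
    its cost is packaged with the associated primary service on the claim."""
    if drug_median_cost > SEPARATE_APC_THRESHOLD:
        return "separately paid (assigned its own APC)"
    return "packaged with the associated primary service"

# Hypothetical examples
print(disposition_2003(420.0))  # high-cost drug -> separately paid
print(disposition_2003(35.0))   # low-cost drug -> packaged
```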
For the remaining 121 pass-through drugs and the devices in 95 pass-through device categories in 2003, and the devices in 2 device categories in 2004, all of which were packaged, we cannot evaluate the payment rate changes because individual payment rates were not assigned for these items when they expired from pass-through eligibility. In 2003, about half of all drugs for which pass-through eligibility expired (115 of 236) were assigned to their own APC and paid separately. For these drugs, we determined that over 90 percent had payment rates lower than 95 percent of AWP, the pass-through payment rate; the median payment rate was 55 percent of AWP. Individual payment rates were often considerably lower than AWP, but decreases varied substantially. For example, 1 drug had a payment rate of about 7 percent of AWP, while another had a payment rate of about 94 percent of AWP. However, 10 drugs had a payment rate of more than 100 percent of AWP. In addition, payment as a percentage of AWP varied by drug source. The majority of the 113 separately paid drugs that we analyzed were sole-source (70 percent), followed by multi-source (19 percent), and generic (10 percent). Generic drugs, which were paid the highest percentage of AWP of the three categories, had a median payment rate of 74 percent of AWP, multi-source drugs had a median of 56 percent of AWP, and sole-source drugs had a median of 53 percent of AWP. In 2004, all seven drugs for which pass-through eligibility expired were assigned to separate APCs. The individual payment rate of each drug was lower than the pass-through rate of 95 percent of AWP, with a median payment rate of 69 percent of AWP. All drugs were sole-source. Although the decreases in payments for these drugs were often substantial and varied greatly across individual drugs, some level of decrease is expected when pass-through eligibility expires and payments become based on hospital costs instead of AWP, which often exceeds providers' acquisition costs. In 2001, we reported that certain drugs purchased by individual physicians were widely available at costs from 66 to 87 percent of AWP. In 2003, the costs of 121 former pass-through drugs and devices in 95 former pass-through device categories were packaged. Because CMS combines the costs of these items with the costs of the primary services with which they are associated on each claim, a specific payment rate for each of these drugs and devices does not exist. However, to indirectly assess the payment rates of packaged drugs and devices, we reviewed the payment rates of the APCs with which CMS stated they were likely packaged. CMS stated that, in 2003, former pass-through drug costs were most likely packaged with the six drug administration APCs. The payment rates for five of the six APCs decreased in 2003, when the costs of packaged former pass-through drugs were included, compared to 2002, when the costs of these drugs were not considered in the rate-setting calculations (see table 1). We are unable to determine why the payment rates of these APCs decreased because fluctuations in costs for any of the primary or packaged services in these APCs, in addition to the costs of the packaged drugs, could have affected the payment rates. However, we would have expected that combining the costs of the packaged former pass-through drugs, each with a median cost of up to $150, with the costs of the primary services in these APCs would have increased the 2003 payment rates for more of these APCs, as the payment rates of more than half of these APCs are less than $150.
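As a minimal illustration of the comparison made above for separately paid drugs, the snippet below expresses each drug's OPPS payment rate as a percentage of its AWP and takes the median across drugs. The drug names and dollar figures are invented, chosen only so the results echo the roughly 7 percent, 94 percent, and 55 percent figures cited in the text.

```python
# Hypothetical drugs: (2003 OPPS payment rate, average wholesale price).
from statistics import median

drugs = {
    "drug_a": (42.00, 600.00),
    "drug_b": (310.00, 330.00),
    "drug_c": (55.00, 100.00),
}

pct_of_awp = {name: 100 * rate / awp for name, (rate, awp) in drugs.items()}
for name, pct in pct_of_awp.items():
    print(f"{name}: {pct:.0f} percent of AWP")   # about 7, 94, and 55 percent
print(f"median: {median(pct_of_awp.values()):.0f} percent of AWP")
# The pass-through payment rate, by comparison, was 95 percent of AWP.
```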
To indirectly assess the payment rates of the devices in the 95 device categories expiring from pass-through eligibility in 2003, we reviewed APCs for which CMS determined that device costs made up at least 1 percent of the APC's total cost. We found that the payment rates of these APCs varied substantially between 2002 and 2003, when the former pass-through device costs likely were included. For example, the payment rate of APC 0688 (Revision/Removal of Neurostimulator Pulse Generator Receiver) decreased by 48 percent, while the payment rate of APC 0226 (Implantation of Drug Infusion Reservoir) increased by 94 percent. However, we cannot attribute these fluctuations solely to the packaging of pass-through devices, because changes between 2002 and 2003 in the costs of the primary services and other packaged services assigned to the APCs also could have affected the payment rates. In 2004, the devices in two device categories expired from pass-through eligibility. The devices in one category were associated with services in one APC--APC 0674 (Prostate Cryoablation). The payment rate for this APC almost doubled. We were unable to examine the change in payment for the APC or APCs associated with the devices in the other expired pass-through device category because CMS did not identify the APC or APCs into which the costs of the devices in this device category were packaged. No type of hospital provided a disproportionate number of Medicare outpatient services associated with certain drugs and devices, as these services, as a percentage of total Medicare outpatient services, varied little among hospitals with differences in characteristics such as the presence of an outpatient cancer center, teaching status, urban or rural location, or outpatient service volume. In 2001, outpatient drugs were most often associated with APCs for chemotherapy administration services, and devices in pass-through device categories were most often associated with APCs for cardiac services. We found that chemotherapy administration and cardiac services composed only a small proportion of total Medicare outpatient services for all hospitals (see table 2). In addition, these proportions varied little among different types of hospitals. The OPPS rate-setting methodology used by CMS may result in APC payment rates for drugs, devices, and other outpatient services that do not uniformly reflect hospitals' costs. Two areas of CMS's methodology are particularly problematic. First, the claims that CMS uses to calculate hospitals' costs and set payment rates may not be a representative sample of hospital claims, as CMS excluded many multiple-service claims when calculating the cost of OPPS services, including those with drugs and devices. The data CMS has available do not allow for the determination of whether excluding many multiple-service claims has an effect on OPPS payment rates. However, if the types or costs of services on excluded claims differ from the types or costs of services on included claims, the payment rates of some or all APCs may not uniformly reflect hospitals' costs of providing those services. Second, when calculating hospitals' costs, CMS assumes that, in setting charges within a specific department, a hospital marks up the cost of each service by the same percentage. However, not all hospitals use this methodology, and charge-setting methodologies for drugs, devices, and other outpatient services vary greatly across hospitals and across departments within a hospital.
CMS's methodology does not recognize hospitals' variability in setting charges, and, therefore, the costs of services used to set payment rates may be under- or overestimated. The claims CMS uses to calculate hospitals' costs and set payment rates may not be a representative sample of hospital claims. When calculating the cost of all OPPS services, including drugs and devices, to set payment rates, CMS excluded over 40 percent of all multiple-service claims because CMS could not associate particular packaged services with a specific primary service on these claims. Drug and device industry representatives we spoke with raised concerns that certain drugs and devices are often billed on multiple-service claims that are largely excluded from rate setting. For example, they stated that chemotherapy administration and the drugs themselves are typically billed on a 30-day cycle; therefore, one claim likely includes chemotherapy administration and other primary and packaged services and is likely excluded from CMS's rate-setting calculations. Device industry representatives we spoke with also asserted that multiple-service claims represent more complex, and therefore potentially costlier, outpatient visits and that excluding them from the rate-setting calculations underestimates the actual cost of a service. Because of the structure of the outpatient claim, the data CMS has available do not allow for the comparison of single-service claims and multiple-service claims to determine whether excluding many multiple-service claims has an effect on OPPS payment rates. It is possible that excluding many multiple-service claims has little or no effect on OPPS payment rates. However, if the types or costs of services on excluded claims differ from the types or costs of services on included claims, the payment rates of some or all APCs may not uniformly reflect hospitals' costs of performing these services. The costs of drugs, devices, and other outpatient services that CMS calculates from hospital charges and uses to set payment rates may not uniformly approximate hospitals' costs. CMS multiplies charges by hospital-specific cost-to-charge ratios to calculate hospitals' costs, which decreases the charges by a constant percentage. This methodology is based on the assumption that each hospital marks up its costs by a uniform percentage within each department to set each service's charge. However, we found that not all hospitals use this methodology to establish their charges, and that drug, device, and general charge-setting methodologies vary greatly among hospitals and even among departments within the same hospitals. We received information from 113 hospitals, although not all hospitals responded to each question. Of the 92 hospitals responding, 40 reported that they mark up all drug costs by a uniform percentage to establish charges, but 33 reported that they mark up low-cost drugs by a higher percentage and high-cost drugs by a lower percentage. Of 85 hospitals responding, 39 reported that they mark up all device costs using a uniform percentage, but 39 reported that they mark up low-cost devices using a higher percentage and high-cost devices using a lower percentage. In addition, 19 hospitals reported using other methods to set drug charges and 7 reported doing so for devices, such as a lower percentage markup for low-cost drugs and devices than for high-cost drugs and devices. (See appendix II for a more detailed description of hospital charge-setting methodologies.)
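A small worked example, with invented costs and charges, may help show why a single department-wide cost-to-charge ratio can misstate item-level costs when markups are not uniform: a low-cost item with a high percentage markup is overestimated, and a high-cost item with a low percentage markup is underestimated.

```python
# Hypothetical department with two items and non-uniform markups.
items = {
    # name: (actual cost, charge the hospital sets)
    "low_cost_drug":    (10.0, 40.0),      # 300 percent markup
    "high_cost_device": (1000.0, 1250.0),  # 25 percent markup
}

total_cost = sum(cost for cost, _ in items.values())
total_charge = sum(charge for _, charge in items.values())
department_ccr = total_cost / total_charge  # ratio derived from the cost report

for name, (cost, charge) in items.items():
    estimated_cost = charge * department_ccr  # how the charge is reduced to a cost
    print(f"{name}: actual ${cost:,.2f}, estimated ${estimated_cost:,.2f}")
# low_cost_drug:    actual $10.00,    estimated $31.32  (overestimated)
# high_cost_device: actual $1,000.00, estimated $978.68 (underestimated)
```

Because the rate-setting medians are built from these estimated costs, systematic over- or underestimation at the item level can carry through to the APC weights.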
Because CMS uses the same rate-setting methodology to determine drug and device payment rates as it uses for all other OPPS services, we also asked hospitals about more general charge-setting practices and found that they varied as well. To set base charges for clinic visits, hospitals reported using a wide variety of prices and methods, including cost, market comparisons, and the rates Medicare pays for outpatient services as well as payment rates for other benefit categories. To mark up clinic visits, 29 of the 45 hospitals responding used a uniform percentage increase; the remaining 16 hospitals reported using a variety of other methods, including using a higher percentage markup for low-cost visits than for high-cost visits. In addition to variation in charge-setting methodologies among hospitals, variation also can exist within an individual hospital. Hospital consultants told us that a single item can be assigned different charges if it is provided through more than one department within the same hospital. All 58 hospitals responding reported that they update their charges for inflation; 40 reported they did so annually, 12 did so at other times, and 6 did so both annually and at other times. Of the 58 hospitals that reported updating their charges for inflation, 25 reported that they apply a uniform, across-the-board percentage increase to all their charges, and 4 hospitals reported using both a uniform percentage and another type of increase. The remaining 29 hospitals reported using another method, such as applying an increase only to selected departments within the hospital. In addition, 33 of the 57 hospitals reported that they excluded some charges from these updates. The type of charges they excluded varied widely, but included drug and laboratory charges. The variation in methods hospitals use to update their charges reduces the likelihood that charges will uniformly reflect costs. CMS's rate-setting methodology may result in OPPS payment rates that do not uniformly reflect hospitals' costs of providing services. We identified two areas of this methodology that are of particular concern because not enough data are currently available to assess their impact. First, CMS excludes many multiple-service claims from its rate-setting calculations. To the extent that the types and costs of services on these claims are different from services on the claims included in the analysis, OPPS payment rates may not reflect hospitals' costs. The current structure of the outpatient claims does not allow for an analysis to determine the effect of these exclusions. Second, in its rate-setting calculations, CMS assumes that each hospital uses a uniform markup percentage to set its charges within each department, although we found that hospitals use a variety of markup methodologies. Therefore, CMS's application of a constant cost-to-charge ratio may not result in an accurate calculation. We recommend that the Administrator of CMS take the following three actions. First, the Administrator should gather the necessary data and perform an analysis that compares the types and costs of services on single-service claims to those on multiple-service claims. Second, the Administrator should analyze the effect that the variation in hospital charge-setting practices has on the OPPS rate-setting methodology.
Third, the Administrator should, in the context of the first two recommendations, analyze whether the OPPS rate-setting methodology results in payment rates that uniformly reflect hospitals' costs of the outpatient services they provide to Medicare beneficiaries, and, if it does not, make appropriate changes in that methodology. We received written comments on a draft of this report from CMS (see app. III). We also received oral comments from external reviewers representing seven industry organizations. They included the Advanced Medical Technology Association (AdvaMed), which represents manufacturers of medical devices, diagnostic products, and medical information systems; the American Hospital Association (AHA); the Association of American Medical Colleges (AAMC), which represents medical schools and teaching hospitals; the Association of Community Cancer Centers (ACCC); the Biotechnology Industry Organization (BIO), which represents biotechnology companies and academic institutions conducting biotechnology research; the Federation of American Hospitals (FAH), which represents for-profit hospitals; and the Pharmaceutical Research and Manufacturers of America (PhRMA). In commenting on a draft of this report, CMS stated that it has continued to review and refine its OPPS data collection and analysis. In responding to our recommendation that CMS gather the necessary data and perform an analysis comparing the types and costs of services on single-service claims to those on multiple-service claims, CMS stated that it is searching for ways to use more data from multiple-service claims, and it has made efforts in recent rate-setting analyses to include data from more of these claims. We noted these efforts in the draft report. CMS noted that there are continuing challenges and costs, to both the federal government and hospitals, to expanding its efforts in this area. In its comments, CMS suggested that an analysis could be done using an algorithm to allocate charges among multiple-service claims, but noted that such an approach could create further distortions in the relative weights. Our recommendation to CMS, however, is that the agency should gather additional data on the relative costs of services on single and multiple-service claims, rather than continuing to analyze existing data. In response to our recommendation that CMS analyze the effect of hospital charge-setting practices on the OPPS rate-setting methodology, CMS stated that we should recognize that its rate-setting methodology that converts hospital charges to costs using a cost-to-charge ratio does so at the level of an individual hospital department. The draft report noted the fact that CMS generally calculates cost-to-charge ratios on a department-specific basis; however, we have revised the report to highlight that information throughout. CMS also said that the application of cost-to-charge ratios to charges of a hospital has long been the recognized method of establishing reasonable costs for hospital services and was an important component of the cost-based reimbursement system that was used by Medicare to pay for hospital outpatient services before OPPS was implemented. While we agree that it was an important component of the prior payment system, we believe the implementation of the current payment system has changed the relevance of applying cost-to-charge ratios to determine hospitals' costs. 
OPPS, rather than reimbursing individual hospitals on the basis of their costs of providing outpatient services, uses costs from individual hospitals to construct a prospective payment system that sets rates for individual services that apply to all hospitals. Finally, CMS stated that the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 specified that cost-to-charge ratios would be used to set payment amounts for brachytherapy sources; however, a discussion of brachytherapy payment is outside of the scope of this report. In response to our recommendation that CMS analyze whether the OPPS rate-setting methodology results in payment rates that uniformly reflect hospitals' costs of the services they provide to Medicare beneficiaries and make any appropriate changes in the methodology, CMS stated that it will consider our recommendations as it continues to assess and refine the rate-setting methodology. CMS said that it believes it has made great strides on this issue and is continuing to pursue the analyses necessary to create means by which all claims can be used to set the OPPS relative payment weights and rates. CMS also made technical comments, which we incorporated where appropriate. Industry representatives generally agreed with the findings, conclusions, and recommendations in the draft report. Comments on specific portions of the draft report centered on three areas: payment rates of former pass-through drugs and devices, provision of services associated with drugs and devices, and CMS's rate-setting methodology. Several industry representatives commented on our analysis of Medicare payment for former pass-through drugs and devices. AHA stated that although payment rates for drugs may have decreased when they expired from pass-through status, they are now more consistent, relative to costs, with the payment rates for other OPPS services. PhRMA agreed with our finding that the payment rates for former pass-through drugs and devices that are packaged cannot be evaluated and suggested that we recommend that CMS specifically address this problem. Industry representatives commented on our analysis of the provision of services associated with drugs and devices among different types of hospitals. ACCC agreed with the percentages of Medicare outpatient services related to chemotherapy administration and cardiac services in the draft report; however, it stated that it believed that these percentages demonstrated that large hospitals provided a disproportionate share of chemotherapy administration. ACCC and AAMC stated that these percentages also demonstrated that major teaching hospitals provided a disproportionate share of chemotherapy administration services. In addition, both groups suggested that we perform other analyses by type of hospital, such as the proportion of total payments, proportion of total services excluding clinic services, or absolute number of services for which chemotherapy administration and cardiac services accounted. Many of the reviewers addressed our finding that CMS's rate-setting methodology may result in OPPS payment rates that do not uniformly reflect hospitals' costs. Representatives from AAMC, ACCC, AdvaMed, BIO, and PhRMA agreed with our conclusion that CMS may not be using a representative sample of claims to set payment rates and that CMS's rate-setting methodology does not account for variation in hospital charge-setting practices.
Several of these representatives suggested we analyze and discuss other factors that could further skew CMS's calculation of hospital costs, such as its use of incorrect or incomplete claims in rate setting. Regarding the suggestion that we specifically recommend that CMS address the issue that the payment rates for former pass-through drugs that are packaged and former pass-through devices cannot be evaluated, we believe that our more general recommendation allows the agency the flexibility to determine the most appropriate analyses for examining the rate-setting methodology. With respect to the comment that the percentages of Medicare outpatient services accounted for by chemotherapy administration demonstrate that certain types of hospitals provide a disproportionate share of these services, we disagree. As noted in the draft report, we found that these percentages differ by type of hospital, but the differences are not substantial, as all types of hospitals provided a relatively small proportion of these services. No type of hospital provided a disproportionately large number of these services. We analyzed the proportion of services, rather than payments as industry representatives suggested, because we believe that is the better analysis for determining whether a certain type of hospital provides a disproportionate share of these services. We did not analyze the proportion of total services excluding clinic services or the absolute number of these services, as we do not believe such analyses would accurately and comparably reflect potential differences among hospitals for all the outpatient services they perform. The industry representatives also made technical comments, which we incorporated where appropriate. We are sending a copy of this report to the Administrator of CMS. The report is available at no charge on GAO's Web site at http://www.gao.gov. We will also make copies available to others on request. If you or your staff have any questions, please call me at (202) 512-7119. Another contact and key contributors to this report appear in appendix IV. We analyzed Medicare claims data used by the Centers for Medicare & Medicaid Services (CMS) to set the 2003 outpatient prospective payment system (OPPS) payment rates. In addition, we analyzed drug average wholesale prices (AWPs), drug sources (sole-source, multi-source, or generic), and OPPS payment rates obtained from CMS. We interviewed officials at CMS and representatives from the American Hospital Association, Association of American Medical Colleges, Association of Community Cancer Centers (ACCC), Federation of American Hospitals, Greater New York Hospital Association, as well as from one large hospital system, one large hospital alliance, and five individual hospitals. In addition, we spoke with representatives from the Advanced Medical Technology Association, Biotechnology Industry Organization, California Healthcare Institute, Pharmaceutical Research and Manufacturers of America, as well as from seven drug manufacturers and three device manufacturers. We also spoke with consultants who advise hospitals on setting their charges. To compare payment for drugs to previous pass-through payments, we relied on information provided by CMS on drug sources and 2003 and 2004 drug payment rates, and on CMS's calculations of the AWPs for these drugs, which we supplemented with our own calculations.
From CMS, we obtained the drug source and the payment rate for the 115 drugs and the 7 drugs whose pass-through eligibility expired as of January 1, 2003 and January 1, 2004, respectively, that were assigned to separate ambulatory payment classification (APC) groups. We used Medicare's January 2003 and January 2004 Single Drug Pricer files to determine the 2003 and 2004 AWPs, respectively, for most of the drugs. For the 37 drugs that were not included in the 2003 Single Drug Pricer file, we used the 2002 Drug Topics Red Book, published by Thomson Medical Economics, to calculate their AWPs. For the 2 drugs that were not in the 2004 Single Drug Pricer file, we used the 2003 Drug Topics Red Book, published by Thomson PDR, to calculate their AWPs. We calculated payment rates as a percentage of AWP for all drugs in 2003 and 2004. From our 2003 analysis, we excluded 1 multi-source drug for which we calculated an AWP from the 2002 Drug Topics Red Book that was inconsistent with the 2002 AWP CMS provided to us and another multi-source drug with an AWP of $0.34, but a payment rate of almost 29,000 percent of that amount. To determine whether a particular type or types of hospitals provide a disproportionate number of outpatient services associated with drugs and devices, we used the outpatient claims file that CMS used to calculate the 2003 OPPS payment rates. To perform our own data reliability check of this file, we examined selected services to determine the reasonableness of their frequency in the data set, given the population of the beneficiaries receiving services and the setting in which they are delivered. We determined the data were reasonable for our purposes. Using the claims, we determined which outpatient services were most often associated with drugs and devices and found that drugs were most often associated with chemotherapy administration services and devices were most often associated with cardiac services. Then, also using the claims, we compared proportions of chemotherapy administration and cardiac services for all hospitals, as well as for cancer center and noncancer center hospitals, major teaching and other hospitals, urban and rural hospitals, and hospitals with different outpatient service volumes. We included only those hospitals identified in CMS's 2003 OPPS impact file, a data file CMS constructs to analyze projected effects of policy changes on various hospital groups, such as urban and rural hospitals. We excluded hospitals with fewer than 1,100 total outpatient services, or approximately 3 outpatient services per day, as we believe such hospitals are not representative of most hospitals with outpatient departments. We defined cancer center hospitals as those hospitals that were members of ACCC as of February 28, 2003, the latest data available when we performed this analysis. We obtained the membership list from the ACCC. Using the September 2002 Medicare Provider of Services file and information obtained directly from the ACCC, we determined the Medicare provider numbers of ACCC members to identify claims billed by these hospitals. We defined major teaching hospitals as those hospitals having an intern/resident-to-bed ratio of 0.25 or more. We defined the urban or rural location of a hospital based on the urban/rural location indicator in the Medicare hospital OPPS impact file from calendar year 2003. We defined volume based on the number of services a hospital provided, also as indicated in the impact file. 
Small volume hospitals were those with fewer than 11,000 services, medium volume hospitals were those with at least 11,000 services but fewer than 43,000 services, and large volume hospitals were those with at least 43,000 services. We interviewed representatives from hospitals, hospital associations, and drug and device manufacturers and the associations that represent them to obtain information about hospital charging practices. We received information on charge-setting practices from 5 hospitals whose officials we interviewed. We indirectly received information from 50 other hospitals through association and industry representatives with whom we spoke. Finally, we contacted seven state hospital associations in geographically diverse areas not well represented in our previous sample to identify their members' charging practices. Some hospitals responded directly to us and others responded to their state association, which forwarded the responses to us. We received responses from 58 hospitals. The 113 hospitals from which we received information are not a statistically representative sample of all hospitals. We conducted our work from March 2003 through August 2004 in accordance with generally accepted government auditing standards. We received information from 113 hospitals, although not all hospitals responded to each question. Hospitals reported using a variety of methods to set the base charges for their clinic visit services (see table 3). To set the base charges for drugs, 25 of 57 hospitals responding reported that they used acquisition cost, 30 used the drug's average wholesale price (AWP), and 2 used a combination of acquisition cost and AWP. To set the base charges for devices, 55 of 57 hospitals responding reported that they used acquisition cost. After setting base charges, 29 of 45 hospitals responding reported that they marked up all of their clinic visit services by the same percentage increase, although they reported using a variety of other methods as well. To mark up base charges for drugs and devices, most hospitals responding used either the same percentage for all drugs and for all devices, or used a graduated percentage markup, marking up low-cost items by a higher percentage (see table 4). In addition, 24 of the 57 hospitals responding reported that they include nonproduct costs as a portion of their drug charges, and 25 of 57 responding reported that they include nonproduct costs as a portion of their device charges. The most common nonproduct costs included were administrative and overhead costs. Of the 24 including nonproduct costs in drug charges, 12 reported that they do so by adding an additional percentage of the drug acquisition cost to the drug charge. Of the 25 including nonproduct costs in device charges, 16 reported that they do so by adding an additional percentage of the device acquisition cost to the device charge. However, the amount of the nonproduct costs as a percentage of the charges varied widely among hospitals. Of the 24 hospitals including nonproduct costs in drug charges, 16 reported that the amount varied by the route of administration for the drug, such as intravenous or intramuscular administration. Of the 58 hospitals responding, all reported that they update their charges for inflation; 40 reported they did so annually, 12 did so at other times, and 6 did so both annually and at other times. While many used a standard across-the-board percentage increase to update their charges, the majority used other methods. 
In addition, 33 of the 57 hospitals responding reported that they exclude certain charges from these updates. The types of services whose charges they excluded, such as drug, laboratory, and room charges, varied widely. Finally, 49 of 58 hospitals responding reported that they periodically review all their charges. Beth Cameron Feldpush, Joanna L. Hiatt, Maria Martino, and Paul M. Thomas made major contributions to this report.
Under the Medicare hospital outpatient prospective payment system (OPPS), hospitals receive a temporary additional payment for certain new drugs and devices while data on their costs are collected. In 2003, these payments expired for the first time for many drugs and devices. To incorporate these items into OPPS, the Centers for Medicare & Medicaid Services (CMS) used its rate-setting methodology that calculates costs from charges reported on claims by hospitals. At that time, some drug and device industry representatives noted that payment rates for many of these items decreased and were concerned that hospitals may limit beneficiary access to these items if they could not recover their costs. GAO was asked to examine whether the OPPS rate-setting methodology results in payment rates that uniformly reflect hospitals' costs for providing drugs and devices, and other outpatient services, and if it does not, to identify specific factors of the methodology that are problematic. The rate-setting methodology used by CMS may result in OPPS payment rates for drugs, devices, and other services that do not uniformly reflect hospitals' costs of providing those services. Two areas of the methodology are particularly problematic. The hospital claims for outpatient services that CMS uses to calculate hospitals' costs and set payment rates may not be a representative sample of all hospital outpatient claims. For Medicare payment purposes, an outpatient service consists of a primary service and the additional services or items associated with the primary service, referred to as packaged services. CMS has excluded over 40 percent of multiple-service claims, claims that include more than one primary service along with packaged services, when calculating the cost of all OPPS services, including those with drugs and devices. It excludes these multiple-service claims because, when more than one primary service is reported on a claim, CMS cannot associate each packaged service with a specific primary service. Therefore, the agency cannot calculate a total cost for each primary service on that claim, which it would use to set payment rates. The data CMS has available do not allow for a determination of whether excluding many multiple-service claims has an effect on OPPS payment rates. However, if the types or costs of services on excluded claims differ from those on included claims, the payment rates of some or all services may not uniformly reflect hospitals' actual costs of providing those services. In addition, in calculating hospitals' costs, CMS assumes that, in setting charges within a specific department, a hospital marks up the cost of each service by the same percentage. However, based on information from 113 hospitals, GAO found that not all hospitals use this methodology: charge-setting methodologies for drugs, devices, and other outpatient services vary greatly across hospitals and across departments within a hospital. CMS's methodology does not recognize hospitals' variability in setting charges, and therefore, the costs of services used to set payment rates may be under- or overestimated.
The Park Service is the caretaker of many of the nation's most precious natural and cultural resources. Today, more than 130 years after the first national park was created, the National Park System has grown to include 390 units covering over 84 million acres. These units include a diverse mix of sites--now in more than 20 different categories. The Park Service's mission is to preserve unimpaired the natural and cultural resources of the National Park System for the enjoyment of this and future generations. Its objectives include providing for the use of the park units by supplying appropriate visitor services and infrastructure (e.g., roads and facilities) to support these services. In addition, the Park Service protects its natural and cultural resources (e.g., preserving wildlife habitat and Native American sites) so that they will be unimpaired for the enjoyment of future generations. The Park Service receives its main source of funds to operate park units through appropriations in the Operation of the National Park System (ONPS) account. The Park Service chooses to allocate funds to its park units in two categories--one for daily operations, and another for specific, non-recurring projects. Daily operations allocations for individual park units are built on each park unit's allocation for the prior year. Park units receive an increased allocation for required pay increases and may request specific increases for new or higher levels of ongoing operating responsibilities, such as adding law enforcement rangers for increased homeland security protection. As is true for other government operations, the cost of operating park units will increase each year due to required pay increases, the rising costs of benefits for federal employees, and rising overhead expenses such as utilities. The Park Service may provide additional allocations for daily operations to cover all or part of these cost increases. If the continuation of operations at the previous year's level would require more funds than are available, park units must adjust by identifying efficiencies within the park unit, using other authorized funding sources such as fees or donations to fund the activity, or reducing services. Upon receiving their allocations for daily operations each year, park unit managers exercise a great deal of discretion in setting operational priorities. Generally, 80 percent or more of each park unit's allocation for daily operations is used to pay the salaries and benefits of permanent employees (personnel costs). Park units use the remainder of their allocations for daily operations for overhead expenses such as utilities, supplies, and training, among other things. In addition to daily operations funding, the Park Service also allocates project-related funding to park units for specific purposes to support its mission. For example, activities completed with Cyclic Maintenance and Repair and Rehabilitation funds include re-roofing or re-painting buildings, overhauling engines, refinishing hardwood floors, replacing sewer lines, repairing building foundations, and rehabilitating campgrounds and trails. Park units compete for project allocations by submitting requests to their respective regional office and headquarters. Regional and headquarters officials determine which projects to fund. While an individual park unit may receive funding for several projects in one year, it may receive none the next.
Park units are authorized to collect revenue from outside sources such as visitor fees and donations--although how they are used may be limited to specific purposes. Since 1996, the Congress has provided the park units with authority to collect fees from visitors and retain these funds for use on projects to enhance recreation and visitor enjoyment, among other things. Since 2002, the Park Service has required park units to spend the majority of their visitor fees on deferred maintenance projects, such as road or building repair. The Park Service also receives revenue from concessionaires under contract to perform services at park units--such as operating a lodge--and cash or non-monetary donations from non-profit organizations or individuals. These funds may vary from year to year and, in the case of donations, may be accompanied by stipulations on how the funds may be used. Overall appropriations for the ONPS account--including the amounts the Park Service allocated for daily operations and projects--rose in both nominal and inflation-adjusted dollars overall from fiscal year 2001 through 2005. Appropriations increased in nominal terms from about $1.4 billion in fiscal year 2001 to almost $1.7 billion in fiscal year 2005, an average annual increase of about 4.9 percent (i.e., about $68 million per year). After adjusting these amounts for inflation, the average annual increase was about 1.3 percent or almost $18 million per year. By contrast, the Park Service's overall budget authority increased to about $2.7 billion in 2005 from about $2.6 billion in 2001, an average increase of about 1 percent per year. In inflation adjusted dollars, the total budget authority fell by an average of about 2.5 percent per year. Figure 1 shows the appropriations for the ONPS account from fiscal years 2001 through 2005. The Park Service's total allocation for daily operations for park units increased overall in nominal dollars but declined slightly when adjusted for inflation from fiscal year 2001 through 2005. As illustrated in figure 2, overall allocations for daily operations for park units rose from about $903 million in fiscal year 2001 to almost $1.03 billion in fiscal year 2005--an average annual increase of about $30 million, or about 3 percent. After adjusting for inflation, the allocation for daily operations fell slightly from about $903 million in 2001 to about $893 million in 2005--an average annual decline of about $2.5 million, or 0.3 percent. The fiscal year 2005 appropriation for the ONPS account included an additional $37.5 million over the amounts proposed by the House and Senate for the ONPS account, to be used for daily operations. The conference report accompanying the appropriation stated that the additional amount was to be used for (1) a service-wide increase of $25 million and (2) $12.5 million for visitor services programs at specific park units. Allocations for projects and other support programs increased overall in both nominal and inflation-adjusted dollars. These allocations rose from about $478 million in 2001 to about $641 million in 2005--an average annual increase of about 7.7 percent, or about $36.5 million. When adjusted for inflation, the increase was 3.9 percent, or about $18.7 million per year. Figure 3 shows allocation trends of projects and other support programs for the Park Service from fiscal years 2001 through 2005. 
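For readers who want to reproduce the kind of trend figures cited above, the sketch below computes an average annual change in nominal dollars and then in constant dollars. The exact appropriation amounts and the price deflator are assumptions chosen so the outputs land near the roughly 4.9 percent and 1.3 percent averages in the text; they are not GAO's underlying data.

```python
# Approximate ONPS appropriations (nominal dollars) and an assumed price
# index (2001 = 1.000); both are illustrative, not GAO's source data.
appropriations = {2001: 1.382e9, 2005: 1.675e9}
deflator = {2001: 1.000, 2005: 1.150}

years = 2005 - 2001
avg_nominal = (appropriations[2005] / appropriations[2001]) ** (1 / years) - 1

real_2001 = appropriations[2001] / deflator[2001]
real_2005 = appropriations[2005] / deflator[2005]
avg_real = (real_2005 / real_2001) ** (1 / years) - 1

print(f"average annual change, nominal dollars: {avg_nominal:.1%}")          # ~4.9%
print(f"average annual change, inflation-adjusted dollars: {avg_real:.1%}")  # ~1.3%
```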
Three programs that include project funding for individual park units--Cyclic Maintenance, Repair and Rehabilitation, and Inventory and Monitoring--account for over half of the increase for the project and support program allocations. As a percentage of total project and support program funding, funding for these programs rose to 31 percent in 2005 from 23 percent in 2001. For example, Cyclic Maintenance program funding increased from $34.5 million in 2001 to $62.8 million in 2005--an average annual increase of 16.2 percent in nominal terms or 12.1 percent when adjusted for inflation. Increases in the Cyclic Maintenance and Repair and Rehabilitation programs reflect an emphasis on the Park Service's effort to reduce its estimated $5 billion maintenance backlog. Increases in the Inventory and Monitoring Program reflect an emphasis on protecting natural resources primarily through an initiative called the Natural Resource Challenge. Visitor fees are also used to support park units. Overall, the Park Service collected about $717 million in visitor fees in addition to its annual appropriation for operations from 2001 through 2005, increasing from about $140 million in 2001 to about $147 million in 2005 (an average annual increase of about 1 percent); however, in inflation-adjusted dollars, the Park Service collected about $670 million in visitor fees, falling from about $140 million in 2001 to about $127 million in 2005 (an average annual decline of over 2 percent). Overall, the Park Service collected an average of about $143 million per year in nominal terms or about $134 million per year when adjusted for inflation. Visitor fee revenue depends on several factors, including the number of visitors to each park unit, the number of national passes purchased, and the amount each park charges for entry and services. All 12 park units we visited received allocations for projects from fiscal years 2001 through 2005 that varied among years and among park units. Allocations for daily operations for the 12 park units we visited also varied. On an average annual basis, each unit experienced an increase in daily operations allocations, but most experienced a decline in inflation-adjusted terms. Officials at each park believed that their daily operations allocations were not sufficient to address increases in operating costs and new Park Service management requirements. To manage within available funding resources, park unit managers also reported that, to varying degrees, they made trade-offs among operational activities--which in some cases resulted in reducing services in areas such as education, visitor and resource protection, and maintenance activities. Park officials also reported that they increasingly relied on volunteers and other authorized funding sources to provide operations and services that were previously paid with allocations for daily operations from the ONPS account. Park units use project-related allocations for such things as rehabilitating structures, roads, and trails; and inventorying and monitoring natural resources. The allocations for projects at the 12 park units totaled $76.8 million from 2001 through 2005. Allocations varied from park to park and year to year because these allocations support non-recurring projects for which park units are required to compete and obtain approval from Park Service headquarters or regional offices. For example, at Grand Canyon National Park, allocations for projects between 2001 and 2005 totaled $6.7 million.
However, during that time, the amount fluctuated from $824,000 in 2001 to $1.9 million in 2004 and $914,000 in 2005. Appendix I shows project-related allocations and their fluctuations from fiscal years 2001 through 2005 for the 12 parks we visited. All 12 park units experienced an annual average increase, in nominal terms, in allocations for daily operations; however, when adjusted for inflation, 8 of the 12 parks we visited experienced a decline ranging from less than 1 percent to approximately 3 percent. For example, Yosemite National Park's daily operations allocations increased from $22,583,000 in 2001 to $22,714,000 in 2005, less than an average of 1 percent per year. However, when adjusted for inflation, the park's allocation for daily operations fell by about 3 percent per year. Daily operations allocations at the remaining four parks increased after adjusting for inflation, ranging from less than 1 percent to about 7 percent. For example, Acadia National Park's daily operations allocations increased from $4,279,000 in fiscal year 2001 to $6,498,000 in fiscal year 2005, an average annual increase of about 11 percent in nominal terms and about 7 percent when adjusted for inflation. Park officials explained that although the daily operations allocation substantially increased over this period, most of the increase was for new or additional operations. To illustrate, in 2002, Acadia acquired the former Schoodic Naval Base. The increases in allocations for daily operations were to accommodate this added responsibility rather than for maintaining operations that were in existence prior to the acquisition. Park unit officials reported that required salary increases exceeded the increases in their allocations for daily operations and that rising utility costs reduced their flexibility in managing those allocations. Park Service headquarters officials reported that from 2001 through 2005, the Park Service paid personnel cost increases enacted by the Congress. For example, from fiscal years 2001 through 2005, Congress enacted salary increases of about 4 percent per year for federal employees. Park Service officials reported that the Park Service covered these salary increases with appropriations provided in the ONPS account. The Park Service allocated amounts to cover about half of the required increases, and park units had to eliminate or defer other spending to make up the difference. Officials at several park units told us that since 2001, they have refrained from filling vacant positions or have filled them with lower-graded or seasonal employees. For example, in an effort to continue to perform activities that directly impact visitors--such as cleaning restrooms and answering visitor questions--officials at Sequoia and Kings Canyon National Parks stated that they left several high-graded positions unfilled in order to hire a lower-graded workforce to perform these basic operational duties. Officials at most park units also told us that when positions were left vacant, the responsibilities of the remaining staff generally increased in order to fulfill park obligations. Officials at many of the parks we visited also explained that, in addition to increasing personnel costs, rising utility costs caused parks to reduce spending in other areas.
For example, at Grand Teton National Park, park officials told us that to operate the same number of facilities and assets, costs for fuel, electricity, and solid waste removal increased from $435,010 in 2003 to $633,201 in 2005--an increase of 46 percent, when adjusted for inflation. Officials told us that, as a result, their utility budget for fiscal year 2005 was spent by June 2005--three months early. In August, the park accepted the transfer requests of two division chiefs and used the salaries from these vacancies to pay for utility costs for the remaining portion of the year. Officials at some parks attributed increased utility costs to new construction that was generally not accompanied with a corresponding increase to their allocation for daily operations. Officials at most of the parks we visited also told us that their park units generally did not receive additional allocations for administering new Park Service policies directed at reducing its maintenance backlog, implementing a new asset management strategy, or maintaining specified levels of law enforcement personnel (referred to as its "no-net-loss policy"), which has reduced their flexibility in addressing other park priorities. While officials stated that these policies were important, implementing them without additional allocations reduced their management flexibility. For example, since 2001, the Park Service has placed a high priority on reducing its currently estimated $5 billion maintenance backlog. In response, the Park Service, among other things, set a goal to spend the majority of its visitor fees on deferred maintenance projects--$75 million in 2002 increasing to $95 million in 2005. Officials at several park units report that they have used daily operations allocations to absorb the cost of salaries for permanent staff needed to oversee the increasing number of visitor fee-funded projects. Park officials reported that the additional administrative and supervisory tasks associated with these projects add to the workload of an already-reduced permanent staff. Furthermore, while the Park Service may use visitor fees to pay salaries for permanent staff that manage and administer projects funded with visitor fees, it has a policy prohibiting such use. Instead, these salaries are paid using allocations for daily operations which reduce the amount of the allocation available for visitor services and other activities and limit the park units' ability to maintain these services and activities. To address differences between allocations for daily operations and expenses, officials at the park units we visited reported that they reduced or eliminated some services paid with daily operations allocations-- including some that directly affected visitors and park resources. Park officials at some of the parks we visited told us that before reducing services that directly affect the visitor, they first reduced spending for training, equipment, travel, and supplies paid from daily operations allocations. However, most parks reported that they did reduce services that directly affect the visitor, including reducing visitor center hours, educational programs, basic custodial duties, and law enforcement operations, such as back-country patrolling. 
Furthermore, when funds allocated for daily operations were not sufficient to pay for activities that were previously paid with this source, the park units we visited reported that they deferred activities or relied on other authorized funding sources such as allocations for projects, visitor fees, donations from cooperating associations and friends groups, and concessions fees. From 2001 to 2005, some parks delayed performing certain preventative maintenance activities formerly paid with allocations for daily operations until other authorized funding sources, such as project funds (including funds for cyclic maintenance, repair and rehabilitation, and visitor fees) could be found and approved. Rather than eliminating or not performing daily operational activities, some parks used volunteers and funding from authorized sources such as donations from non-profit partners and concessionaires' fees to accomplish activities that were formerly paid with daily operations funds. Officials at several park units said that they increasingly depend on donations from cooperating associations to pay for training and equipment and rely on their staff and volunteers to provide information and educational programs to visitors that were traditionally offered by park rangers. Funds from these sources can be significant, but they are subject to change from year to year. Officials at several park units expressed concern about using funding from other authorized sources to address needs--not only because the funds can vary from year to year, but also because these partners' stipulations on how their donations can be used may differ from the parks' priorities. As a result, relying on these sources for programs that require a long term funding commitment could be problematic. We identified three management initiatives that the Park Service has undertaken to address the fiscal performance and accountability of park units and to better manage within their available resources: the Business Plan Initiative (BPI), the Core Operations Analysis (COA), and the Park Scorecard. Each initiative operates separately and is at various stages of development and implementation. Table 2 in appendix II summarizes each of the three initiatives and their stages of implementation. Through the BPI process, park unit staff--with the help of business interns from the Student Conservation Association--identify all sources and uses of park funds to determine funding levels needed to operate and manage park units. Using this information, park unit managers develop a 5-year business plan to address any gaps between available funds and park unit operational and maintenance needs. The process used in the BPI involves 6 steps, completed over an 11-week period. Park staff and the business interns (1) identify the park unit's mission; (2) conduct an inventory of park assets; (3) analyze park funding trends; (4) identify sources and uses of park funding; (5) analyze park operations and maintenance needs; and (6) develop a strategic business plan to address gaps between funds and park needs. All 12 of the park units we visited have completed a business plan. Many officials--both at the unit level and headquarters--stated that business plans are, among other things, useful in helping them identify future budget needs. Once completed, park managers often issue a press release to announce its completion. 
Park managers may also send copies to their legislators, local community councils, and park partners (such as cooperating associations) to communicate the results. A Park Service official stated, however, that the Park Service is still refining these business plans to serve as a better tool for justifying funding needs. The COA was developed in 2004 to help park managers evaluate their park unit's core mission, identify essential park unit activities and associated funding levels, and make fully informed decisions on staffing and funding. The COA is part of a broader Park Service-wide effort to integrate management tools to improve park efficiency. Park Service headquarters, regional officials, and park unit staffs work together in a step-by-step process to conduct the analysis. These steps include preparing a 5-year budget cost projection (BCP) to establish baseline financial information and help project future park needs, defining core elements of the park unit's mission, identifying park priorities, reviewing and analyzing activities and associated staff resources, and identifying efficiencies. Budget staff for each park unit first complete a 5-year BCP that uses the current year's funding level for daily operations as a baseline, and estimates future levels, increases in non-personnel costs, and fixed costs such as salaries and benefits. The general target of the analysis is to bring personal services and fixed costs to or below 80 percent of the unit's funding level for daily operations. Three of the 12 park units we visited have completed (or are in the process of completing) a COA, and three will begin the COA in fiscal year 2006. The remaining six park units we visited have yet to be selected. Park unit officials told us that the preliminary results have helped them determine where efficiencies in operations might accrue. A Park Service regional official told us that the core operations process is still in its early development, noting that preliminary results are useful but that it is too early to determine the results park units will realize. Park Service headquarters developed the Park Scorecard beginning in fiscal year 2004 to serve as an indicator of each park unit's fiscal and operational condition and managerial performance. The scorecard is intended to provide an overarching summary of each park unit's condition by offering a way to analyze individual park unit needs. It also provides Park Service officials with information needed to understand how park units compare to one another based on broad financial, organizational, recreational, and resource-management criteria. Although the Park Scorecard is still under development, the Park Service's headquarters budget office used it to validate and approve requests for increases in daily operations allocations for the highest priorities among park units to be funded out of a total of $12.5 million that was provided in 2005 for daily operations directed at visitor service programs. The Park Service approved requests for funding at 3 out of the 12 parks we visited (Badlands National Park, Grand Teton National Park, and Yellowstone National Park). Park Service headquarters officials, with the assistance and input of park unit managers, plan on refining the Park Scorecard to more accurately capture all appropriate park measurements and to identify, evaluate, and support future budget increases for park units.
The Park Service also intends for park managers to use the Park Scorecard to facilitate discussions about their needs and priorities. In closing, we have found that overall, from 2001 through 2004, the Park Service increased allocations for support programs and project funding while placing less of an emphasis on funds for daily operations. In fiscal year 2005, this trend shifted, and as evidenced by our visits to 12 park units, appears to be going in the direction needed to help the units overcome some of the difficulties they have recently experienced in meeting operational needs. In responding to these trends, park unit officials found ways to reduce spending from their allocations for daily operations and to identify and use authorized sources other than these allocations to minimize some impacts on park operations and visitor services. While park units are relying more on other sources to perform operations, using such funds has its drawbacks because it usually takes parks longer, and more effort from park employees, to obtain and use these sources. Visitor fees have been an important and significant source of funds for park units to address high-priority needs such as reducing the maintenance backlog. However, Park Service policy prohibiting the use of visitor fees to pay salaries of permanent employees managing projects may reduce the flexibility in managing the use of funding for daily operations. While the Park Service is embarking upon three management initiatives that it believes will improve park performance and accountability and help park units better manage within available resources, it is too early to assess the effectiveness of these initiatives. To reduce some of the pressure on funding for daily operations, we recommended that the Secretary of the Interior direct the Director of the Park Service to revise the agency's policy to allow park units to use visitor fee revenue to pay the cost of permanent employees administering projects funded by visitor fees to the extent authorized by law. In commenting on a draft of our report, the department generally agreed with the recommendation, but stated that the recommendation should clearly state that visitor fee revenue (and not other sources) be used to fund only a limited number of permanent employees and be specifically defined for the sole purpose of executing projects funded from fee revenue. We believe our recommendation, as written, gives the agency the flexibility sought. The department also said that our report creates a misleading impression concerning the state of park operations in that (1) record high levels of funds are being invested to staff and improve parks, and (2) the report does not examine the results achieved with these inputs. The department also believes that while employment levels at individual park units may have fluctuated for many reasons, employment servicewide, including both seasonal and permanent employees, was stable. We believe, however, that our report provides a detailed analysis of the major funding trends affecting Park Service operations, including those at the 12 high-visitation park units we visited, as well as the department's initiatives and efforts to achieve results. This concludes our statement for the record. For further information on this statement, please contact Robin Nazzaro at (202) 512-3841 or [email protected].
Individuals making contributions to this testimony included Roy Judy, Assistant Director; Thomas Armstrong, Ulana Bihun, Denise Fantone, Doreen Feldman, Tim Guinane, Richard Johnson, Alison O'Neill, and Patrick Sigl.
In recent years, some reports prepared by advocacy groups have raised issues concerning the adequacy of the Park Service's financial resources needed to effectively operate the park units. This statement addresses (1) funding trends for Park Service operations and visitor fees for fiscal years 2001-2005; (2) specific funding trends for 12 selected high-visitation park units and how, if at all, the funding trends have affected operations; and (3) recent management initiatives the Park Service has undertaken to address fiscal performance and accountability of park units. This statement is based on GAO's March 2006 report, National Park Service: Major Operations Funding Trends and How Selected Park Units Responded to Those Trends for Fiscal Years 2001 through 2005, GAO-06-431 (Washington, D.C.: March 31, 2006). Overall, amounts appropriated to the National Park Service (Park Service) in the Operation of the National Park System account increased from 2001 to 2005. In inflation-adjusted terms, amounts allocated by the Park Service to park units from this appropriation for daily operations declined while project-related allocations increased. Project-related allocations increased primarily in (1) Cyclic Maintenance and Repair and Rehabilitation programs to reflect an emphasis on reducing the estimated $5 billion maintenance backlog and (2) the inventory and monitoring program to protect natural resources through the Natural Resource Challenge initiative. Also, on an average annual basis, visitor fees collected increased about 1 percent--a 2 percent decline when adjusted for inflation. All park units we visited received project-related allocations, but most of the park units experienced declines in inflation-adjusted terms in their allocations for daily operations. Each of the 12 park units reported that its daily operations allocations were not sufficient to address increases in operating costs, such as salaries, and new Park Service requirements. In response, officials reported that they either eliminated or reduced some services or relied on other authorized sources to pay operating expenses that have historically been paid with allocations for daily operations. Also, implementing important Park Service policies--without additional allocations--has placed additional demands on the park units and reduced their flexibility. For example, the Park Service has directed its park units to spend most of their visitor fees on deferred maintenance projects. While the Park Service may use visitor fees to pay salaries for permanent staff who administer projects funded with these fees, it has a policy prohibiting such use. To alleviate the pressure on daily operations allocations, we believe it would be appropriate to use visitor fees to pay the salaries of employees working on projects funded by visitor fees. Interior believes that, while employment levels at individual park units may have fluctuated for many reasons, employment servicewide was stable, including both seasonal and permanent employees. GAO identified three initiatives--Business Plan, Core Operations Analysis, and Park Scorecard--to address park units' fiscal performance and operational condition. Officials at the park units we visited that have a business plan stated that the plan, among other things, has helped them better identify future budget needs.
Because the Core Operations Analysis is still in an early stage of development, only a few park units have participated in it; officials at the park units we visited that have participated said that they are better able to determine where operational efficiencies might accrue. Park Service headquarters used the Scorecard to validate and approve increases in funding for daily operations for fiscal year 2005.
Taxpayers' experience depends heavily on IRS's performance during the tax filing season, roughly mid-January through mid-April. During this period, millions of taxpayers who are trying to fulfill their tax obligations contact IRS over the phone, face-to-face, and via the Internet to obtain answers to tax law questions and information about their tax accounts. This period is also when IRS processes the bulk of the approximately 140 million returns it will receive, runs initial compliance screens, and issues over 100 million refunds. In recent years, IRS has improved its returns processing but has seen its taxpayer service performance deteriorate. For years we have reported that electronic filing (e-filing) has many benefits for taxpayers, such as higher accuracy rates and faster refunds compared to filing on paper. So far in 2012, the percentage of e-filed returns has increased by 1.9 percentage points to 88.8 percent since about the same time last year (a 2.2 percent increase), as table 1 shows. Since the same time in 2007, the percentage of e-filed returns has increased from 72.3 percent to 88.8 percent. This year, IRS may meet its long-held goal of having 80 percent of individual tax returns e-filed. However, the overall e-file percentage is likely to decline as the tax filing season ends since IRS typically receives more returns filed on paper later in the filing season. In addition, IRS is in the midst of a multi-phase modernization project, known as the Customer Account Data Engine (CADE) 2, which will fundamentally change how it processes returns. With CADE 2, IRS also expects to be able to issue refunds in 4 business days for direct deposit and 6 business days for paper checks after IRS processes the return and posts the return data to the taxpayer's account. Early in the 2012 filing season, IRS experienced two processing problems that delayed refunds to millions of taxpayers, and reported the problems had been resolved by mid-February. We summarized these problems in an interim report on the 2012 filing season. Providing good taxpayer service is important because, without it, taxpayers may not be able to obtain necessary and accurate information they need to comply with tax laws. In addition, more and more, taxpayers are relying on IRS's website to obtain information and execute transactions, making it important that IRS have a modern website. However, as we have reported, IRS has experienced declines in performance in selected taxpayer service areas, most notably with respect to providing live telephone assistance and timely responses to taxpayers' correspondence. When taxpayers do not receive a timely response from IRS to paper correspondence or do not have access to information online, they call IRS, correspond again, or seek face-to-face assistance--all of which are costly to IRS and burdensome to the taxpayer. Table 2 shows the declines in telephone service and paper correspondence and the goals for 2012 and 2013. Additional performance data are shown in appendix I. To improve the taxpayer experience and voluntary compliance, IRS has a range of options. Some of its options could provide taxpayers with better information to accurately fulfill their tax obligations. Other options would allow IRS to take enforcement actions sooner and with less burden on taxpayers. Simplifying the tax code could reduce unintentional errors and make intentional tax evasion harder (GAO-12-176).
In recent reports, we recommended that IRS develop an online locator tool listing volunteer tax preparation sites--and IRS introduced an enhanced volunteer site locator tool in 2012--and complete an Internet strategy that provides a justification for online self-service tools as IRS expands its capacity to introduce such tools. In addition to actions we recommended, IRS is also studying ways to better communicate with taxpayers and determine which self-service tools would be the most beneficial to taxpayers. According to IRS officials, the study should be completed later this year. Identifying more efficient ways to provide service also benefits IRS because it is able to make better use of scarce resources (GAO-12-176). Paid preparers are an important intermediary between taxpayers and IRS, and IRS is implementing new requirements for them; as IRS develops better data, it should be able to test which requirements are most effective for improving the quality of tax returns prepared by different types of paid preparers. Likewise, IRS has discussed how to measure the effect of the requirements, such as continuing education and testing, on tax return accuracy. It will take years to implement the approach as it will likely evolve over time and become more detailed. Tax preparation software is another critical part of tax administration. Almost 30 percent of taxpayers use such software to prepare their returns and, in the process, understand their tax obligations, learn about tax law changes, and get questions answered. Many also electronically file through their software provider. Consequently, tax software companies are another important intermediary between taxpayers and IRS. We have reported that IRS has made considerable progress in working with tax software companies to provide, for example, clearer information about why an e-filed return was not accepted, require additional information on returns to allow IRS to better identify the software used, and enhance security requirements for e-filing. To illustrate the potential for leveraging tax software companies to improve taxpayer compliance, 4 years ago we recommended, and IRS agreed, to expand outreach efforts to external stakeholders and include software companies as part of an effort to reduce common types of misreporting related to rental real estate. In another report, we discussed the value of research to better understand how tax software influences compliance. IRS has volunteer partners, often nonprofit organizations or universities, that staff over 12,000 volunteer sites. Volunteers at these sites prepare several million tax returns for traditionally underserved taxpayers, including the elderly, low-income, disabled, and those with limited English proficiency. In recent reports we have made recommendations about estimating the effectiveness of targeting underserved populations at such sites and making it easier for taxpayers to find the locations of nearby sites. IRS has opportunities to work with these volunteer partners to help improve assistance to taxpayers with the goal of improving compliance. Information reporting is a proven tool that reduces tax evasion, reduces taxpayer burden, and helps taxpayers voluntarily comply. This is, in part, because taxpayers have more accurate information to complete their returns and do not have to keep records themselves. In addition, IRS research shows that when taxpayers know that IRS is receiving data from third parties, they are more likely to correctly report the income or expenses to IRS.
As part of the recent update of its tax gap estimates, IRS estimated that income subject to substantial information reporting, such as pension, dividend, interest, unemployment, and Social Security income, was misreported at an 8 percent rate compared to a 56 percent misreporting rate for income with little or no information reporting, such as sole proprietor, rent, and royalty income. See GAO, Tax Administration: Costs and Uses of Third-Party Information Returns, GAO-08-266 (Washington, D.C.: Nov. 20, 2007). Three sets of information reporting requirements have recently taken effect: banks and others must report businesses' credit card receipts to IRS, brokers must report the cost basis of certain securities sold, and withholding applies to certain payments made to foreign financial institutions that have not entered into an agreement with IRS to report details on U.S. account holders to IRS. As these three sets of information reporting requirements have only recently taken effect, it is too soon to tell the impact they are having on taxpayer compliance. We have made recommendations or suggested possible legislative changes in several other areas in which IRS could benefit from additional information reporting. They include the following: Service payments made by landlords. Taxpayers who rent out real estate are required to report to IRS expense payments for certain services, such as payments for property repairs, only if their rental activity is considered a trade or business. However, the law does not clearly spell out how to determine when rental real estate activity is considered a trade or business. Service payments to corporations. Currently, businesses must report to IRS payments for services they make to unincorporated persons or businesses, but payments to corporations generally do not have to be reported. Broader requirements for these two forms of information reporting, covering goods in addition to services, were enacted into law in 2010, but later repealed. We believe the more narrow extensions of information reporting to include services, but not goods, remain important options for improving compliance. Additionally, we have identified existing information reporting requirements that could be enhanced. Examples include the following: Mortgage interest and rental real estate. We recommended requiring information return providers to report the address of a property securing a mortgage, mortgage balances, and an indicator of whether the mortgage is for a current-year refinancing when filing mortgage interest statements (Form 1098); this information could help taxpayers comply with, and IRS enforce, rules associated with the mortgage interest deduction. We have reported that collecting the address of the secured property on Form 1098 would help taxpayers better understand and IRS better enforce requirements for reporting income from rental real estate. Higher education expenses. Eligible educational institutions are currently required to report information on qualified tuition and related expenses for higher education so that taxpayers can determine the amount of educational tax benefits they can claim. However, the reporting does not always separate eligible from ineligible expenses. We recommended revising the information reporting form, which could improve the usefulness of reported information. Identifying additional third-party reporting opportunities is challenging. Considerations include whether third parties exist that have accurate information available in a timely manner, the burden of reporting, and whether IRS can enforce the reporting requirement. We have noted, for example, that the reason there is little third-party reporting on sole proprietor expenses is the difficulty of identifying third parties that could report on expenses like the business use of cars.
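To make the mechanics of information-return matching concrete, the following minimal sketch compares amounts third parties report to IRS against amounts reported on a tax return and flags apparent underreporting. It is illustrative only: the form names are real, but the records, tolerance, and matching logic are simplified assumptions, not IRS's actual document-matching systems.

```python
# Illustrative sketch of third-party information-return matching.
# Records and the tolerance threshold below are hypothetical.
from collections import defaultdict

# Amounts third parties reported to IRS for one taxpayer, by income type.
information_returns = [
    {"tin": "123-45-6789", "type": "interest (1099-INT)", "amount": 1_250},
    {"tin": "123-45-6789", "type": "interest (1099-INT)", "amount": 300},
    {"tin": "123-45-6789", "type": "dividends (1099-DIV)", "amount": 2_400},
]

# Amounts the taxpayer reported on the return.
return_amounts = {"interest (1099-INT)": 1_250, "dividends (1099-DIV)": 2_400}

# Sum what third parties reported for each income type.
reported_by_third_parties = defaultdict(int)
for rec in information_returns:
    reported_by_third_parties[rec["type"]] += rec["amount"]

TOLERANCE = 50  # ignore trivial differences (hypothetical threshold)
for income_type, third_party_total in reported_by_third_parties.items():
    gap = third_party_total - return_amounts.get(income_type, 0)
    if gap > TOLERANCE:
        print(f"Flag for follow-up: {income_type} underreported by ${gap:,}")
```

In this hypothetical case the second interest payment was omitted from the return, so the interest line is flagged; dividends, which match the third-party total, are not.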
Modernized systems should better position IRS to conduct more accurate and faster compliance checks, which benefits taxpayers by detecting errors before interest and penalties accrue. In addition, modernized systems should result in more up-to-date account information, faster refunds, and other benefits, such as clearer notices so that taxpayers can better understand why a return was not accepted by IRS. Two new, modernized systems IRS is implementing include the following: Customer Account Data Engine (CADE) 2. For the 2012 filing season, IRS implemented the first of three phases to introduce modernized tax return processing systems. Specifically, IRS introduced a modernized taxpayer account database, called CADE 2, and moved the processing of individual taxpayer accounts from a weekly to a daily processing cycle. IRS expects that completing this first phase will provide taxpayers with benefits such as faster refunds and notices and updated account information. IRS initially expected to implement phase two of CADE 2 by 2014. However, IRS reported that it did not receive funding in fiscal year 2011 that would have allowed it to meet the 2014 time frame. Modernized e-File (MeF). IRS is in the final stages of retiring its legacy e-file system, which preparers and others use to transmit e-filed returns to IRS, and replacing it with MeF. Early in the 2012 filing season, IRS experienced problems transferring data from MeF to other IRS systems. IRS officials said that they solved the problem in early February. IRS officials recently reiterated their intention to turn off the legacy e-file system in October 2012 as planned. However, more recently, IRS processing officials told us they would reevaluate the situation after the 2012 filing season. MeF's benefits include allowing taxpayers to provide additional documentation via portable document format (PDF) files, as opposed to filing on paper. In addition, MeF should generate clearer notices to taxpayers when a return is rejected by IRS compared to the legacy e-file system. The Commissioner of Internal Revenue has talked about a long-term vision to increase pre-refund compliance checks before refunds are sent to taxpayers. As previously noted, early error correction can benefit taxpayers by preventing interest and penalties from accumulating. In one example, IRS is exploring a process where third parties would send information returns to IRS earlier so they could be matched against taxpayers' returns when the taxpayer files the return, as opposed to the current requirement that some information returns go to taxpayers before being sent to IRS. The intent is to allow IRS to match those information returns to tax returns during the filing season rather than after refunds have been issued. Another option for expanding pre-refund compliance checks is additional math error authority (MEA) that Congress would need to grant IRS through statute. MEA allows IRS to correct calculation errors and check for obvious noncompliance, such as claims above income and credit limits. Despite its name, MEA encompasses much more than simple arithmetic errors. It also includes, for instance, identifying incorrect Social Security numbers or missing forms. The errors being corrected can either be in the taxpayers' favor or result in additional tax being owed. MEA is less intrusive and burdensome to taxpayers than audits and reduces costs to IRS. It also generally allows taxpayers who make errors on their returns to receive refunds faster than if they are audited.
This is due, in part, to the fact that IRS does not have to follow its standard deficiency procedures when using MEA--it must only notify the taxpayer that the assessment has been made and provide an explanation of the error. Taxpayers have 60 days after the notice is sent to request an abatement. Although IRS has MEA to correct certain errors on a case-by-case basis, it does not have broad authority to do so. In 2010, we suggested that Congress consider broadening IRS's MEA with appropriate safeguards against the misuse of that authority. In the absence of broader MEA, we have identified specific cases where IRS could benefit from additional MEA that have yet to be enacted. These include authority to: use prior years' tax return information to ensure that taxpayers do not improperly claim credits or deductions in excess of applicable lifetime limits, use prior years' tax return information to automatically verify taxpayers' compliance with the number of years the Hope credit can be claimed, and identify and correct returns with ineligible (1) individual retirement account (IRA) "catch-up" contributions and (2) contributions to traditional IRAs from taxpayers over age 70 1/2. In 2009, Congress enacted our suggestion that IRS use MEA to ensure that taxpayers do not improperly claim the First-Time Homebuyer Credit in multiple years, which we estimate resulted in savings of about $95 million. Tax code complexity can make it difficult for taxpayers to voluntarily comply. Efforts to simplify or reform the tax code may help reduce burdensome record keeping requirements for taxpayers and make it easier for individuals and businesses to understand and voluntarily comply with their tax obligations. For example, eliminating or combining tax expenditures, such as exemptions, deductions, and credits, could help taxpayers reduce unintentional errors and limit opportunities for tax evasion. Frequent changes in the tax code also reduce its stability, making tax planning more difficult and increasing uncertainty about future tax liabilities. Limiting the frequency of changes to the tax code could also help reduce calls to IRS with questions about the changes. We have reported that IRS annually receives millions of calls about tax law changes. Reducing complexity in the tax code could take a variety of forms, ranging from comprehensive tax reform to a more incremental approach focusing on specific tax provisions. Policymakers may find it useful to compare any proposed changes to the tax code based on a set of widely accepted criteria for assessing alternative tax proposals. These criteria include the equity, or fairness, of the tax system; the economic efficiency, or neutrality, of the system; and the simplicity, transparency, and administrability of the system. These criteria can sometimes conflict, and the weight one places on each criterion will vary among individuals. Our publication Understanding the Tax Reform Debate: Background, Criteria, & Questions may be useful in guiding policymakers as they consider tax reform proposals. In closing, improving the taxpayer experience and increasing voluntary compliance will not be achieved through a single solution. Because voluntary compliance is influenced by so many factors, multiple approaches, such as those listed here, will be needed. Chairman Baucus, Ranking Member Hatch, and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you and Members of the Committee may have at this time. 
For further information regarding this testimony, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Joanna Stamatiades, Assistant Director; LaKeshia Allen; David Fox; Tom Gilbert; Kirsten Lauber; Sabrina Streagle, and Weifei Zheng. As shown in table 3, in recent years, the level of access to telephone assistors has declined and average wait time has increased. In addition, the volume of overage correspondence has steadily increased. On a positive note, tax law and account accuracy remains high. As shown in table 4, access to IRS assistors has declined over the last few years. IRS officials attribute the higher-than-planned level of service so far this year to a slight decline in the demand for live assistance. At the same time, the number of automated calls has significantly increased which IRS officials attributed in part to taxpayers calling about refunds, and requesting transcripts (i.e., a copy of their tax return information).
The U.S. tax system depends on taxpayers calculating their tax liability, filing their tax return, and paying what they owe on time--what is often referred to as voluntary compliance. Voluntary compliance depends on a number of factors, including the quality of IRS's assistance to taxpayers, knowledge that its enforcement programs are effective, and a belief that the tax system is fair and other people are paying their share of taxes. Voluntary compliance is also influenced by other parties, including paid tax return preparers, tax software companies, and information return filers (employers, financial institutions, and others who report income or expense information about taxpayers to IRS). For this testimony, GAO was asked to (1) evaluate the current state of IRS's performance and its effect on the taxpayer experience, and (2) identify opportunities to improve the taxpayer experience and voluntary compliance. This testimony is based on prior GAO reports and recommendations. Additionally, GAO analyzed IRS data on its performance in delivering selected taxpayer services in recent years. The Internal Revenue Service (IRS) has made improvements in processing tax returns, and electronic filing (e-filing), which provides benefits to taxpayers including faster refunds, continues to increase. However, IRS's performance in providing service over the phone and responding to paper correspondence has declined in recent years. For 2012, as with previous years, IRS officials attribute the lower performance to other funding priorities. The following are among the opportunities to improve the taxpayer experience and increase voluntary compliance that GAO identifies in this testimony: IRS can provide more self-service tools to give taxpayers better access to information. IRS can create an automated telephone line for amended returns (a source of high call volume) and complete an online services strategy that provides justification for adding new self-service tools online. Better leveraging of third parties could provide taxpayers with other avenues to receive service. Paid preparers and tax software providers combine to prepare about 90 percent of tax returns. IRS is making progress implementing new regulation of paid preparers. As it develops better data, IRS should be able to test strategies for improving the quality of tax return preparation by paid preparers. Similarly, IRS may also be able to leverage tax software companies. Expanded information reporting could reduce taxpayer burden and improve accuracy. Expanded information reporting, such as the recent requirements for banks and others to report businesses' credit card receipts to IRS, can reduce taxpayers' record keeping and give IRS another tool. Implementing modernized systems should provide faster refunds and account updates. Modernized systems should allow IRS to conduct more accurate and faster compliance checks, which benefits taxpayers by detecting errors before interest and penalties accrue. Expanding pre-refund compliance checks could result in more efficient error correction. Expanding such checks could reduce the burden of audits on taxpayers and their costs to IRS. Reducing tax complexity could ease taxpayer burden and make it easier to comply. Simplifying the tax code could reduce unintentional errors and make intentional tax evasion easier to detect. GAO has made numerous prior recommendations that could help improve the taxpayer experience.
Congress and IRS have acted on some recommendations, while others are reflected in the strategies presented in this testimony.
As of June 2008, there were approximately 58 million first-lien home mortgages outstanding in the United States. According to a Federal Reserve estimate, outstanding home mortgages represented over $10 trillion in mortgage debt. The primary mortgage market has several segments and offers a range of loan products: The prime market segment serves borrowers with strong credit histories and provides the most competitive interest rates and mortgage terms. The subprime market segment generally serves borrowers with blemished credit and features higher interest rates and fees than the prime market. The Alternative-A (Alt-A) market segment generally serves borrowers whose credit histories are close to prime, but the loans often have one or more higher-risk features, such as limited documentation of income or assets. The government-insured or -guaranteed market segment primarily serves borrowers who may have difficulty qualifying for prime mortgages but features interest rates competitive with prime loans in return for payment of insurance premiums or guarantee fees. Across all of these market segments, two types of loans are common: fixed-rate mortgages, which have interest rates that do not change over the life of the loans, and adjustable-rate mortgages (ARM), which have interest rates that change periodically based on changes in a specified index. Delinquency, default and foreclosure rates are common measures of loan performance. Delinquency is the failure of a borrower to meet one or more scheduled monthly payments. Default generally occurs when a borrower is 90 or more days delinquent. At this point, foreclosure proceedings against the borrower become a strong possibility. Foreclosure is a legal (and often lengthy) process with several possible outcomes, including that the borrower sells the property or the lender repossesses the home. Two measures of foreclosure are foreclosure starts (loans that enter the foreclosure process during a particular time period) and foreclosure inventory (loans that are in, but have not exited, the foreclosure process during a particular time period). One of the main sources of information on the status of mortgage loans is the Mortgage Bankers Association's quarterly National Delinquency Survey. The survey provides national and state-level information on mortgage delinquencies, defaults, and foreclosures back to 1979 for first- lien purchase and refinance mortgages on one-to-four family residential units. The data are disaggregated by market segment and loan type-- fixed-rate versus adjustable-rate--but do not contain information on other loan or borrower characteristics. In response to problems in the housing and financial markets, the Housing and Economic Recovery Act of 2008 was enacted to strengthen and modernize the regulation of the government-sponsored enterprises (GSEs)--Fannie Mae, Freddie Mac, and the Federal Home Loan Banks-- and expand their mission of promoting homeownership. The act established a new, independent regulator for the GSEs called the Federal Housing Finance Agency, which has broad new authority, generally equivalent to the authority of other federal financial regulators, to ensure the safe and sound operations of the GSEs. The new legislation also enhances the affordable housing component of the GSEs' mission and expands the number of families Fannie Mae and Freddie Mac can serve by raising the loan limits in high-cost areas, where median house prices are higher than the regular conforming loan limit, to 150 percent of that limit. 
The act requires new affordable housing goals for Federal Home Loan Bank mortgage purchase programs, similar to those already in place for Fannie Mae and Freddie Mac. The act also established the HOPE for Homeowners program, which the Federal Housing Administration (FHA) will administer within the Department of Housing and Urban Development (HUD), to provide federally insured mortgages to distressed borrowers. The new mortgages are intended to refinance distressed loans at a significant discount for owner-occupants at risk of losing their homes to foreclosure. In exchange, homeowners share any equity created by the discounted restructured loan as well as future appreciation with FHA, which is authorized to insure up to $300 billion in new loans under this program. Additionally, the borrower cannot take out a second mortgage for the first five years of the loan, except under certain circumstances for emergency repairs. The program became effective October 1, 2008, and will conclude on September 30, 2011. To participate in the HOPE for Homeowners program, borrowers must also meet specific eligibility criteria as follows: Their mortgage must have originated on or before January 1, 2008. They must have made a minimum of six full payments on their existing first mortgage and must not have intentionally missed mortgage payments. They must not own a second home. Their mortgage debt-to-income ratio for their existing mortgage must be greater than 31 percent. They must not knowingly or willfully have provided false information to obtain the existing mortgage and must not have been convicted of fraud in the last 10 years. The Emergency Economic Stabilization Act, passed by Congress and signed by the President on October 3, 2008, created TARP, which outlines a troubled asset purchase and insurance program, among other things. The total size of the program cannot exceed $700 billion at any given time. Authority to purchase or insure $250 billion was effective on the date of enactment, with an additional $100 billion in authority available upon submission of a certification by the President. A final $350 billion is available under the act but is subject to Congressional review. The legislation required that financial institutions that sell troubled assets to Treasury also provide a warrant giving Treasury the right to receive shares of stock (common or preferred) in the institution or a senior debt instrument from the institution. The terms and conditions of the warrant or debt instrument must be designed to (1) provide Treasury with reasonable participation in equity appreciation or with a reasonable interest rate premium, and (2) provide additional protection for the taxpayer against losses from the sale of assets by Treasury and the administrative expenses of TARP. To the extent that Treasury acquires troubled mortgage-related assets, the act also directs Treasury to encourage servicers of the underlying loans to take advantage of the HOPE for Homeowners Program. Treasury is also required to consent, where appropriate, to reasonable requests for loan modifications from homeowners whose loans are acquired by the government. The act also requires the Federal Housing Finance Agency, the Federal Deposit Insurance Corporation (FDIC), and the Federal Reserve Board to implement a plan to maximize assistance to homeowners, that may include reducing interest rates and principal on residential mortgages or mortgage-backed securities owned or managed by these institutions. 
The regulators have also taken steps to support the mortgage finance system. On November 25, 2008, the Federal Reserve announced that it would purchase up to $100 billion in direct obligations of the GSEs (Fannie Mae, Freddie Mac, and the Federal Home Loan Banks), and up to $500 billion in mortgage-backed securities backed by Fannie Mae, Freddie Mac, and Ginnie Mae. It undertook the action to reduce the cost and increase the availability of credit for home purchases, thereby supporting housing markets and improving conditions in financial markets more generally. Also, on November 12, 2008, the four financial institution regulators issued a joint statement underscoring their expectation that all banking organizations fulfill their fundamental role in the economy as intermediaries of credit to businesses, consumers, and other creditworthy borrowers, and that banking organizations work with existing mortgage borrowers to avoid preventable foreclosures. The regulators further stated that banking organizations need to ensure that their mortgage servicing operations are sufficiently funded and staffed to work with borrowers while implementing effective risk-mitigation measures. Finally, on November 11, 2008, the Federal Housing Finance Agency (FHFA) announced a streamlined loan modification program for home mortgages controlled by the GSEs. Most mortgages are bundled into securities called residential mortgage- backed securities that are bought and sold by investors. These securities may be issued by GSEs and private companies. Privately issued mortgage- backed securities, known as private label securities, are typically backed by mortgage loans that do not conform to GSE purchase requirements because they are too large or do not meet GSE underwriting criteria. Investment banks bundle most subprime and Alt-A loans into private label residential mortgage-backed securities. The originator/lender of a pool of securitized assets usually continues to service the securitized portfolio. Servicing includes customer service and payment processing for the borrowers in the securitized pool and collection actions in accordance with the pooling and servicing agreement. The decision to modify loans held in a mortgage-backed security typically resides with the servicer. According to some industry experts, the servicer may be limited by the pooling and servicing agreement with respect to performing any large- scale modification of the mortgages that the security is based upon. However, others have stated that the vast majority of servicing agreements do not preclude or routinely require investor approval for loan modifications. We have not assessed how many potentially troubled loans face restrictions on modification. National default and foreclosure rates rose sharply during the 3-year period from the second quarter of 2005 through the second quarter of 2008 to the highest level in 29 years (fig.1). More specifically, default rates more than doubled over the 3-year period, growing from 0.8 percent to 1.8 percent. Similarly, foreclosure start rates--representing the percentage of loans that entered the foreclosure process each quarter--grew almost three-fold, from 0.4 percent to 1 percent. Put another way, nearly half a million mortgages entered the foreclosure process in the second quarter of 2008, compared with about 150,000 in the second quarter of 2005. 
Finally, foreclosure inventory rates rose 175 percent over the 3-year period, increasing from 1.0 percent to 2.8 percent, with most of that growth occurring since the second quarter of 2007. As a result, almost 1.25 million loans were in the foreclosure inventory as of the second quarter of 2008. Default and foreclosure rates varied by market segment and product type, with subprime and adjustable-rate loans experiencing the largest increases during the 3-year period we examined. More specifically: In the prime market segment, which accounted for more than three-quarters of the mortgages being serviced, 2.4 percent of loans were in default or foreclosure by the second quarter of 2008, up from 0.7 percent 3 years earlier. Foreclosure start rates for prime loans began the period at relatively low levels (0.2 percent) but rose sharply on a percentage basis, reaching 0.6 percent in the second quarter of 2008. In the subprime market segment, about 18 percent of loans were in default or foreclosure by the second quarter of 2008, compared with 5.8 percent 3 years earlier. Subprime mortgages accounted for less than 15 percent of the loans being serviced, but over half of the overall increase in the number of mortgages in default and foreclosure over the period. Additionally, foreclosure start rates for subprime loans more than tripled, rising from 1.3 percent to 4.3 percent (see fig. 2). In the government-insured or -guaranteed market segment, which represented about 10 percent of the mortgages being serviced, 4.8 percent of the loans were in default or foreclosure in the second quarter of 2008, up from 4.5 percent 3 years earlier. Additionally, foreclosure start rates in this segment increased modestly, from 0.7 to 0.9 percent. ARMs accounted for a disproportionate share of the increase in the number of loans in default and foreclosure in the prime and subprime market segments over the 3-year period. In both the prime and subprime market segments, ARMs experienced relatively steeper increases in default and foreclosure rates, compared with more modest growth for fixed-rate mortgages. In particular, foreclosure start rates for subprime ARMs more than quadrupled over the 3-year period, increasing from 1.5 percent to 6.6 percent. Default and foreclosure rates also varied significantly among states. For example, as of the second quarter of 2008, the percentage of mortgages in default or foreclosure ranged from 1.1 percent in Wyoming to 8.4 percent in Florida. Other states that had particularly high combined rates of default and foreclosure included California (6.0 percent), Michigan (6.2 percent), Nevada (7.6 percent), and Ohio (6.0 percent). Every state in the nation experienced growth in its foreclosure start rate from the second quarter of 2005 through the second quarter of 2008. By the end of that period, foreclosure start rates were at their 29-year maximums in 17 states. As shown in figure 3, percentage increases in foreclosure start rates differed dramatically by state. The foreclosure start rate rose at least 10 percent in every state over the 3-year period, but 23 states experienced an increase of 100 percent or more. Several states in the "Sun Belt" region, such as Arizona, California, Florida, and Nevada, had among the highest percentage increases in foreclosure start rates. In contrast, 7 states experienced increases of 30 percent or less, including North Carolina, Oklahoma, and Utah.
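The measures discussed above are simple ratios of loan counts to the number of loans being serviced in a quarter. The following minimal sketch uses hypothetical counts (not the Mortgage Bankers Association survey data) to show how a default rate, foreclosure start rate, and foreclosure inventory rate would be computed, and how a change measured in percentage points differs from a relative (percent) change.

```python
# Illustrative only: hypothetical loan counts chosen to roughly track the
# rates discussed above, not actual National Delinquency Survey data.
def rate(count, total_serviced):
    """Share of serviced loans, expressed as a percent."""
    return 100.0 * count / total_serviced

# Hypothetical servicing-portfolio snapshots for two quarters.
q2_2005 = {"serviced": 40_000_000, "in_default": 320_000,        # 90+ days past due
           "foreclosure_starts": 160_000, "foreclosure_inventory": 400_000}
q2_2008 = {"serviced": 45_000_000, "in_default": 810_000,
           "foreclosure_starts": 450_000, "foreclosure_inventory": 1_260_000}

for name in ("in_default", "foreclosure_starts", "foreclosure_inventory"):
    old = rate(q2_2005[name], q2_2005["serviced"])
    new = rate(q2_2008[name], q2_2008["serviced"])
    point_change = new - old                    # change in percentage points
    percent_change = 100.0 * (new - old) / old  # relative change
    print(f"{name}: {old:.1f}% -> {new:.1f}% "
          f"({point_change:+.1f} pts, {percent_change:+.0f}%)")
```

With these assumed counts, the default rate moves from 0.8 percent to 1.8 percent and the foreclosure start rate from 0.4 percent to 1.0 percent, illustrating why a rate that rises by less than one percentage point can still represent a increase of well over 100 percent in relative terms.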
Treasury is currently examining strategies for homeownership preservation, including maximizing loan modifications, in light of a refocus in its use of TARP funds. Treasury's initial focus in implementing TARP was to stabilize the financial markets and stimulate lending to businesses and consumers by purchasing troubled mortgage-related assets--securities and whole loans--from financial institutions. Treasury planned to use its leverage as a major purchaser of troubled mortgages to work with servicers and achieve more aggressive mortgage modification standards. However, Treasury subsequently concluded that purchasing troubled assets would take time to implement and would not be sufficient given the severity of the problem. Instead, Treasury determined that the most timely, effective way to improve credit market conditions was to strengthen bank balance sheets quickly through direct purchases of equity in banks under the Capital Purchase Program (CPP). The standard agreement between Treasury and the participating institutions in the CPP includes a number of provisions, some in the "recitals" section at the beginning of the agreement and other detailed terms in the body of the agreement. The recitals refer to the participating institutions' future actions in general terms--for example, "the Company agrees to work diligently, under existing programs to modify the terms of residential mortgages as appropriate to strengthen the health of the U.S. housing market." Treasury and the regulators have publicly stated that they expect these institutions to use the funds in a manner consistent with the goals of the program, which include both the expansion of the flow of credit and the modification of the terms of residential mortgages. But, to date, it remains unclear how OFS and the regulators will monitor how participating institutions are using the capital injections to advance the purposes of the act. The standard agreement between Treasury and the participating institutions does not require that these institutions track or report how they use or plan to use their capital investments. In our first 60-day report to Congress on TARP, mandated by the Emergency Economic Stabilization Act, we recommended that Treasury, among other things, work with the bank regulators to establish a systematic means for determining and reporting on whether financial institutions' activities are generally consistent with the purposes of CPP. Without purchasing troubled mortgage assets as an avenue for preserving homeownership, Treasury is considering other ways to meet this objective. Treasury has established and appointed an interim chief for the Office of the Chief of Homeownership Preservation under OFS. According to Treasury officials, the office is currently staffed with federal government detailees and is in the process of hiring individuals with expertise in housing policy, community development, and economic research. Treasury has stated that it is working with other federal agencies, including FDIC, HUD, and FHFA, to explore options to help homeowners under TARP. According to the Office of Homeownership Preservation interim chief, Treasury is considering a number of factors in its review of possible loan modification options, including the cost of the program, the extent to which the program minimizes recidivism among borrowers helped out of default, and the number of homeowners the program has helped or is projected to help remain in their homes. However, to date, Treasury has not completed its strategy for preserving homeownership.
Among the strategies for loan modification that Treasury is considering is a proposal by FDIC that is based on its experiences with loans held by a bank that was recently put in FDIC conservatorship. The former IndyMac Bank, F.S.B., was closed July 11, 2008, and FDIC was appointed the conservator for the new institution, IndyMac Federal Bank, F.S.B. As a result, FDIC inherited responsibility for servicing a pool of approximately 653,000 first-lien mortgage loans, including more than 60,000 mortgage loans that were more than 60 days past due, in bankruptcy, in foreclosure, and otherwise not currently paying. On August 20, 2008, the FDIC announced a program to systematically modify troubled residential loans for borrowers with mortgages owned or serviced by IndyMac Federal. According to FDIC, the program modifies eligible delinquent mortgages to achieve affordable and sustainable payments using interest rate reductions, extended amortization, and where necessary, deferring a portion of the principal. FDIC has stated that by modifying the loans to an affordable debt-to-income ratio (38 percent at the time) and using a menu of options to lower borrowers' payments for the life of their loan, the program improves the value of the troubled mortgages while achieving economies of scale for servicers and stability for borrowers. According to FDIC, as of November 21, 2008, IndyMac Federal has mailed more than 23,000 loan modification proposals to borrowers and over 5,000 borrowers have accepted the offers and are making payments on modified mortgages. FDIC states that monthly payments on these modified mortgages are, on average, 23 percent or approximately $380 lower than the borrower's previous monthly payment of principal and interest. According to FDIC, a federal loss sharing guarantee on re-defaults of modified mortgages under TARP could prevent as many as 1.5 million avoidable foreclosures by the end of 2009. FDIC estimated that such a program, including a lower debt- to-income ratio of 31 percent and a sharing of losses in the event of a re- default, would cost about $24.4 billion on an estimated $444 billion of modified loans, based on an assumed re-default rate of 33 percent. We have not had an opportunity to independently analyze these estimates and assumptions. Other similar programs under review, according to Treasury, include strategies to guarantee loan modifications by private lenders, such as the HOPE for Homeowners program. Under this new FHA program, lenders can have loans in their portfolio refinanced into FHA-insured loans with fixed interest rates. HERA had limited the new insured mortgages to no more than 90 percent of the property's current appraised value. However, on November 19, 2008, after action by the congressionally created Board of Directors of the HOPE for Homeowners program, HUD announced that the program had been revised to, among other things, increase the maximum amount of the new insured mortgages in certain circumstances. Specifically, the new insured mortgages cannot exceed 96.5 percent of the current appraised value for borrowers whose mortgage payments represent no more than 31 percent of their monthly gross income and monthly household debt payments no more than 43 percent of monthly gross income. Alternatively, the new mortgage may be set at 90 percent of the current appraised value for borrowers with monthly mortgage and household debt-to-income ratios as high as 38 and 50 percent, respectively. 
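The revised loan-to-value and debt-to-income thresholds lend themselves to a simple calculation. The following minimal sketch uses hypothetical borrower figures (the thresholds are taken from the program description above; the appraised value, existing balance, and debt ratios are assumed for illustration) to show how the maximum new FHA-insured mortgage, and the write-down required of the existing lender, would be determined.

```python
# Illustrative sketch of the revised HOPE for Homeowners loan limits
# described above; the borrower figures below are hypothetical.
def max_new_mortgage(appraised_value, mortgage_dti, household_dti):
    """Return the maximum new FHA-insured loan, or None if the
    borrower's ratios exceed both sets of thresholds."""
    if mortgage_dti <= 0.31 and household_dti <= 0.43:
        return 0.965 * appraised_value   # 96.5 percent of current appraised value
    if mortgage_dti <= 0.38 and household_dti <= 0.50:
        return 0.90 * appraised_value    # 90 percent of current appraised value
    return None

# Hypothetical case: $250,000 existing balance on a home now appraised at $200,000,
# with a 36 percent mortgage payment ratio and 48 percent total household debt ratio.
existing_balance = 250_000
appraised_value = 200_000
new_loan = max_new_mortgage(appraised_value, mortgage_dti=0.36, household_dti=0.48)
if new_loan is not None:
    required_write_down = existing_balance - new_loan
    print(f"New insured mortgage: ${new_loan:,.0f}")
    print(f"Lender write-down:    ${required_write_down:,.0f}")
```

In this assumed case the borrower falls under the 90 percent limit, so the new insured mortgage is $180,000 and the lender must write down $70,000 of the existing balance.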
These loan-to-value ratio maximums mean that in many circumstances the amount of the restructured loan would be less than the original loan amount and, therefore, would require lenders to write down the existing mortgage amounts. According to FHA, lenders benefit by turning failing mortgages into performing loans. Borrowers must also share a portion of the equity resulting from the new mortgage and the value of future appreciation. This program first became available October 1, 2008. FHA has listed on the program's Web site over 200 lenders that, as of November 25, 2008, have indicated to FHA an interest in refinancing loans under the HOPE for Homeowners program. See the appendix to this statement for examples of federal government and private sector residential mortgage loan modification programs. Treasury is also considering policy actions that might be taken under CPP to encourage participating institutions to modify mortgages at risk of default, according to an OFS official. While not technically part of CPP, Treasury announced on November 23, 2008, that it will invest an additional $20 billion in Citigroup from TARP in exchange for preferred stock with an 8 percent dividend to the Treasury. In addition, Treasury and FDIC will provide protection against unusually large losses on a pool of loans and securities on the books of Citigroup. The Federal Reserve will backstop residual risk in the asset pool through a non-recourse loan. The agreement requires Citigroup to absorb the first $29 billion in losses. Subsequent losses are shared between the government (90 percent) and Citigroup (10 percent). As part of the agreement, Citigroup will be required to use FDIC loan modification procedures to manage guaranteed assets unless otherwise agreed. Although any program for modifying loans faces a number of challenges, particularly when the loans or the cash flows related to them have been bundled into securities that are sold to investors, foreclosures not only affect those losing their homes but also their neighborhoods and have contributed to increased volatility in the financial markets. Some of the challenges that loan modification programs face include making transparent to investors the analysis supporting the value of modification over foreclosure, designing the program to limit the likelihood of re-default, and ensuring that the program does not encourage borrowers who otherwise would not default to fall behind on their mortgage payments. Additionally, there are a number of potential obstacles that may need to be addressed in performing large-scale modification of loans supporting a mortgage-backed security. As noted previously, the pooling and servicing agreements may preclude the servicer from making any modifications of the underlying mortgages without approval by the investors. In addition, many homeowners may have second liens on their homes that may be controlled by a different loan servicer, potentially complicating loan modification efforts. Treasury also points to challenges in financing any new proposal. The Secretary of the Treasury, for example, noted that it was important to distinguish the type of assistance, which could involve direct spending, from the type of investments that are intended to promote financial stability, protect the taxpayer, and be recovered under the TARP legislation.
However, he recently reaffirmed that maximizing loan modifications was a key part of working through the housing correction and maintaining the quality of communities across the nation. Treasury, however, has not specified how it intends to meet its commitment to loan modification. We will continue to monitor Treasury's efforts as part of our ongoing TARP oversight responsibilities. Going forward, the federal government faces significant challenges in effectively deploying its resources and using its tools to bring greater stability to financial markets, preserve homeownership, and protect home values for millions of Americans. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time.

Examples of the federal government and private sector residential mortgage loan modification programs described in the appendix to this statement include the following:

A program for loans owned or serviced by IndyMac Federal Bank. Eligible borrowers are those with loans owned or serviced by IndyMac Federal Bank. An affordable mortgage payment is achieved for the seriously delinquent or in-default borrower through interest rate reduction, amortization term extension, and/or principal forbearance. The payment must be no more than 38 percent of the borrower's monthly gross income. Losses to the investor are minimized through a net present value test that confirms that the modification will cost the investor less than foreclosure.

The FHA HOPE for Homeowners program. Borrowers can refinance into an affordable loan insured by FHA. Eligible borrowers are those who, among other factors, as of March 2008, had total monthly mortgage payments due of more than 31 percent of their gross monthly income. New insured mortgages cannot exceed a 96.5 percent loan-to-value ratio (LTV) for borrowers whose mortgage payments do not exceed 31 percent of their monthly gross income and whose total household debt does not exceed 43 percent; alternatively, the program allows for a 90 percent LTV for borrowers with debt-to-income ratios as high as 38 percent (mortgage payment) and 50 percent (total household debt). The program requires lenders to write down the existing mortgage amounts to either of the two LTV limits.

A streamlined program under which servicers can modify existing loans into a Fannie Mae or Freddie Mac loan, or a portfolio loan with a participating investor. Eligible borrowers are those who, among other factors, have missed three payments or more. An affordable mortgage payment, of no more than 38 percent of the borrower's monthly gross income, is achieved for the borrower through a mix of reducing the mortgage interest rate, extending the life of the loan, or deferring payment on part of the principal.

A program for mortgages serviced by Countrywide. Eligible borrowers are those with subprime or pay option adjustable-rate mortgages serviced by Countrywide and originated by Countrywide prior to December 31, 2007. Options for modification include refinance under the FHA HOPE for Homeowners program, interest rate reductions, and principal reduction for pay option adjustable-rate mortgages. First-year mortgage payments will be targeted at 34 percent of the borrower's income, but may go as high as 42 percent. Annual principal and interest payments will increase at limited step-rate adjustments.

A program under which an affordable mortgage payment is achieved for the borrower at risk of default through interest rate reduction and/or principal forbearance. Modification may also include modifying pay-option ARMs to 30-year, fixed-rate loans or interest-only payments for 10 years. Modification includes flexible eligibility criteria on origination dates, loan-to-value ratios, rate floors, and step-up adjustment features. This program was created in consultation with Fannie Mae, Freddie Mac, HOPE NOW and its twenty-seven servicer partners, the
For further information about this statement, please contact Mathew J. Scire, Director, Financial Markets and Community Investment, on (202) 512-8678 or [email protected]. In addition to the contact named above, the following individuals from GAO's Financial Markets and Community Investment Team also made major contributions to this testimony: Harry Medina and Steve Westley, Assistant Directors; Jamila Jones and Julie Trinder, Analysts-in-Charge; Jim Vitarello, Senior Analyst; Rachel DeMarcus, Assistant General Counsel; and Emily Chalmers and Jennifer Schwartz, Communications Analysts. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A dramatic increase in mortgage loan defaults and foreclosures is one of the key contributing factors to the current downturn in the U.S. financial markets and economy. In response, Congress passed and the President signed in July the Housing and Economic Recovery Act of 2008 and in October the Emergency Economic Stabilization Act of 2008 (EESA), which established the Office of Financial Stability (OFS) within the Department of the Treasury and authorized the Troubled Asset Relief Program (TARP). Both acts establish new authorities to preserve homeownership. In addition, the administration, independent financial regulators, and others have undertaken a number of recent efforts to preserve homeownership. GAO was asked to update its 2007 report on default and foreclosure trends for home mortgages, and describe the OFS's efforts to preserve homeownership. GAO analyzed quarterly default and foreclosure data from the Mortgage Bankers Association for the period 1979 through the second quarter of 2008 (the most recent quarter for which data were available). GAO also relied on work performed as part of its mandated review of Treasury's implementation of TARP, which included obtaining and reviewing information from Treasury, federal agencies, and other organizations (including selected banks) on home ownership preservation efforts. To access GAO's first oversight report on Treasury's implementation of TARP, see GAO-09-161 . Default and foreclosure rates for home mortgages rose sharply from the second quarter of 2005 through the second quarter of 2008, reaching a point at which more than 4 in every 100 mortgages were in the foreclosure process or were 90 or more days past due. These levels are the highest reported in the 29 years since the Mortgage Bankers Association began keeping complete records and are based on its latest available data. The subprime market, which consists of loans to borrowers who generally have blemished credit and that feature higher interest rates and fees, experienced substantially steeper increases in default and foreclosure rates than the prime or government-insured markets, accounting for over half of the overall increase. In the prime and subprime market segments, adjustable-rate mortgages experienced steeper growth in default and foreclosure rates than fixed-rate mortgages. Every state in the nation experienced growth in the rate at which loans entered the foreclosure process from the second quarter of 2005 through the second quarter of 2008. The rate rose at least 10 percent in every state over the 3-year period, but 23 states experienced an increase of 100 percent or more. Several states in the "Sun Belt" region, including Arizona, California, Florida, and Nevada, had among the highest percentage increases. OFS initially intended to purchase troubled mortgages and mortgage-related assets and use its ownership position to influence loan servicers and to achieve more aggressive mortgage modification standards. However, within two weeks of EESA's passage, Treasury determined it needed to move more quickly to stabilize financial markets and announced it would use $250 billion of TARP funds to inject capital directly into qualified financial institutions by purchasing equity. In recitals to the standard agreement with Treasury, institutions receiving capital injections state that they will work diligently under existing programs to modify the terms of residential mortgages. 
It remains unclear, however, how OFS and the banking regulators will monitor how these institutions are using the capital injections to advance the purposes of the act, including preserving homeownership. As part of its first TARP oversight report, GAO recommended that Treasury, among other things, work with the bank regulators to establish a systematic means for determining and reporting on whether financial institutions' activities are generally consistent with program goals. Treasury also established an Office of Homeownership Preservation within OFS that is reviewing various options for helping homeowners, such as insuring troubled mortgage-related assets or adopting programs based on the loan modification efforts of FDIC and others, but it is still working on its strategy for preserving homeownership. While Treasury and others will face a number of challenges in undertaking loan modifications, including making transparent to investors the analysis supporting the value of modification versus foreclosure, rising defaults and foreclosures on home mortgages underscore the importance of ongoing and future efforts to preserve homeownership. GAO will continue to monitor Treasury's efforts as part of its mandated TARP oversight responsibilities.
Medicare covers up to 100 days of care in a SNF after a beneficiary has been hospitalized for at least 3 days. To qualify for the benefit, the patient must need skilled nursing or therapy on a daily basis. For the first 20 days of SNF care, Medicare pays all the costs, and for the 21st through the 100th day, the beneficiary is responsible for daily coinsurance of $95 in 1997. To qualify for home health care, a beneficiary must be confined to his or her residence ("homebound"); require part-time or intermittent skilled nursing, physical therapy, or speech therapy; be under the care of a physician; and have the services furnished under a plan of care prescribed and periodically reviewed by a physician. If these conditions are met, Medicare will pay for skilled nursing; physical, occupational, and speech therapy; medical social services; and home health aide visits. Beneficiaries are not liable for any coinsurance or deductibles for these home health services, and there is no limit on the number of visits for which Medicare will pay. Under Medicare's cost reimbursement system, home health cost limits are set for each type of visit (skilled nursing, physical therapy, and so on) but are applied in the aggregate; that is, an agency's costs over the limit for one type of visit can be offset by costs below the limit for another. Both SNF and home health cost limits are adjusted for differences in wage levels across geographic areas. Also, exemptions from and exceptions to the cost limits are available to SNFs and home health agencies that meet certain conditions. While the cost-limit provisions of Medicare's cost reimbursement system for SNFs and home health agencies give some incentives for providers to control the affected costs, these incentives are considered by health financing experts to be relatively weak, especially for providers with costs considerably below their limit. On the other hand, it is generally agreed that prospective payment systems (PPS) give providers increased cost-control incentives. The administration proposes establishing PPSs for SNF and home health care and estimates that Medicare savings exceeding $10 billion would result over the next 5 fiscal years. The Medicare SNF and home health benefits are two of the fastest growing components of Medicare spending. From 1989 to 1996, Medicare part A SNF expenditures increased over 300 percent from $2.8 billion to $11.3 billion. During the same period, part A expenditures for home health increased from $2.4 billion to $17.7 billion--an increase of over 600 percent. SNF and home health payments currently represent 8.6 percent and 13.5 percent of part A Medicare expenditures, respectively. At Medicare's inception in 1966, the home health benefit under part A provided limited posthospital care of up to 100 visits per year after a hospitalization of at least 3 days. In addition, the services could only be provided within 1 year after the patient's discharge and had to be for the same illness. Part B coverage of home health was limited to 100 visits per year. These restrictions under part A and part B were eliminated by the Omnibus Reconciliation Act of 1980 (ORA, P.L. 96-499), but little immediate effect on Medicare costs occurred. Subsequent coverage policy changes, made by HCFA in response to court decisions, revised the guidelines for the SNF and home health benefits in ways that had the effect of liberalizing coverage criteria, thereby making it easier for beneficiaries to obtain SNF and home health coverage.
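To illustrate the SNF cost-sharing rules described above, the following sketch computes a beneficiary's coinsurance liability for covered stays of various lengths. The stay lengths are hypothetical examples; the $95 daily coinsurance is the 1997 figure cited earlier.

```python
"""Beneficiary coinsurance for a covered Medicare SNF stay, using the benefit
rules cited above: days 1-20 are fully paid by Medicare, days 21-100 are
subject to daily coinsurance ($95 in 1997), and coverage ends after day 100.
The stay lengths below are hypothetical examples."""


def snf_coinsurance(covered_days, daily_coinsurance=95):
    """Return the beneficiary's total coinsurance for a covered SNF stay."""
    coinsurance_days = max(0, min(covered_days, 100) - 20)
    return coinsurance_days * daily_coinsurance


for days in (15, 60, 100):
    print(f"{days:>3}-day covered stay: beneficiary owes ${snf_coinsurance(days):,}")
```

For example, a 60-day covered stay would leave the beneficiary responsible for 40 days of coinsurance, or $3,800 at the 1997 rate.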
Additionally, these coverage policy changes prevent HCFA's claims processing contractors from denying physician-ordered SNF or home health services unless the contractors can supply specific clinical evidence that indicates which particular services should not be covered. The combination of these legislative and coverage policy changes has had a dramatic effect on utilization of these two benefits in the 1990s, both in terms of the number of beneficiaries receiving services and in the extent of these services. (App. I contains figures that show growth in SNF and home health expenditures in relation to the legislative and policy changes.) For example, ORA 1980 and HCFA's 1989 home health guideline changes have essentially transformed the home health benefit from one focused on patients needing short-term care after hospitalization to one that serves chronic, long-term care patients as well. The number of beneficiaries receiving home health care more than doubled, from 1.7 million in 1989 to about 3.9 million in 1996. During the same period, the average number of visits to home health beneficiaries also more than doubled, from 27 to 72. In a recent report on home health, we found that from 1989 to 1993, the proportion of home health users receiving more than 30 visits increased from 24 percent to 43 percent and those receiving more than 90 visits tripled, from 6 percent to 18 percent, indicating that the program is serving a larger proportion of longer-term patients. Moreover, about a third of beneficiaries receiving home health care did not have a prior hospitalization, another possible indication that chronic care is being provided. SNFs also have a financial incentive to increase the provision of ancillary services because little review of their use is done by Medicare. Moreover, SNFs can cite high ancillary service use to justify an exception to routine service cost limits, thereby increasing routine service payments. Between 1990 and 1996, the number of hospital-based SNFs increased over 80 percent, from 1,145 such facilities to 2,088. Hospitals can benefit from establishing a SNF unit in a number of ways. Hospitals receive a set fee for a patient's entire hospital stay, based on a patient's diagnosis related group (DRG). Therefore, the quicker that hospitals discharge a patient to a SNF, the lower that patient's inpatient hospital care costs are. We found that in 1994, patients with any of 12 DRGs commonly associated with posthospital SNF use had 4 to 21 percent shorter stays in hospitals with SNF units than patients with the same DRGs in hospitals without SNF units. Additionally, by owning a SNF, hospitals can increase their Medicare revenues through receipt of the full DRG payment for patients with shorter lengths of stay and a cost-based payment after the patients are transferred to the SNF. Rapid growth in SNF and home health expenditures has been accompanied by decreased, rather than increased, funding for program safeguard activities. For example, our March 1996 report found that part A contractor funding for medical review had decreased by almost 50 percent between 1989 and 1995. As a result, while contractors had reviewed over 60 percent of home health claims in fiscal year 1987, their review target had been lowered by 1995 to 3.2 percent of all claims (or even, depending on available resources, to a required minimum of 1 percent).
We found that a lack of adequate controls over the home health program, such as little intermediary medical review and limited physician involvement, makes it nearly impossible to know whether the beneficiary receiving home care qualifies for the benefit, needs the care being delivered, or even receives the services being billed to Medicare. Also, because of the small percentage of claims now selected for review, home health agencies that bill for noncovered services are less likely to be identified than was the case 10 years ago. Similarly, the low level of review of SNF services makes it difficult to know whether the recent increase in ancillary use is medically necessary (for example, because patient mix has shifted toward those who need more services) or simply a way for SNFs to get more revenues. Finally, because relatively few resources are available for auditing end-of-year provider cost reports, HCFA has little ability to identify whether home health agencies or SNFs are charging Medicare for costs unrelated to patient care or other unallowable costs. Because of the lack of adequate program controls, it is quite possible that some of the recent increase in home health and SNF expenditures stems from abusive practices. The Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191), also known as the Kassebaum-Kennedy Act, has increased funding for program safeguards. However, per-claim expenditures will remain below the level in 1989, after adjusting for inflation. We project that, in 2003, payment safeguard spending as authorized by Kassebaum-Kennedy will be just over one-half of the 1989 per-claim level, after adjusting for inflation. The goal in designing a PPS is to ensure that providers have incentives to control costs and that, at the same time, payments are adequate for efficient providers to furnish needed services and at least recover their costs. If payments are set too high, Medicare will not save money and cost-control incentives can be weak. If payments are set too low, access to and quality of care can suffer. In designing a PPS, selection of the unit of service for payment purposes is important because the unit used has a strong effect on the incentives providers have for the quantity and quality of services they provide. Taking account of the varying needs of patients for different types of services--routine, ancillary, or all--is also important. A third important factor is the reliability of the cost and utilization data used to compute rates. Good choices for unit of service and cost coverage can be overwhelmed by bad data. We understand that the administration will propose a SNF PPS that would pay per diem rates covering all facility cost types and that payments would be adjusted for differences in patient case mix. Such a system is expected to be similar to HCFA's ongoing SNF PPS demonstration project that is testing the use of per diem rates adjusted for resource need differences using the Resource Utilization Group, version III (RUG-III) patient classification system. This project was recently expanded to include coverage of ancillary costs in the prospective payment rates. An alternative to the proposal's choice of a day of care as the unit of service is an episode of care--the entire period of SNF care covered by Medicare. While substantial variation exists in the amount of resources needed to treat beneficiaries with the same conditions when viewed from the day-of-care perspective, even more variation exists at the episode-of-care level.
Resource needs are less predictable for episodes of care. Moreover, payment on an episode basis may result in some SNFs inappropriately reducing the number of covered days. Both factors make a day of care the better candidate for a PPS unit of service. Furthermore, the likely patient classification system, RUG-III, is designed for and being tested in a per diem PPS. On the other hand, a day-of-care unit gives few, if any, incentives to control length of stay, so a review process for this purpose would still be needed. The states and HCFA have a lot of experience with per diem payment methods for nursing homes under the Medicaid program, primarily for routine costs but also, in some cases, for total costs. This experience should prove useful in designing a per diem Medicare PPS. Regarding the types of costs covered by PPS rates, a major contributor to Medicare's SNF cost growth has been the increased use of ancillary services, particularly therapy services. This, in turn, means that it is important to give SNFs incentives to control ancillary costs, and including them under PPS is a way to do so. However, adding ancillary costs does increase the variability of costs across patients and place additional importance on the case-mix adjuster to ensure reasonable and adequate rates. It would also be prudent for HCFA to conduct thorough audits of a projectable sample of SNF cost reports. The results could then be used to adjust cost report databases to remove the influence of unallowable costs, which would help ensure that inflated costs are not used as the base for PPS rate setting. The summary of the administration's proposal for a home health PPS is very general, saying only that a PPS for an appropriate unit of service would be established in 1999 using budget-neutral rates calculated after reducing expenditures by 15 percent. HCFA estimates that this reduction will result in savings of $4.7 billion over fiscal years 1999 through 2002. The choice of the unit of service is crucial, and there is limited understanding of the need for and content of home health services to guide that choice. Choosing either a visit or an episode as the unit of service would have implications for both cost control and quality of care, depending on the response of home health agencies. For example, if the unit of service is a visit, agencies could profit by shortening the length of visits. At the same time, agencies could attempt to increase the number of visits, with the net result being higher total costs for Medicare, making the per-visit choice less attractive. If the unit of service is an episode of care over a period of time such as 30 or 100 days, agencies could gain by reducing the number of visits during that period, potentially lowering quality of care. For these reasons, HCFA needs to devise methods to ensure that whatever unit of service is chosen will not lead to increased costs or lower quality of care. If an episode of care is chosen as the unit of service, HCFA would need a method to ensure that beneficiaries receive adequate services and that any reduction in services that can be accounted for by past overprovision of care does not result in windfall profits for agencies. In addition, HCFA would need to be vigilant to ensure that patients meet coverage requirements, because agencies would be rewarded for increasing their caseloads. HCFA is currently testing various PPS methods and patient classification systems for possible use with home health care, and the results of these efforts may shed light on the unit-of-service question.
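To make the mechanics of a case-mix-adjusted per diem payment of the kind discussed above concrete, the following sketch computes a payment for a hypothetical SNF stay. The base rate, labor share, wage index, and RUG-style case-mix weights are illustrative assumptions only; they are not figures from HCFA's demonstration project or the administration's proposal.

```python
"""Illustrative per diem prospective payment with case-mix and wage adjustment.

The structure (base rate x case-mix weight x area wage adjustment x covered
days) mirrors the kind of per diem PPS discussed above; the base rate, the
RUG-style group weights, the labor share, and the wage index are hypothetical.
"""

# Hypothetical case-mix weights by resource utilization group.
CASE_MIX_WEIGHTS = {
    "rehabilitation_high": 1.60,
    "clinically_complex": 1.15,
    "reduced_physical_function": 0.85,
}

BASE_RATE_PER_DAY = 250.00   # hypothetical federal base rate
LABOR_SHARE = 0.75           # hypothetical labor-related share of the rate


def per_diem_payment(rug_group, area_wage_index, covered_days):
    """Return the total PPS payment for a stay of covered_days."""
    weight = CASE_MIX_WEIGHTS[rug_group]
    labor = BASE_RATE_PER_DAY * LABOR_SHARE * area_wage_index
    nonlabor = BASE_RATE_PER_DAY * (1 - LABOR_SHARE)
    daily_rate = (labor + nonlabor) * weight
    return daily_rate * covered_days


# Hypothetical 30-day stay for a clinically complex patient in a high-wage area.
print(f"${per_diem_payment('clinically_complex', area_wage_index=1.10, covered_days=30):,.2f}")
```

Because the facility receives the same adjusted daily rate regardless of its actual costs, such a design rewards cost control for each day of care; the unresolved question discussed above is whether the case-mix weights and underlying cost data are accurate enough to keep the rates both reasonable and adequate.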
We have the same concerns about the quality of HCFA's home health care cost report databases for PPS rate-setting purposes as we do for the SNF database. Again, we believe that adjusting the home health databases, using the results of thorough cost report audits of a projectable sample of agencies, would be wise. We are also concerned about the appropriateness of using current Medicare data on visit rates to determine payments under a PPS for episodes of care. As we reported in March 1996, controls over the use of home health care are virtually nonexistent. Operation Restore Trust, a joint effort by federal and state agencies in several states to identify fraud and abuse in Medicare and Medicaid, found very high rates of noncompliance with Medicare's coverage conditions in targeted agencies. For example, in a sample of 740 beneficiaries drawn from 43 home health agencies in Texas and 31 in Louisiana that were selected because of potential problems, some or all of the services received by 39 percent of the beneficiaries were denied. About 70 percent of the denials were because the beneficiary did not meet the homebound definition. Although these are results from agencies suspected of having problems, they illustrate that substantial amounts of noncovered care are likely to be reflected in HCFA's home health care utilization data. For these reasons, it would also be prudent for HCFA to conduct thorough on-site medical reviews of a projectable sample of agencies to give it a basis to adjust utilization rates for purposes of establishing a PPS. The administration has also announced that it will propose requiring SNFs to bill Medicare for all services provided to their beneficiary residents except for physician and some practitioner services. We support this proposal as we did in a September 1995 letter to you, Mr. Chairman. We and the HHS Inspector General have reported on problems, such as overutilization of supplies, that can arise when suppliers bill separately for services for SNF residents. A consolidated billing requirement would make it easier for Medicare to identify all the services furnished to residents, which in turn would make it easier to control payments for those services. The requirement would also help prevent duplicate billings for supplies and services and billings for services not actually furnished by suppliers. In effect, outside suppliers would have to make arrangements with SNFs under such a provision so that nursing homes would bill for suppliers' services and would be financially liable and medically responsible for the care. As more details of these proposals become available, we will be glad to work with the Subcommittee and others to help sort out the potential implications of suggested revisions. This concludes my prepared remarks, and I will be happy to answer any questions. For more information on this testimony, please call William Scanlon on (202) 512-7114 or Thomas Dowdal, Senior Assistant Director, on (202) 512-6588. Patricia Davis also contributed to this statement.
GAO discussed Medicare's skilled nursing facility (SNF) and home health care benefits and the administration's forthcoming legislative proposals related to them. GAO noted that: (1) Medicare's SNF costs have grown primarily because a larger portion of beneficiaries use SNFs than in the past and because of a large increase in the provision of ancillary services; (2) for home health care costs, both the number of beneficiaries and the number of services used by each beneficiary have more than doubled; (3) a combination of factors led to the increased use of both benefits: (a) legislation and coverage policy changes in response to court decisions liberalized coverage criteria for the benefits, enabling more beneficiaries to qualify for care; (b) these changes also transformed the nature of home health care from primarily posthospital care to more long-term care for chronic conditions; (c) earlier discharges from hospitals led to the substitution of days spent in SNFs for what in the past would have been the last few days of hospital care, and increased use of ancillary services, such as physical therapy, in SNFs; and (d) a diminution of administrative controls over the benefits, resulting at least in part from fewer resources being available for such controls, reduced the likelihood of inappropriately submitted claims being denied; (4) the major proposals by the administration for both SNFs and home health care are designed to give the providers of these services increased incentives to operate efficiently by moving them from a cost reimbursement to a prospective payment system; (5) however, what remains unclear about these proposals is whether an appropriate unit of service can be defined for calculating prospective payments and whether the Health Care Financing Administration's databases are adequate for it to set reasonable rates; (6) the administration is also proposing that SNFs be required to bill for all services provided to their Medicare residents rather than allowing outside suppliers to bill; and (7) this latter proposal has merit, because it would make control over the use of ancillary services significantly easier.
Medicare falls within the administrative jurisdiction of the Health Care Financing Administration (HCFA) of the Department of Health and Human Services (HHS). HCFA establishes regulations and guidance for the program and contracts with about 72 private companies--such as Blue Cross and Aetna--to handle claims screening and processing and to audit providers. Each of these commercial contractors works with its local medical community to set coverage policies and payment controls. As a result, billing problems are handled, for the most part, by contractors, and they are the primary referral parties to law enforcement agencies for suspected fraud. Medicare's basic nursing home benefit covers up to 100 days of certain posthospital stays in a skilled nursing facility. Skilled nursing facilities submit bills for which they receive interim payment; final payments are based on costs within a cost-limit cap. This benefit is paid under part A, Hospital Insurance, which also pays for hospital stays and care provided by home health agencies and hospices. Even if Medicare beneficiaries do not meet the conditions for Medicare coverage of a skilled nursing facility stay, they are still eligible for the full range of part B benefits. Although Medicaid or the resident may be paying for the nursing home, Medicare will pay for ancillary services and items such as physical and other types of therapy, prosthetics, and surgical dressings. Part B is a voluntary part of the Medicare program that beneficiaries may elect and for which they pay monthly premiums. Part B also pays for physician care and diagnostic testing. About 6 million people have both Medicare and Medicaid coverage, and, of these, over 4.8 million represent state "buy-ins" for Medicare coverage. Dually eligible beneficiaries are among the most vulnerable Medicare beneficiaries. They are generally poor, have a greater incidence of serious and chronic conditions, and are much more likely to be institutionalized. In fact, about 1.4 million reside in institutions, while only 600,000 of the approximately 31 million Medicare beneficiaries without Medicaid coverage are in institutions. Over half of all dually eligible patients over 85 reside in nursing facilities. When a copayment is required, a Medicare beneficiary, or a representative designated by the beneficiary, receives an "Explanation of Medicare Benefits" (EOMB), which specifies the services billed on behalf of the individual. The EOMB is an important document because beneficiaries and their families can use it to verify that the services were actually performed. The dually eligible population, however, often does not have a representative in the community to receive and review this document. In fact, many nursing home patients actually have the nursing home itself receive the EOMBs on their behalf. In 1996, Medicare spent $11.3 billion on skilled nursing facility benefits and an undetermined amount on part B ancillary services and items. The providers of these services and items can bill Medicare in a variety of ways. With this variety comes the opportunity to blur the transactions that actually took place and inflate charges for services rendered. Ancillary services and items for Medicare beneficiaries in nursing facilities can be provided by the nursing facility itself, a company wholly or partially owned by the nursing facility, or an independent supplier or practitioner.
Our work has shown that independent providers and suppliers can bill Medicare directly for services or supplies without the knowledge of the beneficiary or the facility, and that companies providing therapy are able to inflate their billings. Nursing facilities often do not have the in-house capability to provide all the services and supplies that patients need. Accordingly, outside providers market their services and supplies to nursing facilities to meet the needs of the facilities' patients. HCFA's reimbursement system allows these providers to bill Medicare directly without confirmation from the nursing facility or a physician that the care or items were necessary or delivered as claimed. As a result, the program is vulnerable to exploitation. In the cases we reviewed, providers or their representatives gained access to patient records not because they had any responsibility for the direct care of these patients, but solely to market their services or supplies. From these records, unscrupulous providers can obtain all the information necessary to order, bill, and be reimbursed by Medicare for services and supplies that are in many instances not necessary or even provided. In 1996, we reported the following examples: A group optometric practice performed routine eye examinations on nursing facility patients, a service not covered by Medicare. The optometrist was always preceded by a salesperson who targeted the nursing facility's director of nursing or its social worker and claimed the group was offering eye examinations at no cost to the facility or the patient. The nursing facility gave the salesperson access to patients' records, and this person then obtained the information necessary to file claims. Nursing staff would obtain physicians' orders for the "free" examinations, and an optometrist would later arrive to conduct the examinations. The billings to Medicare, however, were for services other than eye examinations--services that were never furnished or were unnecessary. The owner of a medical supply company approached nursing facility administrators in several states and offered to provide supplies for Medicare patients at no cost to the facility. After reviewing nursing facility records, this company identified Medicare beneficiaries, obtained their Medicare numbers, developed lists of supplies on the basis of diagnoses, identified attending physicians, and made copies of signed physician orders in the files. The supplier then billed Medicare for items it actually delivered but also submitted 4,000 fraudulent claims for items never delivered. As part of the 1994 judgment, the owner forfeited $328,000 and was imprisoned and ordered to make restitution of $971,000 to Medicare and $60,000 to Medicaid. A supplier obtained a list of Medicare patients and their Medicare numbers from another supplier who had access to this information. The first supplier billed Medicare for large quantities of supplies that were never provided to these patients, and both suppliers shared in the approximately $814,000 in reimbursements. We found that nursing home staff's giving providers or their representatives inappropriate access to patient medical records was a major contributing cause of the fraud and abuse cases we reviewed. Many nursing facilities rely on specialized rehabilitation agencies--also termed outpatient therapy agencies--to provide therapy services. These agencies can be multilayered, interconnected organizations--each layer adding costs to the basic therapy charge--that use outside billing services, which can also add to the cost.
In those situations in which the nursing facility contracts and pays for occupational and speech therapy services for a Medicare-eligible stay, Medicare might pay the nursing facility what it was charged because of the limited amount of review conducted by claims processing contractors. In practice, however, because of the difficulty in determining what are reasonable costs and the limited resources available for auditing provider cost reports, there is little assurance that inflated charges are not actually being billed and paid. Until recently, HCFA had not established salary guidelines, which are needed to define reasonable costs for occupational or speech therapy. Without such benchmarks, it is difficult for Medicare contractors to judge whether therapy providers overstate their costs. Even for physical therapy, for which salary guidelines do exist, the Medicare-established limits do not apply if the therapy company bills Medicare directly. This is why Medicare has been charged $150 for 15 minutes of therapy when surveys show that average statewide salaries for therapists employed by hospitals and nursing facilities range from $12 to $25 per hour. Our analysis of a sample drawn from a survey of five contractors found that over half of the claims they received for occupational and speech therapy from 1988 to 1993 exceeded $172 in charges per service. Assuming this was the charge for 15 minutes of treatment--which industry representatives described as the standard billing unit--the hourly rate charged for these claims would have been more than $688. It should be noted that neither HCFA nor its contractors could accurately tell us what Medicare actually paid the providers in response to these claims. The amount Medicare actually pays is not known until long after the service is rendered and the claim processed. Although aggregate payments are eventually determinable, existing databases do not provide actual payment data for any individual claim. HCFA pays contractors to process claims and to identify and investigate potentially fraudulent or abusive claims. We have long been critical of the unstable funding support HCFA's contractors have to carry out these program integrity activities. We recently reported that funding for Medicare contractor program safeguard activities declined from 74 cents to 48 cents per claim between 1989 and 1996. During that same period, the number of Medicare claims climbed 70 percent to 822 million. Such budgetary constraints have placed HCFA and its contractors in the untenable position of needing to review more claims with fewer resources. While Medicare contractors do employ a number of effective automated controls to prevent some inappropriate payments, such as suspending claims that do not meet certain conditions for payment for further review, our 1996 report on 70 fraud and abuse cases showed that atypical charges or very large reimbursements routinely escaped those controls and typically went unquestioned. The contractors we reviewed had not put any "triggers" in place that would halt payments when cumulative claims exceeded reasonable thresholds. Consequently, Medicare reimbursed providers, who were subsequently found guilty of fraud or billing abuses, large sums of money over a short period without the contractor's becoming suspicious. The following examples highlight the problem: A supplier submitted claims to a Medicare contractor for surgical dressings furnished to nursing facility patients. 
In the fourth quarter of 1992, the contractor paid the supplier $211,900 for surgical dressing claims. For the same quarter a year later, the contractor paid this same supplier more than $6 million without becoming suspicious, despite the 2,800-percent increase in the amount paid. A contractor paid claims for a supplier's body jackets that averaged about $2,300 per quarter for five consecutive quarters and then jumped to $32,000, $95,000, $235,000, and $889,000 over the next four quarters, with no questions asked. A contractor reimbursed a clinical psychology group practice for individual psychotherapy visits lasting 45 to 50 minutes when the top three billing psychologists in the group were allegedly seeing from 17 to 42 nursing facility patients per day. On many days, the leading biller of this group would have had to work more than 24 uninterrupted hours to provide the services he claimed. A contractor paid a podiatrist $143,580 for performing surgical procedures on at least 4,400 nursing facility patients during a 6-month period. For these services to be legitimate, the podiatrist would have had to serve at least 34 patients a day, 5 days a week. The Medicare contractors in these two cases did not become suspicious until they received complaints from family members, beneficiaries, or competing providers. The EOMB was critical in identifying the specific items and services being billed to Medicare. Although EOMBs have in the past only been required when the beneficiary had a deductible or copayment, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) now requires HCFA to provide an explanation of Medicare benefits for each item or service for which payment may be made, without regard to whether a deductible or coinsurance may be imposed. This provision is still of limited value, however, for nursing home residents who designate the nursing home to receive the EOMBs--which is more common for the dually eligible population. In other cases, contractors initiated their investigations because of their analyses of paid claims (a practice referred to as "postpayment medical review"), which focused on those providers that appeared to be billing more than their peers for specific procedures. One contractor, for instance, reimbursed a laboratory $2.7 million in 1991 and $8.2 million in 1992 for heart monitoring services allegedly provided to nursing facility patients. The contractor was first alerted in January 1993 through its postpayment review efforts when it noted that this laboratory's claims for monitoring services exceeded the norm for its peers. In all these cases, we believe the large increases in reimbursements over a short period or the improbable cumulative services claimed for a single day should have alerted the contractors to the possibility that something unusual was happening and prompted an earlier review. People do not usually work 20-hour days, and billings by a provider for a single procedure do not typically jump 13-fold from one quarter to the next or progressively double every quarter. Initiatives on various fronts are now under way to address fraud and abuse issues we have discussed here today. Several of these initiatives, however, are in their early stages, and it is too soon to assess whether they will, in fact, prevent fraud and abuse in the nursing facilities environment. Last year, we recommended that HCFA establish computerized prepayment controls that would suspend the most aberrant claims.
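To illustrate the kind of computerized prepayment control we have recommended, the following sketch flags quarters in which a provider's billings for a single procedure more than double from the prior quarter or exceed an absolute dollar cap. The doubling rule and the $100,000 cap are hypothetical screen parameters, not HCFA specifications; the billing history mirrors the body jacket example described above.

```python
"""Illustrative prepayment trigger that flags aberrant growth in a provider's
quarterly billings for a single procedure. The 2x growth rule and the $100,000
absolute cap are hypothetical screen parameters; the billing history is based
on the body jacket example cited above.
"""


def flag_aberrant_quarters(quarterly_payments, growth_multiple=2.0, absolute_cap=100_000):
    """Return the indexes of quarters whose payments should be held for review."""
    flagged = []
    for i, paid in enumerate(quarterly_payments):
        prior = quarterly_payments[i - 1] if i > 0 else None
        jumped = prior is not None and prior > 0 and paid > growth_multiple * prior
        if jumped or paid > absolute_cap:
            flagged.append(i)
    return flagged


# Quarterly payments for one supplier's body-jacket claims (dollars).
history = [2_300, 2_300, 2_300, 2_300, 2_300, 32_000, 95_000, 235_000, 889_000]

for quarter in flag_aberrant_quarters(history):
    print(f"Quarter {quarter + 1}: ${history[quarter]:,} exceeds the screen "
          "and would be suspended for manual review")
```

Applied to the payment history in the example above, even this simple screen would have suspended every quarter from the first large jump onward, rather than allowing payments to grow unquestioned.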
HCFA has since strengthened its instructions to its contractors, directing them to implement prepayment screens to prevent payment of billings for egregious amounts or patterns of medically unnecessary services or items. HCFA also authorized its contractors to deny automatically the entire line item for any services that exceed the egregious service limits. In regard to therapy services, after a lengthy administrative process, HCFA proposed salary guidelines last month for physical, occupational, speech, and respiratory therapists who furnish care to beneficiaries under a contractual arrangement with a skilled nursing facility. The administration estimates these changes will result in savings to Medicare of $1.7 billion between now and the year 2001, and $3.9 billion between now and the year 2006. The proposed rule would revise the current guideline amounts for physical and respiratory therapies and introduce, for the first time, guideline amounts for occupational therapy and speech/language pathology services. In March 1995, the Secretary of HHS launched Operation Restore Trust (ORT), a 2-year interagency, intergovernmental initiative to combat Medicare and Medicaid fraud and abuse. ORT targeted its resources on three health care areas susceptible to exploitation, including nursing facility care in five states (California, Florida, Illinois, New York, and Texas) with high Medicare and Medicaid enrollment and rapid growth in billed services. We and the HHS Inspector General have also reported on problems, such as overutilization of supplies, that can arise when suppliers bill separately for services for nursing home residents. A consolidated billing requirement would make it easier to control payments for these services and give nursing facilities the incentive to monitor them. The requirement would also help prevent duplicate billings and billings for services and items not actually provided. In effect, outside suppliers would have to make arrangements with skilled nursing facilities so that the facilities would bill for suppliers' services and would be financially liable and medically responsible for the care. HIPAA established the Medicare Integrity Program, which ensures that the program safeguard activities function is funded separately from other claims processing activities. HIPAA also included provisions on "administrative simplification." A lack of uniformity in data among the Medicare program, Medicaid state plans, and private health entities often makes it difficult to compare programs, measure the true effect of changes in health care financing, and coordinate payments for dually eligible patients. For example, HIPAA requires, for the first time, that each provider be given a unique provider number to be used in billing all insurers, including Medicare and Medicaid. The new provisions also require the Secretary of HHS to promulgate standards for all electronic health care transactions; the data sets used in those transactions; and unique identifiers for patients, employers, providers, insurers, and plans. These standards will be binding on all health care providers, insurers, plans, and clearinghouses. The multiple ways that providers and suppliers can bill for services to nursing home patients and the lax oversight of this process contribute to the vulnerability of payments for the health care of this population. As a result, excessive or fraudulent billings may go undetected.
We are encouraged, however, by the administration's recent proposal for consolidated billing, which we believe will put more responsibility on nursing home staff to oversee the services and items being billed on behalf of residents. As more details concerning these or other proposals become available, we will be glad to work with the Subcommittee and others to help sort out their potential implications. This concludes my prepared remarks. I will be happy to answer any questions. For more information on this testimony, please call Leslie G. Aronovitz on (312) 220-7600 or Donald B. Hunter on (617) 565-7464. Lisanne Bradley also contributed to this statement.

Related GAO Products

Medicare Post-Acute Care: Home Health and Skilled Nursing Facility Cost Growth and Proposals for Prospective Payment (GAO/T-HEHS-97-90, Mar. 4, 1997).
Skilled Nursing Facilities: Approval Process for Certain Services May Result in Higher Medical Costs (GAO/HEHS-97-18, Dec. 20, 1996).
Medicare: Early Resolution of Overcharges for Therapy in Nursing Facilities Is Unlikely (GAO/HEHS-96-145, Aug. 16, 1996).
Fraud and Abuse: Providers Target Medicare Patients in Nursing Facilities (GAO/HEHS-96-18, Jan. 24, 1996).
Fraud and Abuse: Medicare Continues to Be Vulnerable to Exploitation by Unscrupulous Providers (GAO/T-HEHS-96-7, Nov. 2, 1995).
Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements (GAO/HEHS-95-171, Aug. 8, 1995).
Medicare: Reducing Fraud and Abuse Can Save Billions (GAO/T-HEHS-95-157, May 16, 1995).
Medicare: Tighter Rules Needed to Curtail Overcharges for Therapy in Nursing Facilities (GAO/HEHS-95-23, Mar. 30, 1995).
GAO discussed the challenges that exist in combating fraud and abuse in the nursing facility environment. GAO noted that: (1) while most providers abide by the rules, some unscrupulous providers of supplies and services have used the nursing facility setting as a target of opportunity; (2) this has occurred for several reasons: (a) the complexities of the reimbursement process invite exploitation; and (b) insufficient control over Medicare claims has reduced the likelihood that inappropriate claims will be denied; (3) GAO is encouraged by a number of recent efforts to combat fraud and abuse, the pending implementation of provisions in the Health Insurance Portability and Accountability Act (HIPAA), and a legislative proposal made by the administration; and (4) while these efforts should make a difference in controlling fraud and abuse in nursing homes, it is too early to tell whether these efforts will be sufficient.
DHS serves as the sector-specific agency for 10 of the sectors: information technology; communications; transportation systems; chemical; emergency services; nuclear reactors, material, and waste; postal and shipping; dams; government facilities; and commercial facilities. Other sector-specific agencies are the departments of Agriculture, Defense, Energy, Health and Human Services, the Interior, the Treasury, and the Environmental Protection Agency. (See table 1 for a list of sector-specific agencies and a brief description of each sector). The nine sector-specific plans we reviewed generally met NIPP requirements and DHS's sector-specific plan guidance; however, the extent to which the plans met this guidance, and therefore their usefulness in enabling DHS to identify gaps and interdependencies across the sectors, varied depending on the maturity of the sector and on how the sector defines its assets, systems, and functions. As required by the NIPP risk management framework (see fig. 1), sector-specific plans are to promote the protection of physical, cyber, and human assets by focusing activities on efforts to (1) set security goals; (2) identify assets, systems, networks, and functions; (3) assess risk based on consequences, vulnerabilities, and threats; (4) establish priorities based on risk assessments; (5) implement protective programs; and (6) measure effectiveness. In addition to these NIPP risk management plan elements outlined above and according to DHS's sector-specific plan guidance, the plans are also to address the sectors' efforts to (1) implement a research and development program for critical infrastructure protection and (2) establish a structure for managing and coordinating the responsibilities of the federal departments and agencies--otherwise known as sector-specific agencies--identified in HSPD-7 as responsible for critical-infrastructure protection activities specified for the 17 sectors. Most of the plans included the required elements of the NIPP risk management framework, such as security goals and the methods the sectors expect to use to prioritize infrastructure, as well as to develop and implement protective programs. However, the plans varied in the extent to which they included key information required for each plan element. For example, all of the plans described the threat analyses that the sector conducts, but only one of the plans described any incentives used to encourage voluntary risk assessments, as required by the NIPP. Such incentives are important because a number of the industries in the sectors are privately owned and not regulated, and the government must rely on voluntary compliance with the NIPP. Additionally, although the NIPP called for each sector to identify key protective programs, three of the nine plans did not address this requirement. DHS officials told us that this variance in the plans can, in large part, be attributed to the levels of maturity and cultures of the sectors, with the more mature sectors generally having more comprehensive and complete plans than sectors without similar prior working relationships. For example, the banking and finance and energy sector plans included most of the key information required for each plan element. According to DHS officials, this is a result of these sectors having a history and culture of working with the government to plan and accomplish many of the same activities that are being required for the sector-specific plans. 
Therefore, these sectors were able to create plans that were more comprehensive and developed than those of less mature sectors, such as the public health and health care and agriculture and food sectors. The plans also varied in how comprehensively they addressed their physical, human, and cyber assets, systems, and functions because sectors reported having differing views on the extent to which they were dependent on each of these assets, systems, and functions. According to DHS's sector-specific plan guidance, a comprehensive identification of such assets is important because it provides the foundation on which to conduct risk analysis and identify the appropriate mix of protective programs and actions that will most effectively reduce the risk to the nation's infrastructure. Yet, only one of the plans--drinking water and water treatment--specifically included all three categories of assets. For example, because the communications sector limited its definition of assets to networks, systems, and functions, it did not, as required by DHS's plan guidance, include human assets in its existing security projects and the gaps it needs to fill related to these assets to support the sector's goals. In addition, the national monuments and icons plan defined the sector as consisting of physical structures with minimal cyber and telecommunications assets because these assets are not sufficiently critical that damaging or destroying them would interfere with the continued operation of the physical assets. In contrast, the energy sector placed a greater emphasis on cyber attributes because it heavily depends on these cyber assets to monitor and control its energy systems. DHS officials also attributed the difference in the extent to which the plans addressed required elements to the manner in which the sectors define their assets and functions. The plans, according to DHS's Office of Infrastructure Protection officials, are a first step in developing future protective measures. In addition, these officials said that the plans should not be considered reports of actual implementation of such measures. Given the disparity in the plans, it is unclear to what extent DHS will be able to use them to identify gaps and interdependencies across the sectors in order to plan future protective measures. It is also unclear, from reviewing the plans, how far along each sector actually is in identifying assets, setting priorities, and protecting key assets. DHS officials said that to make this determination, they will need to review the sectors' annual progress reports, due this month, that are to provide additional information on plan implementation as well as identify sector priorities. Representatives of 10 of 32 councils said the plans were valuable because they gave their sectors a common language and framework to bring the disparate members of the sector together to better collaborate as they move forward with protection efforts. For example, the government facilities council representative said that the plan was useful because relationships across the sector were established during its development that have resulted in bringing previously disjointed security efforts together in a coordinated way.
The banking and finance sector's coordinating council representative said that the plan was a helpful way of documenting the history, the present state, and the future of the sector in a way that had not been done before and that the plan will be a working document to guide the sector in coordinating efforts. Similarly, an energy sector representative said that the plan provides a common format so that all participants can speak a common language, thus enabling them to better collaborate on the overall security of the sector. The representative also said that the plan brought the issue of interdependencies between the energy sector and other sectors to light and provided a forum for the various sectors to collaborate. DHS's Office of Infrastructure Protection officials agreed that the main benefit of these plans was that the process of developing them helped the sectors to establish relationships between the private sector and the government and among private sector stakeholders that are key to the success of protection efforts. However, representatives of 8 of the 32 councils said the plans were not useful to their sectors because (1) the plans did not represent a true partnership between the federal and private sectors or were not meaningful to all the industries represented by the sector, or (2) the sector had already taken significant protection actions, and thus developing the plan did not add value. The remaining council representatives did not offer views on this issue. Sector representatives for three transportation modes--rail, maritime, and aviation--reported that their sector's plan was written by the government and that the private sector did not participate fully in the development of the plan or the review process. As a result, the representatives did not believe that the plan was of value to the transportation sector as a whole because it does not represent the interests of the private sector. Similarly, agriculture and food representatives said writing the plan proved to be difficult because of the sector's diversity and size--more than 2 million farms, 1 million restaurants, and 150,000 meat processing plants. They said that one of the sector's biggest challenges was developing a meaningful document that could be used by all of the industries represented. As a result of these challenges, the sector submitted two plans in December 2006 that represented a best effort at the time, but the sector council said it intends to use the remainder of the 2007 calendar year to create a single plan that better represents the sector. In contrast, the coordinating council representative for the nuclear reactors, materials, and waste sector said that because the sector's security has been robust for a long time, the plan only casts the security of the sector in a different light. Similarly, the drinking water and water treatment systems sector representative said that the plan is a "snapshot in time" document for a sector that already has a 30-year history of protection, and thus the plan did not provide added value for the sector. Officials at DHS's Office of Infrastructure Protection acknowledged that these sectors have a long history of working together and in some cases have undertaken similar planning efforts. However, the officials said that the effort was of value to the government because it now has plans for all 17 sectors and it can begin to use the plans to address the NIPP risk management framework. Representatives of 11 of 32 councils said the review process associated with the plans was lengthy.
They commented that they had submitted their plans in advance of the December 31, 2006, deadline, but had to wait 5 months for their plans to be approved. Eight of them also commented that while they were required to respond within several days to comments from DHS on the draft plans, they had to wait considerably longer during the continuing review process for the next iteration of the draft. For example, a representative of the drinking water and water treatment sector said that the time the sector had to incorporate DHS's comments into a draft of the plan was too short--a few days--and this led the sector to question whether its members were valued partners to DHS. DHS's Infrastructure Protection officials agreed that the review process had been lengthy and that the comment periods given to sector officials were too short. DHS officials said this occurred because of the volume of work DHS had to undertake and because some of the sector-specific agencies were still learning to operate effectively with the private sector under a partnership model in which the private sector is an equal partner. The officials said that they plan to refine the process as the sector-specific agencies gain more experience working with the private sector. Conversely, representatives from eight of 32 councils said the review process for the plans worked well, and five of these council representatives were complimentary of the support they received from DHS. The remaining council representatives did not offer views on this topic. For example, an information technology (IT) sector coordinating council representative said that the review and feedback process on their plan worked well and that the Office of Infrastructure Protection has helped tremendously in bringing the plans to fruition. However, sector coordinating council representatives for six sectors also voiced concern that the trusted relationships established between the sectors and DHS might not continue if there were additional turnover in DHS, as has occurred in the past. For example, the representative of one council said they had established productive working relationships with officials in the Offices of Infrastructure Protection and Cyber Security and Communications, but were concerned that these relationships were dependent on the individuals in these positions and that the relationships may not continue without the same individuals in charge at DHS. As we have reported in the past, developing trusted partnerships between the federal government and the private sector is critical to ensure the protection of critical infrastructure. Nine of 32 sector representatives said that their preexisting relationships with stakeholders helped in establishing and maintaining their sector councils, and two noted that establishing the councils had improved relationships. Such participation is critical to well-functioning councils. For example, representatives from the dams, energy, and banking and finance sectors, among others, said that existing relationships continue to help in maintaining their councils. In addition, the defense industrial base representatives said the organizational infrastructure provided by the sector councils is valuable because it allows for collaboration. Representatives from the national monuments and icons sector said that establishing the government sector council has facilitated communication within the sector.
We also reported previously that long-standing relationships were a facilitating factor in council formation and that 10 sectors had formed either a government council or sector council that addressed critical infrastructure protection issues prior to DHS's development of the NIPP. As a result, these 10 sectors were more easily able to establish government coordinating councils and sector coordinating councils under the NIPP model. Several councils also noted that the Critical Infrastructure Partnership Advisory Council (CIPAC), created by DHS in March 2006 to facilitate communication and information sharing between the government and the private sector, has helped facilitate collaboration because it allows the government and industry to interact without being open to public scrutiny under the Federal Advisory Committee Act. This is important because previously, meetings between the private sector and the government had to be open to the public, hampering the private sector's willingness to share information. Conversely, seven sector council representatives reported difficulty in achieving and maintaining sector council membership, thus limiting the ability of the councils to effectively represent the sector. For example, the public health and health care sector representative said that getting the numerous sector members to participate is a challenge, and the government representative noted that because of this, the first step in implementing the sector-specific plan is to increase awareness about the effort among sector members to encourage participation. Similarly, due to the size of the commercial facilities sector, participation, while critical, varies among its industries, according to the government council representative. Meanwhile, the banking and finance sector representatives said that the time commitment for private sector members and council leaders makes participation difficult for smaller stakeholders, but getting them involved is critical to an effective partnership. Likewise, the IT sector representatives said engaging some government members in joint council meetings is a continuing challenge because of the members' competing responsibilities. Without such involvement, the officials said, it is difficult to convince the private sector representatives of the value of spending their time participating on the council. Additionally, obtaining state and local government participation in government sector councils remains a challenge for five sectors. Achieving such participation is critical because these officials are often the first responders in case of an incident. Several government council representatives said that a lack of funding for representatives from these entities to travel to key meetings has limited state and local government participation. Others stated that determining which officials to include was a challenge because of the sheer volume of state and local stakeholders. DHS Infrastructure Protection officials said that the agency is trying to address this issue by providing funding for state and local participation in quarterly sector council meetings and has created a State, Local and Tribal and Territorial Government Coordinating Council (SLTTGCC)--composed of state, local, tribal, and territorial homeland security advisers--that serves as a forum for coordination across these jurisdictions on protection guidance, strategies, and programs. 
Eleven of the 32 council representatives reported continuing challenges with sharing information between the federal government and the private sector. For example, six council representatives expressed concerns about the viability of two of DHS's main information-sharing tools--the Homeland Security Information Network (HSIN) or the Protected Critical Infrastructure Information (PCII) program. We reported in April 2007 that the HSIN system was built without appropriate coordination with other information-sharing initiatives. In addition, in a strategic review of HSIN, DHS reported in April 2007 that it has not clearly defined the purpose and scope of HSIN and that HSIN has been developed without sufficient planning and program management. According to DHS Infrastructure Protection officials, although they encouraged the sectors to use HSIN, the system does not provide the capabilities that were promised, including providing the level of security expected by some sectors. As a result, they said the Office of Infrastructure Protection is exploring an alternative that would better meet the needs of the sectors. In addition, three council representatives expressed concerns about whether information shared under the PCII program would be protected. Although this program was specifically designed to establish procedures for the receipt, care, and storage of critical infrastructure information submitted voluntarily to the government, the representatives said potential submitters continue to fear that the information could be inadequately protected, used for future legal or regulatory action, or inadvertently released. In April 2006, we reported that DHS faced challenges implementing the program, including being able to assure the private sector that submitted information will be protected and specifying who will be authorized to have access to the information, as well as to demonstrate to the critical infrastructure owners the benefits of sharing the information to encourage program participation. We recommended, among other things, that DHS better (1) define its critical-infrastructure information needs and (2) explain how this information will be used to attract more users. DHS concurred with our recommendations. In September 2006 DHS issued a final rule that established procedures governing the receipt, validation, handling, storage, marking, and use of critical infrastructure information voluntarily submitted to DHS. DHS is in the process of implementing our additional recommendations that it define its critical-infrastructure information needs under the PCII program and better explain how this information will be used to build the private sector's trust and attract more users. To date, DHS has issued a national plan aimed at providing a consistent approach to critical infrastructure protection, ensured that all 17 sectors have organized to collaborate on protection efforts, and worked with government and private sector partners to complete all 17 sector-specific plans. Nevertheless, our work has shown that sectors vary in terms of how complete and comprehensive their plans are. Furthermore, DHS recognizes that the sectors, their councils, and their plans must continue to evolve. As they do and as the plans are updated and annual implementation reports are provided that begin to show the level of protection achieved, it will be important that the plans and reports add value, both to the sectors themselves and to the government as a whole. 
This is critical because DHS is dependent on these plans and reports to meet its mandate to evaluate whether gaps exist in the protection of the nation's most critical infrastructure and key resources and, if gaps exist, to work with the sectors to address them. Likewise, DHS must depend on the private sector to voluntarily put protective measures in place for many assets. It will also be important that sector councils have representative members and that the sector-specific agencies have buy-in from these members on protection plans and implementation steps. One step DHS could take to implement our past recommendations to strengthen the sharing of information is for the PCII program to better define its critical infrastructure information needs and better explain how this information will be used to build the private sector's trust and attract more users. As we have previously reported, such sharing of information and the building of trusted relationships are crucial to the protection of the nation's critical infrastructure.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at any time. For further information on this testimony, please contact Eileen Larence at (202) 512-8777 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Susan Quinlan, Assistant Director; R. E. Canjar; Landis Lindsey; E. Jerry Seigler; and Edith Sohna.

We assessed the sector-specific plans (SSPs) using 8 criteria, consisting of 40 key information requirements. We drew these requirements from the NIPP and from the detailed sector-specific plan guidance issued by DHS. Each criterion reflects a component DHS required for the completion of the SSP. The 8 criteria we used are listed below along with the corresponding 40 key information requirements.

Section 1: Sector Profile and Goals
1. Did the sector include physical and human assets as part of its sector profile?
2. Does the SSP identify any regulations or key authorities relevant to the sector that affect physical and human assets and protection?
3. Does the SSP show the relationships between the sector-specific agency and the private sector, other federal departments and agencies, and state and local agencies that are either owners/operators of assets or provide a supporting role in securing key resources?
4. Does the SSP contain sector-specific goals?
5. Does the SSP communicate the value of the plan to the private sector, other owners, and operators?

Section 2: Identify Assets, Systems, Networks, and Functions
6. Does the SSP include a process for identifying the sector's assets and functions, both now and in the future?
7. Does the SSP include a process to identify physical and human asset dependencies and interdependencies?
8. Does the SSP describe the criteria being used to determine which assets, systems, and networks are and are not of potential concern?
9. Does the SSP describe how the infrastructure information being collected will be verified for accuracy and completeness?

Section 3: Assess Risks
10. Does the SSP discuss the risk assessment process, including whether the sector's assessments are mandated by regulation or are primarily voluntary in nature?
11. Does the SSP address whether a screening process (a process to determine whether a full assessment is required) for assets would be beneficial for the sector and, if so, does it discuss the methodologies or tools that would be used to do so?
12. Does the SSP identify how potential consequences of incidents, including worst-case scenarios, would be assessed?
13. Does the SSP describe the relevant processes and methodologies used to perform vulnerability assessments?
14. Does the SSP describe any threat analyses that the sector conducts?
15. Does the SSP describe any incentives used to encourage voluntary performance of risk assessments?

Section 4: Prioritize Infrastructure
16. Does the SSP identify the party responsible for conducting a risk-based prioritization of the assets?
17. Does the SSP describe the process, current criteria, and frequency for prioritizing sector assets?
18. Does the SSP provide a common methodology for comparing both physical and human assets when prioritizing a sector's infrastructure?

Section 5: Develop and Implement Protective Programs
19. Does the SSP describe the process that the SSA will use to work with asset owners to develop effective long-term protective plans for the sector's assets?
20. Does the SSP identify key protective programs (and their role) in the sector's overall risk management approach?
21. Does the SSP describe the process used to identify and validate specific program needs?
22. Does the SSP include the minimum requirements necessary for the sector to prevent, protect against, respond to, and recover from an attack?
23. Does the SSP address implementation and maintenance of protective programs for assets once they are prioritized?
24. Does the SSP address how the performance of protective programs is monitored by the sector-specific agencies and security partners to determine their effectiveness?

Section 6: Measure Progress
25. Does the SSP explain how the SSA will collect, verify, and report the information necessary to measure progress in critical infrastructure/key resources protection?
26. Does the SSP describe how the SSA will report the results of its performance assessments to the Secretary of Homeland Security?
27. Does the SSP call for the development and use of metrics that will allow the SSA to measure the results of activities related to assets?
28. Does the SSP describe how performance metrics will be used to guide future decisions on projects?
29. Does the SSP list relevant sector-level implementation actions that the SSA and its security partners deem appropriate?

Section 7: Research and Development for Critical Infrastructure/Key Resources Protection
30. Does the SSP describe how technology development is related to the sector's goals?
31. Does the SSP identify those sector capability requirements that can be supported by technology development?
32. Does the SSP describe the process used to identify physical and human sector-related research requirements?
33. Does the SSP identify existing security projects and the gaps it needs to fill to support the sector's goals?
34. Does the SSP identify which sector governance structures will be responsible for R&D?
35. Does the SSP describe the criteria that are used to select new and existing initiatives?

Section 8: Manage and Coordinate SSA Responsibilities
36. Does the SSP describe how the SSA intends to staff and manage its NIPP responsibilities (e.g., creation of a program management office)?
37. Does the SSP describe the processes and responsibilities for updating, reporting, budgeting, and training?
38. Does the SSP describe the sector's coordinating mechanisms and structures?
39. Does the SSP describe the process for developing the sector-specific investment priorities and requirements for critical infrastructure/key resource protection?
40. Does the SSP describe the process for information sharing and protection?

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
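To make the mechanics of this assessment concrete, the short sketch below shows one way such a checklist could be tabulated. It is purely illustrative and is not GAO's actual scoring tool; the section groupings mirror the 8 criteria above, but the example plan, its yes/no answers, and the helper function are hypothetical.

# Illustrative sketch only: tallies hypothetical yes/no answers for the
# 40 key information requirements, grouped into the 8 criteria above.
# The example answers are invented for demonstration purposes.

SECTIONS = {
    "Sector Profile and Goals": range(1, 6),
    "Identify Assets, Systems, Networks, and Functions": range(6, 10),
    "Assess Risks": range(10, 16),
    "Prioritize Infrastructure": range(16, 19),
    "Develop and Implement Protective Programs": range(19, 25),
    "Measure Progress": range(25, 30),
    "Research and Development": range(30, 36),
    "Manage and Coordinate SSA Responsibilities": range(36, 41),
}

def summarize(plan_name, answers):
    """Print how many requirements a plan addresses in each section.

    answers maps a requirement number (1-40) to True (addressed) or False.
    """
    print(f"Sector-specific plan: {plan_name}")
    for section, reqs in SECTIONS.items():
        met = sum(1 for r in reqs if answers.get(r, False))
        print(f"  {section}: {met} of {len(reqs)} requirements addressed")

# Hypothetical plan that addresses every requirement except number 15
# (incentives for voluntary risk assessments).
example_answers = {r: r != 15 for r in range(1, 41)}
summarize("Example Sector", example_answers)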
As Hurricane Katrina so forcefully demonstrated, the nation's critical infrastructures--both physical and cyber--have been vulnerable to a wide variety of threats. Because about 85 percent of the nation's critical infrastructure is privately owned, it is vital that public and private stakeholders work together to protect these assets. The Department of Homeland Security (DHS) is responsible for coordinating a national protection strategy and has promoted the formation of government and private sector councils for the 17 infrastructure sectors as a collaboration tool. The councils, among other things, are to identify their most critical assets, assess the risks they face, and identify protective measures in sector-specific plans that comply with DHS's National Infrastructure Protection Plan (NIPP). This testimony is based primarily on GAO's July 2007 report on the sector-specific plans and the sector councils. Specifically, it addresses (1) the extent to which the sector-specific plans meet requirements, (2) the council members' views on the value of the plans and DHS's review process, and (3) the key success factors and challenges that the representatives encountered in establishing and maintaining their councils. In conducting the previous work, GAO reviewed 9 of the 17 draft plans and conducted interviews with government and private sector representatives of the 32 councils--17 government and 15 private sector.

Although the nine sector-specific plans GAO reviewed generally met NIPP requirements and DHS's sector-specific plan guidance, eight did not describe any incentives the sector would use to encourage owners to conduct voluntary risk assessments, as required by the NIPP. Most of the plans included the required elements of the NIPP risk management framework. However, the plans varied in how comprehensively they addressed not only their physical assets, systems, and functions, but also their human and cyber assets, systems, and functions--a NIPP requirement--because the sectors had differing views on the extent to which they were dependent on each of these assets. A comprehensive identification of all three categories of assets is important, according to DHS plan guidance, because it provides the foundation on which to conduct risk analyses and identify appropriate protective actions. Given the disparity in the plans, the extent to which DHS will be able to use them to identify security gaps and critical interdependencies across the sectors is unclear. DHS officials said that to determine this, they will need to review the sectors' annual reports.

Representatives of the government and sector coordinating councils had differing views regarding the value of the sector-specific plans and DHS's review of those plans. While 10 of the 32 council representatives GAO interviewed reported that they saw the plans as being useful for their sectors, representatives of eight councils disagreed because they believed the plans either did not represent a partnership among the necessary key stakeholders, especially the private sector, or were not valuable because the sector had already progressed beyond the plan. In addition, representatives of 11 of the 32 councils felt the review process was too lengthy, but 8 thought the review process worked well. The remaining council representatives did not offer views on these issues. As GAO reported previously, representatives continued to report that preexisting relationships helped them establish and maintain their sector councils.
However, seven of the 32 representatives reported continuing difficulty achieving and maintaining sector council membership, thus limiting the ability of the councils to effectively represent the sector. Eleven council representatives reported that sharing information between the public and private sectors remained a challenge, and six council representatives expressed concerns about the viability of the information system DHS intends to rely on to share information about critical infrastructure issues with the sectors or about the effectiveness of the Protected Critical Infrastructure Information program--a program that established procedures for the receipt, care, and storage of information submitted to DHS. GAO has outstanding recommendations addressing this issue; DHS generally agreed with them and is in the process of implementing them.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies' major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued after transmittal of the president's budget, provide a direct linkage between an agency's longer term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies' performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA--the opportunity to assess federal agencies' reported performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. GSA's overall mission is to provide policy leadership and expert solutions in services, space, and products at the best value to enable federal employees to accomplish their work-related responsibilities. As part of this mission, GSA recognizes that it must provide federal agencies with the highest quality service at a competitive cost. In its September 2000 strategic plan, GSA discussed the major goals related to its mission, which are to promote responsible asset management, compete effectively for the federal market, excel at customer service, meet federal social and environmental objectives, and anticipate future workforce needs. For the three key selected outcomes--quality products and services are provided to federal agencies at competitive prices and significant price savings to the government; federal buildings are safe, accessible, and energy efficient; and federal buildings are adequately maintained--GSA's fiscal year 2000 performance report indicated that GSA met or exceeded 21 of the 34 performance goals related to the 3 outcomes. For the remaining 13 goals, GSA did not meet 11 goals and was unable to measure 2 goals. In its report, GSA (1) typically described various strategies it planned to implement for achieving the unmet goals and (2) generally discussed the effects of fiscal year 2000 performance on estimated fiscal year 2001 performance for many goals. For such goals, the report discussed fiscal year 2000 performance and what performance could be expected in fiscal year 2001. In addition, the fiscal year 2002 performance plan included discussions of strategies for each of the goals that supported the three outcomes. As in fiscal year 1999, GSA's performance report showed that it had achieved mixed results for this outcome in fiscal year 2000. GSA's 31 performance goals for this outcome were typically outcome-oriented, measurable, and quantifiable. The goals addressed a wide range of issues involving products and services in such areas as supply and procurement, real property operations, vehicle acquisition and leasing, travel and transportation, information technology (IT), and telecommunications. 
GSA reported that it exceeded or met 19 of the 31 goals in fiscal year 2000 in such areas as leasing operations, real property disposal and operations, supply and procurement, vehicle acquisition and leasing, travel and transportation, personal property management, and network services. For the remaining 12 goals, GSA did not meet 10 goals and was unable to measure its performance on 2 goals. GSA cited reasons for not meeting or measuring the goals or explained that it was analyzing data to determine the reasons. GSA also discussed to some extent various approaches, including plans, actions, and time frames, to achieve most of the unmet goals. The unmet goals were in such areas as leasing and real property operations, supply and procurement, and vehicle acquisition and leasing; the unmeasured goals were in the vehicle acquisition and leasing and travel and transportation areas. As it did in the fiscal year 1999 report, GSA revised many goals and measures for this key outcome in the fiscal year 2000 performance report. The revisions ranged from updating target performance levels to broadening the scope of various goals to include services as well as products. In addition, in its fiscal year 2000 report, GSA described the effects of the fiscal year 2000 performance on the estimated fiscal year 2001 performance for 15 of the 31 goals related to this outcome. GSA's fiscal year 2002 performance plan also had 31 goals related to this outcome. The plan had strategies for all the goals, which covered a wide range of activities that clearly described major steps to reach the goals. For example, to help achieve the goal of maximizing cost avoidance through reutilization and donation of excess federal personal property, GSA's strategies included making the property visible through the Federal Disposal System, which is an information system that identifies available surplus property. Also, to achieve the goal of increasing the number of products and services available to federal customers on the Internet, GSA's strategies included a requirement that starting in October 2001, all new schedule contractors had 6 months to include their products and services on GSA Advantage!™️, the on-line service for obtaining products and services. For the goals related to this outcome, GSA discussed data validation and verification efforts in both the fiscal year 2000 report and the fiscal year 2002 plan. For the second key outcome, GSA's fiscal year 2000 performance report, like the fiscal year 1999 report, had one goal related to building security. Specifically, the goal was to reduce the number of buildings that have costs in the high range of the benchmark set by private sector experts while maintaining effective security in government buildings. In addition to this goal, GSA discussed the issue of building security in a separate section of the performance report. The section explained that GSA is changing its approach from a reactive posture of patrol and incident response to a proactive stance of crime prevention and threat reduction. The section also said that GSA seeks to identify and reduce risk through automated risk assessment surveys and a comprehensive nationwide risk threat assessment. For the security goal, GSA had initially established a measure that would compare the agency's protection costs with similar costs in the private sector. However, GSA's fiscal year 2000 performance report recognized, as did its fiscal year 1999 report, that security could not be measured by costs alone. 
Thus, GSA did not use its initial cost-related measure but relied on customer satisfaction as an interim measure of the quality of protection services at government buildings while it developed a new measure. As it did in fiscal year 1999, GSA reported that it exceeded its fiscal year 2000 customer satisfaction target. The fiscal year 2000 report explained that GSA was developing a national security measure that is intended to assess the overall risk of threats to government buildings more comprehensively. The new threat assessment measure is being developed to consider the motives, opportunities, and means that outside groups or individuals may possess to threaten the security of government buildings. GSA also will include customer satisfaction in developing the measure. GSA's fiscal year 2000 report said that this information is quantifiable and can be used to calculate risk scores for specific buildings. Building scores can be combined to establish a national threat assessment index, which can be used over time to help measure GSA's efforts to reduce the level of threat or risk to government buildings. GSA anticipated implementing the new measure in fiscal year 2001. GSA's fiscal year 2002 performance plan includes a new security goal related to its overall efforts to reduce threats to buildings. As part of this goal, GSA developed a regional threat composite index, which was designed to help identify and quantify the level of risk or threat to federal buildings located in specific geographical areas and assess GSA's performance in reducing such threats. GSA expects that by fiscal year 2002, the regional indexes will be used to establish a national threat assessment index baseline. Strategies related to this goal clearly described major steps to reach the goal and included such efforts as obtaining timely criminal intelligence information, reducing the number of violent incidents, and partnering with security contractors. By developing and implementing the new security goal and its related measure, GSA has taken steps to address the recommendation in our June 2000 GPRA report. This recommendation called for GSA to develop security goals and measures that are more programmatic, that hold agency officials more accountable for results, and that allow GSA to determine if security strategies are working as intended. In addition, the plan continues to have a customer satisfaction goal, which includes such strategies as (1) using focus groups at buildings to help GSA better understand what is needed to improve customer satisfaction with security; and (2) sharing practices that have enhanced customer satisfaction scores among building managers, law enforcement security officers, and other building personnel nationwide. GSA's fiscal year 2002 performance plan also included a goal related to the conservation of energy consumption in federal buildings. Executive Order 13123, dated June 3, 1999, stated that energy consumption is to be reduced by 35 percent by fiscal year 2010 compared with the 1985 baseline. In the fiscal year 2002 plan, GSA identified various energy conservation strategies, such as pursuing methods that would help GSA facilities to be recognized by DOE and EPA for achievements in effective environmental design and construction and using utility management techniques to enhance building operations' efficiency. For the goals related to this outcome, GSA discussed data validation and verification efforts in both the fiscal year 2000 report and the fiscal year 2002 plan. 
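To illustrate the arithmetic behind such an index, the sketch below shows one way per-building risk scores could be rolled up into regional composites and a single national value. It is a hypothetical example, not GSA's actual methodology; the building names, scores, and occupancy-based weighting scheme are invented for illustration.

# Hypothetical illustration of rolling building-level risk scores into
# regional composites and a national threat assessment index.
# Scores and weights are invented; GSA's actual measure may differ.

buildings = [
    # (region, building, risk score on a 0-100 scale, occupancy weight)
    ("Region A", "Federal Building 1", 62, 1200),
    ("Region A", "Courthouse 2", 45, 300),
    ("Region B", "Office Complex 3", 30, 800),
    ("Region B", "Lab Facility 4", 55, 150),
]

def weighted_index(rows):
    """Occupancy-weighted average of risk scores for a group of buildings."""
    total_weight = sum(w for _, _, _, w in rows)
    return sum(score * w for _, _, score, w in rows) / total_weight

regions = sorted({region for region, _, _, _ in buildings})
for region in regions:
    rows = [b for b in buildings if b[0] == region]
    print(f"{region} composite index: {weighted_index(rows):.1f}")

print(f"National threat assessment index: {weighted_index(buildings):.1f}")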
Neither the report nor the plan included any performance goals directly related to federal building accessibility. For the third key outcome, GSA's fiscal year 2000 performance report, like the fiscal year 1999 report, included two goals under this outcome, which showed mixed performance results. The goals, which were related to the timeliness of and cost controls over repairs and alterations to GSA buildings, were objective, measurable, and quantifiable. The measures generally indicated progress toward meeting the goals. GSA reported that for fiscal year 2000, its performance exceeded the cost control goal but did not meet the timeliness goal. For the unmet goal, GSA discussed reasons why the goal was not met and described actions it has taken to facilitate meeting the goal in the future. Although GSA did not specifically discuss the effects of fiscal year 2000 performance on estimated fiscal year 2001 performance for the two goals, it did say that it is planning to develop more comprehensive measures for each goal. We recently issued two reports that discussed some aspects of GSA's efforts to maintain its buildings. Specifically, in March 2000 and April 2001, we reported, among other things, that GSA's buildings needed billions of dollars for unfunded repairs and alterations; funding limitations were a major obstacle to reducing these needs; and serious consequences, including health and safety concerns, resulted from delaying or not performing repairs and alterations at some buildings. In its fiscal year 2002 performance plan, GSA included three goals related to this outcome. Two of these goals were similar to the goals in the fiscal year 2000 performance report, which involved improving the timeliness of building repairs and alterations and reducing cost escalations for repairs and alterations. In its fiscal year 2002 plan, GSA identified various strategies that clearly described major steps to be taken to achieve the two goals. For the goal related to improving the timeliness of repairs and alterations, GSA identified such strategies as implementing a Web-based program to streamline its building evaluation reports and optimizing the inventory tracking system to better monitor the backlog of work items. For the goal related to reducing cost escalations, GSA identified such strategies as (1) limiting project changes by obtaining up-front commitments from client agencies on the scope, schedules, and costs associated with building repairs and alterations; and (2) using design options that allow for adjusting repair and alteration projects to meet unforeseen events, such as budget reductions or higher-than-anticipated contractor bids. GSA's fiscal year 2002 plan also had a third goal related to this outcome that involved estimating the government's financial liabilities for environmental clean-up costs in its properties, such as owned and leased buildings. GSA stated that federal agencies are required to identify, document, and quantify the environmental financial liabilities related to all owned and leased properties within their inventories. In the fiscal year 2002 plan, GSA described its overall strategy for achieving this new goal. GSA explained its strategy as a multiphased approach; the first step of this approach will be to conduct "due care" assessments that will identify the federal properties that pose environmental hazards. GSA expects these assessments to be completed by 2002. 
For properties with documented environmental contamination, subsequent phases of the approach will involve identifying the nature and extent of such contamination. Using this information, GSA's overall strategy is to establish environmental financial liability baselines that will help the agency set targets for reducing such liabilities in future years. For the goals related to this outcome, GSA discussed data validation and verification efforts in both the fiscal year 2000 report and the fiscal year 2002 plan. Generally, GSA's fiscal year 2000 performance report and fiscal year 2002 performance plan had some significant differences that made the current documents more descriptive and informative than GSA's fiscal year 1999 performance report and fiscal year 2001 performance plan. In addition to a more explicit discussion of approaches for achieving unmet goals and the effects of fiscal year 2000 performance on estimated fiscal year 2001 performance, the fiscal year 2000 report included expanded discussions of (1) the data sources that GSA relied on to measure performance for specific goals; and (2) the management challenges identified by GSA's IG, which included two issues we identified as governmentwide high-risk areas--strategic human capital management and information security. Also, a recent study prepared by university researchers noted some overall improvement of GSA's fiscal year 2000 performance report compared with its fiscal year 1999 report. Although GSA's fiscal year 2002 performance plan was similar in some respects to the fiscal year 2001 plan, the fiscal year 2002 plan was a more informative document, primarily because it included more detailed discussions of GSA's data validation and verification efforts and the management challenges identified by GSA's IG. Also, the fiscal year 2002 plan contained new information that enhanced the plan, including discussions of (1) a new strategic goal related to meeting federal social and environmental objectives that was included in GSA's September 30, 2000, strategic plan; (2) governmentwide reforms established by OMB; and (3) performance goals for three GSA staff offices that were not included in the fiscal year 2001 plan. The fiscal year 2000 performance report made strides toward addressing the recommendation in our June 2000 GPRA report that identified the need for better implementation of GPRA guidance. In contrast with its fiscal year 1999 performance report, GSA's fiscal year 2000 report either discussed for all unmet goals the reasons why the goals were not achieved or explained that GSA was studying these matters. In addition, the report typically discussed the various approaches needed for achieving the goals in the future. Also, unlike the fiscal year 1999 report, the fiscal year 2000 report described the impact of fiscal year 2000 performance on estimated 2001 performance for many of the goals related to the three outcomes. The fiscal year 2000 performance report also included an enhanced discussion of data sources and the quality of data that GSA used to measure performance. Unlike the fiscal year 1999 performance report, the fiscal year 2000 report included an expanded discussion of the data sources used by its four major organizational components--the Public Buildings Service (PBS), Federal Supply Service (FSS), Federal Technology Service (FTS), and Office of Governmentwide Policy (OGP). 
For example, PBS identified a number of systems from which it obtained performance data, such as the System for Tracking and Administering Real Property, which is its primary source of real property data. In some cases, these discussions went a step beyond identifying systems and gave some information on data validity and verification. For example, PBS mentioned that its National Electronic and Accounting System is independently audited and has received an unqualified opinion for 13 consecutive years; its customer satisfaction measures from the Gallup Organization, a management consulting firm, come with a 95 percent statistical confidence level. In addition, FTS stated that it has purchased a system for collecting and evaluating performance measurement data and plans to implement the system in 2001. GSA stated in the report that it considers its performance data to be generally complete and reliable. However, GSA recognized that data improvements may be needed and said it is currently reviewing its data collection procedures. GSA's efforts in this area are well founded because GSA's IG recently reported that GSA has not implemented a system of internal controls to ensure that appropriate levels of management understand and are performing the necessary reviews of performance data to enable them to make assertions about the completeness and existence of the data and systems supporting the measures. Unlike the fiscal year 1999 performance report, GSA discussed the GSA IG's management challenges in the fiscal year 2000 report. The six challenges were (1) management controls, (2) information technology solutions, (3) procurement activities, (4) human capital, (5) aging federal buildings, and (6) protection of federal facilities and personnel. The fiscal year 2000 report highlighted major issues related to the challenges and discussed GSA's approaches to address them. Also, we noted that two of the six challenges addressed issues related to two governmentwide high- risk areas--strategic human capital management and information security--that were in our January 2001 high-risk update. The fiscal year 2000 report explained that GSA intended to address the management challenges more fully in its fiscal year 2002 performance plan, which is discussed later in this report. In May 2001, a study by university researchers cited overall improvement in GSA's fiscal year 2000 performance report compared with its fiscal year 1999 report. The study, which was prepared by researchers who worked under the Mercatus Center's Government Accountability Project at George Mason University, compared fiscal years 1999 and 2000 GPRA performance reports for 24 federal agencies primarily in the 3 areas of transparency, public benefits, and leadership. On the basis of numerical scores that the researchers assigned to the three areas, GSA's fiscal year 2000 performance report showed improvement in all three areas over its fiscal year 1999 report. The improvements, which we also recognized, were related to such matters as (1) data sources, (2) explanations of why GSA failed to meet various performance goals, and (3) management challenges. In some respects, GSA's fiscal year 2002 performance plan was similar to the fiscal year 2001 plan. Both plans discussed such matters as (1) GSA's overall mission, strategic plan, and related strategic goals; and (2) performance goals with related measures and strategies to achieve the goals, links to GSA's budget, and data validation and verification efforts. 
Also, both performance plans provided highlights of the extent to which its four major organizational components--PBS, FSS, FTS, and OGP-- contributed to the accomplishment of GSA's overall mission. In addition, we noted that both the fiscal year 2001 and fiscal year 2002 plans included information about cross-cutting issues, which are issues in which GSA's organizational components work collaboratively with each other and with other federal agencies outside GSA. For example, FSS and PBS collaborate in meeting customers' real and personal property needs in dealing with relocations or setting up new office facilities. Another example involved FSS' work with DOE and EPA to make it easier for agencies to comply with the requirements of environmentally related Executive Orders. GSA's fiscal year 2001 and 2002 plans discussed evaluations and studies of agency programs. For example, FSS included in both plans information on various ongoing and completed program evaluations and major studies, which are generally intended to help FSS determine how it can best accomplish its overall mission of providing supplies and services to federal agencies. These evaluations and studies covered a wide range of topics, such as providing efficient and effective supply chains that can best meet customers' needs; maintaining appropriate controls over various purchases associated with GSA vehicles, such as fuel; and monitoring the quality of contractor-performed audits of transportation bills. We also identified some differences between the two plans that enhanced the fiscal year 2002 plan and made it a more descriptive and informative document compared with the fiscal year 2001 plan. Most notably, these differences involved expanded and more explicit discussions of data validation and verification and management challenges. We also noted that the fiscal year 2002 plan contained some new information that enhanced the plan, including discussions of a new strategic goal related to meeting federal social and environmental objectives that was included in GSA's September 30, 2000, strategic plan; efforts to implement governmentwide reforms established by OMB; and performance goals for the three GSA staff offices of CFO, CIO, and CPO that were not included in the fiscal year 2001 plan. The fiscal year 2002 plan included an expanded discussion of GSA's data validation and verification activities. In fact, GSA added an agencywide data validation and verification section to the plan that discusses, among other things, general controls and procedures used to validate and verify data. In discussing this issue, GSA described the types of performance data used, procedures for collecting such data, controls to help verify and validate each type of data used, and efforts to increase confidence in the data. For example, GSA explained that it has undertaken an extensive effort to review, certify, and clean up data in its larger computer systems, such as PBS' System for Tracking and Administering Real Property, to help ensure that the systems operate as intended. In addition, GSA stated that it helps maintain data quality through ongoing staff training. Also, GSA stated that for its manual or smaller computer systems, the importance of data confirmation is stressed, which involves having more than one person responsible for the data. GSA's fiscal year 2002 plan also included a more explicit discussion of its efforts to address the six management challenges that GSA's IG identified. 
In discussing the challenges, GSA generally recognized the importance of continued attention to the challenges and described its overall efforts to address them. For example, in discussing the human capital challenge, GSA described various programs, such as a succession plan for PBS leadership designed to help ensure that GSA can continue to meet its future responsibilities despite impending employee turnover due to retirements. Also, in discussing the challenge of dealing with aging federal buildings, GSA explained that its first capital priority is to fund repairs and alterations for its buildings and said it is currently studying ways to better determine the appropriate level of funding for the repair and alteration program. In addition, the fiscal year 2002 plan included more performance goals that appeared to be related to the management challenges, including the issues of strategic human capital management and information security, which we identified as governmentwide high-risk areas. Also, the plan included a new goal that involved federal building security, which appears to respond to the recommendation in our June 2000 GPRA report that GSA develop security goals and measures. In addition, we noted that in GSA's fiscal year 2002 performance plan, new information was included that enhanced the plan. For instance, the plan discusses a new strategic goal related to meeting federal social and environmental objectives, which was included in GSA's September 30, 2000, strategic plan. Overall, this goal is aimed at fulfilling the intent of socioeconomic laws and executive orders and helping GSA's customers to do so as well. As part of this strategic goal, GSA stated that it takes steps to safeguard the environment and conserve energy, help the disabled and disadvantaged to become more productive, consider the environment in its business decisions, and use natural resources in a sustainable manner. In the fiscal year 2002 plan, GSA established some performance goals that are related to this strategic goal, which involved, among other things, providing opportunities for small businesses and minority- and women- owned businesses to obtain GSA contracts. Also, the fiscal year 2002 performance plan discusses GSA's ongoing and planned efforts to implement five governmentwide reforms established by OMB. In a February 14, 2001, memorandum to the heads and acting heads of federal departments and agencies, OMB explained that in order to help achieve the President's vision of improving government functions and achieving operational efficiencies, agencies should include in their fiscal year 2002 plans some performance goals related to the five reforms that would significantly enhance agencies' administration and operation. These reforms are delayering management levels to streamline organizations, reducing erroneous payments to beneficiaries and other recipients of government funds, making greater use of performance-based contracts, expanding the application of on-line procurement and other e-government services and information, and expanding OMB Circular A-76 competitions and more accurate inventories as required by the Federal Activities Inventory Reform (FAIR) Act. GSA identified various performance goals that focused on implementing some of the governmentwide reforms. 
For example, for the reform that deals with expanding the application of on-line procurement and other e-government services and information, GSA stated that it established Federal Business Opportunities, also known as FedBizOpps, to provide government buyers with convenient, universal access for posting and obtaining information about acquisitions on the Internet. GSA said that the establishment of FedBizOpps is discussed under its performance goal for providing a "single point of entry" to vendors that wish to do business with the federal government. In some instances, GSA did not identify performance goals that addressed the reforms, but it provided reasons for not doing so. For example, for the reform concerning the reduction of erroneous payments, GSA explained that it has not yet established performance goals related to this reform but plans to establish such goals in next year's performance plan.

Also, GSA's fiscal year 2002 plan included performance goals for three staff offices that were not in the fiscal year 2001 plan. Responsibility for these goals falls within the jurisdiction of three staff offices that report directly to GSA's Administrator; these are the offices of the CFO, CIO, and CPO. The plan had 10 goals for these offices that covered (1) financial matters that the CFO oversees, such as electronic collections and payments of invoices; (2) information technology matters that the CIO oversees, such as costs and schedules associated with information technology capital investment projects; and (3) human capital matters that the CPO oversees, such as the use of on-line university training courses to help improve employee skills. It should be noted that 5 of the 10 goals appeared to be related to the 2 areas of strategic human capital management and information security, which we identified as governmentwide high-risk areas. The following section provides more information on GSA's efforts to address the two high-risk areas.

GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding the first area, we noted that GSA's fiscal year 2000 performance report discussed actions it has taken or plans to take to address strategic human capital management issues, which primarily involved training and developmental opportunities for employees. Also, we noted that GSA's fiscal year 2002 plan had goals and measures related to strategic human capital management matters, which involved such activities as training and developing employees and improving the cycle time for recruiting. Regarding information security, we noted that GSA's fiscal year 2000 performance report did not identify actions to address information security issues. However, our analysis showed that GSA's fiscal year 2002 plan had a goal and measure related to information security, which involved GSA's efforts to resolve in a timely manner all high-risk vulnerabilities and conditions detected by audits and reviews. The plan also states that FTS has an Office of Information Security, which provides federal agencies with services that are designed to develop a secure government information infrastructure. A more detailed discussion of GSA's efforts to address the two high-risk areas identified by GAO, along with the GSA IG's management challenges, can be found in appendix I.

Our analysis indicates that both the fiscal year 2000 performance report and fiscal year 2002 performance plan were more informative and useful documents than GSA's prior year report and plan.
As we recommended in our June 2000 GPRA report, GSA's fiscal year 2000 report and fiscal year 2002 plan responded more fully to GPRA implementing guidance and made a concerted effort to address the issue of building security. We recognize that tracking and reporting on intended performance results is an iterative process and that GSA needs to continually review and adjust its plans and reports to be responsive to an ever-changing environment. Given the complexities associated with preparing GPRA plans and reports, it is our view that GSA is making overall progress in responding to the annual GPRA planning and reporting requirements. Therefore, we are not making additional recommendations at this time. As agreed, our evaluation was generally based on the requirements of GPRA; the Reports Consolidation Act of 2000; guidance to agencies from OMB for developing performance plans and reports, including OMB Circular A-11, Part 2; previous reports and evaluations by us and others; our knowledge of GSA's operations and programs; our identification of best practices concerning performance planning and reporting; and our observations on GSA's other GPRA-related efforts. We also discussed our review with officials in GSA's Office of the Chief Financial Officer and Office of the Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Committee on Governmental Affairs as important mission areas for the agency and generally reflect the outcomes for GSA's key programs and activities. We examined and reviewed all performance goals in GSA's fiscal year 2000 report and focused on those goals that were directly related to the three key outcomes. Also, we reviewed the fiscal year 2000 report and fiscal year 2002 plan and compared them with the agency's prior year performance report and plan for these outcomes. In addition, we reviewed the fiscal year 2000 report and fiscal year 2002 plan for information related to the major management challenges confronting GSA that were identified by GSA's Office of the Inspector General in November 2000. These challenges included the issues of strategic human capital management and information security, which GAO identified as governmentwide high-risk areas in our January 2001 performance and accountability series and high-risk update. We did not independently verify the information contained in GSA's fiscal year 2000 performance report and fiscal year 2002 performance plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of GSA's performance data. We conducted our review from April through June 2001 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from GSA's Administrator. On July 25, 2001, GSA officials in the Office of the Chief Financial Officer provided us oral comments on a draft of this report. Specifically, GSA's Deputy Budget Director and the Managing Director for Planning told us that they agreed with the contents of the report. Also, the officials told us that the name of FTS' Office of Information Security has been changed to the Office of Information Assurance and Critical Infrastructure Protection. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies to appropriate congressional committees; the Administrator, GSA; and the Director, OMB. Copies will also be made available to others upon request. If you or your staff have any questions, please call me at (202) 512-8387 or notify me at [email protected]. Key contributors to this report were William Dowdal, Anne Hilleary, David Sausville, and Gerald Stankosky. The following table identifies the six major management challenges confronting the General Services Administration (GSA), which were identified by GSA's Inspector General (IG). Two of the six challenges also addressed two issues--strategic human capital management and information security--that GAO identified as governmentwide high-risk areas. The first column lists the challenges identified by GSA's IG and highlights the two agency challenges--human capital and information technology solutions--that addressed issues related to our two governmentwide high-risk areas. The second column discusses GSA's progress in resolving its challenges, which was discussed in the agency's fiscal year 2000 performance report. The third column discusses the extent to which GSA's fiscal year 2002 performance plan includes performance goals and measures to address the two high-risk areas that GAO identified and the management challenges that GSA's IG identified. In reviewing GSA's fiscal year 2000 performance report and fiscal year 2002 performance plan, we found that both documents included expanded discussions of the GSA IG's challenges, which represented a general improvement over the fiscal year 1999 report and fiscal year 2001 plan. In the fiscal year 2000 report and the fiscal year 2002 plan, GSA recognized the importance of continued attention to the challenges and described overall efforts to address them. Furthermore, GSA's fiscal year 2000 report and fiscal year 2002 plan included various goals that appeared to be related to most or all of the challenges. Specifically, the performance report contained various goals that appeared to be related to four of the six challenges, and the performance plan had goals and measures that appeared to be related to all six challenges.
This report reviews the General Services Administration's (GSA) performance report for fiscal year 2000 and its performance plan for fiscal year 2002 to assess GSA's progress in achieving key outcomes important to its mission. GAO found that some goals were met or exceeded and others were not met. For fiscal year 2002, GSA set up a strategy to better meet these goals. Overall, GSA's fiscal year 2000 performance report and fiscal year 2002 plan were more informative and useful than its report and plan from last year.
Although there is no generally agreed upon definition of partnering, for purposes of this report, partnering arrangements include, but are not limited to (1) use of public sector facilities and employees to perform work or produce goods for the private sector; (2) private sector use of public depot equipment and facilities to perform work for either the public or private sector; and (3) work-sharing arrangements, using both public and private sector facilities and/or employees. Work-sharing arrangements share similar characteristics to the customer-supplier partnerships on which we have previously reported. Partnering arrangements exclude the normal service contracting arrangements where contract personnel are used to supplement or assist depot personnel in performing work in depot facilities. DOD spends about $13 billion, or 5 percent of its $250 billion fiscal year 1997 budget, on depot maintenance, which includes repair, rebuilding, and major overhaul of weapon systems, including ships, tanks, and aircraft. The Army has five depots managed by the Industrial Operations Command (IOC), and the Air Force has five depots managed by the Air Force Materiel Command (AFMC). The Navy's three aviation depots and four shipyards are managed by the Naval Air and Sea Systems Commands. Also, a significant amount of depot repair activities is performed at various private contractor facilities. Depots operate through a working capital fund. The fund is used to finance a depot's cost of producing goods and services for its customers. The fund is reimbursed through customer payments for the goods and services provided and is to be self-sustaining and operate on a break-even basis over the long term. Defense spending and force structure reductions during the 1980s and 1990s resulted in substantial excess capacity in both public and private sector industrial repair and overhaul facilities. Some of DOD's excess depot maintenance capacity has been reduced through the base realignment and closure process. However, the services and the private sector continue to have large industrial facilities and capabilities that are underused. We have reported and testified that reducing such excess capacity and resulting inefficiencies could save hundreds of millions of dollars each year. Navy officials state that they have already significantly reduced excess capacity by closing three of six aviation depots and four of eight shipyards. To address its excess capacity problem, DOD continues to seek legislative authority for additional base closures under a base realignment and closure type process. However, due to congressional concerns over local social and economic impacts of such closures and questions regarding the savings and experiences from previous closures, such authority has not been provided. There is also a continuing debate between the Congress and the administration over where and by whom the remaining depot workloads will be performed. Central to this debate has been DOD's efforts to rely more on the private sector for depot maintenance and statutory provisions that (1) require public-private competitions for certain workloads, (2) limit private sector workloads to 50 percent of the available funding for a particular fiscal year, and (3) require maintaining certain core capabilities in the public depots. DOD, the Congress, and the private sector have shown an interest in partnering arrangements as another tool to address the problems of excess capacity and declining workloads. 
DOD agrees with partnering concepts and discusses partnering in both the Defense Planning Guidance, which contains guidance for the services to develop their strategic plans, and the fourth comprehensive Quadrennial Defense Review, a report required by the Military Force Structure Review Act of 1996, which was included in the National Defense Authorization Act for Fiscal Year 1997. In the Defense Planning Guidance, DOD directs the services to encourage commercial firms to enter into partnerships with depots to reduce excess capacity and overhead burdens and to maintain critical skills. In the Quadrennial Defense Review, DOD states that it will use in-house facilities to partner with industry to preserve depot-level skills and use excess capacity.

A number of statutory provisions enacted primarily during the 1990s provide, within limitations, the authority and framework for partnering. Specifically, provisions in title 10 permit working capital funded activities, such as public depots, within specified limits, to sell articles and services to persons outside DOD and to retain the proceeds. Central among these limitations is that any goods or services sold by the depots must not be available commercially. Also, the National Defense Authorization Act for Fiscal Year 1995 authorized the Secretary of Defense to conduct activities to encourage commercial firms to enter into partnerships with depots. Further, section 361 of the National Defense Authorization Act for Fiscal Year 1998 provides that the Secretary of Defense shall enable public depots to enter into public-private cooperative arrangements, which shall be known as "public-private partnerships," for the purpose of maximizing the utilization of the depots' capacity. However, the 1998 Authorization Act does not appear to have expanded the services' ability to enter into such arrangements since section 361 did not contain any specific sales or leasing authority for use in partnering. Table 1 shows the major provisions in title 10, along with relevant sections in the 1995 and 1998 National Defense Authorization Acts, which facilitate partnering.

The Army and the Air Force, for various reasons, view partnering arrangements differently. The Army believes that there are substantial opportunities within its legal authority to enter into contractual arrangements with private sector companies for the sale of goods and services. It has entered into a number of such arrangements using this authority. The Air Force believes such opportunities are very limited and has not entered into any such arrangements.

The Army has entered into partnering arrangements under the legislation covering sales of goods and services. A sales arrangement is a contract between a depot and a private firm whereby a depot provides specific goods and services. The Army has designated which depots may sell articles and services outside of DOD and has issued specific implementing guidance. In 1995, the U.S. Army Depot Systems Command (now IOC) issued policy guidance for its facilities to enter into sales, subcontracts, and teaming arrangements with private industry. In July 1997, IOC developed the criteria for determining commercial availability. Under the criteria, a customer must certify that the good or service is not reasonably available in sufficient quantity or quality in the commercial market to timely meet its requirements. Cost cannot be a basis for determining commercial availability.
The Army has also entered into a number of work-sharing arrangements that do not require specific legislative authority. They differ from a sales arrangement in that there is no contract between a depot and a private firm. The Air Force has not approved any proposed partnering arrangements. The Secretary of Defense has delegated to the Secretary of the Air Force the authority to designate which depots may sell articles and services outside of DOD. However, the Air Force Secretary has not made any such designations nor developed criteria to determine whether a good or service is available from a domestic commercial source. Air Force officials state that 10 U.S.C. 2553, like the corresponding Army sales statute (10 U.S.C. 4543), prohibits the Air Force from selling articles or services if those articles or services are available from a domestic commercial source. However, unlike the Army, Air Force officials believe the restriction prohibits the sale of almost any product or service their depots could provide. Army depots have entered into a number of partnering arrangements under the current statutory framework and within the context of the public-private workload mix for depot maintenance. These arrangements include sales under 10 U.S.C. 4543 and subcontracting under 10 U.S.C. 2208(j). Red River, Tobyhanna, and Anniston Army Depots all have ongoing arrangements with private industry to provide services such as testing and repair of communications equipment; development of training devices; testing of circuit card assemblies; and overhaul, conversion, and grit blasting of tracked vehicles. For example, table 2 lists sales statute partnering initiatives that are underway at the Anniston depot as of July 1997. In each of these sales arrangements, the Army has awarded the private sector company a contract to perform a certain scope of work. The contractor then makes a business decision to have the depot perform a portion of that work under the sales statutes. The sale is accomplished by a contract between the depot and the private sector firm that allows the depot to be reimbursed for costs associated with fulfilling the contract. These costs are estimated by maintenance personnel and are based on direct labor, materials, and in-house support costs. The contractor must pay the depot in advance for performing the service, and the depot reimburses its working capital fund to cover these estimated costs. For illustrative purposes, the FOX vehicle upgrade and M113 grit blast/test track partnering arrangements are described in more detail below. Following award of the FOX vehicle upgrade contract to General Dynamics Land Systems, Anniston representatives informed the contractor that the depot had facilities and capabilities that could meet the contractor's needs and provide for substantial facility cost savings and other benefits. In January 1997, officials from Anniston and General Dynamics Land Systems agreed to partner on the upgrade of 62 FOX reconnaissance vehicles. The partnering agreement included a 4-year contract with the depot under 10 U.S.C. 4543. Under the contract, the depot performs asbestos removal, grinding, welding, machining, cleaning and finishing, and prime and final paint operations. Under the terms of the contract with the Army, General Dynamics Land Systems does the upgrade using the depot's facilities. 
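To make the reimbursement mechanics of such a sales arrangement more concrete, the following is a minimal sketch, in Python, of how a depot-style cost estimate and working-capital-fund reimbursement might be computed. All names, rates, hours, and dollar amounts below are hypothetical illustrations; this is not an actual Army costing model or a figure from the Anniston arrangements.

```python
# Minimal illustrative sketch (hypothetical figures, not an actual Army costing
# model): how a depot sales arrangement under 10 U.S.C. 4543 might estimate the
# advance payment a contractor owes and how that payment reimburses the depot's
# working capital fund.

def estimate_advance_payment(direct_labor_hours: float,
                             labor_rate_per_hour: float,
                             material_cost: float,
                             support_overhead_rate: float) -> float:
    """Estimate the cost the contractor must pay the depot in advance.

    The estimate is built from the three elements the report describes:
    direct labor, materials, and in-house support costs (modeled here as a
    percentage applied to labor and materials).
    """
    labor_cost = direct_labor_hours * labor_rate_per_hour
    support_cost = (labor_cost + material_cost) * support_overhead_rate
    return labor_cost + material_cost + support_cost


class WorkingCapitalFund:
    """Toy model of a break-even working capital fund.

    The fund finances the depot's cost of performing the work and is
    reimbursed by the customer's advance payment; over the long term the
    balance should hover near zero (break-even), not accumulate profit.
    """

    def __init__(self) -> None:
        self.balance = 0.0

    def finance_work(self, cost: float) -> None:
        # The fund pays for labor, materials, and in-house support.
        self.balance -= cost

    def reimburse(self, payment: float) -> None:
        # The customer's advance payment restores the fund.
        self.balance += payment


if __name__ == "__main__":
    # Hypothetical figures for one vehicle's depot share of the work.
    estimate = estimate_advance_payment(
        direct_labor_hours=800,
        labor_rate_per_hour=55.0,      # hypothetical burdened labor rate
        material_cost=30_000.0,
        support_overhead_rate=0.15,    # hypothetical in-house support factor
    )

    fund = WorkingCapitalFund()
    fund.reimburse(estimate)     # contractor pays the depot in advance
    fund.finance_work(estimate)  # depot incurs roughly the estimated cost

    print(f"Estimated advance payment: ${estimate:,.2f}")
    print(f"Fund balance after the job (break-even goal): ${fund.balance:,.2f}")
```

Because the fund is credited by the customer's advance payment and drawn down by the actual cost of the work, the ending balance illustrates the break-even principle described earlier: over time it should net to roughly zero rather than accumulate profit.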
Depot facilities are provided to General Dynamics Land Systems as government-furnished property under its contract with the Army and revert to the Army when the contract is complete. Depot personnel stated that this partnering arrangement has resulted in (1) a lower total cost for the combined work performed, (2) sustainment of core depot capabilities, and (3) overhead savings from using underutilized facilities. The depot has received about $1 million for its efforts on the first eight vehicles. The contractor stated that this project is a good example of a mutually beneficial program; the contractor reports that it would have cost more to perform the depot's share of the work at another location. The contractor also reports that it is spending $450,000 to upgrade buildings at the depot and that it will occupy 27,000 square feet of otherwise vacant or underutilized space. A General Dynamics Land Systems official stated that occupying space at the Anniston depot reduced program costs.

The partnering arrangement on the M113 grit blast/test track project was entered into under 10 U.S.C. 4543 and 2208(j). The Army was seeking a way to meet its fielding schedule for the M113 and asked United Defense Limited Partnership if it could partner with the Anniston depot to help meet fielding requirements. Under this partnering arrangement, United Defense Limited Partnership contracted with the depot to perform grit blasting on the vehicle hulls, and the depot provided use of its test track facilities pursuant to a subcontract with the contractor under 10 U.S.C. 2208(j). Army officials stated that this partnership will allow them to meet the fielding schedule and reduce overall program costs. Contractor officials stated that using the depot's grit blasting and test track facilities eliminated the need to build facilities to perform these functions.

The Army and private sector defense firms have established noncontractual partnering relationships by sharing workloads. Army program managers generally determine the mix of work between depots and private sector contractors. On any particular workload, either a depot or a private sector firm could receive all or part of the work. Under the Army's work-sharing partnering arrangements, a depot and a contractor share specific workloads, based on each party's strengths. The private sector firms' share of the workload is performed pursuant to a contract with the activity supporting the program. Thus, there are no contracts directly between depots and private sector firms; however, there are memorandums of understanding and detailed agreements on how the partnerships will operate. These agreements generally provide mechanisms to mitigate risks, mediate disputes, and standardize work processes. Discussion of such arrangements at Anniston and Letterkenny depots follows.

General Dynamics Land Systems, the original equipment manufacturer for the Abrams tank, and Anniston entered into a work-share partnering arrangement to upgrade the tank. Anniston and the contractor jointly initiated the Abrams Integrated Management XXI program in 1993 to mitigate a number of problems, including a declining depot-level maintenance workload, limited production of new Abrams tanks, and fleet sustainment. The goal of this arrangement was to unite the tank industrial base expertise in armored vehicle restoration, make needed improvements, and extend the life of the fleet while reducing the dollars required to support the fleet.
The Army approved the arrangement based on its objectives and projected benefits and awarded General Dynamics Land Systems a contract on a sole-source basis for its share of the work. Under this arrangement, the depot disassembles the vehicles, prepares the hull and turret for reassembly, and performs component restoration and overhaul, and then the contractor uses these components for assembly, system integration, and testing. According to depot officials, this partnering strategy retains core capabilities by allowing the depot to maintain its current skill base and reduces overhead costs through additional labor hours. A contractor representative cited benefits from the partnering arrangement such as developing new programs and creating additional business opportunities.

The Paladin program is a work-share partnering arrangement between Letterkenny Army Depot and United Defense Limited Partnership. In 1991, the Army determined that full-scale production of the Paladin, a self-propelled howitzer, would be maintained within the private sector. However, due to factors such as cost growth and quality concerns, potential offerors were encouraged to use government facilities to the maximum extent practical. United Defense Limited Partnership proposed that the Letterkenny depot partner with it on reconfiguring the Paladin, which would include the contractor doing its portion of the work at the depot. United Defense Limited Partnership won the contract in April 1993, and the "Paladin Enterprise" was formed in May 1993. Both parties signed a memorandum of understanding that established the roles and rules of the partnership. Under this arrangement, the depot performs chassis and armament overhaul, modification, and conversion to the new configuration. The contractor is required to provide most of the Paladin-unique chassis components, a new turret, subsystems for automatic fire control, and the integration of all components. According to depot officials, all participants in this arrangement are benefiting from the dual use of the depot. Specifically, depot officials reported that collocating the contractor at the depot has resulted in numerous savings, including $15 million in cost avoidance from eliminating material processing through the Defense Logistics Agency and the renovation, valued at $3.4 million, of a government warehouse at the contractor's expense. Contractor representatives stated that this arrangement has allowed the contractor to remain in the tracked vehicle market and to retain critical skills and technology that will be needed when DOD resumes new vehicle production. The contractor is looking for additional partnering opportunities and believes that its experience with Paladin will enhance its ability to partner on future contracts.

None of the Army's partnering arrangements reviewed included the leasing of excess or nonexcess depot equipment or facilities as permitted under sections 2471 and 2667 of title 10. However, there are a number of partnering arrangements in which depot facilities are provided to contractors as government-furnished property for the performance of the contracts.

The Air Force has received several proposals for its depots to provide products or services to the private sector but has not approved any of them. For example, in January 1997, ABB Autoclave Systems, Inc., on behalf of Porsche Engineering Services, requested the use of Warner Robins Air Logistics Center's fluid cell press to form door panels.
The press manufacturer stated that the depot and Cessna had the only fluid cell presses with the table size needed to produce these door panels. However, the Cessna press was not available. The Center's Commander requested approval from AFMC to enter into this partnering arrangement with Porsche. In April 1997, AFMC denied the request because it believed that it did not have the authority to enter into such a partnering arrangement since the Secretary of the Air Force had not designated any depots to enter into such arrangements nor issued implementing guidance to use in determining commercial availability.

In another case, the Oklahoma City Air Logistics Center had excess capacity in its engine test cell and proposed to AFMC that it enter into a partnering agreement with Greenwich Air Services, Inc. Under the terms of the agreement, Greenwich would lease the test cell facilities for testing commercial high bypass turbofan engines. The Center believed that this arrangement would more fully use its test cell, thereby reducing excess capacity. Greenwich also viewed the arrangement as a "win-win" proposal that would defray or delay a capital investment expense and increase its product line. However, AFMC did not approve the request because the Secretary of the Air Force had not designated any depot to enter into sales arrangements nor issued implementing guidance to use in determining commercial availability.

The Commander, AFMC, stated that he is neither a proponent nor an opponent of partnering arrangements. However, he would consider approving such arrangements if it could be demonstrated that they would save money. He stated that his approach to cost reduction is to (1) identify what is excess and divest it, (2) lease any underused capacity, and (3) then, and only if dollar savings can be demonstrated, explore partnering opportunities.

In an era of reduced defense procurement, commercial contractors have become more interested in sharing repair and maintenance workloads with depots. Additionally, depots, in an effort to reduce overhead costs and retain core capabilities, are willing to enter into partnering arrangements with the private sector. A legal framework and the authority to enter into partnering arrangements exist in title 10. These authorities differ in some respects between the Army and the Air Force, as do their approaches to partnering. The Army has used this legislation, as well as work sharing, to initiate several partnering arrangements which, according to Army and contractor officials, have been mutually beneficial. The Air Force, on the other hand, has not initiated any partnering arrangements, citing the lack of a designation from the Secretary of the Air Force identifying which logistics centers may use the sales statutes and the legislative requirement that the good or service provided by the depot not be commercially available. The Air Force, unlike the Army, has not developed criteria to determine commercial availability and, in the absence of such criteria, has been reluctant to enter into any sales arrangements.

Considering DOD's expressed support of partnering, we recommend that the Secretary of the Air Force designate the Air Logistics Centers that may use the sales statutes and provide implementing guidance to include criteria for determining the commercial availability of goods or services provided by the centers.
To develop information on the legal framework under which partnering can occur, we identified and reviewed legislation and DOD and service policies and procedures, and we talked with the services' Offices of General Counsel. We surveyed the services to determine what partnering arrangements were ongoing or had been proposed at their depots, and the services' views of such arrangements. In addition, we interviewed officials at the Office of the Secretary of Defense; Air Force Headquarters, Washington, D.C.; Army Headquarters, Washington, D.C.; the Naval Sea Systems Command, Arlington, Virginia; the Naval Air Systems Command, Patuxent River, Maryland; the Army Materiel Command, Alexandria, Virginia; the Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the Army's IOC, Rock Island, Illinois; and the Army's program manager for Abrams tanks. We also visited the Ogden Air Logistics Center, Hill Air Force Base, Utah, and the Anniston Army Depot, Anniston, Alabama. To obtain private sector views on partnering, we interviewed officials and obtained information from Lockheed Martin, Arlington, Virginia; General Dynamics Land Systems, Anniston, Alabama; United Defense Limited Partnership, Arlington, Virginia; and United Defense Limited Partnership-Steel Products Division, Anniston, Alabama. We did not independently verify the benefits reported by the depots and the contractors; however, we did obtain documentation related to and supporting the reported figures. We conducted our review between June 1997 and February 1998 in accordance with generally accepted government auditing standards.

DOD concurred with our findings and recommendation and provided a number of comments that it characterized as technical. Where appropriate, we made minor changes and clarifications in response to these comments. However, we believe that one of the comments warrants further discussion. DOD commented that the definition of partnering varies and that the Air Force has done many projects that could be considered partnering. As an example, DOD cited an agreement between Warner Robins Air Logistics Center and Lockheed Martin Corporation for repair services for the LANTIRN navigation and targeting systems. During our review, we discussed the LANTIRN project with officials from Warner Robins. These officials explained that the project was to be implemented in two phases, with phase I being a firm-fixed-price contract awarded to Lockheed Martin for the repair of 40 items. According to Warner Robins officials, this contract was essentially the same as any contract the Center enters into except that the contractor would perform the work at Center facilities. These officials stated that phase I of the LANTIRN project does not constitute a partnering arrangement. However, under phase II of the project, if approved, Lockheed would subcontract with the Center for repair services to the LANTIRN for foreign military sales. This would be considered a partnership arrangement as defined in our report, because it constitutes the use of public sector facilities and employees to perform work or produce goods for the private sector.

We are sending copies of this report to the Secretaries of Defense, the Army, the Air Force, and the Navy; the Director, Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you have any questions concerning this report, please contact me at (202) 512-8412. Major contributors to this report are listed in appendix III. Enlogex, Inc.
Pulse Engineering, Inc. Ronald L. Berteotti, Assistant Director; Patricia J. Nichol, Evaluator-in-Charge; Oliver G. Harter, Senior Evaluator; Kimberly C. Seay, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the use of partnering arrangements between the Department of Defense (DOD) and private-sector contractors to use excess capacity at military service repair depots, focusing on the: (1) legal framework under which partnering can occur; and (2) types of current partnering arrangements and the services' and industry's views of such arrangements. GAO noted that: (1) a number of statutory provisions enacted primarily during the 1990s provide, under certain conditions, the authority and framework for partnering arrangements; (2) various provisions of title 10 of the United States Code allow the services to sell articles and services outside DOD for limited purposes and under certain conditions; (3) the Army has this authority for many of its industrial facilities under section 4543 of title 10; (4) the Army controls the sales authority under this provision; (5) the authority for the remaining DOD industrial facilities, including those of the Air Force, is contained in 10 U.S.C. 2553; (6) it requires the Secretary of Defense to designate which facilities will have the authority to sell articles and services outside of DOD; (7) under both provisions, the goods or services sold must not be available commercially in the United States and providing these goods and services must not interfere with a facility's military mission; (8) due in part to these differing authorities, the extent to which the Army and the Air Force pursue partnering arrangements varies; (9) the Army has designated depots that may sell articles and services outside of DOD and has developed criteria for determining when such goods and services are not commercially available; (10) at the time of GAO's review the Army had established 13 partnering arrangements using both the sales statutes in title 10 and worksharing arrangements not requiring specific legislation; (11) Army and private-sector officials state that partnering has improved operational efficiencies at their respective facilities and that they are pursuing additional partnering opportunities; (12) the Secretary of Defense has delegated to the Secretary of the Air Force the authority to designate which facilities may sell articles and services outside of DOD; (13) however, the Air Force Secretary has not made any such designations nor developed criteria to determine whether a good or service is available from a domestic commercial source; (14) there have been several private-sector and depot proposals to enter into partnering arrangements but none have been approved; and (15) the Commander of the Air Force Materiel Command states that he is not opposed to partnering, but he is not willing to enter into such arrangements unless savings can be demonstrated.
5,040
552
ACPVs are non-tactical vehicles, or vehicles not used in combat operations, that can be lightly or heavily armored. The level of armoring depends on the expected threat. Both light and heavy armored vehicles provide 360-degree protection of the passenger compartment against ballistic threats, with commercial light armored vehicles providing slightly less protection than commercial heavy armored vehicles. Both variants are intended to transport American citizens and service members, as well as other passengers, in and around dangerous areas. ACPVs are extensively modified from commercially available sedans, trucks, or sport utility vehicles as they are intended to be inconspicuous and blend in with local traffic. ACPVs differ from traditional DOD military armored vehicles in various ways. First, traditional military armored vehicles are designed with military applications in mind, and typically the armor is integral to the design and construction. That is not the case with ACPVs, which are initially built for commercial markets and later disassembled, armored, and reassembled. Second, military armored vehicles are acquired through major defense acquisition programs while ACPVs are not. The guidance and regulations associated with major defense acquisition programs are generally not applicable to ACPVs. Moreover, an ACPV is considered a modified commercial item in that it is an item customarily used by the general public, except for modifications (armoring) made to meet the government's requirements. Therefore, ACPVs are not subject to the developmental and operational testing required of major defense acquisition programs, although material and acceptance testing for functionality, armor certification, and roadworthiness is to occur. Figure 1 presents a comparative illustration of a typical ACPV and a typical military armored vehicle.

To meet its need for ACPVs, DOD components can procure vehicles through a variety of means. According to officials from the DOD components in our review--the Army, Air Force, Navy, Marine Corps, and the DIA--DOD components procured more than 410 ACPVs from 2011 through 2015. Because corroborating documentation was unavailable in a few cases, we were unable to verify the exact total number of vehicles. Appendix I provides additional details on this limitation. Due to classification concerns, we do not identify procurement quantities at the individual component level.

DOD and its components--Army, Navy, Air Force, Marine Corps, and Defense Intelligence Agency, the largest buyer of ACPVs in DOD--are subject to a range of guidance related to the procurement of ACPVs, much of which is similar--and, in most cases, identical--to that used by State. For DOD, that guidance exists at the overall federal level, the department level, and the individual component level. State follows guidance that exists at both the federal level and the department level. For both agencies, the guidance covers key aspects of ACPV acquisitions, including procurement methods, protection levels, vendor clearances, inspection and acceptance, warranties, and fleet oversight. Agency officials at State and DOD components cited the FAR as the capstone guidance for their procurement activities. At the DOD level, in 2007, the Undersecretary of Defense for Policy issued DOD Instruction C-4500.51, DOD Commercially Procured and Leased Armored Vehicle Policy. The department delegates much of the responsibility for ACPV procurement to the components.
In addition to the FAR, State follows its own guidance, which includes the Foreign Affairs Manual and Foreign Affairs Handbook on ACPV procurement, inspection, and fleet management. Multiple methods exist for the procurement of ACPVs, including standalone contracts negotiated directly with a vendor, purchases from the GSA Multiple Award Schedule Program, interagency acquisitions, and no-cost transfers from other agencies with excess property. The four methods used for procurement are described in more detail below.

Direct Contracts with Vendors: Since ACPVs are modified commercial items, agencies can use streamlined procedures for solicitation and evaluation provided under the FAR. With this approach, the agency issues a request for proposals. Vendors respond with their pricing, armor certifications, delivery schedules, warranty information, and any other information required. The agency then evaluates the offerors' proposals and makes an award.

Use of General Services Administration Schedules Program: ACPVs can be procured from GSA's Multiple Award Schedule program. This program provides federal agencies with a simplified process for obtaining commercial supplies and services at prices associated with volume buying. In these cases, the GSA has prequalified and awarded indefinite delivery/indefinite quantity contracts--contracts that provide for an indefinite quantity, within stated limits, for a fixed time--to a number of vendors, and agencies can place orders against those contracts to meet their needs.

Interagency Acquisitions: An interagency acquisition takes place when an agency that needs supplies or services obtains them from another agency. The Economy Act of 1932, as implemented in the FAR, provides general authority for federal agencies to undertake interagency acquisitions when a more specific statutory authority does not exist. Interagency acquisitions under the Economy Act can save the government duplicative effort and costs when appropriately used and leverage the government's buying power. In doing so, the acquiring agency can convey responsibility for several aspects of the procurement to a separate agency that is better positioned to execute the acquisition.

Excess Personal Property Transfers: In some cases, an agency may have excess inventory and can transfer ACPVs at no cost to the acquiring agency, thus avoiding the procurement process altogether and, in a sense, resulting in savings by the acquiring entity. The FAR states that agencies whose property is transferred to other agencies shall not be reimbursed for the property in any manner.

DOD has outlined minimum blast and ballistic armoring requirements for protection against explosives and firearms, respectively, for ACPVs in DODI C-4500.51, but the detailed armoring specifications outlined in the instruction are classified. Generally, the specifications detail the minimum ballistic and blast protection standards that must be satisfied by all DOD ACPVs, whether they are light or heavy armored vehicles. State also has a classified policy that outlines armoring specifications for the ACPVs it procures for use in locations around the world. The FAR contains provisions for safeguarding classified information that apply to all federal agencies procuring goods and services, including DOD and State. While neither DOD nor State policies for ACPVs directly address vendor clearances, both agencies must comply with the FAR.
Depending on the armoring specifications cited in the contract, a vendor supplying ACPVs to the government may require access to classified information. To accommodate such cases, Executive Order 12829 created the National Industrial Security Program, for which the Secretary of Defense is the executive agent, to safeguard classified information released to contractors. To implement the order, DOD issued the National Industrial Security Program Operating Manual to prescribe requirements, restrictions, and other safeguards necessary to prevent unauthorized disclosure of classified information and to control authorized disclosure of classified information released by executive branch departments and agencies to their contractors. The FAR requires a security requirements clause when the contract may require access to classified information. The clause requires the contractor to comply with the requirements identified in the National Industrial Security Program Operating Manual. In addition, as part of the process of obtaining a facility clearance, a contractor must sign a DOD Security Agreement, which documents the security responsibilities of both the contractor and the government in accordance with the requirements of the manual. As part of this program, the Defense Security Service within DOD administers and implements the defense portion of the National Industrial Security Program. The Defense Security Service serves as the interface between the government and "cleared industry" and maintains a database of contractors that have valid, current facility clearances that allow for the safeguarding of classified material.

While the July 2007 DODI C-4500.51 does not contain any specific instructions requiring ACPV inspection and acceptance procedures, it does state that DOD component heads shall ensure that the vehicles comply with armoring standards and existing acquisition regulations and specifically mentions the FAR. State's ACPV policy is similar to the DODI with respect to inspections, but State is also required to comply with the FAR. The FAR provides that agencies shall ensure that contracts include inspection and other quality requirements that are determined necessary to protect the government's interest. The regulation goes on to state that commercial item contracts shall rely on a contractor's existing quality assurance system as a substitute for compliance with government inspection and testing before items are provided for acceptance, unless customary market practices for the commercial item being acquired permit in-progress inspection. The FAR contains additional language that provides the contracting officer with discretion in determining the type and extent of contract quality requirements, which could include additional inspections. In particular, the FAR states that the government shall not rely on inspection by the contractor if the contracting officer determines that the government has a need to test the supplies prior to acceptance, and, in making that determination, the FAR directs the contracting officer to consider, among other things, the nature of the supplies and services being acquired, their intended uses, and the potential losses in the event of defects.

Similar to the areas outlined above, the DODI C-4500.51 does not contain any specific language requiring warranties for ACPV procurements, but it states that the vehicles shall be procured in accordance with the FAR. Likewise, State's armored vehicle policy does not include specific references to warranties.
However, State is bound by the FAR. The FAR states that the use of warranties is not mandatory. However, the FAR sets forth criteria that contracting officers shall consider when deciding whether a warranty is appropriate. These factors include, but are not limited to, complexity and function, the item's end use, difficulty of detecting defects before acceptance, and potential harm to the government if the item is defective. The FAR also offers suggested terms and conditions that contracting officers may incorporate into contracts. For example, in the event defects are discovered, the government may obtain an equitable adjustment of the contract or direct the contractor to repair or replace the defective item at the contractor's expense.

The DODI C-4500.51 outlines a number of responsibilities for different DOD officials that relate to ACPV fleet management and, ultimately, oversight. In particular, the instruction establishes that an assistant secretary within the Office of the Under Secretary of Defense for Policy shall be the principal individual responsible for collecting and reporting information specific to DOD's ACPV fleet. Part of that reporting includes providing ACPV-related information to Congress. State policy includes similar provisions for ACPV management and oversight. While DOD and the components have developed policies and procedures for managing their non-tactical vehicle fleets, the language contained in those instructions often defers to DODI C-4500.51 for specific ACPV guidance. Table 1 identifies the different component-level policies that exist for ACPVs, a brief description, and whether there is a particular office within the component for ACPV-related matters. Similar to the instructions and manuals used by the DOD components, State's Foreign Affairs Manual outlines roles and responsibilities for its armored vehicle program. Other policies and procedures are incorporated by reference in these manuals for items such as armoring standards, vehicle procurement, assignments (i.e., locations), maintenance, and disposal. This guidance also assigns a single State entity--the Bureau of Diplomatic Security--overarching responsibility for the armored vehicle program.

Selected DOD components in our review complied with guidance for the procurement and inspection of ACPVs for the contracts we reviewed. Further, we found evidence of in-progress inspections of DOD's ACPVs, although the Army conducted such inspections for only a single contract action. DOD utilized the four procurement methods described above for acquiring the vehicles, all of which are allowable under the FAR. The blast and ballistic armoring standards referenced in the contract actions we reviewed satisfy the levels of protection required under DODI C-4500.51. For classified contract actions, vendor security clearances were requested and verified. All the contract actions reviewed had similar warranty provisions and generally reflected what is stated in the FAR. We found no evidence of contracts for correcting armoring deficiencies after delivery. The contracts we reviewed generally included FAR-based language for inspections and acceptance and in-progress inspections. Further, due to implementation of Office of the Secretary of Defense (OSD) efficiency initiatives, DOD components no longer report ACPV information to the OSD, as required by DODI C-4500.51.
Moreover, the Army has no central office with complete oversight of contracting and fleet management activities or that maintains all relevant ACPV-specific information.

In accordance with allowable FAR provisions, DOD components in our review utilized four procurement methods to acquire ACPVs between 2011 and 2015. Specifically, the components used direct contracts with vendors, GSA multiple award schedules, interagency acquisitions, and excess personal property transfers to acquire the vehicles. According to DOD officials, DOD components consider multiple factors in deciding how to procure ACPVs and meet armoring requirements, including the quantity of ACPVs needed, the components' expertise in procuring the vehicles, the components' technical specifications, and the urgency of the requirement. Table 2 presents the DOD components included in our review and the four methods they used to procure ACPVs.

The Army and DIA awarded contracts directly to vendors and also placed orders under GSA's Multiple Award Schedule Program. According to DOD officials, one DOD component contracted with a vendor who subcontracted the armoring work; in this type of arrangement, the subcontractor is generally referred to as a third-party armorer. The Navy and Marine Corps used interagency acquisitions pursuant to the Economy Act whereby State ordered ACPVs on their behalf using State contract vehicles. Marine Corps and Navy officials stated that, by doing so, they transferred all procurement responsibilities to State. This approach also allowed these components to leverage State's volume purchasing power, which, according to a Navy official, resulted in cost savings for ACPVs. DIA received some ACPVs as transfers from State's and another agency's excess property. State officials stated that the Marine Corps may also have received some ACPVs as excess property from State's inventory but were unable to provide corresponding documentation. We saw no evidence of fund transfers as the ACPVs were transferred free of charge to DIA, in compliance with the FAR.

While the contract actions we reviewed generally did not explicitly reference the DODI C-4500.51 armoring specifications, they did reference other standards that were similar in most respects to those specifications, which allowed them to avoid creating a classified contract. The standards most frequently referenced in the contract actions we reviewed were those from State, the North Atlantic Treaty Organization Standardization Agency, and the European Committee for Standardization. These standards are similar to the DODI armoring specifications in many respects, but the North Atlantic Treaty Organization standards and the European standards are unclassified. In cases where the contract documentation referred to standards that did not satisfy the minimum armoring specifications outlined in the DODI C-4500.51, there was supplemental language in the contract that compensated for the differences. The North Atlantic Treaty Organization standards contain ballistic and blast specifications similar to those in the DODI C-4500.51, while the European standards cover only ballistic armoring specifications. Any additional details regarding the differences between the standards are classified.
DOD is currently updating its criteria with regard to armoring standards pursuant to findings and proposed steps contained in an August 2015 DOD report on ACPVs. The DOD report stated that the department should regularly review and update armoring specifications. The department cancelled DOD Instruction C-4500.51 in May 2017 because, according to an OSD official, the Undersecretary of Defense for Acquisition, Technology and Logistics did not want the responsibility for determining the new armoring requirements. Anticipating the cancellation of DODI C-4500.51, the department issued a separate instruction. This instruction, DODI O-2000.16 Volume 1, dated November 2016, gave DIA responsibility for developing minimum standard inspection criteria for ACPVs. DIA is also responsible for disseminating specifications for the acquisition or modification of ACPVs and overseeing their incorporation into contracts awarded by DOD components. DIA officials said the criteria have been developed, but the agency is still determining how they will be distributed to the components. Also, DIA has not yet established a process for ensuring components incorporate those criteria in their contracts. According to DIA officials, implementing a process for oversight may be challenging for their agency. As of April 2017, DIA had not yet determined how long it would take to complete these actions.

The majority of DOD components' contract actions we reviewed were unclassified, and, in those cases, no security clearance information was required or requested. For the unclassified contract actions, the contractors never required access to any classified information and the components did not require security clearances. This included the contract actions with the third-party armorers--neither the prime contractors nor the subcontractors required any classified information, so there was neither a need nor a request for security clearances. Some of the contract actions we reviewed were classified because they required armoring in accordance with State standards, which are classified, while other contract actions cited alternative standards and, therefore, were unclassified. Specifically, when the Army required a security clearance, the vendor provided evidence of its facility clearance with its proposal. The Navy's and the Marine Corps' ACPVs were procured via interagency acquisitions using State contracts, all of which were classified because they required armoring to the classified State standards. In these cases, State officials told us that their Industrial Security Division performs an initial check of whether prospective vendors possess the required security clearance and provides results to the contracting office. According to officials, at contract award the Industrial Security Division issues a final, signed classification specification form to document that the selected vendor's clearance is in accordance with the requirements of the contract. In these cases, we found evidence that State took steps to ensure vendors were properly vetted and cleared, including obtaining signed classification specification forms.

State and all selected DOD components, with the exception of the Army, provided evidence of in-progress inspections for each contract action used to procure ACPVs between 2011 and 2015. All contract actions reviewed included provisions for inspections and acceptance, including in-progress inspections.
According to DIA officials, conducting in-progress inspections of their ACPVs is a best practice and a key step in ensuring vehicle quality and safety. We reviewed documentation for each DIA contract action and found evidence of in-progress inspections for all of them. Such evidence included detailed trip summary reports that documented multiple aspects of in-progress inspections at vendor armoring facilities. The in-progress inspection trip reports identified problems early that could be corrected before another in-progress inspection or the final inspection; deficiencies were dealt with before delivery and acceptance of the ACPVs. The trip reports contained detailed narratives listing the inspection dates, manufacturing facilities, inspection attendees, pictures of the vehicles, and any problems and corrections. The reports contained thorough trip narratives detailing the ACPVs' performance and road tests, any problems with the ACPVs, how problems were corrected from an earlier in-progress inspection, and any action items or follow-up for the contractor.

We received evidence of State conducting in-progress inspections on ACPVs procured on behalf of the Navy through interagency acquisitions. As with DIA, State considers in-progress inspections to be a best practice when procuring ACPVs. Those inspections were similar to DIA's. Specifically, the contract files contained checklists for in-progress inspections of opaque armor, transparent armor, and roadworthiness, as well as vehicle components such as the engine, exterior, interior, operation/control, and special equipment/options. The files also contained evidence of final inspection armoring checklists completed by State personnel. State personnel inspected the vehicle's chassis, glass, serviceability, appearance, and roadworthiness. Based on our review of in-progress inspections conducted by State, there were issues with vehicles ranging from problems with adhesive or fenders to a need to reseal transparent armor. Lastly, there was evidence of final acceptance, indicating that any issues discovered in inspections were addressed, with both State officials and Navy officials accepting the ACPVs under these interagency acquisitions.

According to a Marine Corps official, the Marine Corps deferred to State to conduct in-progress inspections of its ACPVs procured through interagency acquisitions. The Marine Corps identified the State contracts that were used to procure its ACPVs, and State provided evidence of final and in-progress inspections and acceptance for vehicles procured under those contracts. However, we could not confirm that those inspection records corresponded to the Marine Corps' ACPVs in every case. The inspection records referenced vehicle identification numbers that linked to State's contracts and task orders, but neither State nor the Marine Corps was able to provide all the task orders required to corroborate these purchases. While this demonstrated that inspections were conducted for vehicles procured under these contracts, it did not allow verification that all the Marine Corps' ACPV orders were placed under those contracts.

Army contract actions contained language and clauses for in-progress inspections as well as final inspections and acceptance, and Army officials provided evidence of final inspections and acceptance of ACPVs procured between 2011 and 2015. However, Army officials conducted in-progress inspections for only a single procurement, in 2011.
Although the remaining Army contract actions included clauses that allowed such inspections, the Army instead depended on the vendors' certified quality control and inspection processes to ensure the vehicles were manufactured to specifications. Army officials acknowledged they did not conduct in-progress inspections for any other ACPVs procured between 2011 and 2015, but maintained that they had visited all the armoring facilities under other contracts prior to the period of our review. However, because the Army did not conduct in-progress inspections, the service relied on the vendors' quality control processes and, in effect, presumed the quality of those vehicles without component-level, firsthand verification of armoring processes and safety. As we noted earlier, both State and DIA found problems during their in-progress inspections that may not have been discovered otherwise. As a result, there is the risk that Army ACPVs may be placed into service with undetected defects.

As mentioned above, DOD is updating its ACPV criteria. These updated criteria are expected to include minimum specifications for inspections pursuant to findings and proposed steps contained in DOD's August 2015 report on ACPVs to the House Armed Services Committee. According to the report, the minimum inspection criteria will include various stages of inspections, including in-progress inspection. Although this is a positive step, these changes have not yet been approved, promulgated to the components, or implemented, nor is there a mechanism in place to ensure the criteria are being consistently applied and executed across the components. Until these criteria are approved and implemented, the risk of vehicles deploying with defects remains.

While the FAR provides that contracts for commercial items shall generally rely on the contractor's existing quality assurance system as a substitute for government inspection, the regulation also provides the contracting officer with discretion to conduct in-progress inspections when deemed appropriate. Specifically, the FAR directs the contracting officer to consider the nature of the supplies and services being acquired and the potential losses in the event of defects. Both DIA and State determined that in-progress inspections of ACPVs are warranted, as the intended use of these vehicles is to transport American citizens and service members through dangerous areas, and failures stemming from armoring deficiencies could endanger passengers. In addition, officials from both DIA and State consider in-progress inspections imperative and a best practice for ensuring that their ACPVs are armored in a manner that improves the likelihood that vehicles meet contractual specifications. DIA in-progress inspections discovered vehicle deficiencies that required corrective actions. These inspections are above and beyond the quality control procedures provided by the vendors. They serve as safeguards and provide greater confidence that ACPVs are being built in a manner that satisfies minimum armoring specifications and that the ACPVs are protecting the lives of the people who rely on them in potentially dangerous situations. The nature of the armoring process itself suggests in-progress inspections are important. The armoring process involves disassembling the commercial vehicles, integrating the armor, and then rebuilding the vehicles, which essentially conceals evidence of the armoring techniques.
As a result, any defects that are not discovered during the armoring process may not be noticeable during the government's final inspection and acceptance event. Given the intended use of these vehicles to transport American citizens and service members, as well as other passengers who are considered high-value targets, through dangerous areas, further inspection of ACPVs is an important step in the quality assurance system.

All contract actions we reviewed had some form of warranty provision. Most contract actions we reviewed had a 1- to 3-year warranty range for opaque armor (i.e., steel). All contract actions had a 2-year warranty for transparent armor (i.e., glass) and coverage at the ACPV's fielded location at no cost to the government. All DIA contract actions also had 2-year warranties for workmanship. The FAR has no mandatory policy requiring warranties, but it does direct contracting officers to consider several factors when determining whether a warranty is appropriate for an acquisition. DOD officials stated that any problems with the ACPVs were minor, such as window noise. These problems were documented and corrected in the inspection phase before final acceptance by the government. Officials from the components stated that their ACPVs did not have any catastrophic failures during testing or in the field. Further, we found no evidence of contract actions for correcting armoring deficiencies after delivery.

Under the OSD efficiency initiative, Office of the Secretary of Defense principal staff assistants and DOD component heads, in coordination with the Director of Administration and Management and the General Counsel of the Department of Defense, were to eliminate all non-essential, internally generated reports, including any and all reports generated with a commissioning date prior to 2006, and the Director of Administration and Management was to publish guidance regarding the use of, cost-benefit analysis of, and establishment of sunset provisions for report requirements. While OSD eliminated this reporting, as mentioned above, there is a requirement in DODI O-2000.16 Volume 1 for DIA to oversee incorporation of armoring and inspection criteria in all components' contracts. DOD officials stated that this requirement will require some coordination among the components. DIA officials said the agency does not currently have a mechanism for such oversight and that establishing such a mechanism could be challenging. This situation puts a premium on coordination between DIA and the services and increases the importance of the services being able to provide procurement and inspection information to DIA.

With the exception of the Army, all the DOD components we reviewed have a central point of contact and mechanisms for managing and organizing their ACPV information. According to an Army official, while the Army's program office for non-tactical vehicles can track Army-wide vehicle condition for replacement decisions, that office does not maintain more comprehensive ACPV information, such as information for contract execution and vehicle inspections, across the entire Army. This decentralized approach for ACPV management leaves the Army with an incomplete picture of various ACPV-related matters, including consistency of procurement and inspection methods. For example, since the Army does not have ACPV information in a centralized manner, it may be difficult for the Army to provide DIA, for its oversight, with information on the types of contracts used for procuring these vehicles and whether in-progress inspections are being conducted.
This decentralized approach could also make it difficult for the Army to apply best practices and lessons learned consistently across purchasing entities and to leverage contracting mechanisms to obtain the best value for the government. Federal standards for internal control call for mechanisms that allow for oversight intended to help an organization, such as the Army, meet objectives and manage risks for activities such as ACPV procurement. The internal control standards advocate for an oversight structure to fulfill responsibilities set forth in laws and regulations and for control activities at various levels to help meet objectives and manage risks. Such control activities would include management reviews to compare actual performance to planned or expected results throughout the organization. Further, internal controls advocate for reports for use by the organization to ensure compliance with internal objectives, evaluate compliance with laws and regulations, and inform outside stakeholders.

While selected DOD components in our review are complying with guidance, policies, and procedures for ensuring the safety and quality of ACPVs, opportunities exist for the Army to provide greater assurances that vehicles meet armoring and quality specifications. DOD's use of ACPVs to transport personnel through areas that are understood to have potential for attack increases the importance of in-progress inspections and oversight. Such inspections provide greater assurances that vendors are adhering to established quality assurance procedures and delivering vehicles that satisfy the armoring standards for protecting passengers. By DIA's own admission, overseeing the implementation of revised armoring and inspection standards in DOD contracts will be a challenge. A focal point within each of the DOD components that can collect and report ACPV-related contracting information to DIA could help ease that burden. While many components have a single, centralized office responsible for all aspects of ACPV management that would be capable of reporting this information, the Army's non-tactical vehicle office does not maintain similar information. In that regard, the Army could benefit from a centralized point of contact that can collect and, ultimately, report to DIA information pertaining to all aspects of the component's ACPV safety, procurements, and fleet status.

To help ensure that ACPV armoring and quality standards are met, that evolving department and component policies are consistent, and that they are consistently applied, we recommend that the Secretary of Defense (1) direct the Secretary of the Army to conduct in-progress inspections at the armoring vendor's facility for each procurement until the department approves and implements the updated armoring and inspection standards and (2) direct the Secretary of the Army to designate a central point of contact for collecting and reporting ACPV information to facilitate DIA's oversight of armoring and inspection standards in these contracts.

We provided drafts of this product to the Department of Defense (DOD) and the State Department for comment. In its comments, reproduced in appendix II, DOD concurred with our recommendations. DOD also provided a technical comment, which we incorporated as appropriate. As we made no recommendations to the State Department, it did not provide comments.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Director of the Defense Intelligence Agency; and the Secretary of State. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report addresses DOD components' procurement of Armored Commercial Passenger-Carrying Vehicles (ACPVs). The objectives are to determine (1) DOD's guidance and procedures for acquiring ACPVs and how they compare with those at State; and (2) the extent to which selected DOD components adhere to guidance, policy, and procedures for ensuring the safety and quality of ACPVs. To assess DOD's guidance and procedures for acquiring ACPVs and how they compare with those at State, we reviewed the DOD instruction for specific guidance pertaining to ACPV acquisitions and the department-level policies for procuring modified commercial vehicles. We also reviewed the associated Federal Acquisition Regulation provisions that pertain to the various aspects of our review, namely those for procurement mechanisms, warranties, security clearances, inspection, and acceptance. We identified service-specific guidance that could also apply to the acquisition and inspection of ACPVs and interviewed DOD service and agency officials to verify its applicability to ACPV procurement. We researched the State Foreign Affairs Manual and Foreign Affairs Handbook for specific sections dealing with various aspects of ACPV procurement and inspection and verified their applicability during meetings with State officials. We summarized the contents of DOD and State policies for comparative purposes. We also analyzed armoring standards that were referenced in contract file documents--which included State standards, North Atlantic Treaty Organization standards, and European standards--and compared them with the minimum armoring standards outlined in DOD Instruction C-4500.51, the relevant instruction for the time frame we assessed. The specific armoring standards contained in the DOD Instruction and State policy are classified, which precludes us from presenting a detailed assessment of those standards in this report. To determine the extent to which selected DOD components--namely the Army, Navy, Marine Corps, and DIA, the largest procurer of these vehicles for use overseas--adhered to guidance, policy, and procedures for ensuring the safety and quality of armored commercial passenger-carrying vehicles, we worked with DOD and State officials to identify contract actions that were used to acquire ACPVs that DOD components received between 2011 and 2015. We selected this time frame to cover the period from when DOD stopped reporting this information to Congress through the most recent information available at the time of our review. For each contract action, we reviewed numerous documents, including base contracts, task orders, work statements, vendor proposals, invoices, and inspection reports, in order to identify evidence of contracting mechanisms, armoring specifications, vendor clearances, inspection and acceptance, and fleet management.
We also created data collection instruments, populated them with the information obtained during the course of our review, verified the information with agency officials through multiple interviews, and created summary analyses that allowed us to succinctly present the information in our report. We searched the federal procurement database to identify any instances where separate contracts were executed to correct deficiencies that were discovered after vehicles were fielded. We were unable to identify any such contracts. To determine the total quantities of ACPVs that selected DOD components purchased between 2011 and 2015, we sent questionnaires to agency officials asking specifically about procurement quantities. We also reviewed contract file documentation that pertained to quantities obtained over that time frame and summarized the results. Although we calculated a quantity of ACPVs that DOD components procured from 2011 to 2015, the information State provided on vehicles it furnished to DOD was inconsistent with the information the services provided. As a result, we were unable to verify the exact total number of vehicles DOD components acquired over this time frame. For example, State and Marine Corps officials both reported vehicle quantities and contract numbers, but they were unable to provide task orders to validate those quantities. We conducted this performance audit from May 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Marie A. Mak, (202) 512-4841, [email protected]. In addition to the contact named above, J. Kristopher Keener, Assistant Director; Emily Bond; Thomas M. Costa; Andrea C. Evans; Marcus C. Ferguson; Kristine R. Hassinger; and Hai V. Tran made key contributions to this report.
DOD uses armored military vehicles for combat and operational support, but it also uses armored commercial vehicles to transport military and civilian personnel in areas that pose a threat to their safety. These vehicles differ in many ways, including mission and appearance. The House Armed Services Committee report accompanying the National Defense Authorization Act for Fiscal Year 2017 contained a provision for GAO to assess multiple aspects of DOD's procurement practices for ACPVs. This report assesses (1) DOD's guidance and procedures for acquiring ACPVs and how they compare with those at the Department of State; and (2) the extent to which selected DOD components adhere to guidance and procedures for ensuring the safety and quality of ACPVs. To conduct this work, GAO analyzed policies, procedures, and regulations that govern aspects of acquiring, armoring, inspecting, and managing ACPVs; interviewed DOD and State Department officials; and compared the armoring standards DOD components--Army, Navy, Marine Corps, and Defense Intelligence Agency--use for ACPVs against minimally acceptable protection standards. GAO reviewed contract actions for selected DOD components between 2011 and 2015. The Department of Defense (DOD) and the defense components in GAO's review--Army, Navy, Air Force, Marine Corps, and Defense Intelligence Agency, the largest buyer of armored commercial passenger-carrying vehicles (ACPV) in DOD--have extensive guidance related to ACPV procurement. This guidance is similar to that used by the Department of State, which also procures a large number of these vehicles (see figure). DOD officials GAO spoke with cited the Federal Acquisition Regulation as the capstone guidance for procurement activities. For DOD, guidance also exists department-wide and at the individual component levels. Guidance covers numerous aspects of ACPV acquisitions, including procurement methods, protection levels, inspection and acceptance, warranties, and oversight. ACPV-related contract actions for the selected DOD components generally complied with guidance, policies, and procedures for ensuring the safety and quality of ACPVs and included contract language that met minimum armoring standards. However, opportunities exist for the Army to improve its processes for in-progress inspections--inspections that occur as the vehicle is being armored--as the Army instead depended primarily on the vendors' quality control processes. GAO's review of contract actions used to procure ACPVs for selected DOD components between 2011 and 2015 showed that in-progress inspections were conducted, with the exception of the Army, which conducted such inspections for only a single contract action. Without in-progress inspections, the Army is accepting additional risk to the safety of its vehicles. Further, with the exception of the Army, all the DOD components have a central office and mechanisms for reporting ACPV information. This decentralized approach leaves the Army with an incomplete picture of various ACPV-related matters, including procurement and inspection methods. Federal standards for internal control call for mechanisms that allow for oversight intended to help an organization, such as the Army, ensure compliance with armoring and inspection standards. Without a designated central point of contact, the Army may face challenges in reporting ACPV information to DOD officials responsible for overseeing the implementation of armoring and inspection standards department-wide.
The Secretary of Defense should require the Army to conduct in-progress inspections and designate a central point of contact for ACPV information. DOD concurred with the recommendations.
7,942
769
The Army's current mission at Rocky Mountain Arsenal is to clean up the contaminated soils, structures, and groundwater there. The arsenal, established in 1942, occupies 17,000 acres northeast of Denver, Colorado, and is contaminated from years of chemical and weapons activities. The Army manufactured chemical weapons, such as napalm bombs and mustard gas, and conventional munitions until the 1960s and destroyed weapons at the arsenal through the 1980s. In addition, it leased a portion of the arsenal to Shell from 1952 to 1987 to produce herbicides and pesticides. In 1983, the United States sued Shell Oil Company for its share of the cleanup costs. In February 1989, after extended litigation, the Army and Shell signed the Rocky Mountain Arsenal Settlement Agreement and the related Rocky Mountain Arsenal Federal Facility Agreement. The agreements apportion cleanup costs to be paid by each party and costs to be shared by both, direct that environmental legislation be complied with, and provide a procedure for resolving disputes. An additional document, the Army/Shell Rocky Mountain Arsenal Financial Manual, provides an overview of financial, accounting, and auditing policies for costs related to the cleanup. Descriptions of the agreements and cost categories and guidance are contained in appendixes I and II. Shell uses contractors for cleanup activities. Two primary contracts provide for studies and cleanup activities and cover about 86 percent of Shell's shared costs. A third contract provides for public affairs support. Each quarter, Shell provides the Army a claim for its allocable, or shared, costs. After review, the Army generates a quarterly statement, from which the Army determines how much each party owes. Under the agreements, the shared cost to be borne by each party is a percentage of the total shared costs (see table 1). As we previously reported, when the Army negotiated the settlement agreement, it estimated the shared cleanup cost would be less than $700 million, which would not have breached the demarcation between the 65/35 percent split and the 80/20 percent split. The Department of Defense (DOD) currently estimates the cost for arsenal cleanup at $2.1 billion. As of December 1995, the Army's quarterly statement showed shared costs of $656 million. Army officials stated that shared costs reached $700 million in November 1996, and thus, the Army would begin paying 80 percent of the shared costs. According to Army officials, as of December 1995, the Army had incurred $308 million in costs not shared by Shell. Shell officials told us Shell's nonallocable costs amounted to $95 million for studies, cleanup activities, and program management costs, including litigation. The Army's process to review cost sharing claims under its settlement agreement with Shell is insufficient to ensure that costs are documented and appropriate. Weaknesses in the process involve (1) documentation to support claims, (2) agreements to define which costs should be shared, (3) separation of duties for recording and reviewing shared costs, and (4) documentation of decisions on the treatment of capital assets and disposition of real estate. Federal standards require that, among other elements, internal control systems provide reasonable assurance that assets are safeguarded and that revenues and expenditures are recorded and accounted for properly. The Arsenal Financial Manual allows costs to be disputed on several grounds. 
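The cost-sharing arithmetic described above can be illustrated with a short sketch. For illustration only, it assumes that the Army pays 65 percent of shared costs up to the $700 million demarcation and 80 percent of shared costs above it, and that the higher rate applies only to the increment above the threshold; the actual tiers and how they apply are set by the settlement agreement (see table 1), not by this sketch.

```python
# Minimal sketch of the Army/Shell shared-cost split described above.
# Assumptions (not confirmed by the settlement agreement itself):
#   - the Army pays 65 percent of shared costs up to the $700 million
#     demarcation and 80 percent of shared costs above it;
#   - the higher rate applies only to the increment above the threshold.
THRESHOLD = 700_000_000
ARMY_RATE_LOW, ARMY_RATE_HIGH = 0.65, 0.80

def army_share(total_shared_costs: float) -> float:
    """Return the Army's portion of total shared cleanup costs."""
    below = min(total_shared_costs, THRESHOLD)
    above = max(total_shared_costs - THRESHOLD, 0.0)
    return ARMY_RATE_LOW * below + ARMY_RATE_HIGH * above

def shell_share(total_shared_costs: float) -> float:
    """Shell pays whatever portion the Army does not."""
    return total_shared_costs - army_share(total_shared_costs)

if __name__ == "__main__":
    # $656 million was the shared-cost total on the December 1995 quarterly
    # statement; $2.1 billion is DOD's current estimate for the cleanup.
    for total in (656_000_000, 700_000_000, 2_100_000_000):
        print(f"total ${total:,}: Army ${army_share(total):,.0f}, "
              f"Shell ${shell_share(total):,.0f}")
```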
Under the Financial Manual, costs can be disputed if the work was not supported by a task plan, the work was not performed or the costs were not incurred, duplicate charges were made, or the costs were arbitrary and capricious in comparison with normal commercial practices. However, the Army's review of the costs to be shared with Shell has been minimal. Our work showed that additional documentation is available in most cases and could have been reviewed by the Army. In some cases, however, more documentation would have been needed to perform detailed reviews. We examined 153 randomly selected summary vouchers covering $31 million of Shell's allocable costs incurred from January 1988 to February 1995. As part of this examination, we reviewed documentation that Shell had provided the Army in support of its quarterly cost claim. We also reviewed secondary documentation maintained by the primary contractor. Based on these examinations and additional data later provided by Shell, we stated in our draft report that 31 entries for items totaling $3.1 million lacked the documentation needed for the Army to review the appropriateness of the cost claims. In some cases, the claims were partially documented; in others, no documentation was provided. In commenting on the draft report, Shell stated that in every instance, adequate information was either already in our possession or provided to us in meetings during March and April 1996. Shell further stated that full support was attached to invoices for each of three examples cited in our report. We again met with representatives of Shell and its principal contractor, Morrison Knudsen, in November 1996, but most of the documentation was not yet available, and we agreed to examine additional documentation that was provided to us in December 1996. As a result of the most recent data, we revised the examples described below. The difficulty in obtaining documentation for the three examples illustrates our point that the Army needs procedures for documentation and the examination of claims. Taking the additional information into consideration, the following are examples from our sample of selected summary vouchers where insufficient documentation was available to make an adequate review of shared costs. For a $666,035 line item initially described as "other direct costs," support for only $30,125 had been provided to us at the time of our draft report. Shell provided detailed support by December 1996 for an additional $479,015. The detailed support indicated that the costs were for contractor studies and left $156,895 in need of further documentation. $301,977 for brine disposal by a subcontractor did not have, at the time of our draft report, information on the quantity to be paid for, such as the number and size of railroad tank cars. The separate agreements cited in Shell's comments permitted payments up to a limit, but data on actual amounts were still needed. Such data were provided for $266,723 but were still lacking for the remaining $35,254. $187,275 of $326,566 for operations of an incinerator appeared to be for incentive awards but was not specified in sufficient detail, such as the number or type of awards, to show the basis for the expenditure. The claim did not actually include awards, and support for $166,183 was provided in December 1996, although a clear link to invoices was not always shown. The remaining $21,092 lacked sufficient detail. Overall, the Army does not have detailed procedures for examining Shell's shared costs.
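The arithmetic behind the three examples can be tallied with a minimal sketch; the dollar figures are taken from the text above, and the helper function is purely illustrative.

```python
# Illustrative tally of documented support versus the amount still in need of
# documentation for the three example claims discussed above.
from typing import Dict, List, Tuple

def unsupported_remainder(claimed: int, documented_amounts: List[int]) -> int:
    """Portion of a claimed amount for which support was still lacking."""
    return claimed - sum(documented_amounts)

examples: Dict[str, Tuple[int, List[int]]] = {
    "other direct costs line item": (666_035, [30_125, 479_015]),  # $156,895 remains
    "brine disposal": (301_977, [266_723]),                        # $35,254 remains
    "incinerator operations item": (187_275, [166_183]),           # $21,092 remains
}

for name, (claimed, documented) in examples.items():
    print(f"{name}: ${unsupported_remainder(claimed, documented):,} still undocumented")
```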
In the absence of detailed examination procedures, the Army's review consists of comparing Shell's monthly costs with the previous month's costs to look for significant variances. We found that the Army has not fully exercised its authority to review the costs of Shell's contractors and subcontractors. For example, the Army shared about $48 million in costs that Shell claimed for technical studies, but has not examined the relevant contracts. Army officials said that they operate with Shell in an atmosphere of trust. They also stated that they believe that they have no right to interfere in Shell's relationship with its contractors and that standard government contract controls do not apply to Shell's commercial contracts. Notwithstanding these points, the Army is permitted to review Shell's costs under the arsenal agreements and should do so to ensure that the costs being claimed are appropriate. The arsenal agreements require that shared costs be supported by an approved task plan or other written agreement. The arsenal's Program Manager's Office and Shell officials have made numerous agreements implementing the guidance in the settlement agreement. However, not all agreements were written, and written agreements sometimes lacked approval signatures, estimates of costs to be incurred, clear descriptions of the tasks to be done, or statements that costs could be shared. Of the 153 summary vouchers we reviewed, 48 lacked specific written support, such as a signed agreement, a statement stipulating that the item was allocable or reimbursable, or authorization for the task. In some cases where signed agreements were lacking, Shell and the Army used their commercial and government practices as a standard in determining the reasonableness of costs. Community relations is one area where cost sharing agreements have not been finalized and documentation was limited, making it difficult to adequately review claims. A written agreement was drafted and dated June 1990 (retroactive to January 1988), but was never signed. Although the unsigned agreement called for the Army to assume the lead responsibility in this area, Shell retained a contractor to provide public relations support. Shell and Army officials stated that for guidance on community relations activities, they refer to the requirements of the Comprehensive Environmental Response, Compensation, and Liability Act. Our random sample included $481,000 in charges for public affairs activities, and the Army had approved them based on two Shell statements of allocable costs that gave totals for broad categories. These charges were incurred from August 1991 through December 1992; the largest categories were public affairs activities regarding the successful operation of an incinerator ($245,047), public education/involvement ($120,927), and agency support ($73,864). Each category in the statements included a brief summary but no breakout of amounts for specific activities. Breakouts were often available on request, but detailed expense data were incomplete. For example, Shell provided additional data to us showing that public education/involvement included subcategories such as an arsenal brochure ($19,066), a Fish and Wildlife Service Spring Event ($14,480), and Bald Eagle Day ($15,567).
Further, the detailed data for Bald Eagle Day showed $4,679 for unspecified labor costs; $4,622 for promotional "eagle pencils;" $3,026 for advertising; $1,278 for bus service; and other categories of less than $1,000 each for such items as photographs, videotape, copying, and box lunches. We did not review the appropriateness of individual cost claims. However, the above examples further demonstrate that the Army has not ensured it has sufficient information to review shared costs. The arsenal's Director of Public Affairs stated that he would require supporting documentation on such claims in the future. Federal standards require that internal control systems provide reasonable assurance that expenditures are documented, recorded, and accounted for properly. We found that the Army has not adequately documented its decisions concerning some capital assets and real estate. For example, as part of interim response activities, Shell had to vacate an office building it owned and occupied on the arsenal. The Army provided land on the arsenal for Shell to build a replacement building. The Army also reimbursed Shell for the full $670,000 cost of construction. Several provisions in the arsenal agreements could allow construction to take place on the arsenal. Depending on the circumstances that caused the building to be vacated and a replacement built, the construction might have been an Army-only cost, a Shell-only cost, or a shared cost. In this case, the building was treated as an Army-only cost, but the reasons for this treatment were not documented. In another instance, the Army did not document the basis for a transaction with Shell. Shell purchased property located just outside the arsenal's north boundary for about $4 million. The Army needed access to the land to conduct offsite groundwater treatment activities. The groundwater treatment was a shared cost. Shell purchased the land because it was able to do so more quickly than the Army would have been able to, according to Army and Shell officials. For its use of the property, the Army paid Shell about $2 million through transaction adjustments--half the purchase price. The land is well situated for commercial and industrial development as it is near an interstate highway and the new Denver International Airport (see fig. 1). Shell will retain the land when cleanup is complete. Another instance involved capital assets purchased by Shell and charged as an allocable cost. The Army could receive a proportionate credit for such assets as vehicles, office equipment, and furniture, when they are disposed of or sold. However, the identification and disposition of the allocable assets was not documented. In discussing this issue, Army and Shell officials did not provide detailed documentation, but described the disposition of a large set of assets relating to an incinerator. They stated that the Army had received a credit for items sold and that other items were being stored. Because the same Army staff members record, review, and audit Shell's allocable costs, the Army does not have adequate control over the shared cost process. Federal internal control standards require that key duties and responsibilities such as recording and reviewing transactions be separated systematically among individuals to protect the government against error, waste, and wrongful acts. Moreover, the Army and Shell staff who conduct the day-to-day operation of the shared cost system also review the shared costs annually. 
In 1988 and 1989, the Army Audit Agency reviewed Shell's costs and found numerous problems, including insufficient documentation and costs claimed without a task plan. Although the annual reviews by operating staff continue, there have been no other independent verifications or follow-on audits of Shell's shared costs. The Army will be paying 80 percent of millions of dollars in shared costs for the cleanup of Rocky Mountain Arsenal. Strengthening its review process for shared cost claims is key to ensuring appropriate sharing of costs. Thus, we recommend that the Secretary of the Army (1) establish specific procedures for the examination of Shell's cost claims and documentation, including the costs of its contractors and subcontractors; (2) establish standard procedures for the approval and documentation of supplementary agreements regarding the allocability of costs and the treatment of capital assets and real estate; and (3) require that such key duties and responsibilities as recording and reviewing transactions be performed by different individuals. Both DOD and Shell provided written comments on a draft of this report (see apps. III and IV). DOD concurred with our recommendations regarding procedures for documentation of costs and agreements, but noted that adequate documentation exists for most shared cost claims. In its comments, Shell disagreed that the documentation it made available was insufficient to review the appropriateness of the cost claims. In its comments concerning our two recommendations for procedures to ensure documentation of costs and agreements, DOD stated that most claims were documented. However, we identified cases where documentation for summary vouchers and cost sharing agreements for the tasks involved was lacking. We continue to believe that these conditions represent weaknesses in the Army's review process. With regard to Shell documentation, we do not recommend action on individual items, but focus on the Army's review process. We agree that Shell provided records, but the amounts did not always support the summary vouchers we examined. We believe that our comments regarding the weaknesses in the review process are correct, but we revised our report to reflect the additional information provided by Shell and its contractor. Our initial review raised questions about support for 55 of 153 items. After discussing these 55 items with Shell and its contractor and examining additional contractor documents during March and April 1996, we reduced the number of items with questions to the 31 cited in our draft report, including the 3 examples. Following Shell's written comments, we met again in November and December regarding the examples. A substantially greater amount is now supported, but gaps remain in each example, as described in this report. Finally, DOD partially concurred with our recommendation for separation of duties, stating that it complies with requirements under procedures now in place. We recognize that internal controls are adapted to the risks being faced and the resources available. DOD has attempted to address such control issues by designating one person in a two-person group, a staff accountant, to review the data and the other person to ensure that the data are generally complete. We believe controls could be further strengthened by having others--who do not conduct the day-to-day operations--be responsible for the annual review of shared costs.
This is a particular concern because only one external review of transactions has been made, and that review occurred just after the settlement agreement was put in place 8 years ago. We interviewed officials at, and reviewed documentation provided by, the arsenal Program Manager's Office; Shell Oil Company, Denver, Colorado, and Houston, Texas; the Defense Contract Audit Agency, Boise, Idaho; Morrison Knudsen and Holme Roberts Owen, Denver, Colorado; and the state of Colorado. We obtained and reviewed Army and Shell shared cost documentation, but we did not verify the total reported costs. We reviewed 153 randomly selected items from Shell's journal entries for allocable and reimbursable costs incurred from January 1988 to February 1995. We also reviewed all monthly invoices for allocable costs incurred by the Shell contractors Morrison Knudsen and Holme Roberts Owen from the fourth quarter ending November 1988 through the third quarter of 1995. We examined supporting documents provided by Shell and its contractors. We did not review the appropriateness of individual cost claims. Although we examined additional documentation provided by Shell and its contractor for the 3 examples in our report, we did not pursue additional documentation for the remaining 28 of the 31 sample items cited in the report. We conducted our review from April 1995 to December 1996 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to appropriate congressional committees. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please contact me on (202) 512-8412. Major contributors to this report are listed in appendix V. The Army and Shell formalized their agreements and guidance regarding activities and costs for environmental cleanup at the Rocky Mountain Arsenal in the Rocky Mountain Arsenal Settlement Agreement, the Federal Facility Agreement, and the Financial Manual. The Settlement Agreement establishes a mechanism for apportioning cleanup responsibilities and costs between the Army and Shell. The agreement defines allocable costs and includes lists of Shell-only and Army-only costs. Under this agreement, Shell may hire contractors "subject to the approval of the Army." The Federal Facility Agreement ensures compliance with environmental legislation, including the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 (42 U.S.C. 9601), and establishes a procedure that allows the various participants to cooperate in environmental cleanup at the arsenal. It "provides the process for the planning, selection, design, implementation, operation, and maintenance of response actions taken pursuant to CERCLA as the result of the release or threatened release of hazardous substances, pollutants or contaminants at or from the arsenal, including the public participation process." The Financial Manual describes the financial, accounting, and auditing procedures to be used for shared costs incurred in connection with arsenal cleanup. It describes primary and secondary documentation for allocable costs and includes examples of some documentation. It provides procedures under which cost-related disputes between the Army and Shell are to be settled, but it does not include procedures for examining and accepting shared costs.
The Manual stipulates that the procedures described in it will be conducted in accordance with generally accepted accounting principles consistently applied. The following material summarizes cost definitions found in the Rocky Mountain Arsenal Settlement Agreement, which provides guidance regarding allocable, reimbursable, Shell-only, and Army-only costs. The Army and Shell supplement this guidance with agreements on the specific tasks to be included in each category. The Settlement Agreement defines allocable costs as all response costs, excluding Army-only and Shell-only costs; all response costs for activities outside the arsenal boundaries; associated costs for involvement of the Environmental Protection Agency, the Agency for Toxic Substances and Disease Registry, and the Department of the Interior; all natural resource damage assessment costs; and other costs agreed on in writing by the Army and Shell as allocable costs. Exhibit D of the Settlement Agreement describes Shell-only costs as those pertaining to the following actions: demolition, removal, and disposal of all buildings and structures owned by Shell or its predecessor company (includes a list of the structures); demolition, removal, and disposal of all equipment in Shell-owned structures and in buildings leased by Shell immediately before the effective date of the Settlement Agreement; assessment activities associated with the two above activities; Shell staff at the Central Repository and the Joint Administrative Record; Shell activities associated with dispute resolution, judicial review, and the Technical Review Committee; and Shell's program management, including labor, materials, supplies, and overhead for Shell's Denver Project Site Team, litigation support, legal fees, and auditing expenses. Exhibit C of the Settlement Agreement describes Army-only costs as those pertaining to the following actions: assessment, demolition, removal, and disposal of all buildings, structures, and equipment not listed as Shell-only in Exhibit D; assessment, identification, removal, and disposal of unexploded ordnance; assessment, decontamination, removal, treatment, and/or disposal of all soil, excluding soil that includes a Shell compound, in specified areas; Army staff, and all facilities and equipment, for the Central Repository and the Joint Administrative Record and Document Facility; Army activities associated with dispute resolution, judicial review, and the Technical Review Committee; Army program management, including labor, materials, supplies, and overhead for the Army's arsenal Program Manager's Office and its divisions, litigation support, legal fees, and auditing expenses; and other specific miscellaneous actions, such as emergency action responses to a release of pollutants or contaminants. Margaret Armen, Senior Attorney
Pursuant to a congressional request, GAO reviewed cleanup costs claimed by Shell Oil Company and shared by Shell and the U.S. Army at Rocky Mountain Arsenal, Colorado, focusing on: (1) selected aspects of the processes that the Army uses to review cost claims under its settlement agreement with Shell; and (2) the adequacy of these processes. GAO found that: (1) the process the Army uses to review claims under its cost-sharing agreement for cleanup at the arsenal has not been sufficient to ensure that costs claimed by Shell are appropriate; (2) specifically, the review process does not always ensure that sufficient documentation is available to review claimed costs and that formal agreements exist to define which costs should be shared; (3) the review process generally does not look at the detailed documentation supporting cost claims; (4) GAO's work showed that in most cases further information was available, but in some cases it was not; (5) also, the review process does not have effective checks and balances, such as separation of key duties and responsibilities and independent reviews; (6) for example, staff associated on a daily basis with the shared cost system also conduct the annual assessment of the shared costs; and (7) the combination of limited documentation and inadequate controls places the government at risk of paying unwarranted charges.
4,833
266
With the enactment of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) in 1980, the Congress created the Superfund program to clean up the nation's most severely contaminated hazardous waste sites. The Congress extended the program in 1986 and 1990 and is now considering another reauthorization. Under CERCLA, EPA investigates contaminated areas and places the most highly contaminated sites on the National Priorities List (NPL) for study and cleanup. As of December 1996, there were 1,210 sites on the NPL. After a site is placed on the NPL, EPA extensively studies and evaluates the site to determine the appropriate cleanup remedy for it. The remedy selected depends upon the site's characteristics, such as the types and levels of contamination, the risks posed to human health and the environment, and the applicable cleanup standards. The site's cleanup can be conducted by EPA or the party responsible for the contamination, with oversight by EPA or the state. Through fiscal year 1995, the latest period for which EPA has data, EPA had selected incineration as a Superfund cleanup remedy 43 times, or in about 6 percent of the decisions on remedies it had reached through that date. At the time of our review, three incinerators were operating at Superfund sites--the Bayou Bonfouca site in Louisiana, the Times Beach site in Missouri, and the Baird and McGuire site in Massachusetts. As of October 1996, EPA planned to use incineration at four additional sites. Incineration is the burning of substances by a controlled flame in an enclosed area that is referred to as a kiln. Incineration involves four basic steps: (1) wastes, such as contaminated soil, are prepared and fed into the incinerator; (2) the wastes are burned, converting contamination into residual products in the form of ash and gases; (3) the ash is collected, cooled, and removed from the incinerator; and (4) the gases are cooled, remaining contaminants are filtered out, and the cleaned gases are released to the atmosphere through the incinerator's stack. (See fig. 1.) Incinerators may be fixed facilities that accept waste from a variety of sources, or they may be transportable or mobile systems. Fixed facility hazardous waste incinerators are required by the Resource Conservation and Recovery Act of 1976 (RCRA) to obtain an operating permit from EPA. RCRA regulates all facets of the generation, transportation, treatment, storage, and disposal of hazardous wastes in the United States. RCRA requires that fixed facility hazardous waste incinerators be operated according to EPA's regulations and be inspected by EPA every 2 years. Incinerators used to clean Superfund sites are generally "transportable," that is, they are transported to the site in pieces, assembled, and removed when the cleanup is complete. These incinerators are constructed and operated by contractors. CERCLA exempts any portion of a cleanup action conducted entirely on-site, including incineration, from the need to obtain any permit. However, CERCLA requires EPA to apply legally applicable or relevant and appropriate environmental standards from other federal laws, including RCRA, to Superfund cleanups. Accordingly, EPA requires incinerators at Superfund sites to meet RCRA's substantive requirements, such as the act's standards for emissions. EPA relies on four principal methods to ensure the safe operation of incinerators used to clean up Superfund sites.
These methods are (1) setting site-specific standards for emissions and operations, (2) incorporating safety features into an incinerator's emergency systems, (3) monitoring emissions at the incinerator's stack and along the site's perimeter, and (4) providing 24-hour on-site oversight. (See app. I for more details on the safeguards at the three incinerators in operation at the time of our review.) EPA establishes specific cleanup standards for each incinerator used at a Superfund site. These standards are based on studies of the site's characteristics (e.g., the type and concentration of contamination present) conducted during the incinerator's design and construction. Standards can be adopted from other environmental programs or laws, such as RCRA or the Toxic Substances Control Act. Typically, RCRA's standards for fixed facility hazardous waste incinerators are applied. RCRA's standards govern the extent to which an incinerator must destroy and remove contaminants and set limits on emissions from the incinerator. EPA establishes the operating parameters needed for the incinerator to achieve the emissions standards and tests the parameters through a "trial burn" required under RCRA. The operating parameters can include the temperature of the kiln, the minimum oxygen levels needed to break down contaminants in the kiln, and the maximum carbon monoxide levels that may be produced. Although not required by EPA's regulations, a trial burn plan was reviewed by a RCRA expert at all the sites we visited to determine whether the proper operating conditions were being tested. According to EPA officials, if the incinerator operates within the parameters established at the trial burn, the incinerator will be operating safely. Besides establishing standards for emissions and operations, EPA requires engineering controls to prevent the standards from being exceeded. In addition, incinerators at the three sites we visited had built-in safety features unique to each model to prevent excessive emissions of contaminants in the event of an emergency shutdown. RCRA's regulations, which EPA applies at Superfund sites, require that incinerators have devices, called automatic waste feed cutoffs, that will stop contaminated waste from being fed into an incinerator when the operating conditions deviate from the required operating parameters. The waste feed would be cut off, for example, when a change in pressure or a drop in temperature occurred that could compromise the kiln's effective incineration of the contaminants. These cutoffs are set with a "cushion" so that the waste feed shuts down before the incinerator operates outside the established parameters. The number and type of waste feed cutoffs will depend on the requirements for each site. According to EPA officials, some cutoffs are routine, to be expected during the normal course of an incinerator's operations, and a sign that the safety mechanisms are working properly. For example, cutoffs can be triggered by expected changes in pressure within the kiln brought on by variations in the waste input stream. However, other cutoffs, especially repeated cutoffs, can be signs of problems. At the three sites we visited, all of the incinerators had some additional safety measures, not required by regulation, in the event that a critical part of the incinerator failed. 
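To illustrate how the operating parameters set at the trial burn and the automatic waste feed cutoffs fit together, the sketch below checks a set of readings against cushioned limits. The parameter names, limit values, and the size of the cushion are hypothetical examples, not values from any actual trial burn, permit, or EPA regulation.

```python
# Illustrative check of readings against trial-burn operating limits with a
# "cushion" so the waste feed cuts off before a limit is actually exceeded.
# Parameter names, limits, and the 5 percent cushion are hypothetical.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class OperatingLimit:
    minimum: Optional[float] = None   # cut off if the reading falls below
    maximum: Optional[float] = None   # cut off if the reading rises above

CUSHION = 0.05  # trigger the cutoff 5 percent inside the established limit

LIMITS: Dict[str, OperatingLimit] = {
    "kiln_temperature_f": OperatingLimit(minimum=1600.0),
    "oxygen_percent": OperatingLimit(minimum=3.0),
    "carbon_monoxide_ppm": OperatingLimit(maximum=100.0),
}

def waste_feed_allowed(readings: Dict[str, float]) -> bool:
    """Return False (automatic waste feed cutoff) if any reading crosses the
    cushioned version of its established operating limit."""
    for name, limit in LIMITS.items():
        value = readings[name]
        if limit.minimum is not None and value < limit.minimum * (1 + CUSHION):
            return False
        if limit.maximum is not None and value > limit.maximum * (1 - CUSHION):
            return False
    return True

# Example readings that stay comfortably inside the cushioned limits.
print(waste_feed_allowed({"kiln_temperature_f": 1750.0,
                          "oxygen_percent": 4.2,
                          "carbon_monoxide_ppm": 35.0}))  # True
```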
At the Times Beach and the Bayou Bonfouca sites, the incinerators have emergency systems that fully shut down the incinerator and decontaminate the gases remaining in the system at the time of the shutdown. These systems seal off the gases and expose them to a high-temperature flame to destroy any residual contamination. At the Baird and McGuire site, the emergency system ensures that metals and particulates are removed before gases are emitted from the kiln. The most common reason for activating the emergency systems at the three sites was a shutdown caused by a power outage. EPA continuously monitors the air in the vicinity of an incinerator to ensure that emissions from the stack and from areas where soil is being excavated before being put into the incinerator do not exceed the maximum permitted levels. Air monitoring at the sites involves measuring conditions in real time and performing detailed laboratory analyses of samples that are collected over a longer period of time. For example, at the Baird and McGuire site, stack emissions are monitored continuously to measure key indicators of combustion, such as the oxygen levels in exhaust gases, to ensure that the incinerator is operating properly. For organic contamination, a more detailed laboratory analysis is carried out during the trial burn to provide additional assurance that dioxin, a cancer-causing substance produced by the burning of organic substances, is not excessively emitted. The Baird and McGuire site also has nine air monitors at its perimeter, each of which is hooked up to alarms that sound if emission levels approach the established parameters. These monitors, which are intended primarily to detect possible emissions from the on-site excavation of contaminated soil, monitor and record data every minute. According to the incineration contractor's project manager at the Baird and McGuire site, the air monitors picked up elevated levels only once during an excavation, when a drum of chemicals was removed. In a situation such as this, the excavation is slowed to bring emissions down to required levels. According to EPA's reports for the three sites we visited, emissions from the incinerators' stacks never exceeded the permitted levels. Although 24-hour oversight is not required by regulations or formal EPA policy, Corps of Engineers or state officials continuously observed the operations of the incinerator at each of the sites we visited. For the two cleanups that EPA managed (at the Baird and McGuire and Bayou Bonfouca sites), EPA had contracted with the U.S. Army Corps of Engineers for on-site oversight, while at Times Beach, where a responsible party was conducting the cleanup, a Missouri state agency provided oversight. At the time of our visit, these sites had staff to cover operations 24 hours a day. For example, at Baird and McGuire, 12 Corps of Engineers staff were assigned to monitor the incinerator's operations. On-site observation involves visual inspections and record reviews to ensure that the incineration companies are meeting the operating conditions specified by EPA. At the sites we visited, Corps of Engineers or state officials were responsible for checking the operating parameters displayed on computer screens in the incineration control rooms and inspecting measurement devices on incineration equipment to verify that they were working properly. 
For example, at Times Beach, a state official monitored operations from an on-site computer screen, while a state RCRA employee obtained the computerized information from his office in the state capitol to ensure that the conditions of the state's RCRA permit were being met. At Bayou Bonfouca, Corps officials examined operation log books and talked to incinerator operators to look for any problems and oversaw the procedures for testing and sampling emissions from the incinerator. The officials were also responsible for reviewing the air-monitoring reports and operation summary reports required of the incineration company and reporting their findings to EPA. In addition to the safeguards discussed above, EPA planned two additional methods to promote the safe operation of Superfund incinerators but never fully implemented them. First, EPA issued a directive requiring inspectors from its hazardous waste incinerator inspection program to periodically evaluate Superfund incinerators. This requirement had not been followed at two of the three incinerators operating at the time of our review. Second, EPA has not carried out its intention to systematically ensure that the lessons learned about an incinerator's operations in one incineration project are applied to subsequent projects. EPA is relying upon informal communication to transfer "best practices" from one incineration project to the next. In 1991, EPA issued a directive requiring that the same type of inspections that are conducted at RCRA-permitted hazardous waste incinerators be conducted at Superfund incinerators. In 1993, EPA issued interim guidance on how to perform these inspections at Superfund incinerators. This guidance required that inspectors in EPA's regional offices review the operating records for Superfund incinerators and examine the units to ensure that they were operating within their established parameters. Only one of the three incinerators we visited had received such an inspection. That incinerator received two inspections, one of which was conducted while the incinerator was shut down for maintenance. EPA regional staff we talked to were unaware of the directive and guidance on these inspections. EPA headquarters personnel told us that they were unaware that the inspections were not taking place but confirmed with the regions that only one region was inspecting Superfund incinerators. EPA officials attributed the lack of inspections to the higher priority given to other enforcement demands and a reorganization of enforcement functions, which muddied the responsibility for inspecting the incinerators. Headquarters officials said they would encourage the regions to do the inspections in the future. According to officials from EPA's Office of Enforcement and Compliance Assurance (OECA), who are responsible for implementing the inspection program, RCRA incinerator inspectors had visited Superfund incinerators when the guidance was first issued in 1993. However, these inspectors said their inspections were hampered because they did not have a site-specific document containing the requirements for each incinerator's operations that they could use to evaluate these operations. 
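A consolidated, site-specific requirements record of the kind these inspectors lacked might look something like the sketch below. It is purely illustrative: the field names and example values are hypothetical and do not reflect any EPA document or format.

```python
# Hypothetical sketch of a consolidated, site-specific operating requirements
# record for a Superfund incinerator, of the kind an inspector could use in
# place of a RCRA permit. All field names and values are illustrative only.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class IncineratorRequirements:
    site_name: str
    applicable_standards: List[str]        # e.g., RCRA emission standards
    operating_parameters: Dict[str, str]   # parameter -> established limit
    waste_feed_cutoffs: int                # number of automatic cutoffs
    emergency_controls: List[str]          # e.g., emergency stack venting
    air_monitoring: List[str]              # stack and perimeter monitoring
    onsite_oversight: str                  # who provides on-site oversight

example = IncineratorRequirements(
    site_name="Example Superfund site",
    applicable_standards=["RCRA incinerator emission standards (illustrative)"],
    operating_parameters={"kiln temperature": ">= value set at trial burn",
                          "carbon monoxide": "<= value set at trial burn"},
    waste_feed_cutoffs=13,
    emergency_controls=["emergency system to treat kiln gases on shutdown"],
    air_monitoring=["continuous stack monitoring", "perimeter monitors"],
    onsite_oversight="Corps of Engineers or state staff, 24 hours a day",
)
print(example.site_name, "-", len(example.operating_parameters), "parameters listed")
```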
At Superfund sites where transportable incinerators are used, EPA may specify standards, operating parameters, emergency controls, and requirements for air monitoring and on-site oversight in various documents, such as a contract with the operator of the incinerator, a court-approved consent decree with the responsible party, or a work plan for the site. In contrast, fixed facility hazardous waste incinerators require a RCRA permit, which documents the conditions under which an incinerator must operate. Inspectors use the conditions specified in the permit as criteria for evaluating the incinerator's operations. For Superfund incinerators, however, an operating permit is not required. The 1993 interim guidance for inspecting Superfund incinerators recognized the need for a single document specifying site-specific operating requirements and procedures and stated that such a document would be developed. However, no such document was developed because, according to EPA officials, other priorities intervened. EPA officials attributed the lack of recent Superfund incinerator inspections, in part, to the lack of a consolidated list of requirements. The Superfund, RCRA, and OECA officials we interviewed on this question agreed that Superfund incinerators should be inspected. They stated that experienced RCRA hazardous waste incinerator inspectors in EPA's regional offices have knowledge and experience that makes them well qualified to evaluate the operations of Superfund incinerators. These officials believed that an inspection by an outside, independent inspector was important even if an incinerator had on-site oversight. RCRA officials told us that at the few RCRA-permitted hazardous waste incinerators with on-site inspectors, the inspectors are rotated every 6 months in order to maintain their independence and objectivity. In addition, they said that experienced incinerator inspectors would have more expertise than the Corps of Engineers or state staff assigned to oversee the incinerators' operations. Although these staff do receive training, they are generally not experts on incineration. Because EPA site managers may work on as few as one or two projects at a time and because incineration is not a common remedy at Superfund sites, managers may have limited experience with incineration. However, EPA does not have any formal mechanism to share the lessons learned about an incinerator's operations. The need for information-sharing is illustrated by experiences at two sites we visited. The Bayou Bonfouca site had a policy to stop feeding waste to the incinerator during severe storms. This policy was adopted to reassure the public that the incinerator would not suffer an emergency shutdown during a storm-related power outage. The Times Beach site, which was using the same incinerator model, did not formally adopt this policy until after a severe storm had knocked out the power at that incinerator, causing an emergency shutdown. The storm and power outage caused the emergency emissions system and the perimeter air monitors to fail. (See app. I for details.) The lessons learned from these problems could be applied to future incineration projects to prevent similar problems from arising. However, EPA has no formal mechanism to ensure that other incineration projects can benefit from the Times Beach experience. EPA officials agreed that they should be sharing the lessons learned from each site. 
According to officials, they had intended to do so by issuing fact sheets, but the effort was dropped before any fact sheets were issued. The officials stated that the fact sheets were not issued because of a fear that information on problems with incinerators' operations could be used against them in litigation. In addition, they attempted to have monthly conference calls with all of the managers of incineration sites, but the effort soon faded away. However, EPA officials told us that they do informally share lessons learned through discussions with regional staff responsible for incineration sites. Also, they encourage site managers to visit other incineration sites to learn from the experiences there; however, they do not currently intend to revive their plans for preparing fact sheets. EPA employs a number of techniques to encourage the safe operation of Superfund incinerators. These techniques include mechanical features, such as air monitors, as well as operational procedures, such as 24-hour independent oversight. However, residents of the areas surrounding incinerators frequently desire an extra degree of assurance that the incinerators are operating safely. EPA has not followed through on other opportunities to improve its oversight of incinerators and thereby provide additional assurance to the public. First, EPA has not followed its own policy of having RCRA hazardous waste incinerator inspectors inspect Superfund incinerators. Although these inspections would provide the public with independent evaluations of the incinerators' compliance, they did not take place, in part, because consolidated lists were not made available to inspectors of the standards, design requirements, and operating rules for each site where incineration is used. Inspectors could use such lists, just as they use the operating permits for fixed facility hazardous waste incinerators, as an aid in evaluating compliance. Second, EPA's attempts to systematically share the lessons learned from site to site were never fully implemented. Because incinerators are being used at relatively few Superfund sites, EPA project managers may have little or no experience with them. These managers would benefit from the experiences of other managers of sites where incinerators have been used. At the sites we visited, operational problems occurred that might be avoided at other incineration projects if the knowledge gained was preserved and shared. To provide further assurance that incinerators at Superfund sites are being operated safely, we recommend that the Administrator, EPA, implement the agency's guidance for having RCRA hazardous waste incinerator inspectors evaluate Superfund incinerators, including the development of a single document specifying site-specific operating requirements and procedures for these incinerators, and document the lessons learned about safe operation from the experiences of each Superfund site where incineration is used and institute a systematic process to share this information at other sites where incinerators are used. We provided copies of a draft of this report to EPA for its review and comment. On January 29, 1997, we met with EPA officials, including a senior process manager from EPA's Office of Emergency and Remedial Response and officials from EPA's Office of Enforcement and Compliance Assurance and Solid Waste and Emergency Response, to obtain their comments. EPA generally agreed with the facts, conclusions, and recommendations in the report. 
However, while not disagreeing that the lessons learned should be documented, EPA did question the benefits of preparing voluminous site-specific studies on lessons learned, given the decreasing use of incineration. We concur that the type of documentation should be concise and the format useful. EPA also provided technical and editorial comments, which we incorporated in the report as appropriate. To examine EPA's oversight of incinerators at Superfund sites, we visited the three Superfund sites with operating incinerators: the Baird and McGuire site in Massachusetts, the Bayou Bonfouca/Southern Shipbuilding site in Louisiana, and the Times Beach site in Missouri. At these sites, we spoke with EPA, state government, U.S. Army Corps of Engineers, and contractor officials to determine how the incinerators operate, what safety measures they employ to ensure safe operation, and what oversight activities occur. We also interviewed EPA officials in regions I, VI, and VII and in the headquarters offices of Solid Waste, Emergency and Remedial Response; Pollution Prevention and Toxics; and Enforcement and Compliance Assurance. In addition, we obtained and analyzed documents and data from EPA and from the relevant states, counties, and responsible parties when necessary. Our work was performed in accordance with generally accepted government auditing standards from February through December 1996. As arranged with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies of the report to other appropriate congressional committees; the Administrator, EPA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Should you need further information, please call me at (202) 512-6520. Major contributors to this report are listed in appendix II. We visited the three Superfund incinerators that were in operation at the time of our review: the Baird and McGuire site in Holbrook, Massachusetts; the Bayou Bonfouca/Southern Shipbuilding site in Slidell, Louisiana; and the Times Beach Superfund site near St. Louis, Missouri. A brief description of the incineration project at each site follows. The Baird and McGuire site, approximately 14 miles south of Boston, is a former chemical manufacturing facility that operated for 70 years until it was shut down in 1983. This 20-acre site is contaminated with approximately 200,000 pounds of chemicals and metals, including creosote, herbicides and pesticides, arsenic, lead, and dioxin. Chemicals from the site have contaminated groundwater, a nearby river, and a nearby lake. EPA chose to incinerate soil and other contaminated material on-site because it judged that this remedy would be the most protective of human health and because complicating factors made other remedies, such as covering the contaminated areas with a clay cap, inappropriate. These factors included the location of part of the site in a 100-year flood plain, the existence of wetlands on the site, and the potential for the contamination to spread farther (via groundwater) if the site was not effectively treated. In addition, dioxin is present at the site, leaving few off-site treatment possibilities because regulations limit the locations at which dioxin-contaminated material can be treated. 
The operation of the incinerator at the Baird and McGuire site began in June 1995 and is expected to be completed in April 1997. The incinerator was designed specifically to remediate the high levels of metal contamination at the site. (See fig. I.1.) It is configured to capture the metals (which cannot be destroyed by the incineration process and may be present in the gases produced by the burn) in a pollution control device before they are emitted into the atmosphere. The incinerator has 13 automatic waste feed cutoffs. In case the incinerator is totally shut down, a diesel backup system will keep filtration systems running to prevent the release of hazardous emissions. Emissions from the site are monitored continuously from the incinerator's stack and from nine locations along the site's perimeter. Oversight is carried out by 12 staff from the U.S. Army Corps of Engineers, who receive technical assistance from an engineering consulting firm. According to a Corps engineer at the site, the Corps staff complete inspection reports detailing on-site events 2 to 3 times per day and provide weekly summary reports for EPA's review. The Bayou Bonfouca site includes 55 acres of sediment and surface water that were contaminated with wood-treating chemicals from an abandoned creosote works plant. The main threats to human health at this site included direct contact with contaminated groundwater, the potential for contamination to spread to a nearby waterway during flooding, and the potential for direct contact with concentrated hazardous material at the unsecured site. From February 1992 through September 1995, EPA incinerated contaminated soil and other material. After incinerating the waste from the Bayou Bonfouca site, EPA began to use the incinerator to burn similar wastes from the nearby Southern Shipbuilding Superfund site. (See fig. I.2.) This site was contaminated with 110,000 cubic yards of sludge, containing mostly polycyclic aromatic hydrocarbons that were left from barge cleaning and repair operations. Polycyclic aromatic hydrocarbons are chemicals formed during the incomplete burning of coal, oil, gas, refuse, or other organic substances. In addition to 15 automatic waste feed cutoff parameters to prevent the incinerator from operating outside the regulatory limits, the incinerator has an emergency stack venting system that further treats the gases from the kiln if the incinerator is totally shut down. In case of a power outage or another event that would cause the major functions of the incinerator to fail, this emergency system draws the kiln gases into an emergency stack where a flame further destroys contaminants. According to an incineration contractor official at the Bayou Bonfouca site, this emergency system prevents the release of kiln gases that exceed emission regulations. Oversight at the Bayou Bonfouca site is carried out by a team of nine Corps of Engineers inspectors. These inspectors check the computer screens in the incinerator's control room every 2 hours to ensure that the incinerator is operating within the regulatory parameters set during the trial burn. The Corps team also inspects the incinerator's machinery, is present for all sampling and testing done by the incineration company, and documents all of the automatic waste feed cutoffs. Corps officials review monthly, quarterly, and yearly reports from the incineration contractor. 
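To illustrate the automatic waste feed cutoff mechanism described above, the following minimal sketch (in Python) shows how a control system might compare monitored operating parameters against site-specific limits and stop the waste feed whenever any parameter leaves its permitted range. The parameter names and limit values are hypothetical illustrations, not the actual permit conditions at the Baird and McGuire or Bayou Bonfouca incinerators.

# Hypothetical (minimum, maximum) operating limits of the kind set during a trial burn.
PERMIT_LIMITS = {
    "kiln_temperature_f": (1600, 2200),
    "carbon_monoxide_ppm": (0, 100),
    "chamber_pressure_in_h2o": (-1.0, 0.0),
}

def waste_feed_allowed(readings):
    """Return (allowed, violations) for one set of monitored readings."""
    violations = [name for name, value in readings.items()
                  if not (PERMIT_LIMITS[name][0] <= value <= PERMIT_LIMITS[name][1])]
    return (len(violations) == 0, violations)

readings = {
    "kiln_temperature_f": 1550,       # below the minimum, so the cutoff triggers
    "carbon_monoxide_ppm": 40,
    "chamber_pressure_in_h2o": -0.5,
}

allowed, violations = waste_feed_allowed(readings)
if not allowed:
    print("Automatic waste feed cutoff triggered:", ", ".join(violations))

In practice, each incinerator's cutoff parameters and limits are established site by site during its trial burn, as described above; the sketch only shows the general interlock logic.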
Air monitoring at the site includes continuous monitoring from the stack, the excavation site, and other areas of the site, and samples are taken daily for more complete chemical analysis. According to Corps officials, emissions have never exceeded regulatory levels. In addition, EPA Region VI had two RCRA inspections completed at the Bayou Bonfouca site. However, the incinerator was shut down for maintenance at the time of one of the inspections. This Bayou Bonfouca/Southern Shipbuilding project was completed in November 1996. The Times Beach Superfund site is a 0.8-square-mile area, 20 miles southwest of St. Louis, that was contaminated with dioxin. The contamination resulted from spraying unpaved roads with dioxin-tainted waste oil to control dust. EPA decided to incinerate soil from Times Beach and 26 other nearby sites that were contaminated in the same way. (See fig. I.3.) EPA believed that incineration was the best remedy for the large volumes of dioxin-contaminated soil and the large pieces of contaminated debris to be treated. The incineration project at Times Beach began in March 1996 and is expected to be completed in March 1997. The Times Beach site is unusual because EPA obtained a RCRA permit to operate the incinerator. A permit is generally not required at Superfund sites, and the process of obtaining it resulted in some delays in beginning operations. However, EPA regional officials obtained the permit to provide nearby residents with additional assurance that the incinerator would operate safely and would be removed after the project was completed, rather than being kept in place to burn contaminated material from other sites. As required by the permit, the Times Beach incinerator has 17 automatic waste feed cutoffs. In addition, the incinerator includes the same emergency system that is used at Bayou Bonfouca. Oversight at Times Beach is handled primarily by the Missouri Department of Natural Resources. State officials monitor operations on-site and via computer in the state capitol. Three on-site state employees originally provided oversight 24 hours a day. Currently, the state has oversight officials at the site 11-1/2 hours each weekday and 9 hours a day on the weekend. In addition, they conduct unannounced random visits to the site during off hours. To supplement the state's oversight, St. Louis County inspects operations and tracks the results of air-monitoring testing to ensure that the incinerator's emissions are in compliance with the limits set in the county's air pollution permit. According to a county official, although formal inspections are required about once every 2 years, the county informally monitors the site more frequently. As with the other sites, Times Beach has two levels of air monitoring: continuous monitoring and a more detailed laboratory analysis. According to EPA officials, emissions from the incinerator have never exceeded the permissible levels. Despite extensive monitoring at the Times Beach site, incidents have occurred. Once, when an unexpected storm interrupted electrical power and caused a shutdown, the emergency system failed to fire. High winds had blown out the pilot lights on this treatment system, which should have fired after the power to the incinerator had been lost. Without the firing, the emergency system did not further treat the kiln gases as it was designed to do. 
Although EPA concluded that the event caused no significant health effects, the agency could only estimate emission levels during the shutdown because the air-monitoring equipment that would have recorded the actual emission levels was on the same circuit as the incinerator and, therefore, was not operating during the event. To prevent future emergency shutdowns from storm-related power losses, the incineration contractor hired local weather forecasting services to improve storm warnings and formally adopted a standard operating procedure to stop the waste feeds during severe weather. (This standard operating procedure had already been in force at the Bayou Bonfouca/Southern Shipbuilding Superfund site when the event occurred.) In addition, other measures were taken to prevent the emergency system's pilot lights from being blown out and to decrease the number of power outages. Improper handling of the emission samples taken during a dioxin stack test was alleged following the discovery that the test samples were taken by a company that is a subsidiary of the incineration contractor. EPA maintains that the incinerator operator followed all required procedures for testing the samples. EPA has no regulation that prohibits the incineration contractor or one of its subsidiaries from taking, transporting, or analyzing the test samples. In addition, the time taken to deliver the samples to the laboratory was questioned--8 days from the time the samples left the site until they arrived at the laboratory. According to EPA officials, the samples are stable, making the time taken to get them to the laboratory unimportant. State officials reviewed the testing and determined that the results were valid. However, in December 1996, the EPA Ombudsman issued a report on the allegations and recommended that a new stack test be conducted to ensure public confidence in the cleanup. EPA agreed to implement the Ombudsman's recommendation. James F. Donaghy, Assistant Director Jacqueline M. Garza, Staff Evaluator Richard P. Johnson, Attorney Adviser William H. Roach, Jr., Senior Evaluator Paul J. Schmidt, Senior Evaluator Magdalena A. Slowik, Intern Edward E. Young, Jr., Senior Evaluator
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) use of incineration at Superfund sites, focusing on: (1) what safeguards EPA uses to promote the safe operation of incinerators at these sites; and (2) whether EPA has fully implemented its planned system of safeguards. GAO noted that: (1) EPA relies upon four main methods to promote the safe operation of incinerators used at Superfund sites; (2) these methods are: (a) required site-specific standards for an incinerator's emissions and performance; (b) engineering safety features built into the incinerator's systems; (c) air monitoring to measure the incinerator's emissions; and (d) on-site observation of the incinerator's operations; (3) EPA sets standards after it studies each site's characteristics; (4) each incinerator is designed with safety features intended to stop its operation if it fails to meet the specified operating conditions; (5) air monitors are placed in the incinerator's stack and around the site's perimeter to measure the incinerator's emissions; (6) at the three Superfund sites with ongoing incineration projects at the time of GAO's review, EPA had arranged for 24-hour, on-site oversight from either the U.S. Army Corps of Engineers or a state government to ensure that the incinerator was operating properly; (7) in addition to the four methods discussed above, EPA managers intended to use two other techniques, inspections and applications of lessons learned, to encourage safe operations, but neither was fully implemented; (8) EPA has not used inspectors from its hazardous waste incinerator inspection program to evaluate the operations of all Superfund incinerators as it required in a 1991 directive; (9) only one of the three incinerators GAO visited had received such an inspection; (10) EPA regional staff responsible for hazardous waste incinerator inspections were unaware that the Superfund incinerators were supposed to be inspected, and EPA headquarters officials were unaware that the inspections were not occurring; (11) EPA managers did not follow through on their intention to systematically apply the lessons learned from incineration at one site to other sites; (12) they had intended to prepare documents describing problems and solutions at each incineration project for use in designing and operating other projects and to hold periodic conference calls with the managers from incineration sites to discuss issues of common interest; (13) both of these methods of transferring information were dropped for various reasons; (14) GAO found that the lessons learned from the problems experienced at the sites GAO visited could benefit other sites; and (15) EPA headquarters officials told GAO that they encouraged Superfund project managers to share their experiences with incineration but had not facilitated this exchange in a structured way.
Under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), which created the Superfund program in 1980, the Environmental Protection Agency (EPA) assesses uncontrolled hazardous waste sites and places those posing the greatest risks to human health and the environment on the National Priorities List (NPL) for cleanup. As of September 1995, this list included 1,232 sites. Cleanup standards and the degree of cleanup needed for Superfund sites are discussed in section 121(d) of the CERCLA statute, as amended by the Superfund Amendments and Reauthorization Act of 1986 (SARA). This section requires that Superfund sites be cleaned up to the extent necessary to protect both human health and the environment. In addition, cleanups must comply with requirements under federal environmental laws that are legally "applicable" or "relevant and appropriate" (ARAR) as well as with such state environmental requirements that are more stringent than the federal standards. Furthermore, Superfund cleanups must at least attain levels established under the Safe Drinking Water Act and the Clean Water Act, where such standards are relevant and appropriate as determined by the potential use of the water and other considerations. The federal standards most frequently considered relevant and appropriate for groundwater cleanups at Superfund sites are set under the Safe Drinking Water Act. This act establishes standards, called maximum contaminant levels (MCL), for certain contaminants in water delivered by public drinking water systems. As of March 1996, the MCLs included numeric limits on about 70 contaminants. The MCLs take into account estimates of the human health risks posed by contaminants. They also consider whether it is technically and economically feasible to reduce the contamination to a level that no longer poses a health risk. Although MCLs are legally applicable to drinking water systems, section 121(d) of CERCLA generally requires that they be considered relevant and appropriate standards for cleaning up contaminated groundwater that is a potential source of drinking water. For example, the MCL for benzene is 5 micrograms per liter. This concentration would generally be the cleanup level for benzene in groundwater that is a potential source of drinking water unless the state has promulgated a more stringent standard or other requirement that is relevant and appropriate. There are few federal standards for contaminants in soil that are considered potentially applicable or relevant and appropriate except those for certain highly toxic contaminants, most notably polychlorinated biphenyls (PCB) and lead. Under the Toxic Substances Control Act, EPA sets requirements for cleaning up PCB contamination. In addition, EPA has issued guidance for cleaning up lead in soil. Early in its investigation of a site, EPA determines, on the basis of the contamination present and the conditions at the site, which chemical-specific and other standards may be considered applicable or relevant and appropriate. As EPA proceeds with the selection of a cleanup method, it adjusts the list of standards to be considered on the basis of information gained during its investigation. Among the potential standards considered are any state environmental standards that are more stringent than the federal standards for the same contaminants. 
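A minimal sketch of the cleanup-level selection rule described above, using the benzene example: the federal MCL generally serves as the groundwater cleanup level unless the state has promulgated a more stringent (lower) standard that is relevant and appropriate. The Python code below is illustrative only, and the 1 microgram-per-liter state value is a hypothetical placeholder, not an actual state standard.

def groundwater_cleanup_level(federal_mcl_ug_per_l, state_standard_ug_per_l=None):
    """Return the governing cleanup level in micrograms per liter."""
    if state_standard_ug_per_l is not None and state_standard_ug_per_l < federal_mcl_ug_per_l:
        return state_standard_ug_per_l   # the more stringent state standard controls
    return federal_mcl_ug_per_l          # otherwise the federal MCL is the cleanup level

print(groundwater_cleanup_level(5))      # benzene: 5 ug/L when no stricter state standard applies
print(groundwater_cleanup_level(5, 1))   # 1 ug/L if a state promulgated a stricter limit (hypothetical)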
In addition to numeric standards for specific contaminants, some states have set more generalized standards or policies that may have to be considered when cleaning up Superfund sites. For example, some states have established "antidegradation" policies for groundwater that could require more stringent cleanups than cleanups based on health risks. These policies are intended, among other things, to protect the state's groundwater as a potential source of drinking water. If federal or state standards do not exist for a given contaminant, the party responsible for cleaning up a Superfund site may use a site-specific risk assessment to help establish a cleanup level for that contaminant. A risk assessment evaluates the extent to which people may be exposed to the contaminant, given its concentration and the physical characteristics of the site. For example, the type of soil and the depth of the groundwater may affect whether and how quickly waste will migrate and reach a population. A risk assessment uses exposure and toxicity data to estimate the increased probability, or risk, that people could develop cancer or other health problems through exposure to this contamination. A risk estimate can be used along with the proposed waste management strategy to help determine the extent of the cleanup necessary at a site. EPA has published guidance for conducting risk assessments, a set of documents referred to collectively as the Risk Assessment Guidance for Superfund. These documents outline the well-established risk assessment principles and procedures that can be used to gather and assess information on human health risks. The documents also include information on mathematical models that can be used to estimate health risks at a site, given the contaminants present and the means of exposure to them. In addition to this guidance, EPA maintains an Integrated Risk Information System (IRIS), an on-line database on the toxicity of numerous chemicals, and publishes the Health Effects Assessment Summary Tables (HEAST), another source of information on contaminants' toxicity. EPA uses this guidance in conducting baseline risk assessments at Superfund sites, which it uses in deciding whether the human health and environmental risks posed by the contaminants are serious enough to warrant cleaning up the sites. Some states also use EPA's risk assessment guidance in setting their standards for specific chemicals. States that have set environmental standards have made decisions about what levels, or concentrations, of chemical contaminants can remain at hazardous waste sites after cleanups. We analyzed the processes that the states in our survey said they went through, as well as the factors that they said they took into consideration, in developing their soil and groundwater standards. In this section, we first summarize (1) the extent to which the states based their soil standards on estimates of the human health risks posed by contaminants at the sites and (2) the methods that the states used to estimate these risks. We then report on the factors other than health risks that the states said they considered when developing their soil standards. Since the bases for the states' standards for groundwater differed somewhat from those for soil, we summarized the information on groundwater standards separately. 
Finally, since federal drinking water standards are frequently used as cleanup standards for groundwater, we compared the states' groundwater standards to the federal standards for the same contaminants to determine the extent of their correspondence. We have included the information we obtained from the 33 states in our survey. In all, 21 of the 33 states had set their own standards for either soil or groundwater, or for both media. (See table 2.1.) Thirteen of the 21 states had set their own soil standards, and 20 had set some groundwater standards that were in addition to or different from the MCLs for drinking water, as discussed in the remainder of this section. All 13 of the states with soil standards indicated that they considered risks to human health when developing their standards. The number of chemical-specific standards per state ranged from about 10 to nearly 600. All but one of these states generally relied on EPA's guidance for estimating health risks from contaminants (Missouri had developed its soil standards before EPA issued its guidance). These states said that they had used EPA's guidance, either alone or in combination with their own methodologies and policies, to estimate health risks. (See table 2.2.) For example, Pennsylvania said that it had used EPA's guidance to estimate the toxicity of contaminants and its own model to estimate how much contamination from the soil might travel into groundwater. These estimates are two of the major components in the health risk calculation. In setting their soil standards for carcinogens, the states chose risk levels within the range that EPA uses at Superfund sites, which extends from 1 in 10,000 to 1 in 1 million. As shown in table 2.2, eight states chose the more stringent risk level of 1 in 1 million for individual carcinogens in soil, while five states chose the somewhat less stringent risk level of 1 in 100,000. For noncarcinogens in soil, 11 states used the same measure that EPA uses at Superfund sites, while 2 states used a somewhat more stringent measure. Ten of the 13 states considered factors in addition to health risks when setting their soil standards. As a result, their standards could be either more or less stringent than those based solely on estimates of health risks. These other factors included the following: Chemical levels that occur naturally in the environment. In some locations, certain contaminants may exist naturally in the soil in concentrations differing from those that would be allowed under standards based on risks to human health. For such contaminants, the states typically set their standards at the naturally occurring levels rather than at the levels based solely on risk. In some cases, this practice would result in less stringent cleanups than would be necessary to meet the risk-based standards. However, since some chemicals do not occur naturally in the environment, this practice would in some instances result in more stringent cleanups than would otherwise be required. Detection limits and practical quantification limits. When the concentrations of some contaminants that could remain in the soil without posing health risks fell below the levels that can be accurately measured or detected by current technology, the states said that they typically adopt less stringent, but measurable, concentrations as their standards. Secondary, or aesthetic, criteria. Some chemicals cause unpleasant odors or other problems at levels that do not pose human health risks. The states may set their standards for these chemicals below risk-based levels to protect the public from such problems.
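A minimal sketch of the kind of risk-based calculation described above, in which exposure and toxicity estimates are combined to back-calculate the soil concentration corresponding to a target cancer risk level. The intake factor and slope factor below are hypothetical illustration values, not figures from EPA's IRIS or HEAST or from any state's standard; the point is only to show why the choice between a 1-in-1-million and a 1-in-100,000 risk level changes the resulting standard by a factor of 10.

def risk_based_concentration(target_risk, intake_factor, slope_factor):
    """Back-calculate the soil concentration (mg/kg) that corresponds to a target risk.

    target_risk   -- acceptable incremental lifetime cancer risk (e.g., 1e-6)
    intake_factor -- lifetime average intake per unit soil concentration
                     ((mg/kg-day of intake) per (mg/kg in soil)); this lumps together
                     the exposure assumptions such as ingestion rate and body weight
    slope_factor  -- cancer slope factor, risk per (mg/kg-day) of intake
    """
    return target_risk / (intake_factor * slope_factor)

intake_factor = 1.0e-6   # hypothetical exposure assumption
slope_factor = 0.5       # hypothetical toxicity value

for risk in (1e-6, 1e-5):
    level = risk_based_concentration(risk, intake_factor, slope_factor)
    print(f"target risk {risk:.0e} -> risk-based level {level:.0f} mg/kg")

Under these illustrative assumptions, the 1-in-100,000 target yields a concentration ten times higher, and therefore less stringent, than the 1-in-1-million target.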
Twenty of the 33 states we surveyed said that they had set some chemical-specific standards that would limit the concentrations of various toxic chemicals that could be present in groundwater at Superfund sites. These states not only adopted some of the existing federal standards, such as MCLs, but also set some standards in addition to or different from them. The number of chemical-specific standards per state ranged from about 30 to nearly 600. While the remaining states that we surveyed had not developed any of their own groundwater standards, the federal MCLs are typically used as Superfund cleanup standards for groundwater. Nineteen of the 20 states had based their groundwater standards, at least in part, on estimates of the human health risks posed by exposure to chemical contaminants. (See table 2.3.) In the remaining state, none of the officials currently involved in implementing the standards could provide historical information on how the standards had been developed. Sixteen of the states had calculated their own health risk estimates when setting the standards for at least some of the contaminants. Three of the states had not predominantly developed their own estimates but had instead adopted standards developed by others, including some or all of the MCLs, that were based on estimates of health risks. All 16 states that had developed formulas for calculating human health risks had used guidance from EPA on how to estimate such risks, either alone or in combination with their own procedures and formulas. (See table 2.4.) In setting their standards, 13 of these states used a risk level of 1 in 1 million for individual carcinogens, while 3 states used the less stringent risk level of 1 in 100,000. For individual noncarcinogens, 15 states used a measure that was as stringent as EPA's, while 1 state used a more stringent measure. All but 2 of these 16 states said that they had considered factors in addition to human health risks when setting their groundwater standards. Taking such factors into account can affect the concentration of a chemical that a state will allow to remain under its standard. As a result, a standard may be either more or less stringent than one based solely on human health risks. Some of these factors, such as a state's antidegradation policy for groundwater, may require more stringent cleanups than would be required solely on the basis of risk. Because the federal MCLs are typically used as cleanup standards for groundwater used as drinking water at Superfund sites and many of the states based some of their own groundwater standards on the federal MCLs, we compared the states' standards for contaminants to the corresponding MCLs. We found that if a federal MCL existed for a chemical that was included in a state's standards, the state usually set its standard at this level. However, a majority of the states had standards for a few chemicals that differed from the MCLs. These standards tended to be more stringent than the MCLs. The states offered a variety of explanations for why their standards were more stringent than the federal MCLs. Two states set more stringent levels for certain contaminants if they could detect the contaminants at levels below the MCLs. Several states reported that some of their standards were more stringent because these standards had not been adjusted, as the MCLs had been, for other factors, such as cost or technical feasibility. Some states' standards may also have been more stringent because the states had antidegradation policies for groundwater.
For example, Wisconsin mandates that the environment be restored to the extent practicable. Consequently, it has set "preventive action limits" for contaminants in groundwater that may be used to determine the extent of the cleanup required at Superfund sites unless it can be shown that meeting such limits would not be technically or economically feasible. All of the preventive action limits are more stringent than the corresponding federal MCLs. They limit the concentrations of chemicals that can cause cancer to one-tenth the concentrations allowed under the MCLs, and they limit the concentrations of chemicals that can cause other health effects to one-fifth the concentrations allowed under the MCLs. However, the state allows exemptions for contaminants that occur naturally at levels exceeding the preventive action limits. Nearly all of the states had only a few, if any, standards for contaminants that were less stringent than the corresponding federal MCLs. However, under SARA, only those numeric standards that are more stringent than the federal standards are to be considered as cleanup levels at Superfund sites. Even though the states have set environmental standards, they have found that applying these standards uniformly to all sites may not be effective because conditions can vary from one hazardous waste site to another. As a result, sites may pose different levels of health risks and may, therefore, require different degrees of cleanup. We examined whether the states (1) allow the level of cleanup determined to be necessary under their standards to be adjusted to take into account site-specific conditions and (2) set different standards for different uses of the land or groundwater (e.g., set more stringent cleanup standards for land that could be used for residential than for industrial purposes). Overall, the states provided more flexibility in applying their soil standards than their groundwater standards. Eight of the 13 states that had soil standards indicated that they allow the extent of the cleanup deemed necessary under their standards to be adjusted for site-specific factors. For example: Georgia's risk reduction standards include the option of determining cleanup target concentrations for contaminants on the basis of site-specific risk assessments. Minnesota characterized its standards as "quick reference numbers," rather than fixed limits, that are considered when determining how extensively to clean up a site. Thus, cleanup levels can be tailored to local conditions. For example, if exposure to contaminants in soil were reduced or eliminated because the soil was inaccessible, the cleanup levels would not need to meet the standards. Alternatively, if multiple contaminants with the same toxic effect were found at the same location, the cleanup level for each individual contaminant might be more stringent than the standard. Pennsylvania said that it has developed interim standards pending final regulations for about 100 soil contaminants but considers these to be "worst case" numbers that can be adjusted to reflect site-specific conditions. contaminated soil. Alternatively, under certain conditions, some states allow cleanups to be based on site-specific risk assessments. Three of these states also said that they permitted less stringent cleanup levels than those based on their standards if meeting them was not technologically feasible or if naturally occurring levels of chemicals in the local environment were higher than the levels set by the standards. 
However, the use of such alternatives was the exception rather than the rule. Some of the states also indicated that even if they do not provide much flexibility in applying their standards, they may permit flexibility in determining how to achieve the required level of protection. For example, instead of requiring costly incineration of contaminated soil to meet its standards, a state may allow the area to be covered with a clay cap so that people cannot come into contact with the contaminants. The states may also provide flexibility by establishing different standards for different projected uses of the land at a site. Ten of the 13 states with soil standards told us they had set such standards. For example, Michigan said that it had defined soil standards for three types of land uses: residential, industrial, and commercial (with two subcategories). Generally, the more stringent standards apply to residential property, since people are more likely to be exposed to contaminants for a longer period of time on residential property than on other types of property. While most states allowed flexibility in their cleanup levels for soil, the states were less flexible in setting cleanup levels for groundwater. The degree of flexibility largely depended on whether the groundwater was considered a potential source of drinking water. place a notice in deed records to inform future property owners of any contamination left on the property. Cleanups under the third standard must also use federal MCLs when available, but for contaminants without corresponding MCLs, site-specific risk-based cleanup levels can be determined on the basis of the site's projected use. The third standard also requires deed notification. The remaining 16 states indicated that, in general, for groundwater used as drinking water or considered potentially usable as drinking water, their standards were fixed limits that must be achieved during cleanup. Most of these states did say, though, that they allowed certain limited exceptions to their standards or the use of a site-specific risk assessment under some circumstances. For example, if the contaminated water came from an area where the contamination would not immediately threaten communities, a state might let the contamination be reduced naturally over time rather than require that it be cleaned up immediately. The states gave various reasons for the relative inflexibility of their groundwater standards for drinking water. First, some of the states said that they were mirroring the federal MCLs for drinking water, which are also fixed limits. Some of the states also noted that, as discussed in section 2, they consider groundwater that may possibly be used as drinking water as a valuable resource that needs to be conserved. Although the states in our survey told us that their standards for groundwater used as drinking water are relatively fixed, some states also reported that they provided some degree of flexibility by not classifying all groundwater as drinking water. They also set less stringent standards for groundwater that would not be considered a potential source of drinking water. For example, Connecticut's groundwater classification system acknowledges that in certain areas, such as those that have had long-term industrial or commercial use, the groundwater would not be a suitable source of drinking water unless it were treated. The state does not usually require that the groundwater in such areas be cleaned up to the standards for drinking water. 
Also, some states do not classify groundwater as drinking water if it has a high mineral content or if it is located in a geological formation that does not yield much water. Twelve of the 20 states with groundwater standards had also set standards for other classifications of groundwater, such as groundwater used for agricultural purposes, groundwater of special ecological significance (e.g., supporting a vital wetland), and groundwater in urban, industrial, or commercial areas. Seven of these 12 states indicated that site-specific factors can be taken into account when determining the extent of the cleanup needed for these other types of groundwater. For example, Rhode Island told us that it allows the cleanup levels for some contaminants to differ from the levels set in its standards. For instance, vapors escaping from volatile organic chemicals in the groundwater could accumulate in overlying buildings and cause potential health effects. In some cases, these vapors could build up and cause threats of explosion. In setting its "urban" groundwater standards, this state conservatively assumed that the buildings would not be ventilated and that the vapors from the underlying groundwater would be trapped in the buildings. However, in deciding how extensively to clean up a site, the state allows for a consideration of site-specific factors, such as depths to groundwater. When site-specific factors are considered, the cleanup levels may not need to be as stringent as the standards alone would require. The Chairmen and Ranking Minority Members of the House Committee on Transportation and Infrastructure and its Subcommittee on Water Resources and Environment asked us to determine whether states (1) when setting numeric standards for cleanups at hazardous waste sites, base them on estimates of the human health risks posed by exposure to contamination and (2) when using standards, provide the flexibility to adjust the level of cleanup prescribed by the standards to take into account the conditions and risks found at individual waste sites. To accomplish these objectives, we conducted a telephone survey of 33 states, receiving a response rate of 100 percent. We selected these states because they included approximately 91 percent of the sites that the Environmental Protection Agency (EPA) had included on its National Priorities List (NPL) as of April 1995. We obtained information for standards for contaminants in soil and groundwater, the two media most frequently cleaned up at Superfund sites. (See app. II for a list of the states, the number of NPL sites in each state, the types of standards in each state, and the types of authority for the standards.) We defined standards as limits on the concentrations of toxic chemicals in soil and groundwater and included limits promulgated in a state's laws or regulations or established as guidance or policy. We also included in our definition only standards that might be used as the basis for setting cleanup levels at a Superfund facility. Because petroleum spills are not covered under Superfund legislation, we excluded states that had established standards only for petroleum products under their separate programs for cleaning up leaking underground storage tanks. We excluded states that had simply adopted the federal standards set under the Safe Drinking Water Act or had established antidegradation policies without also setting specific numeric limits on contaminants.
The questions in our survey included (1) whether a state's standards were derived from a risk-based formula and/or other factors, such as the naturally occurring levels of contamination in the soil and groundwater; (2) whether the formulas were based on EPA's guidance or on the state's own methodologies for estimating human health risks from contamination; (3) what risk levels, such as a 1-in-1-million increased probability of contracting cancer, were used in setting the standards; (4) whether the standards were set for different uses of the land or groundwater; and (5) whether the standards were considered fixed limits or the state provided flexibility to adjust the cleanup levels based on these standards to take into account specific conditions at a site. We interviewed the managers of states' Superfund programs, technical experts in these programs, and other key officials responsible for developing and/or implementing the states' standards. When necessary to clarify information, we contacted officials again for follow-up questions. The data we obtained were current as of September 1995. To ensure the accuracy of our information, we provided state officials with a summary of the information we had compiled on their standards for their review. In addition, we provided copies of a draft of our report to EPA officials, including the Director of the Office of Emergency and Remedial Response and officials responsible for working with state Superfund programs, for their review and comment. They said that the report was an accurate discussion of states' standards and provided several technical changes and clarifications on the Superfund law's requirements for cleanups. We incorporated their changes and suggestions. We conducted our audit work from March 1995 through March 1996. Stanley J. Czerwinski, Associate Director Eileen R. Larence, Assistant Director Sharon E. Butler, Senior Evaluator Susan E. Swearingen, Senior Evaluator Luann M. Moy, Senior Social Science Analyst Josephine Gaytan, Information Processing Assistant
Pursuant to a congressional request, GAO provided information on how states establish and apply environmental standards when cleaning up Superfund sites, focusing on whether states: (1) base their standards on human health risks; and (2) provide flexibility so that the level of cleanup can be adjusted according to the extent of contamination. GAO found that: (1) 20 of the 21 states reviewed base their hazardous waste site standards on the danger posed to human health, and the cost and technical feasibility of achieving them; (2) states base their groundwater standards on existing federal drinking water standards; (3) when states set their environmental standards at levels other than the federal limit, they tend to be more stringent; (4) states provide more flexibility in adjusting the cleanup level when the cleanup involves soil pollution rather than groundwater pollution, in order to reflect a particular site's condition and health risk; (5) more than half of the states with soil standards regularly allow their cleanup levels to be adjusted for site-specific conditions; (6) less than one-fourth of the states with groundwater standards allow their cleanup levels to be adjusted; and (7) those states not allowing cleanup level adjustments view their groundwater as a potential source of drinking water and implement different standards, depending on the projected use of land or groundwater.
The radio frequency spectrum is the resource that makes possible wireless communications and supports a vast array of government and commercial services. DOD uses spectrum to transmit and receive critical voice and data communications involving military tactical radio, air combat training, precision-guided munitions, unmanned aerial systems, and aeronautical telemetry and satellite control, among others. The military employs these systems for training, testing, and combat operations throughout the world. Commercial entities use spectrum to provide a variety of wireless services, including mobile voice and data, paging, broadcast television and radio, and satellite services. In the United States, FCC manages spectrum for nonfederal users under the Communications Act, while NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues as governed by the National Telecommunications and Information Administration Organization Act. FCC and NTIA, with direction from Congress and the President, jointly determine the amount of spectrum allocated for federal, nonfederal, and shared use. FCC and NTIA manage the spectrum through a system of frequency allocation and assignment. Allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users. (Fig. 1 illustrates examples of allocated spectrum uses, including DOD systems using the 1755-1850 MHz band.) In addition, spectrum managers specify service rules, which include the technical and operating characteristics of equipment. Assignment, which occurs after spectrum has been allocated for particular types of services or classes of users, involves providing users, such as commercial entities or government agencies, with a license or authorization to use a specific portion of spectrum. FCC assigns licenses within frequency bands to commercial enterprises, state and local governments, and other entities. Since 1994, FCC has used competitive bidding, or auctions, to assign certain licenses to commercial entities for their use of spectrum. Auctions are a market- based mechanism in which FCC assigns a license to the entity that submits the highest bid for specific bands of spectrum. NTIA authorizes spectrum use through frequency assignments to federal agencies. More than 60 federal agencies and departments combined have over 240,000 frequency assignments, although 9 departments, including DOD, hold 94 percent of all frequency assignments for federal use. Congress has taken a number of steps to facilitate the deployment of innovative, new commercial wireless services to consumers, including requiring more federal spectrum to be reallocated for commercial use. Relocating communications systems entails costs that are affected by many variables related to the systems themselves as well as the relocation plans. Some fixed microwave systems, for example, can use off-the-shelf commercial technology and may just need to be re-tuned to accommodate a change in frequency. However, some systems may require significant modification if the characteristics of the new spectrum frequencies differ sufficiently from the original spectrum. Specialized systems, such as those used for surveillance and law enforcement purposes, may not be compatible with commercial technology, and therefore agencies have to work with vendors to develop equipment that meets mission needs and operational requirements. 
In 2004, the Commercial Spectrum Enhancement Act (CSEA) established a Spectrum Relocation Fund, funded from auction proceeds, to cover the costs incurred by federal entities that relocate to new frequency assignments or transition to alternative technologies. The auction of spectrum licenses in the 1710-1755 MHz band was the first with relocation costs to take place under CSEA. Twelve agencies previously operated communication systems in this band, including DOD. CSEA designated 1710-1755 MHz as "eligible frequencies" for which federal relocation costs could be paid from the Spectrum Relocation Fund. In September 2006, FCC concluded the auction of licenses in the 1710- 1755 MHz band and, in accordance with CSEA, a portion of the auction proceeds is currently being used to pay spectrum relocation expenses. In response to the President's 2010 memorandum requiring that additional spectrum be made available for commercial use within 10 years, in January 2011, NTIA selected the 1755-1850 MHz band as the priority band for detailed evaluation and required federal agencies to evaluate the feasibility of relocating systems to alternative spectrum bands. DOD provided NTIA its input in September 2011, and NTIA subsequently issued its assessment of the viability for accommodating commercial wireless broadband in the band in March 2012. Most recently, the President's Council of Advisors on Science and Technology published a report in July 2012 recommending specific steps to ensure the successful implementation of the President's 2010 memorandum. The report found, for example, that clearing and vacating federal users from certain bands was not a sustainable basis for spectrum policy largely because of the high cost to relocate federal agencies and disruption to the federal missions. It recommended new policies to promote the sharing of federal spectrum. The sharing approach has been questioned by CTIA--The Wireless Association and its members, which argue that cleared spectrum and an exclusive-use approach to spectrum management has enabled the U.S. wireless industry to invest hundreds of billions of dollars to deploy mobile broadband networks resulting in economic benefits for consumers and businesses. Actual costs to relocate communications systems for 12 federal agencies from the 1710-1755 MHz band have exceeded original estimates by about $474 million, or 47 percent, as of March 2013. The original transfers from the Spectrum Relocation Fund to agency accounts, totaling over $1 billion, were made in March 2007. Subsequently, some agencies requested additional monies from the Spectrum Relocation Fund to cover relocation expenses. Agencies requesting the largest amounts of subsequent transfers include the Department of Justice ($294 million), the Department of Homeland Security ($192 million), the Department of Energy ($35 million), and the U.S. Postal Service ($6.6 million). OMB and NTIA officials expect the final relocation cost to be about $1.5 billion compared with the original estimate of about $1 billion. Total actual costs exceed estimated costs for many reasons, including unforeseen challenges, unique issues posed by specific equipment location, the transition timeframe, costs associated with achieving comparable capability, and the fact that some agencies may not have properly followed OMB and NTIA guidance to prepare the original cost estimate. NTIA reports that it expects agencies to complete the relocation effort between 2013 and 2017. 
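As a rough consistency check on the figures above, the implied original estimate and final relocation cost can be derived from the two reported numbers, the approximately $474 million overrun and the 47 percent cost growth. The sketch below is illustrative arithmetic only; the text states the original transfer total only as "over $1 billion," so the derived values are approximations.

reported_overrun_dollars = 474e6   # actual costs exceeded original estimates by about $474 million...
reported_overrun_share = 0.47      # ...which the text reports as roughly 47 percent of the original estimate

implied_original_estimate = reported_overrun_dollars / reported_overrun_share
implied_final_cost = implied_original_estimate + reported_overrun_dollars

print(f"Implied original estimate: ${implied_original_estimate / 1e9:.2f} billion")  # about $1.01 billion
print(f"Implied final cost:        ${implied_final_cost / 1e9:.2f} billion")         # about $1.48 billion, i.e., "about $1.5 billion"

Note that the subsequent transfer requests listed above (Justice, Homeland Security, Energy, and the Postal Service) are only the largest requesters; they do not sum to the net overrun because some agencies, such as DOD, spent less than their original estimates.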
Although 11 of the 12 agencies plan to spend the same amount or more than they estimated, DOD expects to complete the 1710-1755 MHz transition for about $275 million, or approximately $80 million less than its cost estimate. DOD's cost estimates, some made as early as 1995, changed over time as officials considered different relocation scenarios with differing key assumptions and their thinking evolved about the systems that would be affected, according to DOD and NTIA officials. Cost estimates to relocate military systems from the late 1990s and early 2000s ranged from a low of $38 million to as much as $1.6 billion, depending on the scenario. DOD's final cost estimate to relocate from the band was about $355 million. DOD officials told us that the relocation of systems from the 1710-1755 MHz band has been less expensive than originally estimated because many of its systems were simply re-tuned to operate in the 1755-1850 MHz band. The auction of the 1710-1755 MHz band raised almost $6.9 billion in gross winning bids from the sale of licenses to use these frequencies. This revenue minus the expected final relocation costs of approximately $1.5 billion suggests that the auction of the band will raise roughly $5.4 billion for the U.S. Treasury. As mentioned above, NTIA reports that it expects agencies to complete the relocation effort between 2013 and 2017; therefore, the final net revenue amount may change. For example, the Department of the Navy has already initiated a process to return almost $65 million to the Spectrum Relocation Fund. DOD's Office of Cost Assessment and Program Evaluation (CAPE) led the effort to prepare the department's preliminary cost estimate portion of its study to determine the feasibility of relocating its 11 major radio systems from the 1755-1850 MHz band. To do so, CAPE worked closely with cost estimators and others at the respective military services regarding the technical and cost data needed to support the estimate and how they should be gathered to maintain consistency across the services. The services' cost estimators compiled and reviewed the program data, identified the appropriate program content affected by each system's relocation, developed cost estimates under the given constraints and assumptions, and internally reviewed the estimates consistent with their standard practices before providing them to CAPE. CAPE staff then reviewed the services' estimates for accuracy and consistency, and obtained DOD management approval on its practices and findings. According to DOD officials, CAPE based this methodology on the cost estimation best practices it customarily employs. We reviewed DOD's preliminary cost estimation methodology and evaluated it against GAO's Cost Guide, which also identifies cost estimating best practices that help ensure cost estimates are comprehensive, well-documented, accurate, and credible. These characteristics of cost estimates help minimize the risk of cost overruns, missed deadlines, and unmet performance targets: A comprehensive cost estimate ensures that costs are neither omitted nor double counted. A well-documented estimate is thoroughly documented, including source data and significance, clearly detailed calculations and results, and explanations for choosing a particular method or reference. An accurate cost estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs. 
A credible estimate discusses any limitations of the analysis from uncertainty or biases surrounding data or assumptions. DOD officials developed the preliminary cost estimate as a less-rigorous, "rough-order-of-magnitude" cost estimate as outlined by NTIA, not a budget-quality cost estimate. Because of this, we performed a high-level analysis, applying GAO's identified best practices to DOD's cost estimate and methodology, and did not review all supporting data and analysis. Overall, we found that DOD's cost estimate was consistent with the purpose of the feasibility study, which was to inform the decision-making process to reallocate 500 MHz of spectrum for commercial wireless broadband use. Additionally, we found that DOD's methodology substantially met the comprehensive and well-documented characteristics of reliable cost estimates, and partially met the accurate and credible characteristics. Comprehensive--Substantially Met: We observed that DOD's estimate included complete information about systems' life cycles, an appropriate level of detail to ensure cost elements were neither omitted nor double-counted, and overarching study assumptions that applied across programs. However, some programs did not list all the discrete tasks required for relocation, and not all the individual programs had evidence of cost-influencing ground rules and assumptions. Well-documented--Substantially Met: We found that management reviewed and accepted the estimate, the estimate was consistent with the technical baseline data, and documentation for the majority of programs was sufficient that an analyst unfamiliar with the program could understand and replicate what was done. However, the documentation also captured varying levels of detail on source data and its reliability, as well as on calculations performed and estimation methodology used, some of which were not sufficient to support a rough-order-of-magnitude estimate. Accurate--Partially Met: We found that DOD properly applied appropriate inflation rates and made no apparent calculation errors. In addition, the estimated costs agreed with DOD's prior relocation cost estimate for this band conducted in 2001. However, no confidence level was specifically stated in DOD's cost estimate to determine if the costs considered are the most likely costs, which is required to fully or substantially meet this characteristic. Credible--Partially Met: We observed that DOD cross-checked major cost elements and found them to be similar. However, some sensitivity analyses and risk assessments were only completed at the program level for some programs, and not at all at a summary level. Performing risk assessments and sensitivity analyses on all projects and at the summary level is required to fully meet this characteristic, and is required on a majority of projects and at the summary level to substantially meet this characteristic. Even though DOD's preliminary cost estimate substantially met some of our best practices, as the assumptions supporting the estimate change over time, costs may also change. According to DOD officials, any change to key assumptions about the bands to which systems would move could substantially change relocation costs. Because decisions about the time frame for relocation and the spectrum bands to which the various systems would be reassigned have not been made yet, DOD based its current estimate on the most likely assumptions, provided by NTIA, some of which have already been proven inaccurate or are still undetermined. 
For example: Relocation bands: According to DOD officials, equipment relocation costs vary depending on the relocation band's proximity to the current band. Moving to bands further away than the assumed relocation bands could increase costs; moving to closer bands could decrease costs. In addition, congestion, in both the 1755-1850 MHz band and the potential bands to which its systems might be moved, complicates relocation planning. Also, DOD officials said that many of the potential spectrum bands to which DOD's systems could be relocated would not be able to accommodate the new systems unless other actions are also taken. For example, the 2025-2110 MHz band, into which DOD assumed it could move several systems and operate them on a primary basis, is currently allocated to commercial electronic news gathering systems and other commercial systems. To accommodate military systems within this band, FCC would need to withdraw this spectrum from commercial use to allow NTIA to provide DOD primary status within this band, or FCC would have to otherwise ensure that commercial systems operate on a non-interference basis with military systems. FCC has not initiated a rulemaking procedure to begin such processes. Relocation start date: DOD's cost estimate assumed relocation would begin in fiscal year 2013, but no auction has been approved, so relocation efforts have not begun. According to DOD officials, new equipment and systems continue to be deployed in and designed for the current band, and older systems are retired. This changes the overall profile of systems in the band, which can change the costs of relocation. For example, a major driver of the cost increase between DOD's 2001 and 2011 relocation estimates for the 1755-1850 MHz band was the large increase in the use of unmanned aerial systems. DOD deployed these systems very little in 2001, but their numbers had increased substantially by 2011. Conversely, equipment near the end of its life cycle when the study was completed may be retired or replaced outside of relocation efforts, which could decrease relocation costs. Inflation: Inflation will drive up costs as more time elapses before the auction occurs. In addition to changing assumptions, the high-level nature of a rough- order-of-magnitude estimate means that it is not as robust as a detailed, budget-quality lifecycle estimate, and its results should not be considered or used with the same confidence. DOD officials said that for a spectrum- band relocation effort, a detailed, budget-quality cost estimate would normally be done during the transition planning phase once a spectrum auction has been approved, and would be based on specific auction and relocation decisions. No official government revenue forecast has been prepared by CBO, FCC, NTIA, or OMB for a potential auction of the 1755-1850 MHz band licenses, but some estimates might be prepared once there is a greater likelihood of an auction. Officials at these agencies knowledgeable about estimating revenue from the auction of spectrum licenses said that it is too early to produce meaningful forecasts for a potential auction of the 1755-1850 MHz band. Moreover, CBO only provides written estimates of potential receipts when a congressional committee reports legislation invoking FCC auctions. OMB officials said NTIA, with OMB concurrence, will transmit federal agency relocation cost estimates to assist FCC in establishing minimum bids for an auction once it is announced. 
OMB would also estimate receipts and relocation costs as part of the President's budget. OMB analysts would use relocation cost information from NTIA to complete OMB's estimate of receipts. Although no official government revenue forecast exists, an economist with the Brattle Group, an economic consulting firm, published a revenue forecast in 2011 for a potential auction of the 1755-1850 MHz band that forecasted revenues of $19.4 billion for the band. We did not evaluate the accuracy of this revenue estimate. Like all forecasts, the Brattle Group study was based on certain assumptions. The study assumed that the 1755-1850 MHz band would be generally cleared of federal users. It also assumed the AWS-1 average nationwide price of $1.03 per MHz-pop as a baseline price for spectrum allocated to wireless broadband services, and that the 1755-1780 MHz portion of the band would be paired with the 2155-2180 MHz band, which various industry stakeholders currently support. The study assumed that the 95 MHz of spectrum between 1755 and 1850 MHz would be auctioned as part of a total of 470 MHz of spectrum included in 6 auctions sequenced 18 months apart and spread over 9 years with total estimated net receipts of $64 billion. In addition, the study adjusted the price of spectrum based on the increase in the supply of spectrum over the course of the six auctions, as well as for differences in the quality of the spectrum bands involved. Like all goods, the price of licensed spectrum, and ultimately the auction revenue, is determined by supply and demand. This fundamental economic concept helps to explain how the price of licensed spectrum could change depending on how much licensed spectrum is available now and in the future, and how much licensed spectrum is demanded by the wireless industry for broadband applications. Government agencies can influence the supply of spectrum available for licensing, whereas expectations about profitability determine demand for spectrum in the marketplace. Supply. In 2010, the President directed NTIA to work with FCC to make 500 MHz of spectrum available for use by commercial broadband services within 10 years. This represents a significant increase in the supply of spectrum available for licensing in the marketplace. As with all economic goods, the price and value of licensed spectrum are expected to fall as additional supply is introduced, all other things being equal. Demand. The expected, potential profitability of a spectrum license influences the level of demand for it. Currently, the demand for licensed spectrum is increasing and a primary driver of this increased demand is the significant growth in commercial-wireless broadband services, including third and fourth generation technologies that are increasingly used for smart phones and tablet computers. Some of the factors that would influence the demand for licensed spectrum are: Clearing versus Sharing: Spectrum is more valuable, and companies will pay more to license it, if it is entirely cleared of incumbent federal users, giving them sole use of licensed spectrum; spectrum licenses are less valuable if access must be shared. Sharing could potentially have a big impact on the price of spectrum licenses. In 2012, the President's Council of Advisors on Science and Technology advocated that sharing between federal and commercial users become the new norm for spectrum management, especially given the high cost and lengthy time it takes to relocate federal users. 
Certainty and Timing: Another factor that affects the value of licensed spectrum is the certainty about when it becomes available. Any increase in the probability that the spectrum would not be cleared on time would have a negative effect on the price companies are willing to pay to use it. For example, 7 years after the auction of the 1710-1755 MHz band, federal agencies are still relocating systems. The estimated 10-year timeframe to clear federal users from the 1755-1850 MHz band, and potential uncertainty around that timeframe, could negatively influence demand for the spectrum. Available Wireless Services: Innovation in the wireless broadband market is expected to continue to drive demand for wireless services. For example, demand continues to increase for smartphones and tablets as new services are introduced in the marketplace. These devices can connect to the Internet through regular cellular service using commercial spectrum, or they can use publicly available (unlicensed) spectrum via wireless fidelity (Wi-Fi) networks to access the Internet. The value of the spectrum, therefore, is determined by continued strong development of and demand for wireless services and these devices, and the profits that can be realized from them. Chairman Udall, Ranking Member Sessions, and Members of the Subcommittee, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For questions about this statement, please contact Mark L. Goldstein, Director, Physical Infrastructure Issues, at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Mike Clements, Assistant Director; Stephen Brown; Jonathan Carver; Jennifer Echard; Emile Ettedgui; Colin Fallon; Bert Japikse; Elke Kolodinski; Joshua Ormond; Jay Tallon; and Elizabeth Wood. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Radio frequency spectrum is the resource that makes possible wireless communications. Balancing competing industry and government demands for a limited amount of spectrum is a challenging and complex task. In 2006, FCC completed an auction of spectrum licenses in the 1710-1755 MHz band that had previously been allocated for federal use. As part of an effort to make additional spectrum available for commercial use, DOD assessed the feasibility of relocating 11 major communication systems from the 1755-1850 MHz band. In September 2011, DOD found that it would cost about $13 billion over 10 years to relocate most operations from the 1755-1850 MHz band. GAO was asked to review the costs to relocate federal spectrum users and revenues from spectrum auctions. This testimony addresses our preliminary findings on (1) estimated and actual relocation costs and revenue from the previously auctioned 1710-1755 MHz band, (2) the extent to which DOD followed best practices to prepare its preliminary cost estimate for vacating the 1755-1850 MHz band, and (3) existing government or industry forecasts for revenue from an auction of the 1755-1850 MHz band. GAO reviewed relevant reports; interviewed DOD, FCC, NTIA, and Office of Management and Budget officials and industry stakeholders; and analyzed the extent to which DOD's preliminary cost estimate met best practices identified in GAO's Cost Estimating and Assessment Guide (Cost Guide). Actual costs to relocate federal users from the 1710-1755 megahertz (MHz) band have exceeded the original $1 billion estimate by about $474 million as of March 2013, although auction revenues appear to exceed relocation costs by over $5 billion. Actual relocation costs exceed estimated costs for various reasons, including unforeseen challenges and some agencies not following the National Telecommunications and Information Administration's (NTIA) guidance for preparing the cost estimate. In contrast, the Department of Defense (DOD) expects to complete relocation for about $275 million or approximately $80 million less than its $355 million estimate. According to DOD officials, the relocation of systems from this band has been less expensive than originally estimated because many systems were simply re-tuned to operate in the adjacent 1755-1850 MHz band. The auction of the 1710-1755 MHz band raised almost $6.9 billion in gross winning bids. NTIA expects agencies to complete the relocation effort between 2013 and 2017; therefore, final net auction revenue (auction revenue less relocation costs) may change. DOD's preliminary cost estimate for relocating systems from the 1755-1850 MHz band substantially or partially met GAO's best practices, but changes in key assumptions may affect future costs. Adherence with GAO's Cost Guide helps to minimize the risk of cost overruns, missed deadlines, and unmet performance targets. GAO found that DOD's estimate substantially met the comprehensive and well-documented best practices. For instance, it included complete information about systems' life cycles and documentation for the majority of systems was sufficient. However, not all programs had evidence of cost-influencing ground rules and assumptions, and some of the source data were insufficient. GAO also determined that DOD partially met the accurate and credible best practices. For example, DOD applied appropriate inflation rates and its estimated costs generally agreed with its 2001 cost estimate for this band. 
However, DOD did not develop a confidence level, making it difficult to determine if the costs considered are the most likely costs, and DOD only completed some sensitivity analyses and risk assessments at the program level for some programs. DOD officials said that changes to key assumptions could substantially change its costs. Most importantly, decisions about which spectrum band DOD would relocate to are still unresolved. Nevertheless, DOD's cost estimate was consistent with its purpose--informing the decision to make additional spectrum available for commercial wireless services. No government revenue forecast has been prepared for a potential auction of licenses in the 1755-1850 MHz band, and a variety of factors could influence auction revenues. One private sector study in 2011 forecasted $19.4 billion in auction revenue for licenses in this band, assuming that federal users would be cleared and the nationwide spectrum price from a previous auction, adjusted for inflation, would apply to this spectrum. The price of spectrum, and ultimately auction revenue, is determined by supply and demand. The Federal Communications Commission (FCC) and NTIA jointly influence the amount of spectrum allocated to federal and nonfederal users (the supply). The potential profitability of a spectrum license influences its demand. Several factors would influence profitability and demand, including whether the spectrum is cleared of federal users or must be shared.
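To make the MHz-pop pricing arithmetic behind such forecasts concrete, the sketch below works through the calculation under stated assumptions. Only the $1.03 per MHz-pop baseline price, the 95 MHz of bandwidth, and the $19.4 billion forecast come from the study described above; the population figure and the single downward adjustment factor (standing in for the study's supply and band-quality adjustments) are illustrative assumptions, not values from the study.

```python
# Illustrative MHz-pop valuation sketch -- not the Brattle Group's model.
# Only the $1.03/MHz-pop baseline and the 95 MHz bandwidth come from the text;
# the population and adjustment factor below are assumptions for illustration.

baseline_price_per_mhz_pop = 1.03      # AWS-1 average nationwide price ($ per MHz per person)
bandwidth_mhz = 95                     # 1755-1850 MHz band
us_population = 310_000_000            # assumed U.S. population, circa 2011

# Unadjusted value: price per MHz-pop x bandwidth x population covered.
unadjusted_value = baseline_price_per_mhz_pop * bandwidth_mhz * us_population

# Single assumed factor standing in for the study's downward adjustments for the
# increased supply across six sequenced auctions and for band-quality differences.
assumed_adjustment_factor = 0.64

adjusted_value = unadjusted_value * assumed_adjustment_factor

print(f"Unadjusted MHz-pop value: ${unadjusted_value / 1e9:.1f} billion")
print(f"Adjusted value (assumed): ${adjusted_value / 1e9:.1f} billion")  # roughly $19.4 billion
```

Read this only as an illustration of why added supply and band-quality differences pull the adjusted figure well below the unadjusted MHz-pop value; the study's actual adjustments were more detailed.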
Medicare covers medically necessary ambulance services when no other means of transportation to receive health care services is appropriate, given the beneficiary's medical condition at the time of transport. Medicare pays for both emergency and nonemergency ambulance transports that meet the established criteria. To receive Medicare reimbursement, providers of ambulance services must also meet vehicle and crew requirements. Transport in any vehicle other than an ambulance--such as a wheelchair or stretcher van--does not qualify for Medicare payment. Medicare pays for different levels of ambulance services, which reflect the staff training and equipment required to meet the patient's needs. Basic life support (BLS) is provided by emergency medical technicians (EMT). Advanced life support (ALS) is provided by paramedics or EMTs with advanced training. ALS with specialized services is provided by the same staff as standard ALS but involves additional equipment. Currently, Medicare uses different payment methods for hospital-based and freestanding ambulance providers. Hospital-based providers are paid based on their reasonable costs. For freestanding providers, Medicare generally pays a rate based on reasonable charges, subject to an upper limit that essentially establishes a maximum payment amount. Freestanding providers can bill separately for mileage and certain supplies. Between 1987 and 1995, Medicare payments to freestanding ambulance providers more than tripled, from $602 million to almost $2 billion, rising at an average annual rate of 16 percent. Overall Medicare spending during that same time increased 11 percent annually. From 1996 through 1998, payments to freestanding ambulance providers stabilized at about $2.1 billion. BBA stipulated that total payments under the fee schedule for ambulance services in 2000 should not exceed essentially the amount that payments would have been under the old payment system. This requirement is known as a budget neutrality provision. In 1997, 11,135 freestanding and 1,119 hospital-based providers billed Medicare for ground transports. The freestanding providers are a diverse group, including private for-profit, nonprofit, and public entities. They include operations staffed almost entirely by community volunteers, public ventures that include a mix of volunteer and professional staff, and private operations using paid staff operating independently or contracting their services to local governments. In our July 2000 report, we noted that about 34 percent were managed by local fire departments. In several communities a quasi-government agency owned the ambulance equipment and contracted with private companies for staff. The majority of air ambulance transports are provided by hospital-based providers. An estimated 275 freestanding and hospital-based programs provide fixed-wing and rotor-wing air ambulance transports, which represent a small proportion (about 5 percent) of total ambulance payments. In our July 2000 report, we noted that several factors characterizing rural ambulance providers may need consideration in implementing an appropriate payment policy. These include: High per-transport costs in low-volume areas. Compared to their urban and suburban counterparts, rural ambulance providers have fewer transports over which to spread their fixed costs because of the low population density in rural areas. 
Yet, rural providers must meet many of the same basic requirements as other providers to maintain a responsive ambulance service, such as a fully equipped ambulance that is continually serviced and maintained and sufficient numbers of trained staff. As a result, rural providers that do not rely on volunteers generally have higher per-transport costs than their urban and suburban counterparts. Longer distances traveled. A common characteristic of rural ambulance providers is a large service area, which generally requires longer trips. Longer trips increase direct costs from increased mileage costs and staff travel time. They also raise indirect costs because ambulance providers must have sufficient backup services when vehicles and staff are unavailable for extended periods. Current Medicare payment policy generally allows freestanding providers to receive a payment for mileage. Nevertheless, mileage-related reimbursement issues, such as the amount paid for mileage, represent a greater concern to rural providers because of the longer distances traveled. Lack of alternative transportation services. Rural areas may lack alternative transport services, such as taxis, van services, and public transportation, which are more readily available in urban and suburban areas. This situation is complicated by the fact that some localities require ambulance providers to transport in response to an emergency call, even if the severity of the problem has not been established. Because of this situation, some providers transport a Medicare beneficiary whose need for transport does not meet Medicare coverage criteria and must therefore seek payment from the beneficiary or another source. Reliance on Medicare revenue. Medicare payments account for a substantial share of revenue for rural ambulance providers that bill Medicare. Among rural providers, 44 percent of their annual revenue in 1998, on average, was from Medicare, compared to 37 percent for urban providers, according to Project Hope Center for Health Services, a nonprofit health policy research organization. Additionally, for some rural providers, other revenue sources--such as subsidies from local tax revenues, donations, or other fundraising efforts--have not kept pace with increasing costs of delivering the services. Decreasing availability of volunteer staff. Rural ambulance providers traditionally have relied more heavily on volunteer staff than providers in urban or suburban areas. Some communities having difficulty recruiting and retaining volunteers may have had to hire paid staff, which increases the costs of providing services. Medicare's proposed fee schedule, published in September 2000, reduces the variation in maximum payment amounts to similar providers for the same type of services. The considerable variation that exists in the current payment system does not necessarily reflect expected differences in provider costs. For example, in 1999, the maximum payments for two types of emergency transport--one requiring no specialized services and the other requiring specialized services--were the same in Montana at $231 for freestanding providers. In North Dakota, the maximum payment was about $350 and also did not differ measurably for the two types of transport services. In contrast, South Dakota's maximum payment for the less intensive transport was $137, which was $30 lower than the payment for the transport requiring specialized services. Per-mile payments also varied widely. 
For example, in rural South Dakota, the payment was just over $2 per mile, compared to $6 per mile in rural Wyoming. The shift to the proposed fee schedule would narrow the wide variation in payments to ambulance providers for similar services. The proposed schedule includes one fee for each level of service. This fee is not expected to vary among providers except for two possible adjustments--one for geographic wage and price differences and the other based on the beneficiary's location, rural or urban. As a result, a national fee schedule is likely to provide increased per-trip payments to those providers that under the current system receive payments considerably below the national average and decreased payments to providers with payments that have been substantially above the national average. As part of its mandate, the negotiated rulemaking committee was directed to consider the issue of providing essential ambulance service in isolated areas. The committee recommended a rural payment adjustment to recognize higher costs associated with low-volume providers to ensure adequate access to ambulance services. Consistent with the committee's recommendation, the proposed fee schedule includes an additional mileage payment for the first 17 miles for all transports of beneficiaries in rural areas. The mileage payment adjustment, however, treats all providers in rural areas identically and does not specifically target providers that offer the only ambulance service for residents in the most isolated areas. As a result, some providers may receive the payment adjustment when they are not the only available source of ambulance service, so the adjustment may be too low for the truly isolated providers. In addition, the proposed rural adjustment is tied to the mileage payment rather than the base rate and, therefore, may not adequately help low-volume providers. Such providers may not have enough transports to enable them to cover the fixed costs associated with maintaining ambulance service. The per-mile cost would not necessarily be higher with longer trips. It is the base rate, which is designed to pay for general costs such as staff and equipment--and not the mileage rate--that may be insufficient for these providers. For that reason, adjusting the base rate rather than the mileage rate would better account for higher per-transport fixed costs. In response to our 2000 report, HCFA stated that it intends to consider alternative adjustments to more appropriately address payment to isolated, essential, low-volume rural ambulance providers. Whether or not a claim for ambulance transport is approved varies among carriers, and these discrepancies can translate into unequal coverage for beneficiaries. In 1998, between 9 percent and 26 percent of claims for payment of emergency and nonemergency ambulance transports were denied among the nine carriers that processed two-thirds of all ambulance claims. Different practices among carriers, including increased scrutiny due to concerns about fraud, may explain some of the variation in denial rates. Following are other inconsistencies in carrier practices cited in our July 2000 report that may help explain denial rate differences: National coverage policy exists only for some situations. Generally, Medicare coverage policies have been set by individual carriers rather than nationally by HCFA.
For example, in 1998, the carrier covering ambulance providers in New Jersey and Pennsylvania reimbursed transports at ALS levels where local ordinances mandated ALS as the minimum standard of care for all transports. In contrast, the carrier for an ambulance provider in Fargo, North Dakota, reduced many of the provider's ALS claims to BLS payment rates, even though a local ordinance required ALS services in all cases. (The carrier's policy has since changed.) Some carriers were found to have applied criteria inappropriately, particularly for nonemergency transports. For example, for Medicare coverage of a nonemergency ambulance transport, a beneficiary must be bed-confined. In the course of our 2000 study, we found that one carrier, which processed claims for 11 states, applied bed-confined criteria to emergency transports as well as those that were nonemergency. (The carrier's policy has since changed.) Providers were concerned that carriers sometimes determined whether Medicare would cover an ambulance claim based on the patient's ultimate diagnosis, rather than the patient's condition at the time of transport. Medicare officials have stated that the need for ambulance services is to be based on the patient's medical condition at the time of transport, not the diagnosis made later in the emergency room or hospital. Ambulance providers are required to transport beneficiaries to the nearest hospital that can appropriately treat them. Carriers may have denied payments for certain claims because they relied on inaccurate survey information specifying what services particular hospitals offer when determining whether a hospital could have appropriately served a beneficiary. However, the survey information does not always accurately reflect the situation at the time of transport, such as whether a bed was available or if the hospital was able to provide the necessary type of care. Some providers lacked information about how to fill out electronic claims forms correctly. Volunteer staffs in particular may have had difficulty filing claims, as they often lacked experience with the requirements for Medicare's claims payment process. An improperly completed claim form increases the possibility of a denial. Claims review difficulties are exacerbated by the lack of a national coding system that easily identifies the beneficiary's health condition to link it to the appropriate level of service (BLS, ALS, or ALS with specialized services). As a result, the provider may not convey the information the carrier needs to understand the beneficiary's medical condition at the time of pickup, creating a barrier to appropriate reimbursement. Medicare officials have stated that a standardized, mandated coding system would be helpful and the agency has investigated alternative approaches for implementing such a system. The agency contends that using standardized codes would promote consistency in the processing of claims, reduce the uncertainty for providers regarding claims approval, and help in filing claims properly. Overall, the proposed fee schedule will improve the equity of Medicare's payment for ambulance providers. Payments will likely increase for providers that now receive payments that are lower than average, whereas payments will likely decline for those now receiving payments above the average.
In our July 2000 report, we recommended that HCFA modify the payment adjuster for rural transports to ensure that it is structured to address the high fixed costs of low-volume providers in isolated areas, as these providers' services are essential to ensuring Medicare beneficiaries' access to ambulance services. HCFA agreed to work with the ambulance industry to identify and collect relevant data so that appropriate adjustments can be made in the future.
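The base-rate versus mileage-rate reasoning above can be made concrete with a small worked example. The sketch below follows the structure of the proposed fee schedule (a base rate for general costs, a per-mile rate, and a rural add-on applied to the first 17 miles), but every dollar figure, volume, and percentage is a hypothetical assumption chosen for illustration, not a value from the proposed rule.

```python
# Illustrative arithmetic behind the base-rate vs. mileage-adjustment argument.
# The structure (base rate + per-mile rate, rural add-on on the first 17 miles)
# follows the proposal described above; every figure below is a hypothetical assumption.

base_rate = 200.00            # assumed base payment per transport (staff, equipment)
mileage_rate = 5.00           # assumed payment per loaded mile
rural_mileage_addon = 0.50 * mileage_rate * 17   # assumed 50% add-on, first 17 miles only

providers = {
    "low-volume rural provider":    {"annual_fixed_costs": 120_000, "transports": 200},
    "higher-volume urban provider": {"annual_fixed_costs": 400_000, "transports": 2_000},
}

for name, p in providers.items():
    fixed_cost_per_transport = p["annual_fixed_costs"] / p["transports"]
    shortfall = max(fixed_cost_per_transport - base_rate, 0)
    print(f"{name}: fixed cost per transport ${fixed_cost_per_transport:,.0f}; "
          f"shortfall vs. base rate ${shortfall:,.0f}; "
          f"rural mileage add-on ${rural_mileage_addon:,.2f} per transport")
```

Under these assumptions, the mileage add-on is the same roughly $43 per transport for every rural provider, while the low-volume provider's per-transport fixed-cost shortfall is several times larger; an adjustment applied to the base rate, or targeted at low-volume isolated providers, would address that gap more directly.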
The Balanced Budget Act of 1997 required Medicare to change its payment system for ambulance services. In response, the Health Care Financing Administration (HCFA), now called the Centers for Medicare and Medicaid Services (CMS), proposed a fee schedule to standardize payments across provider types on the basis of national rates for particular services. Under the act, the fee schedule was to have applied to ambulance services furnished on or after January 1, 2000. HCFA published a proposed rule in September 2000 and has received public comment, but it has not yet issued a final rule. This testimony discusses the unique concerns of rural ambulance providers and the likely effects of the proposed fee schedule on these providers. Rural ambulance providers face a set of unique challenges that need to be considered in implementing an appropriate payment policy. Rural providers--particularly those serving large geographic areas with low population density--tend to have high per-trip costs compared with urban and suburban providers. The proposed Medicare fee schedule does not sufficiently distinguish the providers serving beneficiaries in the most isolated rural areas and may not appropriately account for the higher costs of low-volume providers.
The disposal of LLRW is the end of the radioactive material lifecycle that spans production, use, processing, interim storage, and disposal. The nuclear utility industry generates the bulk of this LLRW through the normal operation and maintenance of nuclear power plants, and through the decommissioning of these plants. Other LLRW is generated from medical, industrial, agricultural, and research applications. Common uses of radioactive material are in radiotherapy, radiography, smoke detectors, irradiation and sterilization of food and materials, measuring devices, and illumination of emergency exit signs. In the course of working with these radioactive materials, other materials, such as protective clothing and gloves, pipes, filters, and concrete, that come in contact with them will become contaminated and therefore need to be disposed of as LLRW. In the 1960s, the Atomic Energy Commission, a predecessor agency to DOE, began to encourage the development of commercial LLRW disposal facilities to accommodate the increased volume of commercial waste that was being generated. Six such disposal facilities were licensed, two of which, the Richland facility, licensed in 1965, and the Barnwell facility, licensed in 1969, remain today. Each of these facilities is located within the boundaries of or adjacent to a much larger site owned by DOE. A third facility, in Clive, Utah, operated by EnergySolutions (formerly known as Envirocare of Utah), was originally licensed by the state of Utah in 1988 to only accept naturally occurring radioactive waste. In 1991, Utah amended the facility's license to permit the disposal of some LLRW, and the Northwest Compact agreed to allow the facility to accept these wastes from noncompact states. By 2001, the facility was allowed to accept all types of class A waste. At this time, sufficient available disposal capacity exists for almost all LLRW. However, fast-approaching constraints on the availability of disposal capacity for class B and class C wastes could adversely affect the disposal of many states' LLRW. Specifically, beginning on June 30, 2008, waste generators in 36 states will be precluded from using the Barnwell disposal facility for their class B and class C LLRW. That facility currently accepts about 99 percent of the nation's class B and class C commercial LLRW. Although the Barnwell and Richland facilities have more than sufficient capacity to serve waste generators from the 14 states that are members of the facilities' respective compacts until at least 2050, the remaining 36 states will have no disposal options for their class B and class C LLRW. Although waste generators in these 36 states will no longer have access to Barnwell, they can continue to minimize waste generation, process waste into safer forms, and store waste pending the development of additional disposal options. While NRC prefers the disposal of LLRW, it allows on-site storage as long as the waste remains safe and secure. Since September 11, 2001, both the public's concern with, and its perception of, risk associated with radioactive release, including that from stored LLRW, have increased. However, should an immediate and serious threat come from any specific location of stored waste, NRC has the authority under the act to override any compact restrictions and allow shipment of the waste to a regional or other nonfederal disposal facility under narrowly defined conditions.
Waste minimization techniques and storage can alleviate the need for disposal capacity, but they can be costly. For example, in June 2004 we reported that one university built a $12 million combined hazardous and radioactive waste management facility. Two-thirds of this facility is devoted to the processing and temporary storage of class A waste. Additional disposal capacity for the estimated 20,000 to 25,000 cubic feet of class B and class C LLRW disposed of annually at Barnwell may become available with the opening of a new disposal facility in Texas. This facility has received a draft license and appears to be on schedule to begin operations in 2010. Although the facility may accept some DOE cleanup waste, there is presently no indication that it will be made available to all waste generators beyond the two states that are members of the Texas Compact (Texas and Vermont). In contrast, available disposal capacity for the nation's class A waste does not appear to be a problem in either the short or long term. Our June 2004 report noted that EnergySolutions' Clive facility had sufficient disposal capacity, based upon then-projected disposal volumes, to accept class A waste for at least 20 years under its current license. This facility currently accepts about 99 percent of the nation's class A LLRW. Since our report was issued, domestic class A waste has declined from about 15.5 million cubic feet in 2005 to about 5 million cubic feet in 2007. This decline is primarily attributed to DOE's completion of several cleanup projects. DOE waste constituted about 50 percent of the total waste accepted by EnergySolutions in 2007. This reduction in projected class A disposal volumes will extend the amount of time the Clive facility can accept class A waste before exhausting its capacity. According to the disposal operator, the facility's expected disposal life has been extended by another 13 years, to a total of 33 years. It is important to note, however, that our June 2004 analysis of available LLRW disposal capacity considered only domestically produced LLRW. We did not consider the impact of imported LLRW on available class A, B, and C disposal capacity at Clive, Barnwell, and Richland. Although disposal capacity at the time of our June 2004 report appeared adequate using then-projected waste disposal volumes, the impact of accepting additional waste from overseas waste generators is unclear. While none of the foreign countries we surveyed for our March 2007 report indicated that they have disposal options for all of their LLRW, almost all either had disposal capacity for their lower-activity LLRW or central storage facilities for their higher-activity LLRW, pending the availability of disposal capacity. Specifically, we surveyed 18 foreign countries that previously had or currently have operating nuclear power plants or research reactors. Ten of the 18 countries reported having available disposal capacity for their lower-activity LLRW and 6 other countries have plans to build such facilities. Only 3 countries indicated that they have a disposal option for some higher-activity LLRW. Many countries that lack disposal capacity for LLRW provide centralized storage facilities to relieve waste generators of the need to store LLRW on-site. Specifically, 7 of the 8 countries without disposal facilities for lower-activity LLRW had centralized storage facilities. Eleven of the 15 countries without disposal facilities for at least some higher-activity LLRW provide central storage facilities for this material.
Of the 18 countries we surveyed, only Italy indicated that it lacked disposal availability for both lower- and higher-activity LLRW and central storage facilities for this waste. As reported by Italy to the international Nuclear Energy Agency, in 1999, the government began to develop a strategy for managing the liabilities resulting from the country's past national nuclear activities. The strategy established a new national company to shut down all of Italy's nuclear power plants and to promptly decommission them. It also created a national agency that would establish and operate a disposal site for radioactive waste. A subsequent government decree in 2001 prompted an acceleration of the process to select a disposal site, with the site to begin operations in 2010. However, the Italian government has more recently reported it has encountered substantial difficulties establishing a disposal site because local governments have rejected potential site locations. In total, Italy will have an estimated 1.1 million cubic feet of lower-activity LLRW that will result from decommissioning its nuclear facilities in addition to the 829,000 cubic feet of this waste already in storage. Our March 2007 report identified several management approaches used in foreign countries that, if adopted in the United States, could improve the management of radioactive waste. These approaches included, among other things, using a comprehensive national radioactive waste inventory of all types of radioactive waste by volume, location, and waste generator; providing disposition options for all types of LLRW or providing central storage options for higher-radioactivity LLRW if disposal options are unavailable; and developing financial assurance requirements for all waste generators to reduce government disposition costs. We also identified another management approach used in most countries--national radioactive waste management plans--that also might provide lessons for managing U.S. radioactive waste. Currently, the United States does not have a national radioactive waste management plan and does not have a single federal agency or other organization responsible for coordinating LLRW stakeholder groups to develop such a plan. Such a plan for the United States could integrate the various radioactive waste management programs at the federal and state levels into a single source document. Our March 2007 report recommended that DOE and NRC evaluate and report to the Congress on the usefulness of adopting the LLRW management approaches used in foreign countries and developing a U.S. radioactive waste management plan. Although both agencies generally agreed with our recommendations, NRC, on behalf of itself and DOE, subsequently rejected two approaches that our March 2007 report discussed. Specifically, NRC believes that the development of national LLRW inventories and a national waste management plan would be of limited use in the United States. In a March 2008 letter to GAO on the actions NRC has taken in response to GAO's recommendations, NRC stated that the approach used in the United States is fundamentally different from other countries. In particular, NRC argued that, because responsibility for LLRW disposal is placed with the states, the federal government's role in developing options for managing and/or disposing of LLRW is limited. NRC also expressed concern about the usefulness and significant resources required to develop and implement national inventories and management plans. 
We continue to believe comprehensive inventories and a national plan would be useful. A comprehensive national radioactive waste inventory would allow LLRW stakeholders to forecast waste volumes and to plan for future disposal capacity requirements. Moreover, a national radioactive waste management plan could assist those interested in radioactive waste management to identify waste quantities and locations, plan for future storage and disposal development, identify research and development opportunities, and assess the need for regulatory or legislative actions. For example, there are no national contingency plans, other than allowing LLRW storage at waste generator sites, to address the impending closure of the Barnwell facility to class B and class C LLRW from noncompact states. The availability of a national plan and periodic reporting on waste conditions might also provide the Congress and the public with a more accessible means for monitoring the management of radioactive waste and provide a mechanism to build greater public trust in the management of these wastes in the United States. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Gene Aloise at (202) 512- 3841 or [email protected]. Major contributors to this statement were Daniel Feehan (Assistant Director), Thomas Laetz, Lesley Rinner, and Carol Herrnstadt Shulman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Disposal of radioactive material continues to be highly controversial. To address part of the disposal problem, in 1980, Congress made the states responsible for disposing of most low-level radioactive waste (LLRW), and allowed them to form regional compacts and to restrict access to disposal facilities from noncompact states. LLRW is an inevitable by-product of nuclear power generation and includes debris and contaminated soils from the decommissioning and cleanup of nuclear facilities, as well as metal and other material exposed to radioactivity. The Nuclear Regulatory Commission (NRC) ranks LLRW according to hazard exposure--classes A, B, C, and greater-than-class C (GTCC). The states are responsible for the first three classes, and the Department of Energy (DOE) is responsible for GTCC. Three facilities dispose of the nation's LLRW--in Utah, South Carolina, and Washington State. The testimony addresses (1) LLRW management in the United States and (2) LLRW management in other countries. It is substantially based on two GAO reports: a June 2004 report (GAO-04-604) and a March 2007 report (GAO-07-221) that examined these issues. To prepare this testimony, GAO relied on data from the two reports and updated information on current capacity for LLRW and access to disposal facilities. As GAO reported in 2004, existing disposal facilities had adequate capacity for most LLRW and were accessible to waste generators (hereafter referred to as disposal availability) in the short term, but constraints on the disposal of certain types of LLRW warranted concern. Specifically, South Carolina had decided to restrict access to its disposal facility by mid-2008 for class B and C waste--the facility now accepts about 99 percent of this waste generated nationwide--to only waste generators in the three states of its compact. If there are no new disposal options for class B and C wastes after 2008, licensed users of radioactive materials can continue to minimize waste generation, process waste into safer forms, and store waste pending the development of additional disposal options. While NRC prefers that LLRW be disposed of, it allows on-site storage as long as the waste remains safe and secure. In contrast, disposal availability for domestic class A waste is not a problem in the short or longer term. In 2004, GAO reported that the Utah disposal facility--which accepts about 99 percent of this waste generated nationwide--could accept such waste for 20 years or more under its current license based on anticipated class A waste volumes. Since 2005, the volume of class A waste disposed of has declined by two-thirds primarily because DOE completed several large cleanup projects, extending the capacity for an additional 13 years, for a total of 33 years of remaining disposal capacity. However, the June 2004 analysis, and the updated analysis, were based on the generation of LLRW only in the United States and did not consider the impact on domestic disposal capacity of importing foreign countries' LLRW. Ten of the 18 countries surveyed for GAO's March 2007 report have disposal options for class A, B, and most class C waste, and 6 other countries have plans to build such facilities. Only 3 countries indicated that they have a disposal option for some class C and GTCC waste; however, almost all countries that do not provide disposal for LLRW have centralized storage facilities for this waste.
Only Italy reported that it had no disposal or central storage facilities for its LLRW, although it plans to develop a disposal site for this waste that will include waste from its decommissioned nuclear power plants and from other nuclear processing facilities. Italy initially expected this disposal site to be operational by 2010, but local governments' resistance to the location of this disposal site has delayed this date. The March 2007 report also identified a number of LLRW management approaches used in other countries that may provide lessons to improve the management of U.S. radioactive waste. These approaches include the use of comprehensive national radioactive waste inventory databases and the development of a national radioactive waste management plan. Such a plan would specify a single entity responsible for coordinating radioactive waste management and include strategies to address all types of radioactive waste. GAO had recommended that NRC and DOE evaluate and report to the Congress on the usefulness of these approaches. While the agencies considered these approaches, they expressed particular concerns about the significant resources required to develop and implement a national inventory and management plan for LLRW.
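As a rough illustration of the Clive facility capacity arithmetic discussed above, the sketch below divides an assumed remaining licensed capacity by an annual intake rate. The remaining-capacity figure and the earlier projected intake rate are back-of-the-envelope assumptions chosen only to reproduce the 20-year and 33-year figures; the roughly 5 million cubic feet per year reflects the 2007 nationwide class A volume cited above, of which the facility accepts about 99 percent.

```python
# Rough illustration: remaining disposal life = remaining licensed capacity / annual intake.
# The capacity and projected-intake figures are assumptions chosen only to show the
# mechanics behind the 20-year and 33-year figures cited above; they are not from GAO.

assumed_remaining_capacity_cuft = 165_000_000        # hypothetical remaining licensed capacity
assumed_projected_intake_cuft_per_year = 8_300_000   # hypothetical earlier projected intake
recent_intake_cuft_per_year = 5_000_000              # ~2007 nationwide class A volume (from text)

years_at_projected_rate = assumed_remaining_capacity_cuft / assumed_projected_intake_cuft_per_year
years_at_recent_rate = assumed_remaining_capacity_cuft / recent_intake_cuft_per_year

print(f"At the assumed earlier projected rate: ~{years_at_projected_rate:.0f} years")  # ~20
print(f"At the lower recent rate:              ~{years_at_recent_rate:.0f} years")     # ~33
```

The same division shows why a lower annual intake stretches the facility's remaining life, which is the mechanism behind the 13-year extension the disposal operator described.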
OPM and agencies are continuing to address the problems with the key parts of the hiring process we identified in our May 2003 report. Significant issues and actions being taken include the following. Reforming the classification system. In our May 2003 report on hiring, we noted that many regard the standards and process for defining a job and determining pay in the federal government as a key hiring problem because they are inflexible, outdated, and not applicable to the jobs of today. The process of job classification is important because it helps to categorize jobs or positions according to the kind of work done, the level of difficulty and responsibility, and the qualifications required for the position, and serves as a building block to determine the pay for the position. As you know, defining a job and setting pay in the federal government has generally been based on the standards in the Classification Act of 1949, which sets out the 15 grade levels of the General Schedule system. To aid agencies in dealing with the rigidity of the federal classification system, OPM has revised the classification standards of several job series to make them clearer and more relevant to current job duties and responsibilities. In addition, as part of the effort to create a new personnel system for the Department of Homeland Security (DHS), OPM is working with DHS to create broad pay bands for the department in place of the 15-grade job classification system that is required for much of the federal civil service. Still, OPM told us that its ability to more effectively reform the classification process is limited under current law and that legislation is needed to modify the current restrictive classification process for the majority of federal agencies. As we note in the report we are issuing today, 15 of the 22 CHCO Council members responding to our recent survey reported that either OPM (10 respondents) or Congress (5 respondents) should take the lead on reforming the classification process, rather than the agencies themselves. Improving job announcements and Web postings. We pointed out in our May 2003 report that the lack of clear and appealing content in federal job announcements could hamper or delay the hiring process. Our previous report provided information about how some federal job announcements were lengthy and difficult to read, contained jargon and acronyms, and appeared to be written for people already employed by the government. Clearly, making vacancy announcements more visually appealing, informative, and easy to access and navigate could make them more effective as recruiting tools. To give support to this effort, OPM has continued to move forward on its interagency project to modernize federal job vacancy announcements, including providing guidance to agencies to improve the announcements. OPM continues to collaborate with agencies in implementing Recruitment One-Stop, an electronic government initiative that includes the USAJOBS Web site (www.usajobs.opm.gov) to assist applicants in finding employment with the federal government. As we show in the report we are issuing today, all 22 of the CHCO Council members responding to our recent survey indicated that their agencies had made efforts to improve their job announcements and Web postings. In the narrative responses to our survey, a CHCO Council member representing a major department said, for example, that the USAJOBS Web site is an excellent source for posting vacancies and attracting candidates.
Another Council member said that the Recruitment One-Stop initiative was very timely in developing a single automated application for job candidates. Automating hiring processes. Our May 2003 report also emphasized that manual processes for rating and ranking job candidates are time-consuming and can delay the hiring process. As we mentioned in our previous report, the use of automation for agency hiring processes has various potential benefits, including eliminating the need for volumes of paper records, allowing fewer individuals to review and process job applications, and reducing the overall time-to-hire. In addition, automated systems typically create records of actions taken so that managers and human capital staff can easily document their decisions related to hiring. To help in these efforts, OPM provides to agencies on a contract or fee-for-service basis an automated hiring system, called USA Staffing, which is a Web-enabled software program that automates the steps of the hiring process. These automated steps would include efforts to recruit candidates, use of automated tools to assess candidates, automatic referral of high-quality candidates to selecting officials, and electronic notification of applicants on their status in the hiring process. According to OPM, over 40 federal organizations have contracted with OPM to use USA Staffing. OPM told us that it has developed and will soon implement a new Web-based version of USA Staffing that could further link and automate agency hiring processes. As we mention in the report we are issuing today, 21 of the 22 CHCO Council members responding to our recent survey reported that their agencies had made efforts to automate significant parts of their hiring processes. Improving candidate assessment tools. We concluded in our May 2003 report that key candidate assessment tools used in the federal hiring process can be ineffective. Our previous report noted that using the right assessment tool, or combination of tools, can assist the agency in predicting the relative success of each applicant on the job and selecting the relatively best person for the job. These candidate assessment tools can include written and performance tests, manual and automated techniques to review each applicant's training and experience, as well as interviewing approaches and reference checks. In our previous report, we noted some of the challenges of assessment tools and special hiring programs used for occupations covered by the Luevano consent decree. Although OPM officials said they monitor the use of assessment tools related to positions covered under the Luevano consent decree, they have not reevaluated these assessment tools. OPM officials told us, however, that they have provided assessment tools or helped develop new assessment tools related to various occupations for several agencies on a fee-for-service basis. Although OPM officials acknowledged that candidate assessment tools in general need to be reviewed, they also told us that it is each agency's responsibility to determine what tools it needs to assess job candidates. The OPM officials also said that if agencies do not want to develop their own assessment tools, then they could request that OPM help develop such tools under the reimbursable service program that OPM operates. As we state in the report we are issuing today, 21 of the 22 CHCO Council members responding to our recent survey indicated that their agencies had made efforts to improve their hiring assessment tools.
Although we agree that OPM has provided assistance to agencies in improving their candidate assessment tools and has collected information on agencies' use of special hiring authorities, we believe that major challenges remain in this area. OPM can take further action to address our prior recommendations related to assessment tools. OPM could, for example, actively work to link up agencies having similar occupations so that they could potentially form consortia to develop more reliable and valid tools to assess their job candidates. Despite agency officials' past calls for hiring reform, agencies appear to be making limited use of category rating and direct-hire authority, two new hiring flexibilities created by Congress in November 2002 and implemented by OPM in June of last year. Data on the actual use of these two new flexibilities are not readily available, but most CHCO Council members responding to our recent survey indicated that their agencies are making little or no use of either flexibility (see fig. 1). OPM officials also confirmed with us that based on their contacts and communications with agencies, it appeared that the agencies were making limited use of the new hiring flexibilities. The limited use of category rating is somewhat unexpected given the views of human resources directors we interviewed 2 years ago. As noted in our May 2003 report, many agency human resources directors indicated that numerical rating and the rule of three were key obstacles in the hiring process. Category rating was authorized to address those concerns. The report we are issuing today also includes information about barriers that the CHCO Council members believed have prevented or hindered their agencies from using or making greater use of category rating and direct hire. Indeed, all but one of the 22 CHCO Council members responding to our recent survey identified at least one barrier to using the new hiring flexibilities. Frequently cited barriers included the lack of OPM guidance for using the flexibilities, the lack of agency policies and procedures for using the flexibilities, the lack of flexibility in OPM rules and regulations, and concern about possible inconsistencies in the implementation of the flexibilities within the department or agency. In a separate report we issued in May 2003 on the use of human capital flexibilities, we recommended that OPM work with and through the new CHCO Council to more thoroughly research, compile, and analyze information on the effective and innovative use of human capital flexibilities. We noted that sharing information about when, where, and how the broad range of personnel flexibilities is being used, and should be used, could help agencies meet their human capital management challenges. As we recently testified, OPM and agencies need to continue to work together to improve the hiring process, and the CHCO Council should be a key vehicle for this needed collaboration. To accomplish this effort, agencies need to provide OPM with timely and comprehensive information about their experiences in using various approaches and flexibilities to improve their hiring processes. OPM--working through the CHCO Council--can, in turn, help by serving as a facilitator in the collection and exchange of information about agencies' effective practices and successful approaches to improved hiring. 
Such additional collaboration between OPM and agencies could go a long way to helping the government as a whole and individual agencies in improving the processes for quickly hiring highly qualified candidates to fill important federal jobs. In conclusion, the federal government is now facing one of the most transformational changes to the civil service in half a century, which is reflected in the new personnel systems for DHS and the Department of Defense and in new hiring flexibilities provided to all agencies. Today's challenge is to define the appropriate roles and day-to-day working relationships for OPM and individual agencies as they collaborate on developing innovative and more effective hiring systems. Moreover, for this transformation to be successful and enduring, human capital expertise within the agencies must be up to the challenge. Madam Chairwoman and Mr. Davis, this completes my statement. I would be pleased to respond to any questions that you might have. For further information on this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, (202) 512-6806 or at [email protected]. Individuals making key contributions to this testimony include K. Scott Derrick, Karin Fangman, Stephanie M. Herrold, Trina Lewis, John Ripper, Edward Stephenson, and Monica L. Wolford. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The executive branch hired nearly 95,000 new employees during fiscal year 2003. Improving the federal hiring process is critical given the increasing number of new hires expected in the next few years. In May 2003, GAO issued a report highlighting several key problems in the federal hiring process. That report concluded that the process needed improvement and included several recommendations to address the problems. Today, GAO is releasing a followup report requested by the subcommittee that discusses (1) the status of recent efforts to help improve the federal hiring process and (2) the extent to which federal agencies are using two new hiring flexibilities--category rating and direct-hire authority. Category rating permits an agency manager to select any job candidate placed in a best-qualified category. Direct-hire authority allows an agency to appoint individuals to positions without adherence to certain competitive examination requirements when there is a severe shortage of qualified candidates or a critical hiring need. Congress, the Office of Personnel Management (OPM), and agencies have all taken steps to improve the federal hiring process. In particular, Congress has provided agencies with additional hiring flexibilities, OPM has taken significant steps to modernize job vacancy announcements and develop the government's recruiting Web site, and most agencies are continuing to automate parts of their hiring processes. Nonetheless, problems remain with a job classification process and standards that many view as antiquated, and there is a need for improved tools to assess the qualifications of job candidates. Specifically, the report being released today discusses significant issues and actions being taken to (1) reform the classification system, (2) improve job announcements and Web postings, (3) automate hiring processes, and (4) improve candidate assessment tools. In addition, agencies appear to be making limited use of the two new hiring flexibilities contained in the Homeland Security Act of 2002--category rating and direct-hire authority--that could help agencies in expediting and controlling their hiring processes. GAO surveyed members of the interagency Chief Human Capital Officers Council who reported several barriers to greater use of these new flexibilities. Frequently cited barriers included (1) the lack of OPM guidance for using the flexibilities, (2) the lack of agency policies and procedures for using the flexibilities, (3) the lack of flexibility in OPM rules and regulations, and (4) concern about possible inconsistencies in the implementation of the flexibilities within the department or agency. The federal government is now facing one of the most transformational changes to the civil service in half a century, which is reflected in the new personnel systems for Department of Homeland Security and the Department of Defense and in new hiring flexibilities provided to all agencies. Today's challenge is to define the appropriate roles and day-to-day working relationships for OPM and individual agencies as they collaborate on developing innovative and more effective hiring systems. Moreover, human capital expertise within the agencies must be up to the challenge for this transformation to be successful and enduring.
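To illustrate the mechanical difference between the traditional selection rule and category rating, the simplified sketch below contrasts the candidate pool available to a selecting official under numerical rating with the "rule of three" and under category rating. The scores, the category cutoff, and the omission of veterans' preference rules are simplifying assumptions for illustration only; actual rating procedures are more involved.

```python
# Simplified contrast of selection pools: numerical rating with the "rule of three"
# versus category rating. Scores, the category cutoff, and the omission of veterans'
# preference rules are simplifying assumptions for illustration only.

candidates = [
    ("Applicant A", 98), ("Applicant B", 95), ("Applicant C", 94),
    ("Applicant D", 93), ("Applicant E", 91), ("Applicant F", 82),
]

# Rule of three: candidates are ranked numerically and the selecting official
# generally chooses from the top three on the list.
ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
rule_of_three_pool = [name for name, _ in ranked[:3]]

# Category rating: candidates are grouped into quality categories (cutoff assumed here),
# and the selecting official may choose any candidate in the best-qualified category.
best_qualified_cutoff = 90
category_rating_pool = [name for name, score in candidates if score >= best_qualified_cutoff]

print("Rule-of-three pool:   ", rule_of_three_pool)    # 3 candidates
print("Category-rating pool: ", category_rating_pool)  # 5 candidates
```

The wider pool is what gives managers more discretion under category rating; direct-hire authority goes further by allowing appointments without the usual rating and ranking when its shortage or critical-need criteria are met.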
A U.S. government-funded enterprise fund is an organization that is designed to promote the expansion of the private sector in developing and transitioning countries by providing financing and technical assistance to locally owned small and medium-sized enterprises. The U.S. government provides initial capital to an enterprise fund through a grant; the fund may then seek additional capital from the private sector to invest alongside the enterprise fund. Enterprise funds are modeled on investment management in the venture capital industry, in which venture capital is invested primarily in small companies during early stages of their development with the investors monitoring, advising, and following up on operational results. It is expected that some investments will fail, but successful ventures are intended to offset the losses over the long term. The U.S. government initially funded enterprise funds in the early 1990s to promote the development of the private sector in Eastern and Central European countries following the breakup of the former Soviet Union in December 1991. USAID invested $1.2 billion to establish 10 enterprise funds, covering 19 countries in Central and Eastern Europe and the former Soviet Union. In September 2013, USAID issued a lessons-learned report that documented the successes and challenges faced by the Eastern and Central European enterprise funds. The report concluded that while enterprise funds have demonstrated that they can be a successful tool in achieving positive financial returns and developmental objectives, results to date have been mixed, based upon the economic and political environment in which they operate along with the overall investment strategy and the specific investment decisions made by each fund's board and management team. The report also stated that, in many cases, the enterprise funds in Europe and Eurasia took up to 2 years before they were ready to make their first investments. In early 2011, the events characterized as the Arab Spring renewed interest in the potential use of the enterprise fund model in the Middle East region as well as in other countries undergoing economic and political transition. EAEF and TAEF were thus modeled after the enterprise funds in Eastern and Central Europe. EAEF was incorporated in October 2012 and funded in March 2013, when the grant agreement between USAID and EAEF was signed. TAEF was incorporated in February 2013 and funded in July 2013, when the grant agreement between USAID and TAEF was signed. The Funds' authorizing legislation allows them to achieve their goals through the use of loans, microloans, equity investments, insurance, guarantees, grants, feasibility studies, technical assistance, training for businesses receiving investment capital, and other measures. The Funds have a dual mandate, or "double bottom line," in that they are intended to achieve a positive return on investment while also achieving a positive development effect. The authority of the Funds to provide assistance expires on December 31, 2025. The Funds are established as nonprofit corporations that do not have shareholders and do not distribute dividends. The authorizing legislation states that each Fund shall have a board of directors that is composed of six private U.S. citizens and three private host-country citizens. The authorizing legislation further requires that board members have international business careers and demonstrated expertise in international and emerging markets investment activities.
According to a September 2013 lessons-learned report by USAID on past enterprise funds, identifying and recruiting the most experienced individuals to serve on the fund's board of directors is the single most important element in achieving the fund's long-term development goals and financial profitability. U.S. board members serve on a volunteer basis, while the Egyptian and Tunisian citizen board members are permitted to receive compensation for their time and services. The Funds' boards are responsible for establishing their own operating and investment policies and directing their corporate affairs in accordance with applicable law and the grant agreements. EAEF has not made any investments in Egypt, as its first investment, to purchase an Egyptian bank, did not come to fruition. EAEF's investment strategy had been to purchase a bank that would lend money to small and medium-sized enterprises in Egypt. According to the EAEF Chairman, EAEF envisioned that it would have a greater impact on the Egyptian economy by making one large investment rather than a series of smaller investments. In August 2013, EAEF made plans to purchase a small bank in Egypt and subsequently conducted due diligence on the bank by hiring a large U.S. accounting firm to review the bank's financial situation, among other things. In June 2014, the EAEF Board of Directors approved a decision to acquire the bank. However, according to the EAEF Chairman, the Egyptian Central Bank rejected EAEF's application to purchase the bank. As of December 2014, EAEF was considering other investment options. According to EAEF officials, the Fund is now conducting due diligence on potential investments in the food and beverage, healthcare, and consumer finance sectors. The Chairman stated that he anticipates investing $60 million to $90 million in these three areas. Additionally, the EAEF Chairman told us that EAEF plans to consider investments in firms varying in size from SMEs to larger firms. USAID has obligated $120 million to EAEF, of which approximately $588,000 has been disbursed. Costs associated with performing the due diligence review constituted the majority of EAEF's expenditures through 2014. Specific categories of EAEF's expenditures include professional (e.g., legal) fees and travel expenses. Thus far, EAEF has spent less on administrative expenses than the approximately $3 million estimated for the first year in its preliminary budget. USAID has obligated $60 million to TAEF, of which TAEF has disbursed approximately $1.6 million, for administrative expenses and investments. TAEF plans to promote private sector development in Tunisia by investing in (1) a private equity fund that supports SMEs, (2) direct investments in SMEs smaller than those targeted by the private equity fund, (3) microfinance institutions, and (4) start-ups. In 2013, TAEF established a subsidiary company in Tunisia--the TAEF Advisory Company--that directly oversees TAEF's efforts in these four areas. In June 2014, TAEF committed to its first investment of over $2.4 million in a private equity fund that invests in SMEs in a variety of industries, such as telecommunications, agribusiness, and renewable energy. TAEF is one of several investors in the private equity fund; other investors include foreign donors. According to the TAEF Chairman, aggregate investments in the Fund from all sources total approximately $20 million. TAEF officials told us that the Fund will have representation on the equity fund's advisory committee. 
According to TAEF officials, the Fund has not yet made any investments in the remaining areas of direct investments in SMEs smaller than those targeted by the private equity fund, microfinance institutions, and start-ups. According to the TAEF Chairman, TAEF is in the process of conducting due diligence on two microfinance entities. Thus far, TAEF has spent less on administrative expenses than the approximately $900,000 estimated for the first year in its preliminary budget. Since their inception, EAEF and TAEF have made progress in establishing key administrative infrastructures necessary to support their investment operations. The Committee of Sponsoring Organizations of the Treadway Commission's (COSO) 2013 internal control evaluation tool establishes a framework for assessing management structures. As shown in table 1, EAEF and TAEF have made progress in establishing structures for administrative infrastructure, corporate governance, internal control, and human capital management in line with key elements of the COSO framework. Administrative infrastructure. Administrative infrastructure refers to the basic systems and resources needed to set up and support organizations' operations--which also contribute to developing a culture of accountability and control. Since being funded in 2013, EAEF and TAEF have focused on establishing essential administrative infrastructures. EAEF set up its headquarters in New York City, New York. In July 2014, EAEF hired its first employee to occupy the position of Chief of Staff and Director of Policy Planning. According to the EAEF Chairman, EAEF plans to hire an investment manager and a chief financial officer in the future. TAEF has a U.S. office located in Washington, D.C., and a Tunisian office located in Tunis, Tunisia, both of which are led by a managing director. TAEF plans to hire two investment officers in the future. EAEF and TAEF administrative expenses thus far have mostly consisted of professional fees (e.g., expenses for legal and consulting services), travel expenses, and so forth. Corporate governance. Corporate governance can be viewed as the formation and execution of collective policies and oversight mechanisms to establish and maintain a sustainable and accountable organization while achieving its mission and demonstrating stewardship over its resources. Generally, an organization's board of directors has a key role in corporate governance through its oversight of executive management; corporate strategies; and risk management, audit, and assurance processes. The Funds have established bylaws and other rules for corporate governance. The bylaws cover the purpose of the Funds, voting rules, and the duties and responsibilities of corporate officers. The boards of both Funds have met regularly since their inceptions. In addition, the Funds have established corporate policies and procedures, which USAID has approved. In November 2014, the EAEF Board of Directors established several committees, including an investment committee, a governance and nominating committee, an external relations committee, and an audit committee. EAEF and TAEF each have to fill two vacant board member positions, one for a U.S. citizen and the other for a host country citizen. EAEF and TAEF are currently considering potential candidates to fill the vacant positions.
EAEF and TAEF have established a variety of internal controls in the areas of control environment, risk assessment, control activities, information and communication, and monitoring, with additional actions under way. Internal control. Internal control provides reasonable assurance that key management objectives--efficiency and effectiveness of operations, reliability of financial reporting, and compliance with applicable laws and regulations--are being achieved. Areas of internal control include control environment, risk assessment, control activities, information and communication, and monitoring. Control environment. The Funds have established directives on ethical business practices and detailed conflict-of-interest policies. In addition, each Fund has a policy on disciplinary sanctions that states that any violation of the Fund's laws or ethical guidelines could subject an individual to potential disciplinary sanctions, such as probation or reduction in pay. Risk assessment. EAEF conducted a due diligence review for its first potential investment, the purchase of a bank. Among other things, EAEF hired a large accounting firm to review a sample of the bank's loans. TAEF established due diligence procedures in which it examined the governance, financial, operations, and legal status of its first investment. Before funding its first investment, TAEF carried out its due diligence procedures and determined that there were no significant issues (e.g., financial or legal issues) that would impede TAEF from making the investment. The meeting minutes of the board investment committee indicate that the board discussed the results of the due diligence assessment, including the extent of risk involved, and that the board unanimously approved the Fund's first investment. Control activities. EAEF and TAEF have established several financial and cash management-related controls, including the following: Financial statements will be prepared on a quarterly basis and sent to the audit committees of the board of directors to review the performance of the Funds on a timely basis. Each Fund will, to the extent practicable, prepare an annual budget detailing its estimated operational requirements. The budget will be approved by the president and audit committee of the board of directors before the beginning of the Fund's fiscal year (January 1). Quarterly, the board of directors will receive financial reports that compare the actual results to the budgeted amounts. Expenses in excess of a certain amount must be approved in advance by the Chairman of the Board or the President (or their designees) and one other Director. All available periodic financial statements and (if prepared) audits for all entities in which the Fund has invested shall also be maintained for audit review and project monitoring. Information and communication. EAEF and TAEF corporate policies state that each Fund will maintain an investment database that lists all of its investments and will include information such as company name, amount of investment, and industry. The Funds have met with several external organizations to discuss their mission and activities, including U.S. government agencies, foreign governments, international organizations, and host country businesses. Monitoring. EAEF and TAEF have reported to external parties, including Congress, USAID, and the public, on their use of resources, with additional accountability actions under way.
For example, both Funds submitted reports to Congress that detailed their administrative expenses for 2013, and both Funds have submitted quarterly financial reports to USAID for its review. With regard to performance planning and reporting, EAEF officials said that the Fund is in the process of developing its required performance monitoring plan. In November 2014, TAEF developed a solicitation for firms based in Tunisia to develop its performance monitoring plan. In terms of audits, the Funds are responsible for appointing independent certified or licensed public accountants, approved by USAID, to complete annual audits of the Fund's financial statements. According to the grant agreements, the audits will be conducted within the scope of U.S. generally accepted auditing standards. According to USAID officials, the Funds plan to have their 2013 and 2014 financial statements audited. Human capital management. Cornerstones of human capital management include leadership; acquiring, developing, and retaining talent; and building a results-oriented culture. The Funds are meeting their initial human capital needs through hiring of a limited number of personnel to occupy key positions, such as a managing director. According to the EAEF and TAEF Chairmen, they envision their organizations as having a small number of personnel. Accordingly, both Funds have recruited a limited number of employees to support their administrative operations and initial investment planning. Specifically, EAEF has hired one employee as its Chief of Staff and Director of Policy Planning. TAEF has hired three employees to include a Managing Director based in Washington, D.C.; a Chief Operating Officer and Managing Director based in Tunis, Tunisia; and an Executive Assistant based in Tunis. The Funds took steps to recruit and hire their initial staff, such as by interviewing potential candidates and reviewing their resumes. The Funds have generally outsourced their accounting and legal functions. Both Funds have created job descriptions for their employees. To build a results-oriented culture, the Funds have established guidelines for providing compensation to their employees. For example, contingent upon USAID approval of a compensation framework, the Funds may enter into bonus or incentive compensation arrangements with their employees. The EAEF and TAEF grant agreements state that the salaries and other compensation of any of the directors, officers, and employees of the Funds shall be set at reasonable levels consistent with the nonprofit and public interest nature of the Funds. EAEF hired companies to do an executive compensation study and to administer its human capital policies, including terms of recruitment, hiring, and employee benefits. While the Funds have generally met their obligations under the grant agreements, neither Fund has submitted the performance monitoring plans required under the grant agreements. USAID has also not tracked the Funds' use of cash in a way that allows the agency to monitor whether EAEF and TAEF are spending it in a timely manner. Further, EAEF has not implemented those provisions under the grant agreement related to marking and public communications. Last, the Funds' corporate policies do not include key vetting procedures to prevent the illicit use of funds, the presence of which was expected by USAID. EAEF and TAEF have to date generally complied with the requirements in the grant agreements.
The grant agreements contain 22 discrete requirements with which each of the Funds must comply, such as submission of quarterly financial reports to USAID and annual reports to Congress on administrative expenses. As of December 2014, TAEF had fully complied with 21 of the 22 requirements, and EAEF had fully complied with 17 of the 22, as shown in table 2. For example, both Funds submitted the required annual reports on administrative expenses. Additionally, both Funds submitted the required quarterly financial statements. EAEF and TAEF have not yet submitted performance monitoring plans as required by the grant agreements. Specifically, the grant agreements require the Funds to develop performance monitoring plans in consultation with USAID within 120 days after the grant agreement enters into force. However, as of February 2015, EAEF and TAEF performance monitoring plans were approximately 19 months and 15 months overdue, respectively. The performance monitoring plans are intended to allow external stakeholders and, for the purposes of oversight, USAID to monitor the Funds' progress toward meeting their goals. The grant agreements also require that the performance monitoring plans include performance indicators, which must include return on investment for U.S. capital invested in Egypt and Tunisia through the Funds and the number of SMEs in Egypt and Tunisia benefitting from Fund activities. USAID and the Funds are to review the performance monitoring plans and associated indicators during the semiannual meetings with USAID to assess progress. Without performance monitoring plans, USAID and other stakeholders cannot assess progress toward agreed-upon goals and indicators during the semiannual reviews. USAID referred the Funds to monitoring and evaluation experts to assist the Funds in developing their performance monitoring plans, according to USAID officials. The EAEF and TAEF Chairmen told us that it would have been premature to submit a performance monitoring plan before finalizing investment strategies. TAEF and EAEF officials told us that they are currently seeking contractors to develop and implement performance monitoring plans. In November 2014, TAEF issued a scope of work that envisioned a performance monitoring plan being presented to USAID 60 days after the Fund had selected and engaged a contractor. According to EAEF officials, EAEF plans to submit a performance monitoring plan to USAID in early 2015. USAID's grant agreements with EAEF and TAEF state that they may request funds for anticipated expenditures for up to a 90-day period from the date of the request. In addition, USAID guidance on advance payments states that, generally, advance payments or any portion of an advance payment not liquidated within 150 days is considered delinquent. Any exception to this general rule must be supported by a documented rationale from the agreement officer and approved by USAID's financial management office. EAEF and TAEF have not liquidated some of their advances within 150 days of payment, and the advances were therefore delinquent. After we shared our preliminary findings with USAID, program officials sought and obtained the necessary approvals. As of November 2014, EAEF had an outstanding advance balance of approximately $247,000, and TAEF had an outstanding advance balance of approximately $477,000. The Funds reported their liquidation of their advance payments through quarterly financial reports that are sent only to the USAID program representative.
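As a minimal illustration of the 150-day liquidation test described above, the sketch below flags advances that remain unliquidated past 150 days. The advance dates and liquidation amounts are invented for illustration (chosen so the outstanding balances roughly mirror the figures reported above), and the logic is only an approximation of USAID's actual advance-payment procedures.

```python
from datetime import date

# Hypothetical advance records: (fund, advance_date, amount_advanced, amount_liquidated)
advances = [
    ("EAEF", date(2014, 5, 1), 350_000, 103_000),
    ("TAEF", date(2014, 4, 15), 600_000, 123_000),
]

DELINQUENCY_DAYS = 150  # unliquidated advances older than 150 days are
                        # generally considered delinquent, per the guidance above

def flag_delinquent(advances, as_of):
    """Return advances with an unliquidated balance older than the threshold."""
    flagged = []
    for fund, advanced_on, advanced, liquidated in advances:
        outstanding = advanced - liquidated
        age_days = (as_of - advanced_on).days
        if outstanding > 0 and age_days > DELINQUENCY_DAYS:
            flagged.append((fund, outstanding, age_days))
    return flagged

for fund, outstanding, age in flag_delinquent(advances, as_of=date(2014, 11, 30)):
    print(f"{fund}: ${outstanding:,} outstanding, advance is {age} days old (delinquent)")
```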
USAID's financial management office, however, is responsible for monitoring whether the Funds' advances are outstanding. Because USAID's financial management office was not receiving the quarterly financial reports, it was unable to ensure that the Funds were not maintaining USAID funds in excess of their immediate disbursement needs. In commenting on a draft of this report, USAID stated that although not strictly required by agency policy, the program representative is now sharing all quarterly financial information with the financial management office to facilitate oversight. EAEF has not implemented the provisions in its grant agreement related to marking and public communications. Those provisions require the Fund to develop a logo in addition to using the USAID logo, to acknowledge USAID's role in the provision of foreign assistance, and to use a general disclaimer in those instances where it is unable to obtain USAID's approval in advance of a public communication. We have reported in the past that marking can raise awareness about the source of assistance with individuals who come into contact with the assistance sites or materials. According to USAID and EAEF officials, the two organizations are working together to see that the Fund implements these provisions. The grant agreements aim to prevent the contribution of U.S. funds (1) to certain individuals (e.g., individuals and organizations associated with terrorism) by conducting appropriate vetting, (2) for certain purposes (e.g., funds may not be used toward the purchase of gambling equipment), (3) to political organizations not committed to democracy, and (4) to the military of another government. Internal control standards direct organizations to establish control activities such as policies and procedures that enforce management directives and help ensure that actions are taken to address risks. We found that the Funds have accounted in their corporate policies for three out of the four prohibitions related to preventing the contribution of EAEF or TAEF funds to illicit transactions or purposes. While USAID grant agreements with the Funds establish procedures designed to prevent transactions with individuals and organizations associated with terrorism, and the Chairmen of both Funds have committed to mitigate any risk of illicit use of U.S. funds, neither Fund's corporate policies contain specific vetting provisions. Specifically, they lack provisions related to vetting potential investees and the requirement that any investee planning to lend U.S. funds in excess of $25,000 onward to another business or invest in another entity certify to the Funds that it will conduct certain due diligence activities to prevent their illicit use. While USAID approved the Funds' corporate policies, USAID officials subsequently indicated that they expected this prohibition related to vetting potential investees and onward lending to be included in the Funds' corporate policies. Since the Funds have made only one investment to date--TAEF's $2.4 million investment--there has been only one instance where vetting was necessary. In commenting on a draft of this report, the TAEF Chairman emphasized that the Fund carried out all required due diligence with respect to vetting and assured itself of the appropriateness of the investee's procedures.
For example, TAEF provided us with documentation of TAEF's efforts to screen the investee's primary officials against the required vetting lists as well as the investee's policy for verifying the credentials of individuals and firms. In addition, in November 2014, TAEF signed a side letter with the investee in which the investee agreed to screen all future recipients against lists of proscribed parties. Since their inception in 2013, EAEF and TAEF have been awarded $180 million by USAID and have made progress in establishing their administrative infrastructures, internal controls, corporate governance mechanisms, and investment strategies. To date, the Funds have disbursed approximately $2 million of the $180 million awarded to them and thus have a significant amount of U.S. funding available for future investments. The Funds have generally complied with the requirements in their grant agreements with USAID. For example, the Funds have submitted required financial reports to USAID and Congress. In addition, USAID and the Funds continue to take steps to improve oversight and compliance with the grant agreements. However, they have not yet completed actions to further strengthen oversight and compliance in several areas. In the area of cash management, USAID is exploring ways to ensure that it has all necessary financial information from the Funds, but it has not yet ensured that the Funds liquidate cash advances in a timely manner. In addition, while both Funds are hiring contractors to develop performance monitoring plans--for which both Funds required an extension of the original submission deadline--neither Fund has completed its performance monitoring plan. Further, EAEF has not yet complied with the provisions in the grant agreement related to public communications, such as those requiring EAEF to acknowledge the U.S. government's financial contribution. While both Funds have demonstrated their commitment to ensuring that U.S. funds are not used for prohibited purposes, neither Fund has incorporated vetting requirements for individuals and organizations into its corporate policies. Taking steps to address these remaining items would strengthen USAID oversight and the Funds' compliance with the grant agreements, which will be particularly important as the Funds' investments grow in number and size. To further enhance USAID's oversight of the Funds and to ensure the Funds fully implement the grant agreements, we recommend that the Administrator of USAID take the following four steps: 1. establish a process to better manage cash advances to the Funds, 2. make certain that the Funds comply with grant agreement requirements related to performance monitoring, 3. ensure that the Funds comply with grant agreement requirements related to public communications, and 4. ensure that the Funds' corporate policies reflect grant agreement provisions regarding vetting requirements designed to prevent transactions with prohibited individuals and organizations. We provided a draft of this report to USAID, the Department of State (State), EAEF, and TAEF for review and comment. USAID and TAEF provided written comments, which we have reprinted in appendixes II and III, respectively. State provided technical comments, which we incorporated as appropriate. In its written comments, reprinted in appendix II, USAID concurred with our four recommendations and indicated the steps it was taking to implement each of them. 
Specifically, regarding our recommendation to establish a process to better manage cash advances, USAID stated that going forward the program representative would share Fund quarterly financial reports with the office of the Chief Financial Officer. In response to our recommendation pertaining to performance monitoring, USAID stated that it would work with each Fund to meet a revised deadline of the first quarter of 2015 to submit a completed performance monitoring plan. With regard to our recommendation pertaining to public communications, EAEF confirmed to USAID that it would meet all related requirements going forward, including proposing a logo in the first quarter of 2015. Lastly, the Chairmen of both Funds confirmed to USAID that they would propose amendments to their corporate policies to include the vetting procedures to their respective Boards. In its written comments, reprinted in appendix III, TAEF agreed with our findings and provided some additional information. For example, TAEF stated that the delay it requested to implement its performance monitoring plan would result in more timely and better program evaluation going forward. We are sending copies of this report to the appropriate congressional committees, State, USAID, and EAEF and TAEF. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Conferees for the bill that would become the Consolidated Appropriations Act, 2012 (Pub.L. No. 112-74) requested that we examine the management and oversight of the Egyptian-American Enterprise Fund (EAEF) and the Tunisian-American Enterprise Fund (TAEF) (the Funds) to determine if appropriate and sufficient safeguards exist against financial misconduct. In this report, we examined (1) the status of EAEF's and TAEF's investments, (2) EAEF's and TAEF's progress in establishing key management structures to support their missions and operations, and (3) the extent to which EAEF and TAEF have complied with certain requirements of the USAID grant agreements. To assess the extent to which the Funds have made investments, we reviewed the Funds' strategic planning documents and their due diligence reports. We obtained budget data from the U.S. Agency for International Development (USAID) on its obligations and disbursements to the Funds from fiscal years 2013 to 2014. We conducted an assessment of the reliability of the data by reviewing USAID's responses to a set of data reliability questions and by interviewing USAID budget officials. We found the data to be sufficiently reliable for our purposes. In addition, we interviewed the Chairmen and senior management of EAEF and TAEF to discuss their investment strategies, plans, and investment efforts thus far. To examine what progress the Funds have made in establishing key management structures, we reviewed EAEF and TAEF documents, including the Funds' statements of corporate policies and procedures, bylaws, employee job descriptions, organization charts, financial and annual reports, and board of director meeting minutes. 
We used the Committee of Sponsoring Organizations of the Treadway Commission's (COSO) Internal Control - 2013 Integrated Framework evaluation tool as a framework for gathering information on the Funds' management structures and assessing the extent to which they had established such structures. Although our analysis included gaining an understanding of EAEF's and TAEF's actions related to establishing internal control mechanisms, we did not evaluate the implementation of internal control at the Funds. We also interviewed EAEF and TAEF Chairmen and senior management to obtain information on the management structures the Funds had already established or planned to establish. To assess the extent of Fund compliance with certain grant agreement requirements, we used the EAEF and TAEF grant agreements as our primary criteria for identifying the requirements to which the Funds are subject. We identified 22 requirements that the Funds are subject to and then determined whether the Funds had met these requirements by collecting relevant USAID and Fund documentation, such as the Funds' reports to Congress on administrative expenses. We also reviewed the Funds' statement of corporate policies and procedures and documentation related to the Funds' efforts to develop performance monitoring plans. In addition, we interviewed the EAEF and TAEF Chairmen and senior management about their efforts to comply with the terms and conditions of the grant agreements as well as USAID officials regarding their efforts to oversee the Funds' compliance with the grant agreements. We also examined the process that USAID used to develop the EAEF and TAEF grant agreements, which entailed reviewing its agency policies, procedures for deviating from those policies, and the grant agreements themselves. We conducted this performance audit from March 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Jason Bair (Assistant Director), R. Gifford Howland (Analyst-in-Charge), Debbie Chung, Emily Gupta, and Jeffrey Isaacs made key contributions to this report. Mark Dowling, Etana Finkler, Paul Kinney, and Steven Putansu provided additional support.
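As a rough illustration of the compliance tally described in this methodology (checking each Fund against the discrete grant agreement requirements), the sketch below counts how many requirements each Fund has fully met. The requirement labels and status values are hypothetical stand-ins for a few of the 22 requirements, not the actual checklist used in the review.

```python
# Hypothetical compliance checklist; requirement names and statuses are
# illustrative stand-ins for the 22 grant agreement requirements.
checklist = {
    "EAEF": {"quarterly financial reports": True,
             "annual report to Congress": True,
             "performance monitoring plan": False,
             "marking and public communications": False},
    "TAEF": {"quarterly financial reports": True,
             "annual report to Congress": True,
             "performance monitoring plan": False,
             "marking and public communications": True},
}

for fund, requirements in checklist.items():
    met = sum(requirements.values())  # True counts as 1
    print(f"{fund}: fully complied with {met} of {len(requirements)} sampled requirements")
    for name, status in requirements.items():
        print(f"  - {name}: {'met' if status else 'not yet met'}")
```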
In the wake of the economic and political transitions associated with the "Arab Spring," Congress authorized the creation of enterprise funds for Egypt and Tunisia in 2011. EAEF and TAEF aim to develop the private sectors in these countries, particularly SMEs, through instruments such as loans, equity investments, and technical assistance. USAID signed grant agreements with both Funds in 2013 and has thus far obligated $120 million to EAEF and $60 million to TAEF. In this report, GAO examines (1) the status of the Funds' investments, (2) the Funds' progress in establishing key management structures to support their missions and operations, and (3) the extent to which the Funds have complied with requirements in the grant agreements. To address these objectives, GAO reviewed USAID and Fund documents, such as EAEF and TAEF grant agreements, policies and procedures, and the Funds' boards of directors meeting minutes. GAO also interviewed USAID and Fund officials. The Egyptian-American Enterprise Fund (EAEF) has not yet made any investments in Egypt, and the Tunisian-American Enterprise Fund (TAEF) has made an over $2.4 million investment in Tunisia. EAEF has not made any investments in Egypt as its initial investment did not proceed as planned. EAEF's attempt to purchase a bank in Egypt that would lend money to small and medium-sized enterprises (SME) was rejected by the Egyptian Central Bank. EAEF is now considering other options, such as investments in the food and beverage sector. TAEF's investment strategy is to invest in four different areas: (1) a private equity fund investing in SMEs, (2) direct investments in SMEs smaller than those targeted by the private equity fund, (3) microfinance institutions, and (4) start-ups. In June 2014, TAEF made an over $2.4 million investment in a private equity fund that invests in and finances Tunisian SMEs. EAEF and TAEF (the Funds) have made progress in establishing key management structures to support their mission and operations, with additional actions under way. In terms of administrative structures, both Funds have hired initial staff. Regarding their corporate governance, EAEF and TAEF both have boards of directors that have met regularly, adopted by-laws, and developed corporate policies and procedures. Both Funds plan to develop and implement additional management structures in the future, such as audits of their 2013 and 2014 financial statements. While TAEF and EAEF have generally fulfilled the requirements of the grant agreements, GAO found three gaps in the Funds' implementation and one gap in the U.S. Agency for International Development's (USAID) implementation. First, the Funds have not yet submitted their performance monitoring plans as required by the grant agreements. Second, EAEF has not implemented the provisions in its grant agreement related to public communications, such as development of its own logo. Third, the Funds' corporate policies do not include procedures to implement vetting requirements designed to prevent illicit use of the funds, the presence of which was expected by USAID. USAID has also not tracked the Funds' use of cash in a way that allows the agency to monitor whether EAEF and TAEF are spending it in a timely manner. Collectively, these gaps in implementation pose challenges for USAID's oversight of the Funds. 
GAO recommends that USAID take steps to further enhance its oversight of the Funds' compliance with the grant agreements and other requirements by establishing a process to better manage cash advances to the Funds; ensuring that the Funds comply with the grant agreement requirements related to performance monitoring and public communications; and ensuring that the Funds' corporate policies include vetting requirements. USAID concurred with GAO's recommendations.
User fees or user charges are defined by OMB as assessments levied on a class of individuals or businesses directly benefiting from, or subject to regulation by, a government program or activity. Examples of user fees are trademark registration fees, park entrance fees, and food inspection fees. User fees represent the principle that identifiable individuals or businesses who receive benefits from governmental services beyond those that accrue to the general public should bear the cost of providing the service. General user fee authority was established under title V of the Independent Offices Appropriation Act (IOAA) of 1952. The IOAA gave agencies broad authority to assess user fees or charges on identifiable beneficiaries by administrative regulation. This does not authorize agencies to retain and/or use the fees they collect. In the absence of specific legislation that authorizes agencies to retain and/or use the fees they collect, fees must be deposited in the U.S. Treasury general fund. Authority to assess user fees may also be granted to agencies through the enactment of specific authorizing or appropriations legislation, which may or may not authorize the agencies to retain and/or use the fees they collect. OMB Circular A-25, dated July 8, 1993, establishes guidelines for federal agencies to use in assessing fees for government services and for the sale or use of government property or resources. The Circular (1) states that its provisions shall be applied by agencies in their assessment of user charges under the IOAA and (2) provides guidance to agencies regarding their assessment of user charges authorized under other statutes. A specific user fee rate or amount may be based on the full cost to the government of the service or goods provided or on market value, or may be set legislatively. The Circular outlines the circumstances under which agencies are to use cost recovery or market value for determining the fee amount. It defines full cost as all direct and indirect costs to any part of the federal government of providing goods or services, including, but not limited to, direct and indirect personnel costs (i.e., salaries and fringe benefits); overhead costs (i.e., rents and utilities); and management and supervisory costs. The Circular defines market value as the price for goods, resources, or services that is based on competition in open markets and creates neither a shortage nor a surplus of the goods, resources, or services. In some cases, legislation either sets the specific user fee rate or amount or stipulates how the fee is to be calculated, such as a formula. These fees can be based on partial cost recovery, partial market value, or some other basis. For example, the Social Security Administration's (SSA) fees for administration of state supplementary payments are legislatively set at $6.20 per payment for fiscal year 1998. An example of partial cost recovery is under Public Law 98-575, which excludes the recovery of overhead costs from the National Aeronautics and Space Administration's commercial space launch services fees. Both the CFO Act and OMB Circular A-25 provide that agencies review their user fees biennially. The CFO Act of 1990 requires an agency's CFO to review on a biennial basis the fees, royalties, rents, and other charges for services and things of value and make recommendations on revising those charges to reflect costs incurred. 
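As a simple numeric illustration of the full-cost basis described in Circular A-25 above (direct personnel costs, overhead, and management and supervisory costs), the sketch below computes a per-unit fee that recovers full cost. All dollar figures and the volume of services are hypothetical assumptions, not values drawn from any agency's fee review.

```python
# Hypothetical full-cost recovery calculation for a user fee, following the
# Circular A-25 notion of full cost (direct plus indirect costs). Figures invented.
direct_personnel = 420_000       # salaries and fringe benefits
overhead = 150_000               # rents, utilities, and similar indirect costs
management_supervision = 80_000  # management and supervisory costs

full_cost = direct_personnel + overhead + management_supervision
services_provided = 13_000       # expected number of chargeable transactions

fee_per_service = full_cost / services_provided
print(f"Full cost: ${full_cost:,}")
print(f"Fee needed for full cost recovery: ${fee_per_service:.2f} per service")
```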
OMB Circular A-25 provides that each agency will review user charges biennially to include (1) assurance that existing charges are adjusted to reflect unanticipated changes in costs or market values and (2) a review of other programs within the agency to determine whether fees should be initiated for government services or goods for which it is not currently charging fees. Circular A-25 further states that agencies should discuss the results of the user fee reviews and any resultant proposals in the CFO annual report required by the CFO Act. The Circular also states that when the imposition of user charges is prohibited or restricted by existing law, agencies will review activities and recommend legislative changes when appropriate. Periodic reviews of all user fees are important because the reviews can provide agencies, the administration, and Congress with information on the government's costs to provide these services or, in some cases, the current market value of goods and services provided. To obtain the information for the first three objectives, we requested the CFOs of the 24 agencies to provide for fiscal year 1996 (1) a list of all user fees, (2) the basis (cost recovery, market value, or legislatively set) for determining the fee amount, (3) total amount of user fees collected in fiscal year 1996, and (4) supporting documents for the most recent review they had conducted of each user fee between fiscal years 1993 and 1997. We used 1996 fees because 1996 was the most recent year agencies had complete data. We reviewed the supporting documentation of the fee reviews to determine whether the reviews (1) indicated that direct and indirect costs were determined (if the fee was based on cost recovery) or current market value was determined (if the fee was based on market value) and (2) included an assessment of other programs within the agency to identify potential new user fees. We followed up with agency program officials when necessary to clarify the CFOs' responses. We also reviewed Federal Register notices for fiscal years 1993 through 1997 that discussed fee revisions and how the fees were calculated. In addition, we reviewed prior reports by the agencies' Inspectors General (IG) and us that covered user fees in CFO agencies during the time period covered by the scope of our work. We did not verify whether agencies reported all of their user fees. To obtain information on the fourth objective, we reviewed the CFO annual reports for fiscal years 1995 through 1997 and requested information from the 24 agencies on whether they reported the results of reviews in the CFO reports during fiscal years 1993 and 1994. To determine whether agencies were more likely to review fees if the fees were authorized to be used to cover agencies' expenses compared to when they were not, we obtained information from each of the agencies on whether they had legislative authority to use fees they collect. We then compared the number of reviews of fees that agencies were allowed to keep with the number of reviews of those that they were not allowed to keep. We reviewed relevant laws and regulations pertaining to user fees, including the CFO Act of 1990, the IOAA and other user fee authorizing legislation, and OMB Circular A-25. We also reviewed OMB Bulletins 94-01 and 97-01, Form and Content of Agency Financial Statements, to determine whether they contained user fee reporting requirements. We met with OMB officials to obtain additional information on OMB's user fee review and reporting requirements. 
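To make the biennial review requirement discussed above concrete, the following minimal sketch checks whether each fee's documented review years satisfy an "at least once in every 2-year period" test over a fiscal year 1993-1997 window. The fee names and review years are hypothetical, and the test is only one reasonable reading of the biennial requirement.

```python
# Hypothetical review histories; a fee passes if every consecutive 2-year
# window in fiscal years 1993-1997 contains at least one review.
review_years = {
    "Fee A (reviewed annually)": [1993, 1994, 1995, 1996, 1997],
    "Fee B (reviewed biennially)": [1993, 1995, 1997],
    "Fee C (reviewed once)": [1994],
    "Fee D (never reviewed)": [],
}

def meets_biennial_requirement(years, start=1993, end=1997):
    reviewed = set(years)
    # every consecutive 2-year window must contain at least one review
    return all(any(y in reviewed for y in (window, window + 1))
               for window in range(start, end))

for fee, years in review_years.items():
    status = "complies" if meets_biennial_requirement(years) else "does not comply"
    print(f"{fee}: {status} with the biennial review requirement")
```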
In some cases, agencies said they did not formally conduct "biennial fee reviews" but instead periodically, generally annually, conducted fee rate updates that met the key requirements of a biennial review. In these instances, we considered the rate updates as user fee reviews. In those cases where agency documentation indicated that agencies determined the direct and indirect costs of providing services, we did not verify that both direct and indirect costs had been considered or that the types of costs considered were appropriate. Our previous work has concluded that, in general, the federal government does not have adequate cost accounting systems to track costs to specific programs or services. To audit each individual cost factor for the fees we reviewed was beyond our scope and would have involved more time and resources than were available. Our scope did not include fees charged to other federal agencies or federal employees. We also excluded insurance premiums because, according to an OMB official, they were not subject to Circular A-25 during the scope of our review. We excluded credit-related fees, such as loan guarantee fees, since OMB advised that credit-related fees were not covered by Circular A-25, but were governed by OMB Circular A-129, Policies for Federal Credit Programs and Non-Tax Receivables. We did our work at the 24 CFO agencies' headquarters in Washington, D.C., between June 1997 and June 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of the Office of Management and Budget and asked the Chief Financial Officers of the 24 agencies included in the review to verify the accuracy of their agencies' data used in the report. Their comments are discussed near the end of this letter. As table 1 shows, the 24 CFO agencies reported having 546 total user fees in effect in fiscal year 1996. Agencies reported that 397 of their fees were based on cost recovery, 35 were based on market value, and 114 were set by legislation. As previously stated, statute-based formulas can be based on either market value, cost recovery, or some other basis. Of the 24 CFO agencies with 546 reported user fees, 6 agencies reviewed all of their reported fees at least biennially as required by Circular A-25 during fiscal years 1993 through 1997, 3 reviewed all of their reported fees at least once, 11 reviewed some of their reported fees, and 4 did not review any of their reported fees during this period. The agencies reported that they had reviewed 259 of the fees annually, 159 biennially, and 34 once during this 5-year period, as shown in table 2. According to OMB Circular A-25, agencies should have reviewed the fees at least biennially. The fee reviews that were conducted annually or biennially were in compliance with the Circular. Excluding the three newly effective fees in table 2, 13 agencies did not comply with the Circular for 31 fees that were reviewed only once during the 5-year period. All of the 31 fees were in effect long enough to have had biennial reviews. Fifteen of the 24 CFO agencies had not reviewed 94 user fees at all during the 5-year period. These 94 fees were about 17 percent of the total 546 fees. The agencies provided various reasons for not conducting the reviews. For example, the Department of the Treasury's U.S. Customs Service reported that it had not reviewed its nine fees (reported as totaling over $1 billion in fiscal year 1996) because of insufficient cost data. 
Customs said that it was in the process of developing the necessary data to evaluate the fees and make recommendations to Congress on any necessary changes. The U.S. Agency for International Development reported that it did not review its three fees because the amount of user fees collected was minimal (reported as $50,000 for fiscal year 1996). SSA said that it had not reviewed its eight fees because the majority of its fees were either legislatively set or were based on the actual computation of the full cost to provide the service. According to an agency official, SSA was currently conducting a review of two of its fees and stated that four additional fees will be reviewed in conjunction with the agency's comprehensive evaluation of its fee charging policy. Of the 94 fees not reviewed, 42 were set by legislation. The 42 fees represent about 37 percent of the 114 fees set by legislation and about 45 percent of the fees that agencies had not reviewed. Several agencies reported that they had not reviewed the fees set by legislation because they believed the fees were either not subject to the user fee review requirements or could not be changed unless legislation was amended. For example, the Department of Veterans Affairs and the Department of Health and Human Services' Food and Drug Administration reported that they had not reviewed fees that were set by legislation because they believed the fees were not subject to the CFO Act. The Department of Transportation's Federal Aviation Administration (FAA) and the Department of Health and Human Services' Health Care Financing Administration (HCFA) reported that they did not review the fees because they believed the fees could not be changed unless legislation was amended. However, OMB Circular A-25 provides that all fees, including those set by specific legislation, be reviewed. One rationale for reviewing all user fees, even those where a policy decision was made to not recover full costs, is that the extent to which fees do not recover the direct and indirect costs--i.e., the government subsidy--should be transparent so that program managers can properly inform the public, Congress, and federal executives about the extent of the subsidy. OMB Circular A-25 provides that the user fee review include assurance that existing charges reflect costs or current market value. Of the 397 cost-based fees, agencies reviewed 357. For 352 (or about 99 percent) of the cost-based fees reviewed, documentation indicated that both direct and indirect costs were considered. Agencies had reviewed 23 of the 35 fees based on market value. Documentation indicated that current market value was assessed for 14 of the 23 reviewed fees. Overall, the reviews determining whether fees reflected cost or current market value resulted in 159 fee increases that became effective during the period we reviewed. We did not verify whether the agencies had appropriate cost accounting systems in place to identify all direct and indirect costs or whether the costs included were complete and appropriate. However, problems with CFO agencies' cost systems were one of the reasons given by the CFO Council in June 1997 for requesting the Financial Accounting Standards Advisory Board to delay implementation of SFFAS No. 4. Prior work by agency IGs and us has also shown that agencies often lack cost accounting systems to track costs by specific program or service. In 1998, we reported in our audit of the U.S. 
Government's 1997 Consolidated Financial Statement that the government was unable to support significant portions of the more than $1.6 trillion reported as the total net costs of government operations. We further stated that without accurate cost information, the federal government is limited in its ability to control and reduce costs, assess performance, evaluate programs, and set fees to recover costs where required. We also stated in the report that, as of the date of the report, only four agency auditors had reported that their agency's financial systems complied with the Federal Financial Management Improvement Act (FFMIA) of 1996 requirements for financial management systems. In 1996 and 1997, we reported that while three Power Marketing Administrations (PMA), with reported revenues of $997 million in fiscal year 1996, were generally following applicable laws and regulations regarding recovery of power-related costs, they were not recovering all costs. Although PMAs are required to recover all costs, they had not done so, partly because they did not follow the full cost definition as set forth in OMB Circular A-25. In addition, IGs within 6 of the 24 CFO agencies reported on weaknesses in agencies' procedures for determining the cost of goods or services for which there were user fees during the 5-year period covered by our scope. Also, in reference to market value assessments, we reported in 1996 and 1998 that the Department of Agriculture's U.S. Forest Service did not always obtain the fair market value for user fees covering the use of federal land. OMB Circular A-25 provides that agencies' user fee reviews should include a review of other agency programs to determine whether additional fees should be charged either under existing authority or by proposing new legislative authority. Of the 20 agencies that conducted user fee reviews, documentation indicated that seven agencies considered new fees, five agencies did not consider new fee opportunities because they did not provide a service for which a fee was not already charged, and eight agencies where the potential for new fees existed did not consider new fee opportunities. Agencies' reasons for not looking for new fee opportunities varied. The Department of Veterans Affairs reported that it views its nonfee services as goodwill to the community, and the agency would have to obtain legislative authority to charge for the nonfee services. An FAA official said FAA had not attempted to identify new individual user fees pending the outcome of the ongoing consideration being given to the financial restructuring of FAA, which was included in legislation proposed to Congress on April 20, 1998. The Department of Commerce's Bureau of the Census said that it is facing the task of achieving the best balance between maximizing the usefulness of data to the widest possible audience and charging for more of the information. HCFA reported that it had looked at potential user fees earlier and decided that the new fees would not be in the best interest of the government because either the cost of fee collection would have outweighed the expected revenues or the agency and the recipient benefited equally from the service. OMB Circular A-25 provides that agencies should discuss the results of the user fee reviews and any resultant proposals in the CFO annual reports required by the CFO Act. The act requires that the CFOs of the 24 agencies identified in the act submit an annual financial management report to the Director of OMB. 
To satisfy this CFO reporting requirement, agencies submit annual, audited financial statements. The CFO Act requires the Director of OMB to prescribe the form and content of the financial statements, consistent with applicable accounting principles, standards, and requirements. The CFO Act also requires that these agencies analyze the status of financial management and prepare and make their annual revisions to plans implementing the OMB governmentwide 5-year financial management plan. The OMB guidance is not clear as to how the user fee review results should be reported. Thirteen of the 24 CFO agencies had referenced the user fee reviews in either their annual financial statements or their annual revisions to the 5-year financial management plan between fiscal years 1993 and 1997 as follows: One agency reported review results in 4 of the 5 years. Five agencies reported review results in 2 of the 5 years. Seven agencies reported review results in 1 of the 5 years. Five of these seven agencies reported results for the first time in their fiscal year 1997 reports after we had asked about the reporting. Two of them said that they had not previously reported the reviews because the reporting guidance was not clear. The remaining three said (1) the total amount of fees was not material, (2) nonadherence was an oversight, and (3) prior reviews were informal and undocumented. The other 11 agencies reported that they had not reported the results of their biennial reviews, or lack thereof, in any of the CFO annual reports for fiscal years 1993 through 1997. As shown in table 3, eight agencies said they did not report the review results because either the total amount of fees was considered to be minimal and not material or the reporting requirements were confusing and not consistent with OMB guidance for the form and content of annual financial statements. Guidance for form and content states specifically what agencies should present in the annual financial statements and does not include the user fee reporting requirement. According to OMB officials, OMB has not provided any guidance on reporting the results of the user fee reviews other than Circular A-25. OMB agreed that Circular A-25 user fee reporting instructions need to be clarified and plans to address this during 1998, by updating Circular A-11, Preparation and Submission of Budget Estimates. An OMB official said Circular A-11 has a higher profile than Circular A-25 and was scheduled to be revised before Circular A-25. It did not appear that agencies placed significantly less emphasis on reviewing fees that went to Treasury's general fund than on fees of which all or a portion were authorized to cover agency expenses. In 78 percent of the 452 fees agencies reviewed, all or a portion of the fees were authorized to cover or reimburse agency expenses. In 67 percent of the 94 fees agencies did not review, all or a portion of the fees were authorized to cover or reimburse agency expenses. Generally, the CFO agencies did not fully adhere to OMB Circular A-25 and the CFO Act user fee review provisions requiring that user fee rates be reviewed biennially. It did not appear that agencies placed significantly less emphasis on reviewing fees that were to be deposited in Treasury's general fund than they placed on fees that were authorized to cover agencies' expenses. The agencies did not review all of the fees that should have been reviewed and reviewed fees set by legislation less often than other fees. 
For example, only 6 of the 24 CFO agencies reviewed all of their user fees at least biennially. Also, some agencies could be recovering less than their actual costs when their fees are based on cost recovery because of a lack of adequate cost accounting systems in the government to identify actual costs. Further, eight of the agencies did not include a review of potential new user fees as required by OMB. As a result, the government may not be recovering the costs or the current market value, where appropriate, for the goods and services it provides. OMB's guidance on how and where to report the results of user fee reviews is not clear. Many of the agencies reported that Circular A-25 user fee reporting instructions were confusing and had not reported the results of the user fee reviews in CFO reports. Administration officials and Congress, therefore, have incomplete information on whether the government is recovering costs of providing goods and services or is obtaining the current market value, where appropriate. We recommend that the Director of OMB clarify the user fee reporting instructions by specifying how agencies should report the results of their user fee reviews and address the issues of compliance with the biennial review requirements, including the requirements regarding statutorily set fees and agencies' consideration of potential new user fees. We requested written comments on a draft of this report from the Director of the Office of Management and Budget and oral comments from the Chief Financial Officers of the 24 agencies on the accuracy of information in the draft report pertaining to the agencies. On June 12, 1998, we received written comments from OMB's Assistant Director for Budget, which are included in appendix I. OMB commented that while it was pleased to see that most of the fees were reviewed annually or biennially, it shares our concern that agencies pay attention to the review and discussion requirements in the Chief Financial Officers Act of 1990 and OMB Circular A-25. OMB further stated that it will continue its efforts in 1998 to increase agency awareness and compliance with current CFO Act and Circular A-25 requirements. OMB said that it would highlight the requirements of user fee reviews in this year's update to Circular A-11 to make agencies more fully aware of the requirements. As of June 29, 1998, we had received responses from 23 of the 24 CFO agencies. We had not received a response from the Department of Housing and Urban Development. Seventeen agencies provided oral comments, and six agencies provided written comments. Ten of the agencies responded that they either had no comments on the draft report or agreed with the information in the report. Nine of the agencies provided additional information on their user fee reviews or suggested technical changes, which we considered and incorporated within the report where appropriate. Four agencies raised programmatic or policy-related issues, as follows: SSA said that it had reviewed two of its fees annually and asked us to revise our data to recognize this. SSA provided documentation it believed would support its contention that the reviews had been done. However, in our view, the documentation SSA provided was not sufficient evidence that the user fee reviews met the requirements of Circular A-25. Accordingly, we did not revise our report as SSA had requested, and we informed SSA of our decision. 
SSA also said it had reviewed two other fees and was deciding the fee amounts, and we noted this in the report. The Department of Health and Human Services, the National Aeronautics and Space Administration, and the Small Business Administration raised policy-related issues, such as the need for biennial reviews in light of the new Managerial Cost Accounting Standards and whether the new user fee definition in Circular A-11 supersedes the Circular A-25 definition. We did not cover these types of issues in our review, but expect that OMB will consider such issues as it revises its instructions on user fee reviews. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and the Senate Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, and the Director of OMB. We will also make copies available to others upon request. Major contributors to this report are listed in appendix II. If you have any questions about the report, please call me on (202) 512-8387. Alan N. Belkin, Assistant General Counsel Jessica A. Botsford, Senior Attorney
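The full cost standard at the center of these findings is simple arithmetic: a cost-based fee recovers full cost only if it reflects both direct and indirect costs spread over the expected volume of fee-paying transactions. The sketch below is only an illustration of that arithmetic, not GAO's or OMB's methodology, and all dollar amounts and transaction counts in it are hypothetical.

```python
# Minimal sketch (hypothetical figures): a per-unit user fee set to recover full
# cost, where full cost includes both direct and indirect costs, consistent with
# the OMB Circular A-25 concept discussed in this report.

def full_cost_fee(direct_costs: float, indirect_costs: float, expected_units: int) -> float:
    """Per-unit fee needed to recover full (direct + indirect) cost."""
    return (direct_costs + indirect_costs) / expected_units

# Hypothetical program: $4.2 million direct costs, $1.8 million indirect costs,
# and 120,000 fee-generating transactions per year.
print(full_cost_fee(4_200_000, 1_800_000, 120_000))  # 50.0 dollars per transaction

# Omitting indirect costs understates the fee and leaves costs unrecovered.
print(full_cost_fee(4_200_000, 0, 120_000))          # 35.0 dollars per transaction
```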
Pursuant to a congressional request, GAO reviewed agencies' adherence to the user fee review and reporting requirements in the Chief Financial Officers (CFO) Act of 1990 and Office of Management and Budget (OMB) Circular A-25, focusing on whether the agencies: (1) reviewed their user fee rates biennially during fiscal years (FY) 1993 through 1997; (2) determined both direct and indirect costs when reviewing fees based on costs or current market value for fees based on market value; (3) reviewed other programs within the agency to identify potential new user fees; and (4) reported the results of the user fee reviews in their CFO annual reports. GAO noted that: (1) six of the 24 CFO agencies reviewed all of their reported user fees at least every 2 years as required by OMB Circular A-25 during FY 1993 through FY 1997, 3 reviewed all of their reported fees at least once, 11 reviewed some of their reported fees, and 4 did not review any of their reported fees during this period; (2) the 24 agencies reported 546 user fees, of which 418 were reviewed either annually or biennially; (3) the agencies provided various reasons for not reviewing fees, including insufficient cost data and because some of the fees set by legislation could not be changed without new legislation; (4) it appeared that agencies did not place significantly less emphasis on reviewing fees that went to the Department of the Treasury's general fund than on fees authorized to cover agency expenses; (5) documentation provided by the agencies indicated that of the reviewed fees that were based on cost recovery, 99 percent included both direct and indirect costs; (6) fee review documentation indicated that of the 23 reviewed fees that were based on market value, 14 reviews included a determination of current market value; (7) GAO did not verify these cost data or market evaluations; (8) agency documentation also indicated that of the 20 agencies that conducted user fee reviews, 8 agencies that had the potential for new fees did not consider new fee opportunities in their reviews; (9) twelve of the 20 agencies either looked for potential new fees or reported that they did not provide a service for which a fee was not already charged; (10) eleven of the 24 agencies had not reported the results of their biennial reviews, or lack thereof, in their CFO annual reports for FY 1993 through FY 1997; (11) only six agencies reported the review results two or more times during the 5-year period; (12) most of the agencies not reporting their user fee reviews said they did not do so either because the total amount of the fees was considered to be minimal and not considered material or because they found the reporting requirements confusing; and (13) OMB agreed that reporting instructions for the user fee review need to be clarified and plans to address this matter during 1998, as it revises its instructions.
It would be useful at this point to describe several differences between multiemployer and single-employer plans. Multiemployer plans are established pursuant to collectively bargained agreements negotiated between labor unions representing employees and two or more employers and are generally jointly administered by trustees from both labor and management. Single-employer plans are administered by one employer and may or may not be collectively bargained. Multiemployer plans typically cover groups of workers in such industries as construction, retail food sales, and trucking, with construction representing 38 percent of all participants. In contrast, 47 percent of single-employer plan participants are in manufacturing. Multiemployer plans provide participants with limited benefit portability in that they allow workers the continued accrual of defined benefit pension rights when they change jobs, if their new employer is also a sponsor of the same plan. This arrangement can be particularly advantageous in industries like construction, where job change within a single occupation is frequent over the course of a career. Single-employer plans are established and maintained by only one employer and do not normally offer benefit portability. Multiemployer plans also differ from so-called multiple-employer plans, which are not generally established through collective bargaining agreements and which often maintain separate accounts for each employer. The Teachers Insurance and Annuity Association and College Retirement Equities Fund (TIAA-CREF) is an example of a large multiple-employer plan organized around the education and research professions. TIAA-CREF offers a defined contribution plan, in which contributions are accumulated over a career and paid out at retirement, often as an annuity. Below are some features that illustrate key differences between single-employer and multiemployer plans: Contributions* In general, the same ERISA funding rules apply to both single- and multiemployer defined benefit pension plans. While ERISA and IRC minimum funding standards permit plan sponsors some flexibility in the timing of pension contributions, individual employers in multiemployer plans cannot as easily adjust their plan contributions. For multiemployer plans, contribution levels are usually negotiated through the collective bargaining process and are fixed for the term of the collective bargaining agreement, typically 2 to 3 years. Employer contributions to many multiemployer plans are typically made as a set dollar amount per hour of covered work and are thus tied to the number of hours worked by active plan participants (this arithmetic is illustrated in the sketch following this list of plan features). Other things being equal, reduced employment of active participants will result in lower contributions and reduced plan funding. Withdrawal liability* Congress enacted the Multiemployer Pension Plan Amendments Act (MPPAA) of 1980 to protect the pensions of participants in multiemployer plans by establishing a separate PBGC multiemployer plan insurance program and by requiring any employer wanting to withdraw from a multiemployer plan to be liable for its share of the plan's unfunded liability. The law contains a formula for determining the amount an employer withdrawing from a multiemployer plan is required to contribute, known as "withdrawal liability." This amount is based upon a proportional share of the plan's unfunded vested benefits.
Furthermore, if a participating employer becomes bankrupt, MPPAA requires that the remaining employers in the plan assume the additional funding responsibility for the benefits of the bankrupt employer's plan participants. For single-employer plans, the sponsoring employer is liable only for the unfunded portion of its own plan or its current liability in a bankruptcy (distress termination). Different premiums and benefit guarantee levels* PBGC operates two distinct insurance programs, one for multiemployer plans and one for single-employer plans, which have separate insurance funds, different benefit guarantee rules, and different insurance coverage rules. The two insurance programs and PBGC's operations are financed through premiums paid annually by plan sponsors, investment returns on PBGC assets, assets acquired from terminated single-employer plans, and recoveries from employers responsible for underfunded terminated single-employer plans. Premium revenue totaled about $973 million in 2003, of which $948 million was paid into the single-employer program and $25 million paid to the multiemployer program. Single-employer plans pay PBGC an annual flat-rate premium of $19 per participant per year for pension insurance coverage. Plans that are underfunded generally also have to pay PBGC an additional annual variable-rate premium of $9 per $1,000 of underfunding for the additional exposure they create for the insurance program. In contrast, the only premium for multiemployer plans is a flat $2.60 per participant per year. PBGC guarantees benefits for multiemployer pensioners at a much lower dollar amount than for single-employer pensioners: about $13,000 annually for a retiree with 30 years of service for the former compared with about $44,000 annually per retiree at age 65 for the latter. Financial assistance and the insurable event* PBGC's "insurable event" for its multiemployer program is plan insolvency. A multiemployer plan is insolvent when its available resources are not sufficient to pay benefits at PBGC's multiemployer guaranteed level for 1 year. In contrast, the insurable event for the single-employer program is generally the termination of the plan. In addition, unlike its role in the single-employer program, where PBGC trustees weak plans and pays benefits directly to participants, PBGC does not take over the administration of multiemployer plans but instead provides financial assistance in the form of loans when plans become insolvent. A multiemployer plan need not be terminated to qualify for PBGC loans, but it must be insolvent and is allowed to reduce or suspend payment of that portion of the benefit that exceeds the PBGC guarantee level. If the plan recovers from insolvency, it must begin repaying the loan on reasonable terms in accordance with regulations. Such financial assistance is infrequent; for example, PBGC has made loans totaling $167 million to 33 multiemployer plans since 1980, compared with 296 trusteed terminations of single-employer plans and PBGC benefit payments of over $4 billion in 2002-2003 alone. The net effect of these different features is that there is a different distribution of financial risk among employers, participants, and PBGC under the multiemployer program, compared with PBGC's single-employer program. Under the multiemployer program, member employers and participants bear far more financial risk, while PBGC, and implicitly the taxpayer, bear far less.
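To make the contribution and premium arithmetic described above concrete, the sketch below applies the rates cited in this testimony: a $19 flat and $9-per-$1,000-of-underfunding premium for single-employer plans, a $2.60 flat premium for multiemployer plans, and employer contributions set as a dollar amount per hour of covered work. The plan size, hourly contribution rate, covered hours, and underfunding amount are hypothetical examples.

```python
# Minimal sketch of the contribution and PBGC premium arithmetic described above.
# Premium rates come from the testimony; the plan size, negotiated hourly rate,
# covered hours, and underfunding figure are hypothetical.

def multiemployer_contributions(rate_per_hour: float, covered_hours: int) -> float:
    """Employer contributions tied to a negotiated dollar amount per hour of covered work."""
    return rate_per_hour * covered_hours

def single_employer_premium(participants: int, underfunding: float) -> float:
    """Flat-rate premium of $19 per participant plus $9 per $1,000 of underfunding."""
    return 19 * participants + 9 * (underfunding / 1_000)

def multiemployer_premium(participants: int) -> float:
    """Flat-rate premium of $2.60 per participant; no variable-rate component."""
    return 2.60 * participants

# Hypothetical plans with 10,000 participants each; the single-employer plan is
# assumed to be underfunded by $50 million.
print(single_employer_premium(10_000, 50_000_000))   # 190,000 + 450,000 = 640,000.0
print(multiemployer_premium(10_000))                 # 26,000.0

# Hypothetical multiemployer contribution income: $3.50 per covered hour, 2 million hours worked.
print(multiemployer_contributions(3.50, 2_000_000))  # 7,000,000.0
```

For identically sized plans, the two programs' premium income differs by more than an order of magnitude, which is consistent with the split of 2003 premium revenue noted above ($948 million to the single-employer program versus $25 million to the multiemployer program).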
In addition, PBGC officials explained that the features of the multiemployer regulatory framework have also led to a lower frequency of financial assistance. They note that the greater financial risks faced by employers and the lower benefits guaranteed to participants create incentives for employers, participants, and their collective bargaining representatives to avoid insolvency and to collaborate in trying to find solutions to a plan's financial difficulties. While multiemployer plan funding has exhibited considerable stability over the past two decades, available data suggest that many plans have recently experienced significant funding declines. Since 1980, aggregate multiemployer plan funding has been stable, with the majority of plans funded above 90 percent of total liabilities and average funding at 105 percent in 2000. Recently, however, it appears that a combination of stock market declines coupled with low interest rates and poor economic conditions has reduced the assets and increased the liabilities of many multiemployer plans. In PBGC's 2003 annual report, the agency estimated that total underfunding of underfunded multiemployer plans reached $100 billion by year-end, up from $21 billion in 2000, and that its multiemployer program had recorded a year-end 2003 net deficit of $261 million, the first deficit in more than 20 years. While most multiemployer plans continue to provide benefits at unreduced levels, the agency has also increased its forecast of the number of plans that will likely need financial assistance, from 56 plans in 2001 to 62 plans in 2003. Private survey data are consistent with this trend, with one survey by an actuarial consulting firm showing the percentage of fully funded client plans declining from 83 percent in 2001 to 67 percent in 2002. In addition, long-standing declines in the number of plans and worker participation continue. The number of insured multiemployer plans has dropped by a quarter since 1980 to fewer than 1,700 plans in 2003, according to the latest data available. Although in 2001 multiemployer plans in the aggregate covered 4.7 million active participants, representing about a fifth of all active defined benefit plan participants, this number has dropped by 1.4 million since 1980. Aggregate funding for multiemployer pension plans remained stable during the 1980s and 1990s. By 2000, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities, with the average plan funded at 105 percent of liabilities. As shown in figure 1, the aggregate net funding of multiemployer plans grew from a deficit of about $12 billion in 1980 to a surplus of nearly $17 billion in 2000. From 1980 to 2000, multiemployer plan assets grew at an annual average rate of 11.7 percent, to about $330 billion, exceeding the 10.5 percent average annual growth rate of single-employer plan assets. During the same time period, liabilities for multiemployer and single-employer pensions grew at average annual rates of about 10.2 percent and 9.9 percent, respectively. A number of factors appear to have contributed to the funding stability of multiemployer plans, including: Investment strategy * Historically, multiemployer plans appear to have invested more conservatively than their single-employer counterparts.
Although comprehensive data are not available, some pension experts have suggested that defined benefit plans in the aggregate are more than 60 percent invested in equities, which are associated with greater risk and volatility than many fixed-income securities. Experts have stated that, in contrast, equity holdings generally constitute 55 percent or less of the assets of most multiemployer plans. Contribution rates * Unlike funds for single-employer plans, multiemployer plan funds receive steady contributions from employers because those amounts generally have been set through multiyear collective bargaining contracts. Participating employers, therefore, have less flexibility to vary their contributions in response to changes in firm performance, economic conditions, and other factors. This regular contribution income is in addition to any investment return and helps multiemployer plans offset any declines in investment returns. Risk pooling * The pooling of risk inherent in multiemployer pension plans may also have buffered them against financial shocks and recessions, since the gains and losses of the plans are less immediately affected by the economic performance of individual employer plan sponsors. Multiemployer pension plans typically continue to operate long after any individual employer goes out of business because the remaining employers in the plan are jointly liable for funding the benefits of all vested participants. Greater average plan size * The stability of multiemployer plans may also partly reflect their size. Large plans (1,000 or more participants) constitute a greater proportion of multiemployer plans than of single-employer plans. (See figs. 2 and 3.) While 55 percent of multiemployer plans are large, only 13 percent of single-employer plans are large, and 73 percent of single-employer plans have fewer than 250 participants, as shown in figure 2. However, the distribution of participants by plan size for multiemployer and single-employer plans is more comparable, with over 90 percent of both multiemployer and single-employer participants in large plans, as shown in figure 3. Although data limitations preclude any comprehensive assessment, available evidence suggests that since 2000, many multiemployer plans have experienced significant reductions in their funded status. PBGC estimated in its 2003 annual report that the aggregate deficit of underfunded multiemployer plans had reached $100 billion by year-end, up from a $21 billion deficit at the start of 2000. In addition, PBGC reported a net accumulated deficit for its own multiemployer program of $261 million for fiscal year 2003, the first deficit since 1981 and its largest ever. (See fig. 4.) While most multiemployer plans continue to provide benefits at unreduced levels, PBGC has also reported that the deficit was primarily caused by new and substantial "probable losses," increasing the number of plans it classifies as likely to require financial assistance in the near future from 58 plans with expected liabilities of $775 million in 2002 to 62 plans with expected liabilities of $1.25 billion in 2003. Private survey data and anecdotal evidence are consistent with this assessment of multiemployer funding losses. One survey by an actuarial consulting firm showed that the percentage of its multiemployer client plans that were fully funded declined from 83 percent in 2001 to 67 percent in 2002. Other, more anecdotal evidence suggests increased difficulties for multiemployer plans.
For example, discussions with plan administrators have indicated that there has been an increase in the number of plans with financial difficulties in recent years, with some plans reducing or temporarily freezing the future accruals of participants. In addition, IRS officials recently reported an increase in the number of multiemployer plans (less than 1 percent of all multiemployer plans) requesting tax-specific waivers that would provide the plans relief from current funding shortfall requirements. As with single-employer plans, falling interest rates coincident with stock market declines and generally weak economic conditions have contributed to the funding difficulties of multiemployer plans. The decline in interest rates in recent years has increased the present value of pension plan liabilities for DB plans in general, because the cost of providing future promised benefits increases when computed using a lower interest rate. At the same time, declining stock markets decreased the value of any equities held in multiemployer plan portfolios to meet those obligations. Finally, because multiemployer plan contributions are usually based on the number of hours worked by active participants, any reduction in active participants' employment will reduce employer contributions to the plan. Despite its relative financial stability, the multiemployer system has experienced a steady decline in the number of plans and in the number of active participants over the past 2 decades. In 1980, there were 2,244 plans, and by 2003 the number had fallen to 1,623, a decline of about 27 percent. While a portion of the decline in the number of plans can be explained by consolidations through mergers, few new plans have been formed--only 5, in fact, since 1992. Meanwhile, the number of active multiemployer plan participants has declined in both relative and absolute terms. By 2001, only about 4.1 percent of the private sector workforce was composed of active participants in multiemployer pension plans, down from 7.7 percent in 1980 (see fig. 5), with the total number of active participants decreasing from about 6.1 million to about 4.7 million. Finally, as the number of active participants has declined, the number of retirees has increased--from about 1.4 million to 2.8 million--and this increase has led to a decline in the ratio of active (working) participants to retirees in multiemployer plans. By 2001, there were about 1.7 active participants for every retiree, compared with 4.3 in 1980. (See fig. 6.) While the trend is also evident among single-employer plans, the decline in the ratio of active workers to retirees affects multiemployer funding more directly because employer contributions are tied to active employment. The higher benefit payouts required for greater numbers of retirees living longer and the reduced employer contributions resulting from fewer active workers combine to put pressure on the funding of multiemployer plans. A number of factors pose challenges to the long-term prospects of the multiemployer pension plan system. Some of these factors are specific to the features and nature of multiemployer plans, including a regulatory framework that some employers may perceive as financially riskier and less flexible than the frameworks covering other types of pension plans. For example, compared with a single-employer plan, an employer covered by a multiemployer plan cannot easily adjust annual plan contributions in response to the firm's own financial circumstances.
This is because contribution rates are often fixed for periods of time by the provisions of the collective bargaining agreement. Collective bargaining itself, a necessary aspect of the multiemployer plan model and another factor affecting plans' prospects, has also been in long-term decline, suggesting fewer future opportunities for new plans to be created or existing ones to expand. As of 2003, union membership, a proxy for collective bargaining coverage, accounted for less than 9 percent of the private sector labor force and has been steadily declining since 1953. Experts have identified other challenges to the future prospects of defined benefit plans generally, including multiemployer plans. These include the growing trend among employers to choose defined contribution plans over DB plans, including multiemployer plans; the continuing growth in the life expectancy of American workers, resulting in participants spending more years in retirement, thus increasing pension benefit costs; and increases in employer-provided health insurance costs, which are increasing employers' compensation costs generally, including pensions. Some factors that raise questions about the long-term viability of multiemployer plans are specific to certain features of the plans themselves, including features of the regulatory framework that some employers may well perceive as less flexible and financially riskier than the features of other types of pension plans. For example, an employer covered by a multiemployer pension plan typically does not have the funding flexibility of a comparable employer sponsoring a single-employer plan. In many instances, the employer covered by the multiemployer plan cannot as easily adjust annual plan contributions in response to the firm's own financial circumstances. Employers that value such flexibility might be less inclined to participate in a multiemployer plan. Employers in multiemployer plans may also face greater financial risks than those in other forms of pension plans. For example, an employer sponsor of a multiemployer plan that wishes to withdraw from the plan is liable for its share of pension plan benefits not covered by plan assets upon withdrawal from the plan, rather than when the plan terminates, as with a single-employer plan. Employers in plans with unfunded vested benefits face an immediate withdrawal liability that can be costly. In addition, employers in fully funded plans also face the potential of costly withdrawal liability if the plan becomes underfunded in the future through the actions of other sponsors participating in the multiemployer plan. Thus, an employer's pension liabilities become a function not only of the employer's own performance but also of the financial health of other plan sponsors in the multiemployer plan. These additional sources of potential liability can be difficult to predict, increasing employers' level of uncertainty and risk. Some employers may hesitate to accept such risks if they can sponsor other plans that do not have them, such as 401(k)-type defined contribution plans. The future growth of multiemployer plans is also predicated on the future of collective bargaining. Collective bargaining is an inherent feature of the multiemployer plan model. Collective bargaining, however, has been declining in the United States since the early 1950s. Currently, union membership, a proxy for collective bargaining coverage, accounts for less than 9 percent of the private sector labor force.
Union membership accounted for about 19 percent of the entire national workforce in 1980 and about 27 percent of the civilian workforce in 1953. Pension experts have identified a variety of challenges facing today's defined benefit pension plans, including multiemployer plans. These include the continued general shift away from DB plans to defined contribution (DC) plans, and the increased longevity of the U.S. population, which translates into a lengthier and more costly retirement. In addition, the continued escalation of employer health insurance costs has placed pressure on the compensation costs of employers, including pensions. Employers have tended to move away from DB plans and toward DC plans since the mid-1980s. The total number of PBGC-insured defined benefit plans, including single-employer plans, declined from 97,683 in 1980 to 31,135 in 2002. (See fig. 7.) The number of DC plans sponsored by private employers nearly doubled from 340,805 in 1980 to 673,626 in 1998. Along with this continuing trend toward sponsoring DC plans, there has also been a shift in the mix of plans that private sector workers participate in. Labor reports that the percentage of private sector workers who participated in a primary DB plan has decreased from 38 percent in 1980 to 21 percent by 1998, while the percentage of such workers who participated in a primary DC plan has increased from 8 percent to 27 percent during this same period. Moreover, these same data show that by 1998, the majority of active participants (workers participating in their employer's plan) were in DC plans, whereas nearly 20 years earlier the majority of participants had been in DB plans. Experts have suggested a variety of explanations for this shift, including the greater risk borne by employers with DB plans, greater administrative costs and more onerous regulatory requirements, and the fact that employees more easily understand and favor DC plans. These experts have also noted considerable employee demand for plans that state benefits in the form of an account balance and emphasize portability of benefits, such as 401(k)-type defined contribution pension plans offer. The increased life expectancy of workers also has important implications for defined benefit plan funding, including the funding of multiemployer plans. The average life expectancy of males at birth has increased from 66.6 years in 1960 to 74.3 years in 2000, with females at birth experiencing a rise of 6.6 years, from 73.1 to 79.7, over the same period. As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. PBGC has noted that improvements in life expectancy have extended the average amount of time spent by workers in retirement from 11.5 years in 1950 to 18 years for the average male worker as of 2002. This increased duration of retirement has required employers with defined benefit plans to increase their contributions to match this increase in benefit liabilities. This problem is exacerbated for those multiemployer plans with a shrinking pool of active workers because plan contributions are generally paid on a per work-hour basis, contributing to the funding strain we discussed earlier. Increasing health insurance costs are another factor affecting the long-term prospects of pensions, including multiemployer pensions.
Recent increases in employer-provided health insurance costs are accounting for a rising share of total compensation, increasing pressure on employers' ability to maintain wages and other benefits, including pensions. Bureau of Labor Statistics data show that the cost of employer-provided health insurance has risen steadily in recent years, growing from 5.4 percent of total compensation in 1999 to 6.5 percent as of the third quarter of 2003. A private survey of employers found that employer-sponsored health insurance costs rose about 14 percent between the spring of 2002 and the spring of 2003, the third consecutive year of double-digit increases and the highest premium increase since 1990. Plan administrators and employer and union representatives that we talked with identified the rising costs of employer-provided health insurance as a key problem facing plans, as employers are increasingly forced to choose between maintaining current levels of pension benefits and maintaining current levels of medical benefits. Although available evidence suggests that multiemployer plans are not experiencing anywhere near the magnitude of the problems that have recently afflicted single-employer plans, there is cause for concern. The declines in interest rates and equities markets, and weak economic conditions in the early 2000s, have increased the financial stress on both individual multiemployer plans and the multiemployer framework generally. Most significant is PBGC's estimate of $100 billion in unfunded multiemployer plan liabilities that are being borne collectively by employers and plan participants. At this time, PBGC and, potentially, the taxpayer do not face the same level of exposure from this liability with multiemployer plans that they do with single-employer plans. This is because, as PBGC officials have noted, the current regulatory framework governing multiemployer plans redistributes financial risk toward employers and workers and away from the government. Employers face withdrawal and other liabilities that can be significant. In addition, should a multiemployer plan become insolvent, workers face the prospect of receiving far lower guaranteed benefits than workers receive under PBGC's single-employer program guaranteed limits. Together, these features not only limit PBGC's exposure but also create important incentives for all interested parties to resolve difficult financial situations that could otherwise result in plan insolvency. Because the multiemployer plans' structure balances risk in a manner that fosters constructive collaboration among interested parties, proposals to address multiemployer plans' funding stress should be carefully designed and considered for their long-term consequences. For example, proposals to shift plan liabilities to PBGC by making it easier for employers to exit multiemployer plans could help a few employers or participants but erode the existing incentives that encourage interested parties to independently face up to their financial challenges. In particular, placing additional liabilities on PBGC could ultimately have serious consequences for the taxpayer, given that with only about $25 million in annual income, a trust fund of less than $1 billion, and a current deficit of $261 million, PBGC's multiemployer program has very limited resources to handle a major plan insolvency that could run into billions of dollars.
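As a rough illustration of two of the mechanisms discussed above, lower interest rates and longer retirements both raise the present value of promised benefits. The sketch below values a level annual benefit as an ordinary annuity; the benefit amount and discount rates are hypothetical, while the 11.5- and 18-year retirement durations are the averages cited earlier in this testimony.

```python
# Minimal sketch (hypothetical figures): present value of a level annual pension
# benefit paid for a given number of years, discounted at rate r. A lower rate or
# a longer payout period raises the present value, which is why falling interest
# rates and rising longevity increase plan liabilities.

def pv_of_benefit(annual_benefit: float, rate: float, years: float) -> float:
    """Present value of an annuity paying annual_benefit at the end of each year."""
    return annual_benefit * (1 - (1 + rate) ** -years) / rate

benefit = 20_000  # hypothetical annual benefit per retiree

for years in (11.5, 18):        # average retirement durations cited for 1950 and 2002
    for rate in (0.07, 0.05):   # hypothetical higher- and lower-interest-rate environments
        pv = pv_of_benefit(benefit, rate, years)
        print(f"{years:>4} years at {rate:.0%}: ${pv:,.0f}")
```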
The current congressional efforts to provide funding relief are at least in part in response to the difficult conditions experienced by many plans in recent years. However, these efforts are also occurring in the context of the broader long-term decline in private sector defined benefit plans, including multiemployer plans, and the attendant rise of defined contribution plans, with their emphasis on greater individual responsibility for providing for a secure retirement. Such a transition could lead to greater individual control and reward for prudent investment and planning. However, if managed poorly, it could lead to adverse distributional effects for some workers and retirees, including a greater risk of a poverty-level income in retirement. Under this transition view, the more fundamental issues concern how to minimize the potentially serious, negative effects of the transition while balancing risks and costs for employers, workers, and retirees, and for the public as a whole. These important policy concerns make Congress's current focus on pension reform both timely and appropriate. This concludes my prepared statement. I am happy to answer any questions that the subcommittee may have. For further questions on this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Joseph Applebaum, Tim Fairbanks, Charles Jeszeck, Gene Kuehneman, Raun Lazier, and Roger J. Thomas.
Multiemployer defined benefit pension plans, which are created by collective bargaining agreements covering more than one employer and generally operated under the joint trusteeship of labor and management, provide coverage to over 9.7 million of the 44 million participants insured by the Pension Benefit Guaranty Corporation (PBGC). The recent termination of several large single-employer plans--plans sponsored by individual firms--has led to millions of dollars in benefit losses for thousands of workers and left PBGC, their public insurer, an $11.2 billion deficit as of September 30, 2003. The serious difficulties experienced by these single-employer plans have prompted questions about the health of multiemployer plans. This testimony provides information on differences between single employer and multiemployer pension plans, recent trends in the funding of multiemployer pension plans and worker participation in those plans, and factors that may pose challenges to the future prospects of multiemployer plans. GAO will soon release a separate report on multiemployer pension issues. The framework governing multiemployer plans generally places greater financial risk on employers and participants and less on PBGC than does PBGC's single-employer program. For example, in the event of employer bankruptcy, the remaining employers in the multiemployer plan assume additional funding responsibility. Further, PBGC's guaranteed participant benefit is much lower for multiemployer participants, and PBGC does not provide financial assistance until the multiemployer plan is insolvent. Following two decades of relative financial stability, many multiemployer plans appear to have suffered recent funding losses, while long-term declines in participation and plan formation continue. At the close of the 1990s, the majority of multiemployer plans reported assets exceeding 90 percent of total liabilities. Since then, stock market declines, coupled with low interest rates and poor economic conditions, have reduced assets and increased liabilities for many plans. In its 2003 annual report, PBGC estimated that underfunded multiemployer plans now face an aggregate unfunded liability of $100 billion, up from $21 billion in 2000. PBGC also reported an accumulated net deficit of $261 million for its multiemployer program in 2003, the first since 1981. Meanwhile, since 1980, there has been a steady decline in the number of plans, from over 2,200 plans to fewer than 1,700, and a 1.4 million decline in the number of active workers in plans. The long-term prospects of the multiemployer system face a number of challenges. Some are inherent in the multiemployer design and regulatory framework, such as the greater perceived financial risk and reduced flexibility for employers, compared with other plan types. The long-term decline of collective bargaining also results in fewer participants and employers available to expand or create new plans. Other factors that pose challenges include the growing trend among employers to choose defined contribution plans; the increasing life expectancy of workers, which raises the cost of defined benefit plans; and continuing increases in employer health insurance costs, which compete with pensions for employer funding.
Securing transportation systems and facilities is complicated, requiring that security measures to address potential threats be balanced against the need to facilitate the flow of people and goods. These systems and facilities are critical components of the U.S. economy and are necessary for supplying goods throughout the country and supporting international commerce. U.S. transportation systems and facilities move over 30 million tons of freight and provide approximately 1.1 billion passenger trips each day. The Ports of Los Angeles and Long Beach estimate that they alone handle about 43 percent of the nation's oceangoing cargo. The importance of these systems and facilities also makes them attractive targets to terrorists. These systems and facilities are vulnerable and difficult to secure given their size, easy accessibility, large number of potential targets, and proximity to urban areas. A terrorist attack at these systems and facilities could cause a tremendous loss of life and disruption to our society. An attack would also be costly. According to testimony by a Port of Los Angeles official, a 2002 labor dispute led to a 10-day shutdown of West Coast port operations, costing the nation's economy an estimated $1.5 billion per day. A terrorist attack on a port facility could have a similar or greater impact. One potential security threat stems from those individuals who work in secure areas of the nation's transportation system, including seaports, airports, railroad terminals, mass transit stations, and other transportation facilities. It is estimated that about 6 million workers, including longshoremen, mechanics, aviation and railroad employees, truck drivers, and others, access secure areas of the nation's estimated 4,000 transportation facilities each day while performing their jobs. Some of these workers, such as truck drivers, regularly access secure areas at multiple transportation facilities. Ensuring that only workers who do not pose a terrorism security risk are allowed unescorted access to secure areas is important in helping to prevent an attack. According to TSA and transportation industry stakeholders, many individuals who work in secure areas are currently not required to undergo a background check or a stringent identification process in order to access secure areas. In addition, without a standard credential that is recognized across modes of transportation and facilities, many workers must obtain multiple credentials to access each transportation facility they enter. In the aftermath of the September 11, 2001, terrorist attacks, the Aviation and Transportation Security Act (ATSA) was enacted in November 2001. Among other things, ATSA required TSA to work with airport operators to strengthen access control points in secure areas and consider using biometric access control systems to verify the identity of individuals who seek to enter a secure airport area. In response to ATSA, TSA established the TWIC program in December 2001 to mitigate the threat of terrorists and other unauthorized persons gaining access to secure areas of the entire transportation network by creating a common identification credential that could be used by workers in all modes of transportation. In November 2002, the Maritime Transportation Security Act of 2002 (MTSA) was enacted and required the Secretary of Homeland Security to issue a maritime worker identification card that uses biometrics, such as fingerprints, to control access to secure areas of seaports and vessels, among other things.
The responsibility for securing the nation's transportation system and facilities is shared by federal, state, and local governments, as well as the private sector. At the federal government level, TSA, the agency responsible for the security of all modes of transportation, has taken the lead in developing the TWIC program, while the Coast Guard is responsible for developing maritime security regulations and ensuring that maritime facilities and vessels are in compliance with these regulations. As a result, TSA and the Coast Guard are working together to implement TWIC in the maritime sector. Most seaports, airports, mass transit stations, and other transportation systems and facilities in the United States are owned and operated by state and local government authorities and private companies. As a result, certain components of the TWIC program, such as installing card readers, will be the responsibility of these state and local governments and private industry stakeholders. TSA--through a private contractor--tested the TWIC program from August 2004 to June 2005 at 28 transportation facilities around the nation, including 22 port facilities, 2 airports, 1 rail facility, 1 maritime exchange, 1 truck stop, and a U.S. Postal Service facility. In August 2005, TSA and the testing contractor completed a report summarizing the results of the TWIC testing. TSA also hired an independent contractor to assess the performance of the TWIC testing contractor. Specifically, the independent contractor conducted its assessment from March 2005 to January 2006 and evaluated whether the testing contractor met the requirements of the testing contract. The independent contractor issued its final report on January 25, 2006. Since its creation, the TWIC program has received about $79 million in funding for program development. (See table 1.) The TWIC program is designed to enhance security using several key components (see fig. 1). These include: Enrollment: Transportation workers will be enrolled in the TWIC program at enrollment centers by providing personal information, such as a social security number and address, and by being photographed and fingerprinted. For those workers who are unable to provide quality fingerprints, TSA is to collect an alternate authentication identifier. Background checks: TSA will conduct background checks on each worker to ensure that individuals do not pose a security threat. These will include several components. First, TSA will conduct a security threat assessment that may include, for example, checks against terrorism databases or terrorism watch lists, such as TSA's No Fly and Selectee lists. Second, a Federal Bureau of Investigation criminal history records check will be conducted to determine whether the worker has any disqualifying criminal offenses. Third, workers' immigration status and mental capacity will be checked. Workers will have the opportunity to appeal the results of the threat assessment or request a waiver in certain limited circumstances. TWIC card production: After TSA determines that a worker has passed the background check, the worker's information is provided to a federal card production facility where the TWIC card will be personalized for the worker, manufactured, and then sent back to the enrollment center. Card issuance: Transportation workers will be informed when their cards are ready to be picked up at enrollment centers.
Once a card has been issued, workers will present their TWIC cards to security officials when they seek to enter a secure area and in the future will enter secure areas through biometric card readers. Since we issued our report on the TWIC program in September 2006, TSA has made progress toward implementing the TWIC program and addressing several of the problems that we previously identified regarding contract oversight and planning and coordination with stakeholders. In January 2007, TSA and the Coast Guard issued a TWIC rule that sets forth the requirements for enrolling workers and issuing TWIC cards to workers in the maritime sector and awarded a $70 million contract for enrolling workers in the TWIC program. TSA is also taking steps designed to address requirements in the SAFE Port Act regarding the TWIC program, such as establishing a rollout schedule for enrolling workers and issuing TWIC cards at ports and conducting a pilot program to test TWIC access control technologies. TSA has also taken steps to strengthen TWIC contract planning and oversight and improve communication and coordination with its maritime stakeholders. Since September 2006, TSA reported that it has added staff with program and contract management expertise to help oversee the TWIC enrollment contract and taken additional steps to help ensure that contract requirements are met. In addition, TSA has also focused on improving communication and coordination with maritime stakeholders, such as developing plans for conducting public outreach and education efforts. On January 25, 2007, TSA and the Coast Guard issued a rule that sets forth the regulatory requirements for enrolling workers and issuing TWIC cards to workers in the maritime sector. Specifically, the TWIC rule provides that workers and merchant mariners requiring unescorted access to secure areas of maritime facilities and vessels must enroll in the TWIC program, undergo a background check, and obtain a TWIC card before such access is granted. In addition, the rule requires owners and operators of maritime facilities and vessels to change their existing access control procedures to ensure that merchant mariners and any other individual seeking unescorted access to a secure area of a facility or vessel has a TWIC. Table 2 describes the specific requirements in the TWIC rule. The TWIC rule does not include the requirements for owners and operators of maritime facilities and vessels to purchase and install TWIC access control technologies, such as biometric TWIC card readers. As a result, the TWIC card will initially serve as a visual identity badge until access control technologies are required to verify the credentials when a worker enters a secure area. According to TSA, during the program's initial implementation, workers will present their TWIC cards to authorized security personnel, who will compare the cardholder to his or her photo and inspect the card for signs of tampering. In addition, the Coast Guard will verify TWIC cards when conducting vessel and facility inspections and during spot checks using hand-held biometric card readers to ensure that credentials are valid. According to TSA, the requirements for TWIC access control technologies will be set forth in a second proposed rule to be issued in 2008, at which time TSA will solicit public comments and hold public meetings. 
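Taken together, the rule describes a sequential workflow: enrollment, background check, card production, card issuance, and then a check at the access point that is initially a visual inspection and later a biometric card-reader match. The sketch below is a simplified illustration of that flow; the class, function, and status names are invented for illustration and do not represent TSA's or the enrollment contractor's actual systems.

```python
# Simplified, illustrative sketch of the TWIC workflow described above:
# enroll -> background check -> card production/issuance, followed by an access
# check that is visual during the initial rollout and biometric once card readers
# are required. All names are invented; this is not TSA's actual system design.

from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    biometrics_on_file: bool = False
    passed_background_check: bool = False
    card_issued: bool = False

def enroll(worker: Worker) -> None:
    """Collect personal information, a photograph, and fingerprints at an enrollment center."""
    worker.biometrics_on_file = True

def background_check(worker: Worker, on_watch_list: bool, disqualifying_offense: bool) -> None:
    """Threat assessment, criminal history, and immigration checks (greatly simplified)."""
    worker.passed_background_check = not (on_watch_list or disqualifying_offense)

def issue_card(worker: Worker) -> None:
    """Card is produced at a federal facility and picked up at the enrollment center."""
    if worker.passed_background_check:
        worker.card_issued = True

def may_enter_secure_area(worker: Worker, readers_installed: bool, biometric_match: bool) -> bool:
    """Initially a visual check of the card; later, a biometric card-reader match."""
    if not worker.card_issued:
        return False
    return biometric_match if readers_installed else True  # visual check assumed to pass

w = Worker("example longshoreman")
enroll(w)
background_check(w, on_watch_list=False, disqualifying_offense=False)
issue_card(w)
print(may_enter_secure_area(w, readers_installed=False, biometric_match=False))  # True (visual check)
print(may_enter_secure_area(w, readers_installed=True, biometric_match=True))    # True (biometric check)
```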
As part of the TWIC rule, TSA is also taking steps designed to address various requirements of the SAFE Port Act, including that it implement TWIC at the 10 highest risk ports by July 1, 2007. According to TSA, the agency has categorized ports based on risk and has developed a schedule for implementing TWIC at these ports to address the deadlines in the SAFE Port Act. In addition, TSA is currently planning to conduct a pilot program at five maritime locations to test TWIC access control technologies, such as biometric card readers, in the maritime environment. According to TSA, the agency is partnering with the ports of Los Angeles and Long Beach to test TWIC access control technologies and plans to select additional ports to participate in the pilot in the near future. TSA and Port of Los Angeles officials told us that ports participating in the pilot will be responsible for paying for the costs of the pilot and plan to use federal port security grant funds for this purpose. According to TSA, the agency plans to begin the pilot in conjunction with the issuance of TWIC cards so the access control technologies can be tested with the cards that are issued to workers. Once the pilot has been completed, TSA plans to use the results in developing its proposed rule on TWIC access control technologies. Following the issuance of the TWIC rule in January 2007, TSA awarded a $70 million contract to a private company to enroll the estimated 770,000 workers required to obtain a TWIC card. According to TSA officials, the contract costs include $14 million for the operations and maintenance of the TWIC identity management system that contains information on workers enrolled in the TWIC program, $53 million for the cost of enrolling workers, and $3 million designated as an award to the enrollment contractor in the event of excellent performance. TSA officials stated that they are currently transitioning the TWIC systems to the enrollment contractor and testing these systems to ensure that they will function effectively during nationwide implementation. TSA originally planned to begin enrolling workers at the first port by March 26, 2007--the effective date of the TWIC rule. However, according to TSA officials, initial enrollments have been delayed. While TSA officials did not provide specific reasons for the delay, officials from the port where enrollments were to begin told us that software problems were the cause of the delay and could postpone the first enrollments until May 2007. In addition, TSA and the Coast Guard have not set a date by which workers will be required to possess a TWIC card to access secure areas of maritime facilities and vessels. According to the TWIC rule, once the agency determines at which ports TWIC will be implemented and by what date, this schedule will be posted to the Federal Register. Since we issued our September 2006 report, TSA has taken several steps designed to strengthen contract planning and oversight. We previously reported that TSA experienced problems in planning for and overseeing the contract to test the TWIC program, which contributed to a doubling of TWIC testing contract costs and a failure to test all key components of the TWIC program. We recommended that TSA strengthen contract planning and oversight before awarding a contract to implement the TWIC program. TSA acknowledged these problems and has taken steps to address our recommendations. Specifically, TSA has taken the following steps designed to strengthen contract planning and oversight.
Added staff with expertise in technology, acquisitions, and contract and program management to the TWIC program office. Established a TWIC program control office to help oversee contract deliverables and performance. Established monthly performance management reviews and periodic site visits to TWIC enrollment centers to verify performance data reported by the contractor. Required the enrollment contractor to survey customer satisfaction as part of contract performance. In addition to these steps, TSA has established a TWIC quality assurance surveillance plan that is designed to allow TSA to track the enrollment contractor's performance in comparison to acceptable quality levels. This plan is designed to provide financial incentives for exceeding these quality levels and disincentives, or penalties, if they are not met. According to the plan, the contractor's performance will be measured against established milestones and performance metrics that the contractor must meet for customer satisfaction, enrollment time, number of failures to enroll, and TWIC help desk response times, among others. TSA plans to monitor the contractor's performance through monthly performance reviews and by verifying information on performance metrics provided by the contractor. In addition to contract planning and oversight, TSA has also taken steps designed to address problems that were identified in our September 2006 report regarding communication and coordination with maritime stakeholders. We previously reported that stakeholders at all 15 TWIC testing locations that we visited cited poor communication and coordination by TSA during testing of the TWIC program. For example, TSA never provided the final results or report on TWIC testing to stakeholders that participated in the test, and some stakeholders stated that communication from TSA would stop for months at a time during testing. We recommended that TSA closely coordinate with maritime industry stakeholders and establish a communication and coordination plan to capture and address the concerns of stakeholders during implementation. TSA acknowledged that the agency could have better communicated with stakeholders at TWIC testing locations and has reported taking several steps to strengthen communication and coordination since September 2006. For example, TSA officials told us that the agency developed a TWIC communication strategy and plan that describes how the agency will communicate with the owners and operators of maritime facilities and vessels, TWIC applicants, unions, industry associations, Coast Guard Captains of the Port, and other interested parties. In addition, TSA required that the enrollment contractor establish a plan for communicating with stakeholders. TSA, the Coast Guard, and the enrollment contractor have taken additional steps designed to ensure close coordination and communication with the maritime industry. These steps include: Posting frequently asked questions on the TSA and Coast Guard Web sites. Participating in maritime stakeholder conferences and briefings. Working with Coast Guard Captains of the Ports and the National Maritime Security Advisory Committee to communicate with local stakeholders. Conducting outreach with maritime facility operators and port authorities, including informational bulletins and fliers. Creating a TWIC stakeholder communication committee chaired by TSA, the Coast Guard, and the enrollment contractor, with members from 15 maritime industry stakeholder groups.
According to TSA, this committee will meet twice per month during TWIC implementation. Several stakeholders we recently spoke to confirmed that TSA and its enrollment contractor have placed a greater emphasis on communicating and coordinating with stakeholders during implementation and on correcting past problems. For example, an official from the port where TWIC will first be implemented stated that, thus far, communication, coordination, and outreach by TSA and its enrollment contractor have been excellent, and far better than during TWIC testing. In addition, the TWIC enrollment contractor has hired a separate subcontractor to conduct a public outreach campaign to inform and educate the maritime industry and the individuals who will be required to obtain a TWIC card about the program. For example, the port official stated that the subcontractor is developing a list of trucking companies that deliver to the port so that information on the TWIC enrollment requirements can be mailed to truck drivers. TSA and maritime industry stakeholders need to address several challenges to ensure that the TWIC program can be implemented successfully. As we reported in September 2006, TSA and its enrollment contractor face the challenge of transitioning from limited testing of the TWIC program to successful implementation of the program on a much larger scale covering 770,000 workers at about 3,500 maritime facilities and 5,300 vessels. Maritime stakeholders we spoke to identified additional challenges to implementing the TWIC program that warrant attention by TSA and its enrollment contractor, including educating workers on the new TWIC requirements, ensuring that enrollments begin in a timely manner, and processing numerous background checks, appeals, and waiver applications. Furthermore, TSA and industry stakeholders also face difficult challenges in ensuring that TWIC access control technologies will work effectively in the maritime environment, be compatible with the TWIC cards that will be issued soon, and balance security with the flow of maritime commerce. In September 2006, we reported that TSA faced the challenge of enrolling and issuing TWIC cards to a significantly larger population of workers in a timely manner than was done during testing of the TWIC program. In testing the TWIC program, TSA enrolled and issued TWIC cards to only about 1,700 workers at 19 facilities, well short of its goal of 75,000. According to TSA and the testing contractor, the lack of volunteers to enroll in TWIC program testing and technical difficulties in enrolling workers, such as difficulty in obtaining workers' fingerprints to conduct background checks, led to fewer enrollments than expected. TSA reports that it used the testing experience to make improvements to the enrollment and card issuance process and has taken steps to address the challenges that we previously identified. For example, TSA officials stated that the agency will use a faster and easier method of collecting fingerprints than was used during testing and will enroll workers individually during implementation, as opposed to enrolling them in large groups, as was done during testing. In addition, the TWIC enrollment contract Statement of Work requires the contractor to develop an enrollment test and evaluation program to ensure that enrollment systems function as required under the contract. Such a testing program will be valuable for ensuring that these systems work effectively prior to full-scale implementation.
We also reported that TSA faced the challenge of ensuring that workers are not providing false information and counterfeit identification documents when they enroll in the TWIC program. According to TSA, the TWIC enrollment process to be used during implementation will use document scanning and verification software to help determine if identification documents are fraudulent, and personnel responsible for enrolling workers will be trained to identify fraudulent documents. Since we issued our report in September 2006, we have also identified additional challenges to implementing the TWIC program that warrant attention by TSA and its enrollment contractor. We recently spoke with some maritime stakeholders that participated in TWIC testing and that will be involved in the initial implementation of the program to discuss their views on the challenges of enrolling and issuing TWIC cards to workers. These stakeholders expressed concerns regarding the following issues: Educating workers: TSA and its enrollment contractor face a challenge in identifying all workers that are required to obtain a TWIC card, educating them about how to enroll and receive a TWIC card, and ensuring that they enroll and receive a TWIC card by the deadlines to be established by TSA and the Coast Guard. For example, while longshoremen who work at a port every day may be aware of the new TWIC requirements, truck drivers who deliver to the port may be located in different states or countries and may not be aware of the requirements. Timely enrollments: One stakeholder expressed concern about the challenges the enrollment contractor faces in enrolling workers at his port. For example, at this port, the enrollment contractor has not yet begun to lease space to install enrollment centers--which at this port could be a difficult and time-consuming task due to the shortage of space. Stakeholders we spoke to also suggested that until TSA establishes a deadline for when TWIC cards will be required at ports, workers will likely procrastinate in enrolling, which could make it difficult for the contractor to enroll large populations of workers in a timely manner. Background checks: Some maritime organizations are concerned that many of their workers will be disqualified from receiving a TWIC card by the background check. These stakeholders emphasized the importance of TSA establishing a timely appeals and waiver process for the potentially large population of workers that do not pass the check. According to TSA, the agency has already established processes for conducting background checks, appeals, and waivers for other background checks of transportation workers. In addition, TSA officials stated that the agency has established agreements with the Coast Guard to use its administrative law judges for appeal and waiver cases and plans to use these processes for the TWIC background check. In our September 2006 report, we noted that TSA and maritime industry stakeholders faced significant challenges in ensuring that TWIC access control technologies, such as biometric card readers, worked effectively in the maritime sector. Few facilities that participated in TWIC testing used the biometric card readers that will be required to read the TWIC cards in the future. As a result, TSA obtained limited information on the operational effectiveness of biometric card readers, particularly when individuals use these readers outdoors in the harsh maritime environment, where they can be affected by dirt, salt, wind, and rain.
In addition, TSA did not test the use of biometric card readers on vessels, although they will be required on vessels in the future. Also, industry stakeholders we spoke to were concerned about the costs of implementing and operating TWIC access control systems; the difficulty of linking card readers to their local access control systems and connecting to TSA's national TWIC database to obtain updated security information on workers; and how biometric card readers would be implemented and used on vessels, including how these vessels would communicate remotely with TSA's national TWIC database. Because of the comments on access control technology challenges that TSA received from maritime industry stakeholders on the proposed TWIC rule, TSA decided to exclude all access control requirements from the TWIC rule issued in January 2007. Instead, TSA plans to issue a second proposed rule pertaining to access control requirements in 2008, which will allow more time for maritime stakeholders to comment on the technology requirements and for TSA to address the challenges that we and stakeholders identified. Our September 2006 report also highlighted the challenges that TSA and industry stakeholders face in balancing the security benefits of the TWIC program with the impact the program could have on maritime commerce. If implemented effectively, the security benefits of the TWIC program in preventing a terrorist attack could save lives and avoid a costly disruption in maritime commerce. Alternatively, if key components of the TWIC program, such as biometric card readers, do not work effectively, they could slow the daily flow of maritime commerce. For example, if workers or truck drivers have problems with their fingerprint verifications on biometric card readers, they could create long queues, delaying other workers or trucks waiting in line to enter secure areas. Such delays could be very costly in terms of time and money to maritime facilities. Some stakeholders we spoke to also expressed concern with applying TWIC access control requirements to small facilities and vessels. For example, smaller vessels could have crews of fewer than 10 persons, and checking TWIC cards each time a person enters a secure area may not be necessary. TSA acknowledged the potential impact that the TWIC program could have on the flow of maritime commerce and plans to obtain additional public comments on this issue from industry stakeholders and develop solutions to these challenges in the second rulemaking on access control technologies. In our September 2006 report, we recommended that TSA conduct additional testing to ensure that TWIC access control technologies work effectively and that the TWIC program balances the added security of the program with the impact that it could have on the flow of maritime commerce. As required by the SAFE Port Act, TSA plans to conduct a pilot program to test TWIC access control technologies in the maritime environment. According to TSA, the pilot will test the performance of biometric card readers at various maritime facilities and on vessels, as well as the impact that these access control systems have on facility and vessel business operations. TSA plans to use the results of this pilot to develop the requirements and procedures for implementing and using TWIC access control technologies in the second rulemaking. Preventing unauthorized persons from entering secure areas of the nation's ports and other transportation facilities is critical to preventing a terrorist attack.
The TWIC program was initiated in December 2001 to mitigate the threat of terrorists accessing secure areas. Since our September 2006 report, TSA has made progress toward implementing the program, including issuing a TWIC rule, taking steps to implement requirements of the SAFE Port Act, and awarding a contract to enroll workers in the program. While TSA plans to begin enrolling workers and issuing TWIC cards in the next few months, it is important that the agency establish clear and reasonable time frames for implementing TWIC. TSA officials told us that the agency has taken steps to improve contract oversight and communication and coordination with its maritime TWIC stakeholders since September 2006. While the steps that TSA reports taking should help to address the contract planning and oversight problems that we have previously identified and the recommendations we have made, the effectiveness of these steps will not be clear until implementation of the TWIC program begins. In addition, significant challenges remain in enrolling about 770,000 persons at about 3,500 facilities in the TWIC program. As a result, it is important that TSA and the enrollment contractor make communication and coordination a priority to ensure that all individuals and organizations affected by the TWIC program are aware of their responsibilities. Further, TSA and industry stakeholders need to address challenges regarding enrollment and TWIC access control technologies to ensure that the program is implemented effectively. It is important that TSA and the enrollment contractor develop a strategy to ensure that any potential problems that these challenges could cause are addressed during TWIC enrollment and card issuance. Finally, it will be critical that TSA ensure that the TWIC access control technology pilot program fully tests all aspects of the TWIC program at full scale in the maritime environment and that the results are used to ensure the successful implementation of these technologies in the future. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information on this testimony, please contact Norman J. Rabkin at (202) 512-8777 or at [email protected]. Individuals making key contributions to this testimony include John Hansen, Chris Currie, Nicholas Larson, and Geoff Hamilton. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Transportation Security Administration (TSA) is developing the Transportation Worker Identification Credential (TWIC) to ensure that only workers who do not pose a terrorist threat are allowed to enter secure areas of the nation's transportation facilities. This testimony is based primarily on GAO's December 2004 and September 2006 reports on the TWIC program and interviews with TSA and port officials conducted in March and April 2007 to obtain updates on the TWIC program. Specifically, this testimony addresses (1) the progress TSA has made since September 2006 in implementing the TWIC program; and (2) some of the remaining challenges that TSA and the maritime industry must overcome to ensure the successful implementation of the TWIC program. Since we issued our report on the TWIC program in September 2006, TSA has made progress toward implementing the TWIC program and addressing several of the problems that we previously identified regarding contract oversight and planning and coordination with stakeholders. Specifically, TSA has issued a TWIC rule that sets forth the requirements for enrolling workers and issuing TWIC cards to workers in the maritime sector; awarded a $70 million contract for enrolling workers in the TWIC program; developed a schedule for enrolling workers and issuing TWIC cards at ports and conducting a pilot program to test TWIC access control technologies; added additional staff with program and contract management expertise to help oversee the TWIC enrollment contract; and developed plans to improve communication and coordination with maritime stakeholders, including plans for conducting public outreach and education efforts. TSA and maritime industry stakeholders still face several challenges to ensuring that the TWIC program can be implemented successfully: (1) TSA and its enrollment contractor need to transition from limited testing of the TWIC program to successful implementation of the program on a much larger scale covering 770,000 workers at about 3,500 maritime facilities and 5,300 vessels. (2) TSA and its enrollment contractor will need to educate workers on the new TWIC requirements, ensure that enrollments begin in a timely manner, and process numerous background checks, appeals, and waivers. (3) TSA and industry stakeholders will need to ensure that TWIC access control technologies will work effectively in the maritime environment, be compatible with TWIC cards that will be issued, and balance security with the flow of maritime commerce. As TSA works to implement the TWIC program and begin enrolling workers, it will be important that the agency establish clear and reasonable time frames and ensure that all aspects of the TWIC program, including the TWIC access control technologies, are fully tested in the maritime environment.
Part of the Mariana Islands Archipelago, the CNMI is a chain of 14 islands in the western Pacific Ocean--just north of Guam and about 3,200 miles west of Hawaii. The CNMI has a total population of 53,890, according to preliminary results of the CNMI's 2016 Household, Income, and Expenditures Survey. Almost 90 percent of the population (48,200) resided on the island of Saipan, with an additional 6 percent (3,056) on the island of Tinian and 5 percent (2,635) on the island of Rota. Under the Covenant that established the CNMI as a commonwealth in political union with the United States, the United States is responsible for matters relating to foreign affairs and defense affecting the CNMI. The Covenant initially made many federal laws applicable to the CNMI, including laws that provide federal services and financial assistance programs. However, the Covenant preserved the CNMI's exemption from certain federal laws that had previously been inapplicable to the Trust Territory of the Pacific Islands, including certain federal minimum wage provisions and immigration laws, with certain limited exceptions. Under the terms of the Covenant, the federal government has the right to apply federal law in these exempted areas without the consent of the CNMI government. Section 902 of the Covenant provides that the U.S. and CNMI governments will designate special representatives to meet and consider in good faith issues that affect their relationship and to make a report and recommendations. The Department of Homeland Security (DHS), through U.S. Citizenship and Immigration Services (USCIS), grants immigration benefits, that is, the ability to live, and in some cases work, in the CNMI permanently or temporarily. The Department of the Interior's (DOI) Office of Insular Affairs coordinates federal policies and provides technical and financial assistance to the CNMI. The Covenant requires DOI to consult regularly with the CNMI on all matters affecting the relationship between the U.S. government and the islands. In May 2016, President Obama designated the Assistant Secretary for Insular Affairs as the Special Representative for the United States for the 902 Consultations, a process initiated at the request of the Governor of the CNMI to discuss and make recommendations to Congress on immigration and labor matters affecting the growth potential of the CNMI economy, among other topics. The 902 Consultations resulted in a report to the President in January 2017, which we refer to as the 902 Report. The Department of Labor (DOL) requires employers to fully test the labor market for U.S. workers to ensure that U.S. workers are not adversely affected by the hiring of nonimmigrant and immigrant workers, except where such testing is not required by law. DOL also provides grants to the CNMI government supporting youth, adult, and dislocated worker programs. From 1999 through 2015, DOL provided such grants under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act of 2014 (WIOA). As the figure shows, from the lowest point in 2013, the number of employed workers increased by approximately 8 percent by 2015 (from 23,344 to 25,307). However, the number employed in 2015 (25,307) was still approximately 31 percent less than the number employed in 2007 (36,524). The number of foreign workers fell from a peak of almost 38,000 in 2002 (roughly 75 percent of employed workers) to under 13,000 in 2015. In contrast, since 2002, the number of domestic workers has fluctuated year to year, ranging from about 10,500 to about 13,500, but increased by 17 percent from 2013 to 2015.
In 2007, the minimum wage provisions of the Fair Labor Standards Act of 1938 were applied to the CNMI, requiring the minimum wage in the CNMI to rise incrementally to the federal level in a series of scheduled increases. Under current law, the next minimum wage increase will occur on September 30, 2017, and the CNMI will reach the current U.S. minimum wage on September 30, 2018 (see table 1). Based on our preliminary analysis, we estimate that approximately 62 percent (15,818 of 25,657) of the CNMI's wage workers in 2014, assuming they maintained employment, would have been directly affected by the federally mandated 2016 wage increase, which raised the CNMI's minimum wage from $6.05 to $6.55 per hour. Because 72 percent of foreign workers made $6.55 per hour or less in 2014, compared with only 41 percent of domestic workers, foreign workers were more likely to have been directly affected by the 2016 wage increase. The Consolidated Natural Resources Act of 2008 amended the U.S.-CNMI Covenant to apply federal immigration law to the CNMI, following a transition period. Among other things, the act includes several provisions affecting foreign workers during the transition period. Under the act, eligible foreign workers may be granted CNMI-Only transitional worker (CW-1) status that allows them to work in the CNMI. Dependents of CW-1 nonimmigrants (spouses and minor children) are eligible for dependent of a CNMI-Only transitional worker (CW-2) status, which derives from and depends on the CW-1 worker's status. In accordance with the Consolidated Natural Resources Act, DHS, through USCIS, has annually reduced the number of CW-1 permits and is required to do so until the number reaches zero by the end of a transition period. Since 2011, DHS has annually determined the numerical limitation, terms, and conditions of the CW-1 permits (see table 2). The act was amended in December 2014 to extend the transition period until December 31, 2019, and eliminate the Secretary of Labor's authority to provide for future extensions of the CW program. Since 1990, the CNMI's tourism market has experienced considerable fluctuation, as shown by the total annual number of visitor arrivals (see fig. 2). Total visitor arrivals to the CNMI dropped from a peak of 726,690 in fiscal year 1997 to a low of 338,106 in 2011, a 53 percent decline. Since 2011, however, visitor arrivals have increased by 48 percent, reaching 501,489 in fiscal year 2016. While Korean visitors enter the CNMI under the U.S. visa waiver program, Chinese visitors are not eligible and are permitted to be temporarily present in the CNMI under DHS's discretionary parole authority, according to DHS officials. DHS exercises parole authority to allow, on a case-by-case basis, eligible nationals of China to enter the CNMI temporarily as tourists when there is significant public benefit, according to DHS data. From fiscal year 2011 to 2016, the percentage of travelers that arrived at the Saipan airport and were granted discretionary parole increased from about 20 percent to about 50 percent of the total travelers, according to our analysis of U.S. Customs and Border Protection (CBP) data.
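The minimum wage estimate earlier in this section--about 62 percent of wage workers directly affected, with 72 percent of foreign workers but only 41 percent of domestic workers at or below $6.55 per hour--is essentially a share calculation over individual wage records. The short Python sketch below illustrates that kind of calculation; the wage records and figures in it are hypothetical stand-ins, not GAO's data or code.

```python
# Illustrative only: hypothetical wage records, not GAO's data.
# Each record is (worker_type, hourly_wage) for one wage worker in 2014.
records = [
    ("foreign", 6.05), ("foreign", 6.25), ("foreign", 7.10),
    ("domestic", 6.05), ("domestic", 9.50), ("domestic", 12.00),
]

NEW_MINIMUM = 6.55  # the 2016 CNMI minimum wage

def share_directly_affected(rows):
    """Share of workers earning at or below the new minimum wage."""
    affected = sum(1 for _, wage in rows if wage <= NEW_MINIMUM)
    return affected / len(rows)

overall = share_directly_affected(records)
foreign = share_directly_affected([r for r in records if r[0] == "foreign"])
domestic = share_directly_affected([r for r in records if r[0] == "domestic"])

print(f"All workers directly affected: {overall:.0%}")
print(f"Foreign workers directly affected: {foreign:.0%}")
print(f"Domestic workers directly affected: {domestic:.0%}")
```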
If all CW-1 workers, or 45 percent of the total workers in 2015, were removed from the CNMI's labor market, our preliminary economic analysis projects a 26 to 62 percent reduction in the CNMI's 2015 GDP, depending on the assumptions made. To estimate the possible effect of a reduction in the number of workers with CW-1 permits in the CNMI to zero--through the scheduled end of the CW program in 2019--we employed an economic method that enabled us to simulate the effect of a reduction under a number of different assumptions. For example, our simulation indicates a 50 percent likelihood that the CNMI's 2015 GDP without CW-1 workers would have ranged from $462 million to $583 million, which is 37 to 50 percent lower than the actual value, and a 25 percent likelihood that it would have ranged from $353 million to $462 million, which is 50 to 62 percent lower than the actual value (see fig. 3). Across the full range of probable outcomes, the elimination of the CW program would result in a 26 to 62 percent decline in the CNMI's 2015 GDP, a relatively large negative effect on the economy. The CNMI economy currently is experiencing growing demand for workers, particularly among occupations in construction and hospitality. Since fiscal year 2013, demand for CW-1 permits has doubled, and in fiscal year 2016, demand exceeded the numerical limit (or cap) on approved CW-1 permits set by DHS. Approved CW-1 permits grew from 6,325 in fiscal year 2013 to 13,299 in fiscal year 2016. In 2016, when the cap was set at 12,999, DHS received enough petitions by May 6, 2016, to approve 13,299 CW-1 permits, reaching the cap 5 months prior to the end of the fiscal year. On October 14, 2016, 2 weeks into fiscal year 2017, DHS announced that it had received enough petitions to reach the CW-1 cap and would not accept requests for new fiscal year 2017 permits during the remaining 11 months. In interviews, some employers reported being surprised to learn that the cap had been reached when they sought renewals for existing CW-1 workers. See table 3 for the numerical limit of CW-1 permits and number of permits approved by fiscal year.
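The GDP projection described at the beginning of this section rests on simulating outcomes under a range of assumptions. The testimony does not detail the method, but one common way to produce likelihood ranges of this kind is a Monte Carlo exercise: repeatedly draw uncertain parameters, recompute output, and summarize the resulting distribution. The Python sketch below illustrates that general pattern under an assumed, simplified relationship between labor and output; the baseline GDP value, the elasticity range, and all other parameters are hypothetical stand-ins rather than GAO's actual model or data.

```python
# Illustrative only: a minimal Monte Carlo sketch of simulating GDP outcomes
# under a range of assumptions. The production relationship, parameter ranges,
# and baseline figure below are hypothetical, not GAO's model or data.
import random

BASELINE_GDP = 900_000_000      # hypothetical 2015 baseline for illustration
CW1_SHARE_OF_WORKERS = 0.45     # CW-1 workers' share of total employment in 2015

def simulate_gdp_without_cw1(trials=10_000, seed=1):
    """Simulate GDP with the CW-1 workforce removed, varying key assumptions."""
    random.seed(seed)
    outcomes = []
    remaining_labor = 1.0 - CW1_SHARE_OF_WORKERS
    for _ in range(trials):
        # Assumption: output responds to labor with an uncertain elasticity,
        # redrawn each trial; the range is chosen only for illustration.
        labor_elasticity = random.uniform(0.5, 1.6)
        outcomes.append(BASELINE_GDP * remaining_labor ** labor_elasticity)
    return sorted(outcomes)

def percentile(sorted_values, p):
    return sorted_values[int(p * (len(sorted_values) - 1))]

results = simulate_gdp_without_cw1()
p25, p75 = percentile(results, 0.25), percentile(results, 0.75)
print(f"Middle 50 percent of simulated outcomes: ${p25:,.0f} to ${p75:,.0f}")
print(f"Implied decline from baseline: {1 - p75 / BASELINE_GDP:.0%} "
      f"to {1 - p25 / BASELINE_GDP:.0%}")
```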
Based on DHS data on approved CW-1 permits by country of birth, occupation, and business from fiscal years 2014 through 2016, the number of permits approved for Chinese nationals increased, the number of permits approved for construction workers increased, and a large number of CW-1 permits were approved for three new businesses. Chinese nationals. In 2016, DHS approved 4,844 CW-1 permits for Chinese workers, increasing from 1,230 in 2015 and 854 in 2014. This represents a change in the source countries of CW-1 workers, with the percentage of workers from the Philippines declining from 65 to 53 percent during this period, while the share from China rose from 9 to 36 percent (see table 4). Construction workers. In 2016, DHS approved 3,443 CW-1 permits for construction workers, increasing from 1,105 in 2015 and 194 in 2014 (see table 5). New businesses. In 2016, DHS approved 3,426 CW-1 permits for three construction businesses, representing 26 percent of all approved permits. Two of these businesses had not previously applied for CW-1 permits. The third business was new in 2015 and was granted only 62 CW-1 permits that year. The CNMI government has awarded a license to develop a casino in Saipan that requires the licensee to complete its initial gaming facility no later than 36 months from the date of the license, or by August 2017. See figure 4 for photos showing the initial gaming facility's development site in Saipan both before and during construction. Employers in the CNMI may also petition for temporary foreign workers under the H-2 visa programs, and during the transition period such workers in the CNMI are exempt from the national numerical restriction for such visas. However, China is not listed as an eligible country for H-2 visas. Amid uncertainty about the future availability of foreign labor, the CNMI government has granted zoning permits to planned projects that will require thousands of additional workers. Twenty-two new development projects, including six new hotels or casinos in Saipan and two new hotels or casinos in Tinian, are planned for construction or renovation by 2019. Beyond the construction demand created by these projects, the CNMI's Bureau of Environmental and Coastal Quality estimates that at least 8,124 employees will be needed to operate the new hotels and casinos. According to data provided by the bureau, most of this planned labor demand is for development on the island of Tinian, where two businesses plan to build casino resorts, with an estimated labor demand of 6,359 workers for operations--more than twice the island's population in 2016. According to the Department of the Treasury, the existing casino and hotel on Tinian closed in 2015 after having been fined $75 million by the department for violations of the Bank Secrecy Act of 1970. One of the two Tinian developments offers overseas immigration services, including assistance with obtaining employment- or investment-based immigration to the United States. We observed a billboard advertisement in Tinian with Chinese writing indicating that by investing in a new development in Tinian, an investor's family members would all get American green cards. This resort development, whose plans estimate a labor force of 859, has undertaken site preparation, while the other larger resort project, whose plans estimate a labor force of 5,500, had not initiated construction as of December 2016. Currently, the CNMI government does not have a planning agency or process to ensure that planned projects are aligned with the CNMI's available labor force, according to CNMI officials. In January 2017, a bill was introduced in the CNMI Senate to establish an Office of Planning and Development within the Office of the Governor. Our preliminary analysis shows that the current number of unemployed domestic workers in the CNMI is insufficient to replace the existing CW-1 workers or to fill all the nonconstruction jobs that planned development projects are expected to create once their business operations commence. Preliminary results of our analysis of available data show that the unemployed domestic workforce, estimated at 2,386 in 2016, will be well below the number of workers needed to replace currently employed CW-1 workers in nonconstruction-related occupations. In addition, our preliminary analysis indicates that the unemployed workforce would fall far short of the demand for additional workers in nonconstruction-related occupations needed to support the ongoing operations of planned development projects--currently estimated at 8,124 workers by 2019.
The number of workers in the CNMI from the freely associated states (the Federated States of Micronesia, the Republic of the Marshall Islands, and the Republic of Palau) has declined. For example, in 2003, 1,909 freely associated state workers were employed in the CNMI, as compared with 677 of these workers in 2015, according to CNMI tax data. Moreover, many citizens from the freely associated states migrate to the United States each year, including to nearby Guam. Guam and Hawaii, the closest U.S. areas to the CNMI, both have higher local minimum wages than the CNMI, currently at $8.25 and $9.25 per hour, respectively, according to DOL. Employers in the CNMI are required to attempt to recruit and hire U.S. workers. The CNMI government has a goal that all employers hire at least 30 percent U.S. workers, and employers are generally required to post all job openings to the CNMI Department of Labor's website. However, the CNMI government can and has granted exemptions to this requirement. From May 8, 2015, to May 27, 2016, seven businesses were granted exemptions, according to data provided by the CNMI Department of Labor. In addition, all employers that apply for CW-1 permits must attest that no qualified U.S. worker is available for the job opening. However, during our ongoing work, some of the CNMI employers with whom we met reported that they face the following challenges, among others, in recruiting and retaining U.S. citizens: unsatisfactory results of job postings, high costs of recruiting, and difficulty in retaining U.S. workers. The federal and CNMI governments support programs seeking to address the CNMI's labor force challenges. These programs include job training funded by employers' CW-1 vocational education fees that DHS transfers to the CNMI government and employment and training assistance funded by DOL. Our preliminary analysis shows that in recent years, on average, DHS transferred about $1.8 million per year in CW-1 vocational education fees and DOL provided about $1.3 million per year to the CNMI for employment and training programs. DHS collects the $150 vocational education fee assessed for each foreign worker on a CW-1 petition and typically transfers the fees to the CNMI government each month. Results of our ongoing work indicate that to support vocational education curricula and program development in fiscal years 2012 through 2016, DHS transferred to the CNMI Treasury about $9.1 million in CW-1 fees. In fiscal years 2012 through 2016, the CNMI government allocated about $5.8 million of the $9.1 million in CW-1 vocational education fees to three educational institutions (see fig. 5). At present, the CW-1 fees support job training programs at Northern Marianas College and Northern Marianas Trades Institute and in recent years also funded job training provided by CNMI's Public School System. All three institutions reported using a majority of the CW-1 fees to pay the salaries and benefits of faculty and staff members involved in job training programs. Northern Marianas Trades Institute. The Northern Marianas Trades Institute--established in 2008--received $1.7 million in CW-1 funding. The institute specializes in training youths and adults in construction, hospitality, and culinary trades. The institute's senior officers told us that in fiscal year 2016, 300 students were enrolled in the institute's fall, spring, and summer sessions, and as of November 2016, 132 of these students had found employment after completing their training. CNMI's Public School System.
In fiscal years 2012 through 2015, the Public School System--which consists of 20 public schools, including 5 high schools that graduated 662 students in the 2014-2015 school year--received $2 million in CW-1 funds for its cooperative education program designed to prepare high school students for the CNMI's job market. By the end of the 2014-2015 school year, 452 students were enrolled in the cooperative education program, according to the federal programs officer for the Public School System. As part of our ongoing work, we facilitated group discussions with current and former students of the CW-1-funded programs at each of the three institutions. Several participants told us that the training had helped them find jobs. Participants also identified specific benefits of the training they received, such as increased familiarity with occupations they intended to enter, learning communication skills tailored for specific work environments, and maintaining and improving skills in a chosen career path. However, the employers we interviewed in the CNMI told us that the benefits of the job training programs supported by the CW-1 vocational education fees were limited to Saipan and that programs run by Northern Marianas College and Northern Marianas Trades Institute were unavailable on Tinian and Rota. Preliminary results of our ongoing work show that from July 2012 through June 2016, DOL provided about $5.3 million in grants under the Workforce Investment Act of 1998 (WIA) and the Workforce Innovation and Opportunity Act of 2014 (WIOA) to the CNMI Department of Labor's Workforce Investment Agency for job search assistance, career counseling, and training. That agency carried out WIA programs in the CNMI and now administers programs under WIOA. DOL's Employment and Training Administration conducts federal oversight of these programs. Providers of DOL-funded worker training include Northern Marianas College, Northern Marianas Trades Institute, CNMI government agencies, and private businesses. Examples of training provided by these entities include courses toward certification as a phlebotomy technician, a nursing assistant, and a medical billing and coding specialist. The CNMI developed a state plan outlining a 4-year workforce development strategy under WIOA and submitted its first plan by April 1, 2016. The plan and the WIOA performance measures took effect in July 2016. According to its state plan, the CNMI Department of Labor has formed a task force to assess approaches for using workforce programs to prepare CNMI residents for jobs that will be available because of ongoing reductions in the number of foreign workers and the eventual expiration of the CW program. In December 2016, after 8 months of official 902 Consultations, informal discussions, and site visits to locations in the CNMI, the Special Representatives of the United States and the CNMI transmitted a report to the President that included six recommendations agreed to by the Special Representatives on immigration and labor matters: 1. Extending the CW program beyond 2019 and other amendments, such as raising the CW-1 cap and restoring the executive branch's authority to extend the CW program. 2. Providing permanent status for long-term guest workers. 3. Soliciting input on suggested regulatory changes to the CW program. 4. Considering immigration policies to address regional labor shortages. 5.
Extending eligibility to the CNMI for additional federal workforce development programs. 6. Establishing a cooperative working relationship between DHS and the CNMI. Table 6 lists these six recommendations and summarizes next steps that, according to the report, could be taken toward implementing them. Chairman Murkowski, Ranking Member Cantwell, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this statement, please contact David Gootnick, Director, International Affairs and Trade, at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Emil Friberg (Assistant Director), Julia Ann Roberts (Analyst-in-Charge), Sada Aksartova, David Blanding, Benjamin Bolitzer, David Dayton, and Moon Parks. Technical support was provided by Neil Doherty, Mary Moutsos, and Alexander Welsh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2008, Public Law 110-229 established federal control of CNMI immigration. It required DHS to create a transitional work permit program for foreign workers in the CNMI and to decrease the number of permits issued annually; it presently requires that DHS reduce them to zero by December 31, 2019. To implement this aspect of the law, in 2011, DHS created a CW-1 permit program for foreign workers. In 2015, foreign workers totaled 12,784, making up more than half of the CNMI workforce. GAO was asked to review the implementation of federal immigration laws in the CNMI. This testimony discusses GAO's preliminary observations from its ongoing work on (1) the potential economic impact of reducing the number of CNMI foreign workers to zero and (2) federal and CNMI efforts to address labor force challenges. GAO reviewed U.S. laws and regulations; analyzed government data, including CNMI tax records since 2001; and conducted fieldwork in Saipan, Tinian, and Rota, CNMI. During fieldwork, GAO conducted semistructured interviews and discussion groups with businesses, CW-1 workers, U.S. workers, and current and former job training participants. GAO also interviewed officials from the CNMI government, DHS, and the U.S. Departments of Commerce, the Interior, and Labor. If all foreign workers in the Commonwealth of the Northern Mariana Islands (CNMI) with CNMI-Only transitional worker (CW-1) permits, or 45 percent of total workers in 2015, were removed from the CNMI's labor market, GAO's preliminary economic analysis projects a 26 to 62 percent reduction in CNMI's 2015 gross domestic product (GDP)--the most recent GDP available. In addition, demand for foreign workers in the CNMI exceeded the available number of CW-1 permits in 2016--many approved for workers from China and workers in construction occupations. The construction of a new casino in Saipan is a key factor in this demand (see photos taken both before and during construction in 2016). Meanwhile, by 2019, plans for additional hotels, casinos, and other projects estimate needing thousands of new employees. When the CW-1 permit program ends in 2019, GAO's preliminary analysis of available data shows that the unemployed domestic workforce, estimated at 2,386 in 2016, will be well below the CNMI's expected demand for labor. To meet this demand, CNMI employers may need to recruit U.S.-eligible workers from the U.S. states, U.S. territories, and the freely associated states (the Federated States of Micronesia, Republic of the Marshall Islands, and Republic of Palau). Federal and CNMI efforts to address labor force challenges include (1) job training programs and (2) employment assistance funded by the U.S. Department of Labor and implemented by the CNMI's Department of Labor. The Department of Homeland Security (DHS) collects the $150 vocational education fee assessed for each foreign worker on a CW-1 petition and transfers the fees to the CNMI government. Results of GAO's ongoing work indicate that to support vocational education curricula and program development in fiscal years 2012 through 2016, DHS transferred to the CNMI Treasury about $9.1 million in CW-1 fees. During this period, GAO's preliminary analysis shows that the CNMI government allocated about $5.8 million of the $9.1 million to three educational institutions: Northern Marianas College, Northern Marianas Trades Institute, and the CNMI's Public School System. 
In 2016, a U.S.-CNMI consultative process resulted in a report to the President with six recommendations related to the CNMI economy, including one to raise the cap on CW-1 foreign worker permits and extend the permit program beyond 2019. GAO is not making any recommendations at this time. GAO plans to issue a final report in May 2017.
DOE's contractors operate a number of facilities that are used to produce nuclear materials and design, test, assemble, and disassemble nuclear weapons. In the operation of these facilities, contractor employees may handle materials, documents, and information that are classified. An employee working in such an environment is investigated and granted a security clearance if one is warranted. To ensure that personnel with access to classified information do not compromise national defense and security, DOE's operations offices may suspend security clearances. A clearance may be suspended as a result of an employee's use of illegal drugs, alcohol abuse, mental illness, falsification of information on security statements, sabotage or treason, membership in an organization that advocates the overthrow of the government or association with people who are members of such organizations, failure to protect classified data, unusual conduct or dishonesty, and having relatives living in a country whose interests are hostile to those of the United States. Information leading to the suspension of an employee's clearance can come from many sources, including routine security reinvestigations, random drug testing, and allegations from other people. If DOE believes that national security could potentially be compromised, it begins a multilayered review process that can result in the suspension--and ultimately revocation--of an employee's security clearance. More than a year may pass before DOE makes a final determination. The employee is entitled to a formal hearing by a hearing officer and attorneys, a review of the hearing transcript by a personnel security review examiner, and a final resolution by the Security Affairs Director. DOE may also have an employee undergo a psychiatric evaluation to examine the employee's judgment or reliability if information reveals mental illness, alcohol abuse, or drug use. The facilities operated by DOE's Albuquerque, Savannah River, and Oak Ridge operations offices employ the Department's largest numbers of employees holding clearances--more than 84,000. These three offices oversee six major contractors: AT&T/Sandia Corporation (Sandia National Laboratories) and the University of California (Los Alamos National Laboratory) at the Albuquerque Operations Office in New Mexico; Westinghouse and Bechtel companies at the Savannah River Operations Office in South Carolina; and Martin Marietta Energy Systems, Incorporated, and M. K. Ferguson of Oak Ridge Company at the Oak Ridge Operations Office in Tennessee. At the locations included in our review, in various 1-year periods during fiscal year 1989 through fiscal year 1993, contractor employees from several minority groups had their security clearances suspended more often than would be expected statistically when they were compared with the majority population of the workforce. The population of contractor employees includes Asians, American Indians, African-Americans, Hispanics, and whites. Table 1 shows the number of years during this period in which a statistical disparity occurred in the number of clearances suspended for the employee population groups at the three sites. During the period covered by our review, AT&T/Sandia Corporation operated the Sandia National Laboratories and the University of California operated the Los Alamos National Laboratory for DOE's Albuquerque Operations Office. These two contractors combined employ more than 15,000 people with security clearances. 
DOE suspended the security clearances of 98 contractor employees at Sandia and Los Alamos during fiscal year 1989 through fiscal year 1993. The number of clearances suspended for Hispanics was statistically disparate in fiscal years 1992 and 1993; the number for American Indians was statistically disparate in fiscal year 1992. Two other racial/ethnic minority groups were represented at Sandia and Los Alamos: Asians and African-Americans. However, no Asians had their clearances suspended in this period, and the number of African-Americans whose clearances were suspended did not show a statistically significant disparity. More specifically, in fiscal year 1992 American Indians and Hispanics made up about 2 percent and about 23 percent, respectively, of the total population of employees at Sandia and Los Alamos. However, 12 percent (4 of 33) of the suspensions involved American Indians, and 42 percent (14 of 33) involved Hispanics. In fiscal year 1993, Hispanics made up about 23 percent of the total employee population at Sandia and Los Alamos but accounted for 47 percent (14 of 30) of the number of security clearances suspended. The disparities for these groups in these years were all significant, according to the Fisher's Exact Test. (See app. II for data on contractor employees at the Sandia and Los Alamos national laboratories.) DOE's Savannah River facility is operated by the Westinghouse Company for DOE's Savannah River Operations Office. The major construction contractor is the Bechtel Company. About 20,000 employees of Westinghouse and Bechtel work at the Savannah River Site. About 17,000 of those employees have security clearances. DOE suspended the security clearances of 163 contractor employees at the Savannah River Site during calendar years 1989 through 1993. The number of clearances suspended was statistically disparate for one group, African-Americans, in 3 of the 5 years: 1991, 1992, and 1993. African-Americans made up about 20 percent of the total number of employees holding clearances throughout this period. In calendar year 1991, 40 percent (10 of 25) of those whose clearances were suspended were African-American. African-Americans accounted for about 48 percent (27 of 56) of the clearances suspended in calendar year 1992 and about 36 percent (14 of 39) in calendar year 1993. The disparities for African-Americans in calendar years 1991, 1992, and 1993 were all significant, according to the Fisher's Exact Test. The population of contractor employees at this site also includes Asians, American Indians, and Hispanics. American Indians and Hispanics did not have their clearances suspended in this period. The number of Asians whose clearances were suspended did not show a statistically significant disparity. (See app. III for data on the contractor employees at the Savannah River Site.) The contractors we reviewed at DOE's Oak Ridge facilities--Martin Marietta Energy Systems and M. K. Ferguson of Oak Ridge Company--employ about 21,000 people. Over 10,000 of those employees have security clearances. DOE suspended the security clearances of 164 of the contractor employees at its Oak Ridge facilities in fiscal years 1989 through 1993--the largest number of suspensions at the locations we reviewed. For one group, African-Americans, a statistically disparate number of clearances were suspended in 3 of the 5 fiscal years: 1989, 1992, and 1993. African-Americans at Oak Ridge made up between 8 and 10 percent of the workforce holding clearances in the years we reviewed. 
Although African-Americans represented a small portion of the total population holding clearances, in fiscal year 1989 about 44 percent (14 of 32) of those whose clearances were suspended were African-American. In fiscal year 1992, African-Americans made up 26 percent (13 of 50) of the population whose clearances were suspended; in fiscal year 1993, they made up 22 percent (7 of 32). A statistically disparate number of Hispanics also had their clearances suspended in fiscal year 1990. Specifically, Hispanics represented about 0.2 percent of the workforce in fiscal year 1990. However, about 6 percent (1 of 17) of those whose clearances were suspended were Hispanic. The disparities for African-Americans in fiscal years 1989, 1992, and 1993 and for Hispanics in fiscal year 1990 were significant, according to the Fisher's Exact Test. (See app. IV for data on contractor employees at DOE's Oak Ridge facilities.) Oak Ridge's population of contractor employees also includes Asians and American Indians. However, no Asians or American Indians had their clearances suspended during the period covered by our review. Under federal equal employment opportunity policy, federal agencies and their contractors are not required to monitor the suspension of the security clearances for racial/ethnic minority groups. Because DOE is not required to do so, no organization in the Department collects information on the suspension of clearances by racial or ethnic group, and DOE was not aware of the statistical disparities discussed in this report. Executive Order 11246, entitled "Equal Employment Opportunity," states that federal contractors will not discriminate against any employee or applicant for employment because of several factors, including race. To help in assessing compliance with the policy on equal employment opportunity, reports that federal agencies receive from contractors list employees by race and ethnicity. DOE further requires contractors to provide data on hirings, promotions, layoffs, and terminations. But DOE's orders on equal employment opportunity do not require the contractors to document or track the suspension of security clearances for various population subgroups. Executive Order 11246 does not specifically discuss discrimination in security clearance matters and does not require personnel actions on security clearances taken by federal agencies or their contractors to be monitored. Within DOE, the Office of Safeguards and Security is responsible for establishing policies and procedures for security clearances for personnel. The Office bases its decisions to continue or suspend security clearances on 10 C.F.R. 710, "Criteria and Procedures for Determining Eligibility for Access to Classified Matter or Significant Quantities of Special Nuclear Material." DOE Order 5631.2C, "Personnel Security Program," implements this regulation. According to an official in the Office of Safeguards and Security, because race and ethnicity are not factors in the processes used for continuing or suspending security clearances, such information is not requested or gathered as part of the processes. DOE's Office of Contractor Human Resource Management maintains data on the race and ethnicity of contractor employees but did not gather data on the suspensions of security clearances for the employees. DOE has two orders that apply to equal employment opportunity and affirmative action at the facilities operated by contractors. 
DOE Order 3220.4A, "Contractor Personnel and Industrial Relations Reports," requires that the contractors provide data on employment--such as hirings, separations, and promotions--by race and ethnicity so that DOE can evaluate the contractors' performance in human resource management. However, the order does not require contractors to provide data on suspensions of security clearances in terms of equal employment opportunity. DOE Order 3220.2A, "Equal Opportunity in Operating and Onsite Service Contractor Facilities," implements DOE's policy that there will be no discrimination at contractors' facilities because of race and that affirmative action will be taken to fully realize equal opportunity. The order details the responsibilities and authorities of the various offices responsible for equal employment opportunity and affirmative action. However, these responsibilities do not include tracking or analyzing the suspension of security clearances by race or ethnicity. DOE was not aware of the statistical disparities that our analysis revealed because it had not combined the data on security clearances--available at security offices--with the data on race and ethnicity--available at other offices. DOE's Office of Safeguards and Security and the site security offices had information about suspensions of clearances but did not have information on race and ethnicity because they were not required to have that information for granting or continuing security clearances. DOE's Office of Economic Impact and Diversity, which includes the offices of Civil Rights and Contractor Human Resource Management, had data on race and ethnicity but had no information on the suspension of security clearances. As previously noted, that office was not required to collect such data. DOE has not been tracking the suspension of clearances by racial/ethnic group. As a result of our analysis, DOE is now aware that contractor employees who are members of racial/ethnic minority groups were more likely than white employees to have their security clearances suspended in some of the years and locations we reviewed. It is important that DOE look into the reasons for the statistical disparities to assure itself that discrimination is not occurring. We recommend that the Secretary of Energy (1) investigate the reasons for the disparities in the number of security clearances suspended for contractor employees in the locations and years identified by our review and take action to correct any problems that this investigation identifies in the Department's security clearance procedures and (2) require that data on the racial and ethnic background of contractor employees whose clearances are suspended at all locations be compiled, monitored, and reviewed to identify any statistical disparities in the number of clearances suspended for minorities, and that the Department investigate and take appropriate corrective action if such disparities occur. As requested, we did not obtain written agency comments on a draft of this report. However, we discussed the information in this report with officials in DOE's Office of Nonproliferation and National Security and with officials from the Albuquerque, Oak Ridge, and Savannah River operations offices. These officials agreed with the facts contained in the report. However, they expressed concern about the statistical methodology we used to analyze the data on suspended clearances.
They said that our analysis was not sufficiently sophisticated to include a variety of demographic factors, such as age or job category, which could explain the statistical disparities we found. They concluded that our "one-faceted" approach to the demographic issue, combined with the very small number of clearances suspended, "renders the reasoning behind any finding of statistical disparity questionable . . . ." In this report, we have not attempted to determine why statistical disparities are occurring. We are only reporting that, according to the Fisher's Exact Test, statistical disparities are occurring at all the locations included in our review--that is, more security clearances are being suspended for minorities than would be expected if suspensions occurred in a purely random fashion. We believe DOE needs to determine why these statistical disparities are occurring. In making this determination, DOE may need to conduct more sophisticated demographic studies of its workforce. Until such studies are completed, DOE cannot know why the security clearances of minority employees are being suspended more often than would be expected statistically. We also discussed the contents of this report with officials from DOE's Office of Economic Impact and Diversity. These officials also agreed with the facts contained in the report. In addition, they said that the findings "serve as a basis for further review of the method utilized for suspending security clearances. . . ." We conducted this review at DOE headquarters and the Albuquerque, Savannah River, and Oak Ridge operations offices between June 1993 and August 1994 in accordance with generally accepted government auditing standards. We reviewed DOE's records, applicable orders, and special program initiatives; interviewed DOE program officials and contractors; and merged data on security clearances with personnel information to analyze the data for statistical disparities in the number of clearances suspended. (See app. I for a more detailed discussion of our scope and methodology.) As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy; the Director, Office of Management and Budget; interested congressional committees; and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix V. To address the questions of the Chairman, House Committee on Government Operations, we had discussions on the suspension of security clearances with DOE officials in the Office of Safeguards and Security and Office of Civil Rights at the Department's headquarters and operations offices at Albuquerque, Savannah River, and Oak Ridge. We also obtained data on such suspensions from these officials. In addition, we discussed suspensions with contractors at the Sandia and Los Alamos national laboratories, Savannah River Site, and Oak Ridge. The Albuquerque, Savannah River, and Oak Ridge operations offices, which administer these sites, are responsible for 54 percent of the Department's total population of contractor employees holding security clearances. 
We also interviewed the Deputy Director of the Department of Labor's Office of Federal Contract Compliance Programs and examined the executive order and federal regulations on contractors' equal employment opportunity compliance programs. In addition, we obtained data on ethnicity, sex, and total annual employment for contractor employees at the locations included in our review and reviewed a random sample of personnel security files to determine what data on ethnicity and sex were collected and recorded. In our analysis of suspensions, we used data provided by DOE on the populations whose clearances had been suspended and on the total populations within each racial/ethnic group at each location. We used the Fisher's Exact Test to (1) compare the proportion of each racial/ethnic group whose clearances had been suspended with the proportion of whites whose clearances had been suspended and (2) calculate the probability that the observed number of minorities whose clearances were suspended would have occurred had the suspensions been randomly distributed across the racial/ethnic groups. Analysis using the Fisher's Exact Test shows whether the occurrences can be explained by chance or may have been caused by some other factor. We applied the Fisher's Exact Test at a 95 percent confidence level, which means that about 5 percent of the results found to be statistically significant could be due to chance alone. The Fisher's Exact Test does not rely on large-sample approximations, so its validity is not affected by the size of the sample. As a result, the test is commonly used when the number of events being analyzed is small. A significant result from this test does not conclusively demonstrate that discrimination has occurred; rather, it shows that the result differs significantly from what would be expected if race/ethnicity were not related to the suspension of a clearance. William R. Mowbray, Statistician
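To make the statistical approach described above concrete, the following is a minimal sketch of a Fisher's Exact Test comparison for a single site. The counts are hypothetical rather than DOE's actual figures, and the use of Python's scipy library is an illustrative assumption, not part of the review's methodology.

```python
# A minimal sketch, using hypothetical counts rather than DOE's actual figures,
# of the Fisher's Exact Test comparison described in the methodology above.
from scipy.stats import fisher_exact

# Columns: [suspended, not suspended]; first row is one minority group, second row is white employees.
table = [
    [9, 491],     # hypothetical: 9 of 500 minority clearance holders suspended
    [20, 3980],   # hypothetical: 20 of 4,000 white clearance holders suspended
]

# One-sided test: were suspensions more frequent for the minority group than would be
# expected if suspensions were unrelated to race/ethnicity?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")

# At the 95 percent confidence level used in the review, a p-value below 0.05 would flag a
# statistically significant disparity; it would not, by itself, show why the disparity occurred.
```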
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) security clearance program, focusing on: (1) whether racial or ethnic disparities existed among those employees who had their security clearances suspended; and (2) actions DOE needs to take to respond to these disparities. GAO found that: (1) between fiscal years 1989 and 1993, DOE suspended 425 security clearances for contractor employees; (2) African-American, Hispanic, and American Indian contractor employees had their security clearances suspended more often than would be statistically expected when compared with the majority of the workforce; (3) DOE was not aware of the statistical disparities because it did not monitor or track the suspension of clearances by racial/ethnic group; and (4) DOE needs to further evaluate why disparities in security clearance suspensions are occurring to ensure that discrimination is not taking place.
SEC's financial statements, including the accompanying notes, present fairly, in all material respects, in conformity with U.S. generally accepted accounting principles, SEC's assets, liabilities, net position, net costs, changes in net position, budgetary resources, and custodial activity as of, and for the fiscal years ended, September 30, 2007, and September 30, 2006. However, misstatements may nevertheless occur in other financial information reported by SEC and may not be prevented or detected because of the internal control deficiencies described in this report. As disclosed in footnote 1.C. to SEC's financial statements, in fiscal year 2007, SEC changed its method of accounting for user fees collected in excess of current-year appropriations. Because of the material weakness and significant deficiencies in internal control discussed below, SEC did not maintain effective internal control over financial reporting as of September 30, 2007, and thus did not have reasonable assurance that misstatements material in relation to the financial statements would be prevented or detected on a timely basis. Although certain compliance controls should be improved, SEC maintained, in all material respects, effective internal control over compliance with laws and regulations as of September 30, 2007, that provided reasonable assurance that noncompliance with laws and regulations that could have a direct and material effect on the financial statements would be prevented or detected on a timely basis. Our opinion on internal control is based on criteria established under 31 U.S.C. §§ 3512(c), (d), commonly referred to as the Federal Managers' Financial Integrity Act (FMFIA), and the Office of Management and Budget (OMB) Circular No. A-123, Management Accountability and Control. During this year's audit, we identified significant control deficiencies in SEC's financial reporting process, which, taken collectively, result in more than a remote likelihood that a material misstatement of the financial statements will not be prevented or detected. Therefore, we considered the combination of the following control deficiencies to collectively constitute a material weakness in SEC's financial reporting process: period-end financial reporting process, disgorgements and penalties accounts receivable, accounting for transaction fee revenue, and preparing financial statement disclosures. In addition to the material weakness discussed above, we identified three significant deficiencies in internal control, which, although not material weaknesses, represent significant deficiencies in the design or operation of internal control. Although we are considering these issues separately from the material weakness described above, they nevertheless adversely affect SEC's ability to meet financial reporting and other internal control objectives. These deficiencies concern information security, property and equipment, and accounting for budgetary resources. In our prior year audit, we reported on weaknesses we identified in the areas of SEC's (1) recording and reporting of disgorgements and penalties, (2) information systems controls, and (3) property and equipment controls. During fiscal year 2007, SEC improved its controls over the accuracy, timeliness, and completeness of the disgorgement and penalty data and used a much improved database for the initial recording and tracking of these data. However, the processing of these data for financial reporting purposes is still done through a manual process that is prone to error.
We found that the internal controls that compensated for the manual processing of the related accounts receivable balances in fiscal year 2006 were not effective in fiscal year 2007. This issue is included in the material weakness in SEC's financial reporting process for fiscal year 2007. SEC continues to make progress in resolving the information security weaknesses. Previously identified weaknesses, though, still need to be addressed, along with new weaknesses we found during this year's audit. Therefore, we consider information security to be a significant deficiency as of September 30, 2007. In addition, we continued to identify the same weaknesses in controls over property and equipment during this year's audit, and therefore, we considered this area to be a significant deficiency as of September 30, 2007. Although SEC had one material weakness and three significant control deficiencies in internal control, SEC's financial statements were fairly stated in all material respects for fiscal years 2007 and 2006. However, the weaknesses in internal control noted above may adversely affect decisions by SEC management that are based, in whole or in part, on information that is inaccurate because of these weaknesses. In addition, unaudited financial information reported by SEC, including performance information, may also contain misstatements resulting from these weaknesses. We will be reporting additional details concerning the material weakness and the significant deficiencies separately to SEC management, along with recommendations for corrective actions. We will also be reporting less significant matters involving SEC's system of internal controls separately to SEC management. During this year's audit, we found control deficiencies in SEC's period-end financial reporting process, in its calculation of accounts receivable for disgorgements and penalties, in its accounting for transaction fee revenue, and in preparing its financial statement disclosures. We believe these control deficiencies, collectively, constitute a material weakness. SEC's financial management system does not conform to the systems requirements of OMB Circular No. A-127, Financial Management Systems. Specifically, Circular No. A-127 requires that financial management systems be designed to provide for effective and efficient interrelationships between software, hardware, personnel, procedures, controls, and data contained within the systems. Circular No. A-127 further states that financial systems must have common data elements, common transaction processing, consistent internal controls, and efficient transaction entry, and that reports produced by the systems shall provide financial data that can be traced directly to the general ledger accounts. SEC's period-end financial reporting process for recording transactions, maintaining account balances, and preparing financial statements and disclosures is supported to varying degrees by a collection of automated systems that are not integrated or compatible with its general ledger system. These automated systems' lack of integration and compatibility requires that extensive compensating manual and labor-intensive accounting procedures, involving large spreadsheets and numerous posting and routine correcting journal entries, dominate SEC's period-end financial reporting process.
Some of SEC's subsidiary systems, such as those for property and equipment and for disgorgements and penalties, do not share common data elements and common transaction processing with the general ledger system. Therefore, intermediary information processing steps, including extensive use of spreadsheets, manipulation of data, and manual journal entries, are needed to process the information in SEC's general ledger. This processing complicates review of the transactions and greatly increases the risk that the transactions are not recorded completely, properly, or consistently, ultimately affecting the reliability of the data presented in the financial statements. Our identification this year of errors in SEC's calculation of disgorgement and penalty accounts receivable, discussed below, illustrates this risk. The risk to data reliability is further increased because basic controls over electronic data, such as worksheet and password protection, change history, and controls over data verification, such as control totals and record counts, were not consistently used during the data processing between the source systems and the general ledger. In addition, SEC's general ledger currently has several unconventional posting models and other limitations that prevent proper recording of certain transactions. As a result, SEC's year-end reporting process requires extensive routine correcting journal entries to correct errors created by incorrectly posted transactions in its general ledger. We also noted that the documentation SEC used in its year-end financial statement preparation process to crosswalk individual accounts to financial statement line items routed an account to an incorrect line item on SEC's Statement of Budgetary Resources, which caused a material error in SEC's draft financial statements. Also, SEC did not have detailed written documentation of its methodologies and processes for preparing financial statements and disclosures, increasing the risk of inconsistent and improper reporting and the risk that disruptions and errors may arise when staff turnover occurs. As part of its enforcement responsibilities, SEC issues orders and administers judgments ordering, among other things, disgorgements, civil monetary penalties, and interest against violators of federal securities laws. SEC recognizes a receivable when SEC is designated in an order or a final judgment to collect the assessed disgorgements, penalties, and interest. At September 30, 2007, the gross amount of disgorgements and penalties accounts receivable was $330 million, with a corresponding allowance of $266 million, resulting in a net receivable of $64 million. In our reviews of the interim June 30, 2007, and year-end September 30, 2007, balances of accounts receivable for disgorgements and penalties, we found errors in SEC's spreadsheet formulas resulting in overstatements of these receivable balances for both periods. These errors consisted of incorrectly changed spreadsheet formulas that affected the final calculated balances. SEC subsequently detected and corrected the June 30 errors, but then made different spreadsheet calculation errors in the year-end balances as of September 30, 2007, which we detected as part of our audit. SEC made adjustments to correct the errors, which were not material. However, SEC's process for calculating its accounts receivable for disgorgements and penalties presents a high risk that significant errors could occur and not be detected.
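The control totals and record counts mentioned above can be implemented as simple automated checks. The following minimal sketch compares a subsidiary-system extract with the amounts posted to the general ledger; the record layout is a hypothetical illustration, not SEC's actual systems.

```python
# A minimal sketch, using hypothetical records, of the record-count and control-total checks
# mentioned above for data moved from a subsidiary system into the general ledger.
from decimal import Decimal

def verify_transfer(source_rows, posted_rows):
    """Compare record counts and control totals between a subsidiary-system extract and the
    corresponding general ledger postings; return any discrepancies found."""
    issues = []
    if len(source_rows) != len(posted_rows):
        issues.append(f"record count mismatch: source={len(source_rows)}, posted={len(posted_rows)}")
    source_total = sum(Decimal(amount) for amount in source_rows)
    posted_total = sum(Decimal(amount) for amount in posted_rows)
    if source_total != posted_total:
        issues.append(f"control total mismatch: source={source_total}, posted={posted_total}")
    return issues

# Hypothetical example: one receivable record was dropped during manual re-keying.
source = ["1500.00", "250.00", "75.25"]
posted = ["1500.00", "250.00"]
print(verify_transfer(source, posted))
# ['record count mismatch: source=3, posted=2', 'control total mismatch: source=1825.25, posted=1750.00']
```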
The main cause of the accounts receivable calculation errors described above is the breakdown this year in the manual controls that were intended to compensate for the lack of an integrated accounting system for disgorgements and penalties. Specifically, although the journal entries posting the amounts to the general ledger were reviewed, this review did not extend to the preparation of the spreadsheet SEC used to document the accounts receivable calculation at June 30 and September 30, 2007, and therefore was not sufficient to detect significant spreadsheet formula errors. As one of its sources of revenue, SEC collects securities transaction fees paid by self-regulatory organizations (SRO) to SEC for stock transactions. SRO transaction fees are payable to SEC twice a year: in March for the preceding September through December and in September for the preceding January through August. Since the SROs are not required to report the actual volume of transactions until 10 business days after each month end, SEC estimates and records an amount receivable for fees payable by the SROs to SEC for activity during the month of September. At September 30, 2007, SEC estimated this receivable amount at $100.6 million. Based on information SEC received in mid-October concerning the actual volume of transactions, the amount of claims receivable at September 30, 2007, should have been $74.4 million. In previous years, SEC made adjustments to reflect the actual volume of transactions; however, SEC does not have written procedures to help ensure that this adjustment is made as a routine part of its year-end financial reporting process. We proposed, and SEC posted, the necessary audit adjustment to correct the amount of transaction fee revenue for fiscal year 2007. Statement on Auditing Standards No. 1, Codification of Auditing Standards and Procedures, which explains the accounting requirements for subsequent events, requires that events or transactions that existed at the date of the balance sheet and affect the estimates inherent in the process of preparing financial statements be considered for adjustment to or disclosure in the financial statements through the date that the financial statements are issued. In addition, the concept of consistency in financial reporting provides that accounting methods, including those for determining estimates, once adopted, should be used consistently from period to period unless there is good cause to change. In our review of SEC's year-end draft financial statement disclosures, we noted numerous errors, including misstated amounts, improper breakout of line items, and amounts from fiscal year-end 2006 incorrectly brought forward as beginning balances for fiscal year 2007. For example, in its disclosure for Custodial Revenues and Liabilities, SEC improperly excluded approximately $320 million in collections. In another example, for its disclosure on Fund Balance with Treasury, SEC misclassified approximately $90 million into incorrect line items. Also, in its disclosure for Fiduciary Assets and Liabilities, SEC's beginning balances for Fund Balance with Treasury and for Liability for Fiduciary Activity were each misstated by $8.9 million due to errors in carrying forward ending balances from September 30, 2006. SEC revised the financial statement disclosures to correct the errors that we noted.
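As a purely arithmetic illustration of the transaction fee adjustment discussed above, the following sketch recomputes the September 30, 2007, receivable from the figures cited in this report; it shows only the year-end true-up calculation, not SEC's actual accounting procedures.

```python
# A minimal sketch of the year-end true-up described above: replace the estimated September
# transaction fee receivable with the amount implied by the actual volumes reported in mid-October.
estimated_receivable = 100.6e6   # SEC's estimate at September 30, 2007
actual_receivable = 74.4e6       # amount based on actual transaction volumes

adjustment = estimated_receivable - actual_receivable
print(f"Reduce transaction fee revenue and the related receivable by ${adjustment / 1e6:.1f} million")
# Reduce transaction fee revenue and the related receivable by $26.2 million
```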
We believe these and numerous other errors in the disclosures are due mainly to the lack of a documented timeline and process for completing the fiscal year 2007 financial statements and disclosures, including review of the disclosures. In addition, the cumbersome and complicated nature of SEC's financial reporting process discussed above did not allow SEC finance staff sufficient time to carry out thorough and complete reviews of the disclosures in light of the November 15 reporting deadline. We also identified three control deficiencies that adversely affect SEC's ability to meet its internal control objectives. These conditions concern deficiencies in controls over (1) information security, (2) property and equipment, and (3) accounting for budgetary resources, which are summarized below. SEC relies extensively on computerized information systems to process, account for, and report on its financial activities and make payments. To provide reasonable assurance that financial information and financial assets are adequately safeguarded from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, effective information security controls are essential. These controls include security management, access controls, configuration management, physical security, and contingency planning. Weaknesses in these controls can impair the accuracy, completeness, and timeliness of information used by management and increase the potential for undetected material misstatements in the agency's financial statements. During fiscal year 2007, SEC made important progress in mitigating certain control weaknesses that were previously reported as unresolved at the time of our prior review. For example, SEC developed a comprehensive program for monitoring access activities to its computer network environment, tested and evaluated the effectiveness of controls for the general ledger system, and documented authorizations for software modifications. SEC also took corrective action to restrict access to sensitive files on its servers, change default database accounts that had known or weak passwords, and apply strong encryption key management practices for managing secure connections. Despite this progress, SEC has not consistently implemented certain key information security controls to effectively safeguard the confidentiality, integrity, and availability of its financial and sensitive information and information systems. During this year's audit, we identified continuing and new information security weaknesses that increase the risk that (1) computer resources (programs and data) will not be adequately protected from unauthorized disclosure, modification, and destruction; (2) access to facilities by unauthorized individuals will not be adequately controlled; and (3) computer resources will not be adequately protected and controlled to ensure the continuity of data processing operations when unexpected interruptions occur. For example, SEC had not yet mitigated weaknesses related to malicious code attacks on SEC's workstations, had not yet adequately documented access privileges for a major application, and had not yet implemented an effective intrusion detection system.
New control weaknesses in authorization, boundary protection, configuration management, and audit and monitoring that we identified this year include, for example, the use of a single, shared user account for posting journal vouchers in a financial application, inadequate patching of enterprise databases, and inadequate auditing and monitoring capabilities on its database servers. Lapses in physical security enabled unauthorized network access from a publicly accessible location within SEC Headquarters. In addition, SEC did not have contingency plans for key desktops that support manual processes such as the preparation of spreadsheets. These weaknesses existed, in part, because SEC has not yet fully implemented its information security program. Collectively, these problems represent a significant deficiency in SEC's internal control over information systems and data. Specifically, the continuing and newly identified weaknesses decreased assurances regarding the reliability of the data processed by the systems and increased the risk that unauthorized individuals could gain access to critical hardware and software and intentionally or inadvertently access, alter, or delete sensitive data or computer programs. Until SEC consistently implements all key elements of its information security program, the information that is processed, stored, and transmitted on its systems will remain vulnerable, and management will not have sufficient assurance that financial information and financial assets are adequately safeguarded from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. We will be issuing a separate report on issues we identified regarding information security concerns at SEC. SEC's property and equipment consists of general-purpose equipment used by the agency; capital improvements made to buildings leased by SEC for office space; and internal-use software development costs for projects in development and production. SEC acquired approximately $27 million in property and equipment during fiscal year 2007. As in last year's audit, during our testing of fiscal year 2007 additions, we noted numerous instances of inaccuracies in recorded acquisition costs and dates for property and equipment purchases, as well as unrecorded property and equipment purchases, and errors in amounts capitalized and amortized for internal-use software projects. In addition, errors were carried forward from the previous year. These systemic errors did not materially affect the balances reported for property and equipment or the corresponding depreciation/amortization expense amounts in SEC's financial statements for fiscal year 2007; however, these conditions evidence a significant deficiency in control over the recording of property and equipment that affects the reliability of SEC's recorded balances for property and equipment. Specifically, SEC lacks a process that integrates controls over capitalizing and recording property and equipment purchases. For example, SEC does not have a formalized, documented process for comparing quantity and type of item received against the corresponding order for property purchases. In addition, SEC does not have sufficient oversight of the recording of acquisition dates and values of the capitalized property.
Further, SEC's lack of an integrated financial management system for accounting for property and equipment, as discussed above, requires compensating procedures, which were not effective, to ensure that manual calculations, such as those for depreciation and amortization, are accurate. Until it has a systemic process that incorporates effective controls over receiving, recording, capitalizing, and amortizing property and equipment purchases, SEC will not have sufficient assurance over the accuracy and completeness of its reported balances for property and equipment. For fiscal year 2007, SEC incurred $877 million in obligations, which represent legal liabilities against funds available to SEC to pay for goods and services ordered. At September 30, 2007, SEC reported that the amount of budgetary resources obligated for undelivered orders was $255 million, which reflects obligations for goods or services that had not been delivered or received as of that date. In our testing of undelivered order transactions for this year's audit, we identified several concerns over SEC's accounting for obligations and undelivered orders. Specifically, we found numerous instances in which SEC (1) recorded obligations prior to having documentary evidence of a binding agreement for the goods or services, (2) recorded invalid undelivered order transactions due to an incorrect posting configuration in SEC's general ledger, and (3) made errors in recording new obligations and deobligations due to the use of incorrect accounts and the posting of incorrect amounts in the general ledger. The majority of exceptions related to these issues, amounting to approximately $76 million, were corrected by SEC through adjusting journal entries. While the remaining uncorrected amounts did not materially affect the balances on the Statement of Budgetary Resources at September 30, 2007, the ineffective processes that caused these errors constitute a significant deficiency in SEC's internal control over recording and reporting of obligations and put SEC at risk that the amounts recorded in the general ledger and reported on SEC's Statement of Budgetary Resources are misstated. Specifically, SEC's general ledger is not configured to properly post related entries, thereby resulting in the need to routinely correct entries. Extensive reviews of the budgetary transactions, along with significant adjusting journal entries, are needed to compensate for the system limitations. The errors in recording new obligations and deobligations that we found in our audit indicate a lack of effective review over those transactions. Further, SEC does not have policies or internal controls to prevent recording of obligations that are not valid. Recording obligations prior to having documentary evidence of a binding agreement for the goods and services is a violation of the recording statute and may result in funds being reserved unnecessarily and therefore made unavailable for other uses should the agreement not materialize. In addition, early recording of obligations may result in charging incorrect fiscal year funds for an agreement executed in a later fiscal year. Our tests for compliance with selected provisions of laws and regulations disclosed no instances of noncompliance that would be reportable under U.S. generally accepted government auditing standards or OMB audit guidance. However, the objective of our audit was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion.
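One way to reduce the risk of recording obligations without documentary evidence of a binding agreement, as described above, is a simple automated pre-posting check. The following minimal sketch uses a hypothetical record layout and field names; it is an illustration of the control concept, not SEC's actual systems or procedures.

```python
# A minimal sketch, using a hypothetical record layout, of a pre-posting check that each
# obligation is backed by documentary evidence of a binding agreement and that the recorded
# amount agrees with that supporting document.
from dataclasses import dataclass
from datetime import date
from decimal import Decimal
from typing import Optional

@dataclass
class Obligation:
    doc_id: str
    amount: Decimal
    recorded_on: date
    agreement_signed_on: Optional[date]    # None means no binding agreement on file
    agreement_amount: Optional[Decimal]

def validate(obligation: Obligation) -> list:
    """Return a list of problems that should block posting the obligation."""
    problems = []
    if obligation.agreement_signed_on is None:
        problems.append("no documentary evidence of a binding agreement")
    elif obligation.agreement_signed_on > obligation.recorded_on:
        problems.append("obligation recorded before the agreement was executed")
    if obligation.agreement_amount is not None and obligation.amount != obligation.agreement_amount:
        problems.append("recorded amount does not match the supporting agreement")
    return problems

# Hypothetical example: an obligation recorded before the agreement was signed.
ob = Obligation("OB-001", Decimal("125000.00"), date(2007, 9, 28), date(2007, 10, 5), Decimal("125000.00"))
print(validate(ob))  # ['obligation recorded before the agreement was executed']
```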
SEC's Management's Discussion and Analysis and other accompanying information contain a wide range of data, some of which are not directly related to the financial statements. We do not express an opinion on this information. However, we compared this information for consistency with the financial statements and discussed the methods of measurement and presentation with SEC officials. Based on this limited work, we found no material inconsistencies with the financial statements or nonconformance with OMB guidance. However, because of the internal control weaknesses noted above, misstatements may occur in related performance information. SEC management is responsible for (1) preparing the financial statements in conformity with U.S. generally accepted accounting principles; (2) establishing, maintaining, and assessing internal control to provide reasonable assurance that the broad control objectives of FMFIA are met; and (3) complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the financial statements are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles; and (2) management maintained effective internal control, the objectives of which are the following: Financial reporting: Transactions are properly recorded, processed, and summarized to permit the timely and reliable preparation of financial statements in conformity with U.S. generally accepted accounting principles, and assets are safeguarded against loss from unauthorized acquisition, use, or disposition. Compliance with applicable laws and regulations: Transactions are executed in accordance with (1) laws governing the use of budgetary authority, (2) other laws and regulations that could have a direct and material effect on the financial statements, and (3) any other laws, regulations, or governmentwide policies identified by OMB audit guidance. We are also responsible for (1) testing compliance with selected provisions of laws and regulations that could have a direct and material effect on the financial statements and for which OMB audit guidance requires testing and (2) performing limited procedures with respect to certain other information appearing in SEC's Performance and Accountability Report. 
In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the financial statements; assessed the accounting principles used and significant estimates made by management; evaluated the overall presentation of the financial statements; obtained an understanding of SEC and its operations, including its internal control related to financial reporting (including safeguarding of assets) and compliance with laws and regulations (including execution of transactions in accordance with budget authority); obtained an understanding of the design of internal controls related to the existence and completeness assertions relating to performance measures as reported in Management's Discussion and Analysis, and determined whether the internal controls have been placed in operation; tested relevant internal controls over financial reporting and compliance with applicable laws and regulations, and evaluated the design and operating effectiveness of internal control; considered SEC's process for evaluating and reporting on internal control and financial management systems under the FMFIA; and tested compliance with selected provisions of the following laws and their related regulations: the Securities Exchange Act of 1934, as amended; the Securities Act of 1933, as amended; the Antideficiency Act; laws governing the pay and allowance system for SEC employees; the Prompt Payment Act; and the Federal Employees' Retirement System Act of 1986. We did not evaluate all internal controls relevant to operating objectives as broadly defined by the FMFIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to controls over financial reporting and compliance. Because of inherent limitations in internal control, misstatements due to error or fraud, losses, or noncompliance may nevertheless occur and not be detected. We also caution that projecting our evaluation to future periods is subject to the risk that controls may become inadequate because of changes in conditions or that the degree of compliance with controls may deteriorate. We did not test compliance with all laws and regulations applicable to SEC. We limited our tests of compliance to those required by OMB audit guidance and other laws and regulations that had a direct and material effect on, or that we deemed applicable to, SEC's financial statements for the fiscal year ended September 30, 2007. We caution that noncompliance may occur and not be detected by these tests and that this testing may not be sufficient for other purposes. We performed our work in accordance with U.S. generally accepted government auditing standards and OMB audit guidance. SEC's management provided comments on a draft of this report. They are discussed and evaluated below and are reprinted in appendix III. In commenting on a draft of this report, SEC's Chairman said he was pleased to receive an unqualified opinion on SEC's financial statements. The Chairman discussed SEC's plans to remediate the material weakness before the end of fiscal year 2008 and to address each of the findings and recommendations identified during the audit. The Chairman emphasized SEC's commitment to enhance its controls in all operational areas and to ensure reliability of financial reporting, soundness of operations, and public confidence in SEC's mission. The complete text of SEC's comments is reprinted in appendix III.
Established in 1934 to enforce the securities laws and protect investors, the Securities and Exchange Commission (SEC) plays an important role in maintaining the integrity of the U.S. securities markets. Pursuant to the Accountability of Tax Dollars Act of 2002, SEC is required to prepare and submit to Congress and the Office of Management and Budget audited financial statements. GAO agreed, under its audit authority, to perform the audit of SEC's financial statements. GAO's audit was done to determine whether, in all material respects, (1) SEC's fiscal year 2007 financial statements were reliable and (2) SEC's management maintained effective internal control over financial reporting and compliance with laws and regulations. GAO also tested SEC's compliance with certain laws and regulations. In GAO's opinion, SEC's fiscal year 2007 and 2006 financial statements were fairly presented in all material respects. However, because of a material weakness in internal control over SEC's financial reporting process, in GAO's opinion, SEC did not have effective internal control over financial reporting as of September 30, 2007. Recommendations for corrective action will be included in a separate report. Although certain compliance controls should be improved, SEC did maintain in all material respects effective internal control over compliance with laws and regulations material in relation to the financial statements as of September 30, 2007. In addition, GAO did not find reportable instances of noncompliance with the laws and regulations it tested. In its 2006 report, GAO reported on weaknesses in the areas of SEC's (1) recording and reporting of disgorgements and penalties, (2) information systems controls, and (3) property and equipment controls. During fiscal year 2007, SEC improved its controls over the accuracy, timeliness, and completeness of the disgorgement and penalty data and used a much improved database for the initial recording and tracking of these data. However, the processing of these data for financial reporting purposes is still done through a manual process that is prone to error. GAO found that the internal controls that compensated for the manual processing of the related accounts receivable balances in fiscal year 2006 were not effective in fiscal year 2007. This issue is included in the material weakness in SEC's financial reporting process for fiscal year 2007. Other control deficiencies included in this material weakness concern SEC's period-end closing process, accounting for transaction fee revenue, and preparation of financial statement disclosures. GAO also identified three significant deficiencies in internal control during fiscal year 2007. Although SEC has taken steps to strengthen its information security, some of the weaknesses identified in GAO's previous audit persisted and GAO found new weaknesses during this year's audit. Therefore, GAO is reporting information security as a significant deficiency as of September 30, 2007. In addition, GAO continued to identify the same weaknesses in controls over property and equipment and therefore considers this area a significant deficiency as of September 30, 2007. GAO also identified a new significant deficiency concerning SEC's accounting for budgetary transactions. In commenting on a draft of this report, SEC's Chairman emphasized SEC's commitment to enhance its controls in all operational areas and to ensure reliability of financial reporting, soundness of operations, and public confidence in SEC's mission.
Title IX prohibits discrimination on the basis of sex in any education program or activity, including intercollegiate athletics, at colleges receiving federal financial assistance. The Department's OCR is responsible for enforcing federal civil rights laws as they relate to schools, including title IX. In fiscal year 1995, OCR operated on a $58.2 million appropriation and with 788 full-time-equivalent staff. Federal regulations implementing title IX became effective in 1975 and specifically required gender equity in intercollegiate athletics. The regulations gave colleges a 3-year transition period (through July 21, 1978) to comply fully with the regulations' requirements that equal athletic opportunity be provided for men and women. In 1979, OCR issued a Policy Interpretation providing colleges with additional guidance on what constituted compliance with the gender equity requirements of title IX. Under the Policy Interpretation, OCR applies a three-part test to help determine whether colleges provide equal athletic opportunity to male and female student athletes. To help determine whether equal athletic opportunity exists, OCR assesses whether "intercollegiate level participation opportunities for male and female students are provided in numbers substantially proportionate to their respective enrollments"; whether, when "the members of one sex have been and are underrepresented among intercollegiate athletes . . . the institution can show a history and continuing practice of program expansion which is demonstrably responsive to the developing interests and abilities of the members of that sex"; or whether, when "the members of one sex are underrepresented among intercollegiate athletes, and the institution cannot show a history and continuing practice of program expansion, as described above . . . it can be demonstrated that the interests and abilities of the members of that sex have been fully and effectively accommodated by the present program." Colleges must meet any one of the three criteria of the test. In addition to the three-part test, OCR may use other factors to assess equality of opportunity in intercollegiate athletics, including the financial assistance and travel expenses provided to student athletes, the degree of publicity provided for athletic programs, the extent to which colleges recruit student athletes, and the extent of opportunities to participate in intercollegiate competition. OCR also assesses coaches' assignments and compensation insofar as they relate to athletic opportunity for students. OCR both investigates discrimination complaints and conducts compliance reviews. Compliance reviews differ from complaint investigations in that they are initiated by OCR. Moreover, compliance reviews usually cover broader issues and affect significantly larger numbers of individuals than most complaint investigations do, although some complaint investigations can be just as broad in scope and effect. OCR selects review sites on the basis of information from various sources that indicates potential compliance problems. OCR is authorized to initiate administrative proceedings to refuse, suspend, or terminate federal financial assistance to a school violating title IX. However, in the more than 2 decades since title IX was enacted, according to an OCR official, the Department has not initiated any such administrative action for athletic cases because schools have complied voluntarily when violations have been identified. 
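To illustrate the arithmetic behind the first criterion of the three-part test described above, the following minimal sketch compares women's share of undergraduate enrollment with their share of intercollegiate athletic participation and estimates how many additional participation opportunities exact proportionality would require. The enrollment and roster figures are hypothetical, and the calculation is a simplified illustration rather than OCR's official methodology.

```python
# A minimal sketch, with hypothetical figures, of the substantial-proportionality comparison
# used in the first part of the three-part test.
def proportionality_gap(female_undergrads, total_undergrads, female_athletes, total_athletes):
    """Return women's enrollment share, women's athlete share, and the approximate number of
    additional female participation opportunities needed to reach exact proportionality,
    holding the number of male athletes constant."""
    enrollment_share = female_undergrads / total_undergrads
    athlete_share = female_athletes / total_athletes
    male_athletes = total_athletes - female_athletes
    # Solve f / (f + male_athletes) = enrollment_share for the number of female athletes f.
    proportional_female = enrollment_share * male_athletes / (1 - enrollment_share)
    additional_needed = max(0, round(proportional_female - female_athletes))
    return enrollment_share, athlete_share, additional_needed

# Hypothetical college: women are 52 percent of undergraduates but 47 percent of student athletes.
print(proportionality_gap(5200, 10000, 282, 600))   # (0.52, 0.47, 62)
```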
In addition to OCR's enforcement of title IX, the Department implements the Equity in Athletics Disclosure Act. Under the act, coeducational colleges offering intercollegiate athletics and participating in any federal student financial aid program are required to disclose certain information, by gender, such as the number of varsity teams, the number of participants on each team, the amount of operating expenses, and coaches' salaries. This information must be reported separately for men's and women's teams, and colleges were to have prepared their first reports by October 1, 1996; thereafter, reports are to be prepared annually by October 15th. Colleges must make the information available to students, potential students, and the public. Reports are not required to be submitted to the Department, but copies must be made available to the Department upon request. NCAA is a key organization in intercollegiate athletics. It is a voluntary, unincorporated association that administers intercollegiate athletics for nearly 1,000 4-year colleges and universities. NCAA member colleges belong to one of three divisions, the specific division generally depending on the number of sports the college sponsors. Typically, colleges with the largest number of athletic programs and facilities belong to Division I, and those with smaller programs are in Division II or III. Division I schools are further divided into three categories, Divisions I-A, I-AA, and I-AAA, with those that have the larger football programs generally placed in Division I-A. OCR's strategy for encouraging gender equity in intercollegiate athletics emphasizes both preventing title IX violations and investigating complaints, although it receives relatively few complaints about alleged violations. Principal elements of OCR's preventive approach include issuing guidance and providing technical assistance. In addition, a National Coordinator for Title IX Athletics has been appointed to manage title IX activities. OCR also considers compliance reviews important to prevention but has conducted few of them in recent years. OCR issued its "Clarification of Intercollegiate Athletics Policy Guidance" in January 1996 in response to requests from the higher education community to clarify the three-part test criteria presented in the 1979 Policy Interpretation. The Policy Interpretation allowed colleges' intercollegiate athletic programs to meet any one of the three criteria of the test to ensure that students of both sexes are being provided nondiscriminatory opportunities to participate in intercollegiate athletics. In 1994 and 1995, OCR initiated focus groups to obtain a variety of views on its title IX guidance on intercollegiate athletics. Comments from the focus groups indicated that clarification of the three-part test was needed. While OCR was developing the clarification, the Congress held hearings in May 1995, during which concerns were expressed that the three-part test was ambiguous, thus confirming the need for additional guidance. Subsequently, congressional members asked the Assistant Secretary for Civil Rights to clarify OCR's policy on the three-part test. The resulting 1996 clarification elaborates upon each part of the three-part test of equal athletic opportunity, provides illustrative examples of its application, and confirms that colleges are in compliance if they meet any one part of the test. 
The clarification states that a college meets the first criterion of the test if intercollegiate participation opportunities are substantially proportionate to enrollments. Such determinations are made on a case-by-case basis after considering each college's particular circumstances or characteristics, including the size of its athletic program. For example, a college where women represent 52 percent of undergraduates and 47 percent of student athletes may satisfy the first part of the three-part test without increasing participation opportunities for women if there are enough interested and able students to field and support a viable team. The second part of the test concerns program expansion. OCR's clarification focuses on whether there has been a history of program expansion and whether it has been continuous and responsive to the developing interests and abilities of the underrepresented sex. The clarification does not identify fixed intervals of time for colleges to have added participation opportunities. To satisfy the second part of the test, a college must show actual program expansion and not merely a promise to expand its program. Under the third part of the test, a determination is made whether, among students of the underrepresented sex, there is (a) sufficient unmet interest in a particular sport to support a team, (b) sufficient ability to sustain a team among interested and able students, and (c) a reasonable expectation of intercollegiate competition for the team in the geographic area in which the school competes. To make its determination, OCR evaluates such information as requests by students to add a sport, results of student interest surveys, and competitive opportunities offered by other schools located in the college's geographic area. Since fiscal year 1992, OCR has investigated and resolved 80 intercollegiate athletics complaints to which the three-part test was applied. Of these 80 colleges, 16 either demonstrated compliance or are taking actions to comply with part one; 4, with part two; and 42, with part three. The remaining 18 schools have yet to determine how they will comply because they are still implementing their settlement agreements. These agreements obligate the schools to comply with one part of the three-part test by a certain date, but OCR's monitoring efforts do not yet indicate which part of the test they will satisfy. OCR provides technical assistance through such activities as participating in on-site and telephone consultations and conferences, conducting training classes and workshops, and disseminating educational pamphlets. For example, OCR staff conduct title IX workshops for schools, athletic associations, and other organizations interested in intercollegiate athletics. Although OCR could not tell us the total number of technical assistance activities it conducted specific to title IX in intercollegiate athletics, it did provide 47 examples of national, state, or local title IX presentations made between October 1992 and April 1996. OCR also coordinates title IX education efforts with NCAA. For example, the Assistant Secretary for Civil Rights spoke at an NCAA-sponsored title IX seminar in April 1995, and OCR representatives have participated in subsequent NCAA-sponsored seminars. The Assistant Secretary for Civil Rights created the position of National Coordinator for Title IX Athletics in 1994. 
According to the National Coordinator, who reports directly to the Assistant Secretary, this position was created to (1) improve the coordination of resources focused on gender equity in athletics among OCR's 12 offices; (2) prioritize management of title IX activities; (3) ensure timely, consistent, and effective resolution of title IX cases and other issues; and (4) ensure all appropriate OCR staff are trained in conducting title IX athletics investigations in accordance with revised complaint resolution procedures. The National Coordinator told us the creation of the position has resulted in greater consistency in resolving athletics cases and faster responses from OCR offices to athletics inquiries. These improvements were accomplished, in part, by more frequent communication between the National Coordinator and OCR offices using a recently implemented national automated communications network, improved on-the-job training for OCR staff in case resolution, and the establishment of a central source of title IX athletics information. Although OCR investigates and resolves in a timely manner all intercollegiate athletics complaints that are filed, fewer than 100 such complaints were filed between October 1991 and June 1996. These complaints represented 0.4 percent of all civil rights complaints filed during that period (see table 1). Most of the approximately 23,000 complaints filed with OCR during that period dealt with other areas of civil rights, including disability, race, and national origin. OCR's title IX activities have recently focused more on policy development, technical assistance, and complaint investigations and less on assessing schools' compliance with title IX through compliance reviews. Although its strategic plan emphasizes the value of conducting OCR-initiated compliance reviews to maximize the effect of available resources, it conducted only two such reviews in 1995 and none in fiscal year 1996, and it plans none in fiscal year 1997. OCR attributes this decline to resource constraints. As table 2 shows, OCR conducted 32 title IX intercollegiate athletics compliance reviews during fiscal years 1992 through 1996, with the largest number being conducted in 1993. NCAA's constitution charges it with helping its member colleges meet their legislative requirements under title IX. Following the 1992 NCAA Gender Equity Study, which showed that women represented 30 percent of all student athletes and received 23 percent of athletic operating budgets, NCAA created a task force to further examine gender equity in its member colleges' athletic programs. NCAA has since implemented the following recommendations made by the task force. NCAA incorporated the principle of gender equity into its constitution in 1994. Recognizing that each member college is responsible for complying with federal and state laws regarding gender equity, the principle states that NCAA should adopt its own legislation to facilitate member schools' compliance with gender equity laws. According to NCAA, the Athletics Certification Program, begun in academic year 1993-94, was developed to ensure that Division I athletic programs are accredited in a manner similar to the way academic programs are accredited. The certification process includes a review of Division I colleges' commitment to gender equity. Schools are required to collect such information as the gender composition of their athletic department staff and the resources allocated to male and female student athletes.
Schools must also evaluate whether their athletic programs conform with NCAA's gender equity principle and develop plans for improving their programs if they do not. As of June 1996, NCAA reported that 70 of the 307 Division I schools (or 23 percent) had been certified. The remaining schools are scheduled to be certified by academic year 1998-99. The certification procedure takes about 2 years to complete and includes site visits by an NCAA evaluation team and self-studies by the schools. Schools not meeting certification criteria must take corrective action within an established time frame. Schools failing to take corrective action may be ineligible for NCAA championship competition in all sports for up to 1 year. If, after 1 year the school has not met NCAA's certification criteria, it is no longer an active member of NCAA. According to NCAA, to date it has not been necessary to impose such sanctions on any school undergoing certification. NCAA's 1992 gender equity study reported the results of a survey of its membership's athletic programs. The study will be updated every 5 years, with the next issuance scheduled for 1997. To update the study, NCAA developed and distributed a form to collect information on colleges' athletic programs. The data the form is designed to gather include the information schools must collect under the Equity in Athletics Disclosure Act. Thus, in addition to publishing its gender equity study, NCAA will be able to aggregate the data in reports prepared by colleges under the Disclosure Act. The deadline for submitting data collection forms to NCAA is the end of October 1996. To help schools achieve gender equity in intercollegiate athletics as well as to meet the interests and abilities of female student athletes, the NCAA Gender Equity Task Force identified nine emerging sports that may provide additional athletic opportunities to female student athletes. Effective September 1994, NCAA said that schools could use the following sports to help meet their gender equity goal: archery, badminton, bowling, ice hockey, rowing (crew), squash, synchronized swimming, team handball, and water polo. In academic year 1995-96, 122 of the 995 (or 12 percent) NCAA schools with women's varsity sports programs offered at least one of the emerging sports. In 1994, NCAA developed a guidebook on achieving gender equity. The guidebook supplements OCR's title IX guidance and provides schools' athletic administrators with basic knowledge of the law and how to comply with it. NCAA also coordinates with OCR to provide its member schools--and others--training and technical assistance through title IX seminars. NCAA held two such seminars in April 1995 (the Assistant Secretary for Civil Rights participated in one of the seminars) and two in April 1996. The seminars were attended by athletic directors, general counsels, gender equity consultants, OCR representatives, and others representing groups interested in gender equity in intercollegiate athletics. States promote gender equity in intercollegiate athletics through a variety of means. Over half of the states were involved in promoting gender equity in intercollegiate athletics. To identify state gender equity initiatives, we surveyed state higher education organizations in all 50 states and the District of Columbia. For reporting purposes, we collectively refer to the 51 respondents as states. Overall, 32 of the 51 states (63 percent) had taken some type of action to promote gender equity in intercollegiate athletics. 
Information provided by the 51 respondents is summarized in table 3; appendix II discusses the responses in more detail. Some respondents also provided observations of conditions that they believe may facilitate or hinder gender equity in intercollegiate athletics at colleges within their states. Conditions that some believed may facilitate gender equity included a commitment from individuals in leadership positions, state gender equity legislation, and a high participation by girls in K-12 athletics. Conditions that some believed may hinder gender equity included insufficient funds; the presence of football programs, which women are unlikely to participate in; and the perception that women are not as interested in athletics as men are. The eight studies on gender equity in intercollegiate athletics that we identified showed that women's athletic programs have made slight advances since 1992 toward gender equity as measured by the number of sports available to female students, the number of females participating in athletics, and the percentage of scholarship expenditures for women's sports. The studies also show, however, that women's programs remain behind men's programs as measured by the percentage of female head coaches, comparable salaries for coaches, and ratio of student athletes to undergraduate enrollment. All eight studies were national in scope and examined gender equity in the athletic programs at NCAA-member schools since 1992. Although most of the studies used surveys, some studies were based on different sample sizes or time periods, making direct comparisons among studies inappropriate. While the studies selectively evaluated the effect of title IX on various aspects of gender equity in intercollegiate athletics, they did not evaluate schools' compliance with title IX. See appendix III for additional information on the studies; see also the bibliography. The studies reported some advances toward equity between men's and women's intercollegiate athletics: The average number of sports offered to women rose from 7.1 in 1992 to 7.5 in 1996, an increase of almost 6 percent. Schools in all three NCAA divisions have added women's programs in the last 5 years, which one study attributed to the implementation of title IX legislation. An almost equal number of women's and men's sports (about 4.5) used marketing and promotional campaigns designed to increase event attendance. In fiscal year 1993, women at NCAA Division I schools received about 31 percent of athletic scholarship funds, an increase of about 3 percentage points from fiscal year 1989. Similarly, women's programs received 24 percent of total average athletic operating expenses, including scholarships, scouting and recruiting, and other expenses--also an increase of about 3 percentage points from fiscal year 1989. Female student participation in intercollegiate athletic programs has increased. For example, one study showed that the proportion of female student athletes increased from 34 percent of all student athletes in 1992 to 37 percent in 1995, an annual rate of increase of 1 percentage point. The studies also showed that women's athletic programs continue to lag behind men's programs in certain respects: Most of the head coaches for women's teams are male. In 1996, women accounted for about 48 percent of head coaches for women's teams. This represented a slight decline (0.6 percentage points) from the percentage of female coaches in 1992. 
In contrast, more than 90 percent of women's teams were coached by females in 1972, the year title IX was enacted. Head coaches of women's basketball teams earned 59 percent of what head coaches of men's basketball teams earned, as reported in 1994. Women constituted about half of all undergraduates in 1995 but only 37 percent of all student athletes. In commenting on a draft of our report, the Department of Education clarified several issues, including the reason compliance reviews have declined, the extent of OCR's work with other agencies in support of title IX policies and procedures, the differences between compliance reviews and complaint investigations, and the context in which coaches' employment is considered by OCR in a title IX review (see app. V). The Department also offered a number of technical changes. In general, we agreed with the Department's comments and incorporated them into the report, as appropriate. We are sending copies of this report to the Secretary of Education; appropriate congressional committees; the Executive Director, NCAA; and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report were Joseph J. Eglin, Jr., Assistant Director; R. Jerry Aiken; Deborah McCormick; Charles M. Novak; Meeta Sharma; Stanley G. Stenersen; Stefanie Weldon; and Dianne L. Whitman-Miner. To determine the actions the Department of Education has taken to promote gender equity in intercollegiate athletics since 1992, we interviewed the National Coordinator for Title IX Athletics and analyzed information from the Department's Office for Civil Rights (OCR). We obtained information on the National Collegiate Athletic Association's (NCAA) gender equity actions by interviewing its Director of Education Outreach, Director of Research, and officials in its Compliance Department. We also analyzed documentation they provided. To identify state gender equity initiatives, we developed a questionnaire and sent it to agencies with oversight responsibility for public higher education in each of the 50 states and the District of Columbia. In nearly all cases, we spoke with staff at the higher education agency. When necessary for clarification, we conducted follow-up telephone interviews. We supplemented this information with supporting documentation provided by state representatives. The questionnaire went to 56 organizations: 41 higher education boards or boards of regents, 9 state university or college systems, 5 community college systems, and 1 public 4-year institution. Five states had separate higher education oversight organizations for 2- and 4-year institutions. We therefore received two sets of responses from these states, one for 2-year and the other for 4-year institutions. We combined the two sets of responses into one response to reflect the state's gender equity initiatives. We received completed surveys from all 50 states and the District of Columbia. The questionnaire requested data on the existence of state gender equity officials; type of gender equity initiatives, if any (that is, legislation, requirements, policy recommendations, or other actions); methods used to promote gender equity; indicators used to measure gender equity; actual or estimated trends for each indicator; compliance and guidance efforts associated with the Equity in Athletics Disclosure Act; and conditions that help or hinder gender equity within the state.
All information was self-reported by state representatives, and we did not verify its accuracy. To identify studies on gender equity in intercollegiate athletics issued since 1992, we conducted a literature search and consulted academic experts and professional organizations that deal with gender equity, intercollegiate athletics, or both. (See app. IV for a list of organizations contacted for this report. We have also included a bibliography.) The sources we consulted identified eight studies on gender equity in intercollegiate athletics that were national in scope and were issued since 1992. Most of the studies were surveys of NCAA schools. We reviewed the information in the studies and summarized the key findings, but we did not verify their accuracy. We performed our work between April and August 1996 in accordance with generally accepted government auditing standards. This appendix contains the responses to questions we asked higher education officials in the 50 states and the District of Columbia (referred to in this appendix as 51 states) about efforts to promote gender equity in intercollegiate athletics. All responses reflect statewide gender equity actions. Eight states reported using indicators to measure gender equity; the remaining 43 states did not use any indicators. The eight national studies we identified that were issued between 1992 and 1996 examined various aspects of gender equity within NCAA schools' intercollegiate athletics programs. Because they varied in the time periods they studied, sample size, purpose, and methodology, the studies cannot be compared with each other. While some studies discuss the overall effect of title IX on women's athletics, they do not present sufficient information to determine whether the colleges were in compliance with title IX. The following is a summary of the key findings of each study.
Authors and Date of Study: Acosta and Carpenter (1996)
Scope and Time Period Studied: All NCAA schools, academic years 1977-78 to 1995-96
Summary: This longitudinal study examined the number of sport offerings as an indicator of opportunities for women athletes to participate in intercollegiate athletics at NCAA schools. It also reported the percentage of NCAA schools offering each type of sports program. The study identified 24 sports that schools could offer to female students. The percentage of schools offering sports programs to female students in 1996 varied considerably by sport, ranging, for example, from 98.3 percent of schools offering basketball to 0.3 percent offering badminton. In addition, the average number of sports being offered to female intercollegiate athletes generally increased from 7.1 sports per school in 1992 to 7.5 sports in 1996, for all three NCAA divisions (see table III.1). The study noted that the average number of women's sports offered in 1996 was the highest since this information was first reported in 1978. The average number of sports offered per school was also reported for each NCAA division for 1996: 8.3 (Division I), 6.1 (Division II), and 7.8 (Division III). The study also examined the percentage of female coaches and female administrators (head athletic directors) as two other indicators of participation opportunities for women at NCAA schools. The study found that, for women's teams, the percentages of female coaches and female administrators were lower than the percentages of male coaches and administrators.
While figures for individual years fluctuated, they did not vary much between academic years 1992 and 1996 (see table III.2). The study also noted that the percentage of female coaches in 1996 was the second lowest representation level since title IX was enacted in 1972. By contrast, more than 90 percent of women's teams were coached by females in 1972. The study concluded that title IX has had more of a positive effect on participation opportunities for female student athletes than for female coaches and administrators.
Authors and Date of Study: Barr, Sutton, McDonald, and others (1996)
Scope and Time Period Studied: Members of the National Association of Collegiate Marketing Administrators at NCAA schools, 1996
Summary: The study of marketing and promotion of women's programs involved a survey of members of the National Association of Collegiate Marketing Administrators. The study preliminarily concluded that NCAA schools and their marketing departments appeared to have good intentions in supporting women's programs, but athletic departments were not adding the personnel needed to effectively market and promote women's sports. The study reported the following: Women's sports received 37 percent of schools' mean athletic marketing budgets. This result was positively correlated with the overall athletic department budget allocated to women's and men's sports. The mean number of sports offered at NCAA schools was 9.2 for women and 9.2 for men. Given the relative equality of the two estimates, the study suggested title IX may have had a positive effect on the number of women's sports being offered. Marketing and promotional campaigns designed to increase event attendance were used for an almost equal number of women's sports (4.5) and men's sports (4.6); however, the study did not indicate the attendance levels or whether they had increased as a result of marketing and promotional campaigns. Schools at each NCAA division level have added women's programs in the last 5 years as a result of title IX legislation; the mean number of women's programs added ranges from 1.0 to 3.5 sports per school. Within Division I-A, the method cited most frequently for deciding what programs to add was direction from an NCAA conference to its member schools to add specific sports. For Division I colleges with no football programs, the most frequent method was the elevation of an existing club sport to the intercollegiate level. Not many men's sports programs have been dropped in the last 5 years: the mean number ranged from 0.1 to 1.0 per school. The most common reasons given for reducing men's sports were to comply with title IX and to contain athletic programs' costs. No women's sports programs had a full-time staff member devoted to marketing their sports.
Authors and Date of Study: NCAA (1995)
Scope and Time Period Studied: All NCAA schools, academic years 1982-83 to 1994-95
Summary: Female student athlete participation rose from 34 percent of all student participation in 1992 to 37 percent in 1995, an increase of about 1 percentage point a year.
Authors and Date of Study: USA Today (1995)
Scope and Time Period Studied: NCAA Division I-A football schools, academic year 1994-95
Summary: The study assessed the effects of title IX on college campuses by surveying the 107 NCAA Division I-A schools. The responses for the 95 schools that replied showed the following: Women were, on average, 33 percent of student athletes and 49 percent of undergraduates.
Female athletes received 35 percent of scholarships the schools provided. Forty percent of the schools added a women's sport in the last 3 years. Fifty-nine percent of the responding schools planned to add at least one women's sport in the next 3 years.
Authors and Date of Study: Chronicle of Higher Education (1994)
Scope and Time Period Studied: NCAA Division I schools, academic year 1993-94
Summary: The survey measured progress in achieving gender equity since the issuance of the 1992 NCAA Gender Equity Study, which had shown disparities in the number of male and female student athletes and in the amount of athletic scholarship money they received. The survey concluded that little had changed since the NCAA study was issued. It identified a slight increase in the proportion of female student athletes and their share of athletic scholarship funds; however, participation opportunities and scholarship funds continued to lag behind those for men, even though women constituted over half of the colleges' undergraduates. Responses from 257 of the 301 NCAA Division I schools showed the following: Women made up about 34 percent of varsity athletes and about 51 percent of undergraduates. Female athletes received almost 36 percent of scholarship funds.
Authors and Date of Study: NCAA (1994)
Scope and Time Period Studied: All NCAA schools, fiscal year 1992-93
Summary: NCAA's study of member schools' expenses found that about 24 percent of the total average operating expenses, including grants-in-aid (scholarships), went to women's programs at Division I schools in fiscal year 1992-93 (see table III.3).
Authors and Date of Study: American Volleyball Coaches Association (1995)
Scope and Time Period Studied: Coaches at NCAA schools and schools belonging to other athletic associations or college systems that officially conduct intercollegiate volleyball programs, 1993
Summary: The survey gathered information on various aspects of coaches' compensation, including that of head coaches, at NCAA schools and schools belonging to other athletic associations or college systems with intercollegiate volleyball programs. However, meaningful findings were derived only from NCAA Division I women's intercollegiate volleyball programs. Response rates were lower for all the other schools with volleyball programs and were particularly low for men's programs, precluding any comparisons between men's and women's programs. For women's volleyball, the survey showed about 48 percent of head coaches were female, and their average base salary was $32,383, about 2 percent less than that earned by males coaching women's volleyball.
Authors and Date of Study: Women's Basketball Coaches Association (WBCA) (1994)
Scope and Time Period Studied: Head coaches at NCAA Division I schools who were WBCA members, 1994
Summary: The survey included an examination of head coaches' salaries, employment contract terms, budgets, and staffing at NCAA Division I schools with basketball programs. Information for both men's and women's basketball programs was provided by the head coach of the women's program. The results showed significant disparities between women's and men's basketball programs in the average base salary for the head coach, coaching contracts, and program budgets (see table III.4). For example, head coaches of women's basketball earned 59 percent of what head coaches of men's basketball earned, and women's average annual athletic budgets were 58 percent of men's budgets.
The study also reported that men's basketball programs employed more graduate staff, at higher average salaries, than women's programs. For women's basketball programs, however, few differences were found in average base salary and contract terms for male and female head coaches.
American Association of University Women, Washington, D.C.
American Council on Education, Washington, D.C.
American Sports Institute, Mill Valley, Calif.
Boise State University, Boise, Idaho
Center for Research on Girls and Women in Sport, University of Minnesota, Minneapolis, Minn.
Council of Chief State School Officers, Washington, D.C.
Eastern Oregon State College, LaGrande, Oreg.
Education Commission of the States, Denver, Colo.
Harvard School of Public Health, Cambridge, Mass.
Moorhead State University, Moorhead, Minn.
National Association for Girls and Women in Sport, Reston, Va.
National Association of Collegiate Women Athletics Administrators, Sudbury, Mass.
National Coalition for Sex Equity in Education, Clinton, N.J.
National Softball Coaches Association, Columbia, Mo.
National Women's Law Center, Washington, D.C.
Princeton University, Princeton, N.J.
Smith College, Northampton, Mass.
Trial Lawyers for Public Justice, Washington, D.C.
University of California, Berkeley, Calif.
University of Massachusetts, Amherst, Mass.
Washington State University, Pullman, Wash.
Women's Educational Equity Act Publishing Center, Education Development Center, Inc., Newton, Mass.
Women's Institute on Sports and Education, Pittsburgh, Pa.
Women's Sports Foundation, East Meadow, N.Y.
Young Women's Christian Association, New York, N.Y.
Acosta, Vivian R. and Linda Jean Carpenter. Women in Intercollegiate Sport, A Longitudinal Study, Nineteen Year Update, 1977-1996. Brooklyn, N.Y.: Brooklyn College, 1996.
American Volleyball Coaches Association. 1992-1993 Survey, Women's Volleyball Programs. Colorado Springs, Colo.: AVCA, 1995.
Barr, Carol A., William A. Sutton, Mark M. McDonald, and others. Marketing Implications of Title IX to Collegiate Athletic Departments (preliminary report). Amherst, Mass.: University of Massachusetts, 1996.
Blum, Debra E. "Slow Progress on Equity." Chronicle of Higher Education (Oct. 26, 1994), p. A45. http://www.chronicle.com (cited Mar. 4, 1996).
Cheng, Phyllis W. "The New Federalism and Women's Educational Equity: How the States Respond." Paper presented at the annual meeting of the Association of American Geographers, Phoenix, Ariz., 1988.
Feminist Majority Foundation. Empowering Women in Sports, No. 4. Arlington, Va.: Feminist Majority Foundation, 1995.
Fulks, Daniel L. Revenues and Expenses of Intercollegiate Athletics Programs: Financial Trends and Relationships, 1993. Overland Park, Kans.: NCAA, 1994.
Grant, Christine and Mary Curtis. Gender Equity: Judicial Actions and Related Information. Iowa City, Iowa: University of Iowa, 1996. http://www.arcade.uiowa.edu/proj/ge (cited Mar. 1, 1996).
Knight Foundation Commission on Intercollegiate Athletics. Reports of the Knight Foundation Commission on Intercollegiate Athletics: March 1991 - March 1993. Charlotte, N.C.: Knight Foundation Commission on Intercollegiate Athletics, 1993.
Lederman, Douglas. "A Chronicle Survey: Men Far Outnumber Women in Division I Sports." Chronicle of Higher Education (Apr. 8, 1992), p. A1. http://www.chronicle.com (cited Mar. 21, 1996).
Lyndon B. Johnson School of Public Affairs. Gender Equity in Intercollegiate Athletics: The Inadequacy of Title IX Enforcement by the U.S. Office for Civil Rights, Working Paper No. 69. Austin, Tex.: University of Texas at Austin, 1993.
National Collegiate Athletic Association. Participation Statistics Report, 1982-1995. Overland Park, Kans.: National Collegiate Athletic Association, 1996.
National Federation of State High School Associations. 1995 High School Athletics Participation Survey. Kansas City, Mo.: National Federation of State High School Associations, 1995.
Raiborn, Mitchell H. Revenues and Expenses of Intercollegiate Athletics Programs: Analysis of Financial Trends and Relationships, 1985-1989. Overland Park, Kans.: NCAA, 1990.
Tom, Denise, ed. "Title IX: Fairness on the Field." USA Today, three-part series (Nov. 7-9, 1995), pp. 4C, 8C.
Women's Basketball Coaches Association. 1994 Survey of WBCA Division I Head Coaches. Lilburn, Ga.: WBCA, 1994.
Pursuant to a congressional request, GAO reviewed Department of Education and National Collegiate Athletic Association (NCAA) efforts to promote gender equity in intercollegiate athletics by implementing title IX of the Education Amendments of 1972, focusing on: (1) steps taken by states to promote gender equity in college athletic programs; and (2) what existing studies show about progress made since 1992 in promoting gender equity in intercollegiate athletics. GAO found that: (1) since 1992, the Department of Education's Office for Civil Rights (OCR) has focused on prevention of title IX violations by clarifying its policies on title IX compliance and increasing technical assistance to help colleges meet title IX requirements while it continues to investigate the relatively few complaints filed each year; (2) NCAA created a task force to examine gender equity issues and now requires certification that athletic programs at all Division I schools meet NCAA-established gender equity requirements; (3) state efforts to promote or ensure gender equity in intercollegiate athletics vary considerably; (4) of the 22 states that reported having laws or other requirements to specifically address gender equity in intercollegiate athletics, 13 reported having full- or part-time staff responsible for gender equity issues; and (5) results from 8 national gender equity studies show gains in the number of women's sports that schools offer, the number of female students participating in athletics, and the percentage of scholarship funds available to female students, but many women's athletics programs lag behind those for men in the percentage of female head coaches, salaries paid to coaches, and the proportion of women athletes to total undergraduate enrollment.
From fiscal years 2001 through 2007, the Employment Litigation Section initiated more than 3,200 matters and filed 60 cases as plaintiff under federal statutes prohibiting employment discrimination. About 90 percent of the matters initiated (2,846 of 3,212) and more than half of the cases filed (33 of 60) alleged violations of section 706 of Title VII of the Civil Rights Act, which involves individual claims of employment discrimination. Much of the Section's workload is driven by referrals from other agencies. During the 7-year period, about 96 percent of the matters initiated (3,087 of 3,212) were the result of referrals from the Equal Employment Opportunity Commission and the Department of Labor. The number of matters initiated under section 706 and the Uniformed Services Employment and Reemployment Rights Act (USERRA) declined in the latter fiscal years, which a Section Chief attributed to a decline in referrals from these two agencies. In addition to addressing discrimination against individuals, the Section also initiated more than 100 pattern or practice matters at its own discretion. Because the Section did not require staff to maintain information in ICM on the subjects (e.g., harassment and retaliation) of the matters or the protected class (e.g., race and religion) of the individuals who were allegedly discriminated against, we could not determine this information for more than 80 percent of the matters the Section closed from fiscal years 2001 through 2007. According to Section officials, staff are not required to do so because the Section does not view this information as necessary for management purposes. The Section also does not systematically collect information in ICM on the reasons matters were closed; therefore, we were not able to readily determine this information for the approximately 3,300 matters the Section closed over the time period of our review. Division officials stated that when planning for ICM's implementation with Section officials, the Division did not consider requiring sections to provide protected class and subject data or the need to capture in ICM the reasons that matters are closed. However, by conducting interviews with agency officials and reviewing files for a nongeneralizable sample of 49 closed matters, we were able to determine that the reasons the Section closed these matters included, among others, that the facts in the file would not justify prosecution, that the issue was pursued through private litigation, and that the employer provided or offered appropriate relief on its own. In addition to the matters initiated, the Employment Litigation Section filed 60 cases in court as plaintiff from fiscal years 2001 through 2007, and filed more than half (33 of 60) under section 706 of Title VII. According to a Section Chief and Deputy Section Chief, the primary reason for pursuing a case was that the case had legal merit. Other priorities, such as those of the Assistant Attorney General, may also influence the Section's decision to pursue particular kinds of cases. For example, according to Section officials, following the terrorist attacks of September 11, 2001, the Assistant Attorney General asked the various sections within the Division to make the development of cases involving religious discrimination a priority.
During the 7-year period, the majority of the section 706 cases (18 of 33) involved sex discrimination against women, and one-third (11 of 33) involved claims of race discrimination, with six cases filed on behalf of African Americans and five cases filed on behalf of whites. In addition to these 33 cases, the Section filed 11 pattern or practice cases. Most of the 11 pattern or practice cases involved claims of discrimination in hiring (9 of 11) and the most common protected class was race (7 of 11), with four cases filed on behalf of African Americans, two on behalf of whites, and one on behalf of American Indians or Alaska Natives. In July 2009, Section officials told us that given that the Assistant Attorneys General who authorized suits from fiscal years 2001 through 2007 and the Section Chief who made suit recommendations to the Assistant Attorneys General during that period are no longer employed by DOJ, it would be inappropriate for them to speculate as to why the Section focused its efforts in particular areas. From fiscal years 2001 through 2007, the Housing and Civil Enforcement Section initiated 947 matters and participated in 277 cases under federal statutes prohibiting discrimination in housing, credit transactions, and certain places of public accommodation (e.g., hotels). The Section has the discretion to investigate matters and bring cases under all of the statutes it enforces, with the exception of certain cases referred under the Fair Housing Act (FHA) from the Department of Housing and Urban Development (HUD), which the Section is statutorily required to file. The Section, however, has discretion about whether to add a pattern or practice allegation to these HUD-referred election cases, if supported by the evidence. Furthermore, the Section has the authority and discretion to independently file pattern or practice cases and to pursue referrals from other sources. During the 7-year period, the Section initiated more matters (517 of 947) and participated in more cases (257 of 277) involving discrimination under the FHA than any other statute or type of matter or case. The Section initiated nearly 90 percent of the FHA matters (456 of 517) under its pattern or practice authority; these primarily alleged discrimination on the basis of race or disability and involved land use/zoning/local government or rental issues. According to Section officials, the large number of land use/zoning/local government matters it initiated was due to the Section regularly receiving referrals from HUD and complaints from other entities on these issues. Additionally, Division officials identified that a Section priority during the 7-year period was to ensure that zoning and other regulations concerning land use were not used to hinder the residential choices of individuals with disabilities. During this time, the Section experienced a general decline in HUD election matters, with the Section initiating the fewest number of total matters, 106, in fiscal year 2007. Section officials attributed the decrease, in part, to a decline in HUD referrals because state and local fair housing agencies were handling more complaints of housing discrimination instead of HUD. The Section initiated the second largest number of matters (252 of 947) under the Equal Credit Opportunity Act (ECOA). About 70 percent (177 of 252) of these ECOA matters included allegations of discrimination based on age, marital status, or both. 
The majority (250 of 269) of the cases that the Section filed as plaintiff included a claim under the FHA. Similar to the Employment Litigation Section, the Housing Section considers legal merit and whether the plaintiff has the resources to proceed on his or her own should the Section choose not to get involved, among other reasons, when deciding whether to pursue a matter as a case. The number of cases filed by the Section each year generally decreased from fiscal years 2001 through 2007--from 53 to 35--which, similar to matters, Section officials generally attributed to fewer HUD referrals. The FHA cases primarily involved rental issues (146). According to Section officials, the number of rental-related issues is reflective of larger national trends in that discrimination in rental housing may be more frequently reported or easier to detect than in home sales. Most of the FHA cases alleged discrimination on the basis of disability (115) or race (70)--66 of which involved racial discrimination against African Americans. The Section filed 9 cases under ECOA, of which 5 were in combination with the FHA. All 9 complaints involved lending issues. Seven of the 9 complaints included at least one allegation of racial discrimination and 4 included at least one allegation of discrimination on the basis of national origin/ethnicity. From fiscal years 2001 through 2007, the Voting Section initiated 442 matters and filed 56 cases to enforce federal statutes that protect the voting rights of racial and language minorities, disabled and illiterate persons, and overseas and military personnel, among others. The Voting Section has the discretion to initiate a matter or pursue a case under its statutes, with the exception of the review of changes in voting practices or procedures, which it is statutorily required to conduct under section 5 of the Voting Rights Act (VRA). According to Section officials, the Section had as its priority the enforcement of all the statutes for which it was responsible throughout the period covered by our review. However, Section and Division officials identified shifts in the Section's priorities beginning in 2002. For example, the Assistant Attorney General in place from November 2005 through August 2007 stated that since 2002, the Section had increased its enforcement of the minority language provisions of the VRA and instituted the most vigorous outreach efforts to jurisdictions covered by the minority language provisions of the act. During the 7-year period, the Section initiated nearly 70 percent of VRA matters (246 of 367) on behalf of language minority groups, primarily Spanish speakers (203 of 246). The Section also initiated 162 matters under section 2 of the VRA. The Section initiated about half of these matters on behalf of language minority groups (80), primarily Spanish speakers (71), and about half on behalf of racial minorities (88 of 162), primarily African American voters (71 of 88). During the 7-year period, the Voting Section filed 56 cases, primarily under the VRA (39). The majority of the cases the Section filed in court under the VRA were on behalf of language minority groups (30 of 39), primarily Spanish speakers (27). The Acting Assistant Attorney General reported in September 2008 that the Division had brought more cases under the VRA's minority language provisions during the past 7 years--a stated priority-- than in all other years combined since 1975. 
While cases involving language minority groups were filed under various VRA provisions, the largest number of cases (24 of 30) involved claims under section 203 alleging that the covered jurisdiction had failed to provide voting-related materials or information relating to the electoral process in the language of the applicable minority group. The Section filed 13 cases involving a claim under section 2 of the VRA--5 on behalf of language minority groups and 10 on behalf of racial minority groups (6 on behalf of Hispanics, 3 on behalf of African Americans, and 1 on behalf of whites). In October 2007, the Section Chief who served from 2005 through late 2007 told us that while at-large election systems that discriminated against African Americans remained a priority of the Section, not many of these systems continued to discriminate, and new tensions over immigration had emerged; therefore, the Section had been pursuing cases of voting discrimination against citizens of other minority groups. However, in September 2009, Voting Section officials stated that while many at-large election systems that diluted minority voting strength have been successfully challenged, the Section continued to identify such systems that discriminate against African American, Hispanic, and Native American residents in jurisdictions throughout the country and that taking action against at-large election systems remained a high priority for the Section. The Section also carried out its responsibilities under section 5 of VRA, which requires certain jurisdictions covered under the act to "preclear" changes to voting practices and procedures with DOJ or the United States District Court for the District of Columbia to determine that the change has neither the purpose nor the effect of discriminating against protected minorities in exercising their voting rights. The Section reported that over the 7-year period it made 42 objections to proposed changes, of which almost 70 percent (29 of 42) involved changes to redistricting plans. More than half (17) of the 29 objections were made in fiscal year 2002, following the 2000 census, and two were made from fiscal years 2005 through 2007. From fiscal years 2001 through 2007, the Special Litigation Section initiated 693 matters and filed 31 cases as plaintiff to enforce federal civil rights statutes in four areas--institutional conditions (e.g., protecting persons in nursing homes), conduct of law enforcement agencies (e.g., police misconduct), access to reproductive health facilities and places of worship, and the exercise of religious freedom of institutionalized persons. Because the Section had discretion to pursue an investigation or case under all of the statutes it enforced, it considered all of its work to be self-initiated. Of the matters initiated and closed (544 of 693), most involved institutional conditions (373) and conduct of law enforcement agencies (129). Of the 31 cases that the Section filed as plaintiff, 27 alleged a pattern or practice of egregious and flagrant conditions that deprived persons institutionalized in health and social welfare (13), juvenile corrections (7), and adult corrections (7) facilities of their constitutional or federal statutory rights, and 3 cases involved the conduct of law enforcement agencies. 
According to Section officials, in deciding whether to pursue a case, they considered the conditions in a particular facility or the misconduct of a particular police department and whether the system (e.g., state correctional or juvenile justice system) or department alleged to have violated the statute had taken corrective action or instead had accepted the behavior in question as its way of doing business. However, they said that even if the system or department were taking corrective action, the Section might pursue a case depending on the severity of the situation (e.g., sexual abuse) or if Section officials believed that the facility or local entity were incapable of addressing the problem. Additionally, according to Section officials, the Section sought to ensure its work reflected geographic diversity. Our analysis of the 31 plaintiff cases showed that the Section had filed cases in 21 states and the District of Columbia. During the 7-year period, the Section did not file any cases involving violations of the exercise of religious freedom of institutionalized persons under the Religious Land Use and Institutionalized Persons Act (RLUIPA). Section officials stated that there was a time when the Section's enforcement of RLUIPA was directed to be a lower priority than its enforcement of other statutes. However, in April 2009, these officials told us that the Section was reviewing a number of preliminary inquiries under RLUIPA, but had not yet filed any complaints because it was still investigating these matters. As previously discussed, information regarding the specific protected classes and subjects related to matters and cases and the reasons for closing matters was not systematically maintained in ICM because the Division did not require sections to capture these data. As a result, the availability and accuracy of protected class and subject data--information that is key to ensuring that the Division executes its charge to enforce statutes prohibiting discrimination on the basis of protected class--varied among the sections. Additionally, neither we nor the sections could systematically identify the sections' reasons for closing matters, including the number of instances in which a section recommended proceeding with a case and Division management did not approve the recommendation. By collecting additional data on protected class and subject in ICM, the Division could strengthen its ability to account for the four sections' enforcement efforts. In October 2006, the Principal Deputy Assistant Attorney General issued a memorandum to section chiefs stating that Division leadership relies heavily on ICM data to, among other things, report to Congress and the public about its enforcement efforts, and should be able to independently extract the data from ICM needed for this purpose. However, over the years, congressional committees have consistently requested information for oversight purposes related to data that the Division does not require sections to collect in ICM, including information on the specific protected classes and subjects related to matters and cases. While ICM includes fields for collecting these data, the Division has not required sections to capture these data. Some section officials said that they did not believe it was necessary to maintain this information in ICM for internal management purposes. As a result, we found that the availability and accuracy of these data varied among the sections.
For example, when comparing data obtained from the 60 complaints the Employment Litigation Section filed in court with data maintained in ICM, we found that the protected class and subject data in ICM were incomplete or inaccurate for 12 and 29 cases, or about 20 and 48 percent, respectively. Additionally, we found that the Section's protected class and subject data were not captured in ICM for 2,808 and 2,855 matters, or about 83 and 85 percent, respectively. In contrast, according to the Housing and Civil Enforcement Section, it requires that protected class and subject data be recorded in ICM for all matters and cases, and we found that these data were consistently recorded in ICM. To help respond to information inquiries, all four sections maintain data in ancillary data systems, although some of the data are also recorded in ICM. For example, the Employment Litigation Section maintains broad information on protected class and uses this information in conjunction with data in ICM to report on its enforcement efforts. Section officials reported using ancillary data systems in part because it was easier to generate customized reports from those systems than from ICM. We previously reported that agencies with separate, disconnected data systems may be unable to aggregate data consistently across systems, and are more likely to devote time and resources to collecting and reporting information than those with integrated systems. Requiring sections to record these data in ICM would assist the Division in, among other things, responding to inquiries from Congress by ensuring access to readily available information and by reducing reliance on ancillary data systems. Additionally, congressional committees have requested information regarding reasons the Division did not pursue matters, including instances in which Division managers did not approve a section's recommendation to proceed with a case. However, ICM does not include a discrete field for capturing the reasons that matters are closed, and Division officials we interviewed could not identify instances in which Division managers did not approve a section's recommendation to proceed with a case. Moreover, sections do not maintain this information in other section-level information systems. ICM does have a comment field that sections can use to identify the reasons matters are closed, although these data are not required or systematically maintained in ICM, and the Division could not easily aggregate these data using the comment field. According to Division officials, when Division and section officials were determining which data were to be captured in ICM, they did not consider the need to include a discrete field to capture the reasons that matters were closed. As a result, we had to review Division matter files to determine the reasons that matters were closed, and in some instances this information was not contained in the files. For example, for 7 of the 19 section 706 closed matter files we reviewed for the Employment Litigation Section, the reason the matter was closed was not contained in the file documentation we received, and Section officials attributed this to a filing error. Moreover, Division officials stated that because the Division did not track the reasons for closing matters in ICM, they have had to review files and talk with section attorneys and managers to obtain this information. They said that it was difficult to compile this information because of turnover among key section officials.
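To illustrate why a discrete field matters for this kind of reporting, the sketch below compares the two situations described above. It is a hypothetical illustration in Python; the record layout and field names (protected_class, subject, close_reason) are assumptions for the example, not ICM's actual schema.

```python
from collections import Counter

# Hypothetical matter records, loosely modeled on the kinds of fields
# discussed above; the layout is illustrative, not ICM's actual schema.
matters = [
    {"id": 1, "protected_class": "race", "subject": "hiring",
     "close_reason": "facts would not justify prosecution", "comment": ""},
    {"id": 2, "protected_class": "sex", "subject": "harassment",
     "close_reason": "pursued through private litigation", "comment": ""},
    {"id": 3, "protected_class": None, "subject": None, "close_reason": None,
     "comment": "Closed - employer offered appropriate relief on its own."},
]

# With a discrete close_reason field, reporting is a one-line aggregation.
by_reason = Counter(m["close_reason"] for m in matters if m["close_reason"])
print(by_reason)

# Reasons recorded only in a free-text comment (matter 3) cannot be tallied
# this way; categorizing them requires the kind of manual file review
# described above.
needs_file_review = [m["id"] for m in matters if not m["close_reason"]]
print("matters needing file review:", needs_file_review)
```

The contrast is the point: a populated close_reason field can be tallied automatically, while reasons buried in free-text comments have to be read and coded by hand.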
Capturing information on the reasons matters were closed in the Division's case management system would facilitate the reporting of this information to Congress and enable the Division to conduct a systematic analysis of the reasons that matters were closed. This would also help the Division determine whether there were issues that may need to be addressed through actions, such as additional guidance from the Division on the factors it considers in deciding whether to approve a section's recommendation to pursue a case. In our September 2009 report, we recommended that to strengthen the Division's ability to manage and report on the four sections' enforcement efforts, the Acting Assistant Attorney General of the Division, among other things, (1) require sections to record data on protected class and subject in the Division's case management system in order to facilitate reporting of this information to Congress, and (2) as the Division considers options to address its case management system needs, determine how sections should be required to record data on the reasons for closing matters in the system in order to be able to systematically assess and take actions to address issues identified. DOJ concurred with our recommendations and, according to Division officials, the Division plans to (1) require sections divisionwide to record data on protected class and subject/issue in its case management system by the end of calendar year 2009 and (2) upgrade the system to include a field on reasons for closing matters and require sections divisionwide to record data in this field. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are Maria Strudwick, Assistant Director; David Alexander; R. Rochelle Burns; Lara Kaskie; Barbara Stolz; and Janet Temko.
The Civil Rights Division (Division) of the Department of Justice (DOJ) is the primary federal entity charged with enforcing federal statutes prohibiting discrimination on the basis of race, sex, disability, religion, and national origin (i.e., protected classes). The Government Accountability Office (GAO) was asked to review the Division's enforcement efforts and its Interactive Case Management System (ICM). This testimony addresses (1) the activities the Division undertook from fiscal years 2001 through 2007 to implement its enforcement responsibilities through its Employment Litigation, Housing and Civil Enforcement, Voting, and Special Litigation sections, and (2) additional data that could be collected using ICM to assist in reporting on the four sections' enforcement efforts. This statement is based on GAO products issued in September and October 2009. From fiscal years 2001 through 2007, the Civil Rights Division initiated matters and filed cases to implement its enforcement responsibilities through the four sections. The Employment Litigation Section initiated 3,212 matters and filed 60 cases as plaintiff under federal statutes prohibiting employment discrimination. Most matters (3,087) were referred by other agencies. Of the 11 pattern or practice cases--cases that attempt to show that the defendant systematically engaged in discriminatory activities--9 involved claims of discrimination in hiring, and the most common protected class was race (7). The Housing and Civil Enforcement Section initiated 947 matters and participated in 277 cases under federal statutes prohibiting discrimination in housing, credit transactions, and certain places of public accommodation. Most (456 of 517) Fair Housing Act (FHA) matters were initiated under its pattern or practice authority, primarily alleging discrimination on the basis of race or disability and involving land use/zoning/local government or rental issues. Most (250 of 269) cases filed as plaintiff included an FHA claim. The FHA cases primarily involved rental issues (146) and alleged discrimination on the basis of disability (115) or race (70). The Voting Section initiated 442 matters and filed 56 cases to enforce federal statutes that protect the voting rights of racial and language minorities, and disabled and illiterate persons, among others. The Section initiated most matters (367) and filed a majority of cases (39) as plaintiff under the Voting Rights Act, primarily on behalf of language minority groups (246 and 30). The Special Litigation Section initiated 693 matters and filed 31 cases as plaintiff to enforce federal civil rights statutes on institutional conditions (e.g., protecting people in nursing homes), the conduct of law enforcement agencies, access to reproductive health facilities and places of worship, and the exercise of religious freedom of institutionalized persons. Of the matters initiated and closed (544 of 693), most involved institutional conditions (373), as did most of the cases filed (27 of 31). Information on the specific protected classes and subjects related to matters and cases and the reasons for closing matters was not systematically maintained in ICM because the Division did not require sections to capture these data. As a result, the availability and accuracy of these data varied among the sections. For example, the Employment Litigation Section did not capture protected class and subject data for more than 80 percent of its matters.
In contrast, these data were consistently recorded in ICM for the Housing and Civil Enforcement Section, which requires that protected class and subject data be recorded in ICM. In addition, congressional committees have requested information on reasons the Division did not pursue matters, including instances in which Division managers did not approve a section's recommendation to proceed with a case. However, ICM does not include a discrete field for capturing the reasons that matters are closed and Division officials we interviewed could not identify instances in which Division managers did not approve a section's recommendation to proceed with a case. By requiring sections to record such information, the Division could strengthen its ability to account for its enforcement efforts.
The MIG has taken three different approaches since establishing the NMAP--test audits, Medicaid Statistical Information System (MSIS) audits, and collaborative audits. In each approach, contractors conducted postpayment audits, that is, they reviewed medical documentation and other information related to Medicaid claims that had already been paid. The key differences among the three NMAP approaches were (1) the data sources used to identify audit targets, and (2) the roles assigned to states and contractors. In June 2007, the MIG hired a contractor to conduct test audits, and it implemented MSIS audits beginning in December 2007 by hiring separate review and audit contractors for each of five geographic areas of the country. Collaborative audits were introduced in January 2010. In June 2007, the MIG hired a contractor to conduct test audits in five states. Working with the MIG and the states, the contractor audited 27 providers. States provided the initial audit targets based on their own analysis of Medicaid Management Information System (MMIS) data. MMIS are mechanized claims processing and information retrieval systems maintained by individual states, and generally reflect real-time payments and adjustments of detailed claims for each health care service provided. In some cases, states provided samples of their claims data with which to perform the audits, and in other cases states provided a universe of paid claims that the MIG's contractor analyzed to derive the sample. Twenty-seven test audits were conducted on hospitals, physicians, dentists, home health agencies, medical transport vendors, and durable medical equipment providers. In December 2007, while test audits were still under way, the MIG began hiring review and audit contractors to implement MSIS audits. The MSIS audits differed from the test audits in three ways. First, MSIS audit targets were selected based on the analysis of Medicaid Statistical Information System (MSIS) data. MSIS is a national data set collected and maintained by CMS, consisting of extracts from each state's MMIS, including eligibility files and paid claims files; it was intended for health care research and evaluation activities but not necessarily for auditing. As a subset of states' more detailed MMIS data files, MSIS data do not include elements that can assist in audits, such as the explanations of benefit codes and the names of providers and beneficiaries. In addition, MSIS data are not as timely because of late state submissions and the time it takes CMS's contractor to review and validate the data. MIG officials told us that they chose MSIS data because the data were readily available for all states and the state-based MMIS data would require a significant amount of additional work to standardize across states. (See table 1 below.) Second, MSIS audits were conducted over a wider geographic area; 44 states have had MSIS audits, compared with the small number of states selected for test audits. Third, MSIS audits use two types of contractors--review contractors to conduct data analysis and help identify audit leads, and audit contractors to conduct the audits. In the test audits, the states provided the initial audit leads. Review contractors. The MIG's two review contractors analyze MSIS data to help identify potential audit targets in an analytic process known as data mining. The MIG issues monthly assignments to these contractors, and generally allows the contractors 60 days to complete them.
For each assignment, the MIG specifies the state, the type of Medicaid claims data, the range of service dates, and the algorithm (i.e., a specific set of logical rules or criteria used to analyze the data). The work of the review contractor is summarized in an algorithm findings report, which contains lists of providers ranked by the amount of their potential overpayment. The January through June 2010 algorithm reports reviewed by the HHS-OIG identified 113,378 unique providers from about 1 million claims. The MIG's Division of Fraud Research & Detection oversees the technical work of the review contractors. A summary of the review contractor activities for MSIS audits is shown in figure 1. Audit contractors. The MIG's audit contractors conduct postpayment audits of Medicaid providers. Audit leads are selected by the MIG's Division of Field Operations, generally by looking at providers across one or more applicable algorithms to determine if they have been overpaid or have demonstrated egregious billing patterns. From the hiring of audit contractors in December 2007 through February 2012, the division assigned 1,550 MSIS audits to its contractors. During an audit, the contractor may request and review copies of provider records, interview providers and office personnel, or visit provider facilities. If an overpayment is identified, the contractor drafts an audit report, which is shared with the provider and the state. Ultimately, the state is responsible for collecting any overpayments in accordance with state law and must report this information to CMS. A summary of the audit contractor activities is shown in figure 2. In June 2011, CMS released its fiscal year 2010 report to Congress, which outlined a redesign of the NMAP with an approach that closely resembled the test audits. The report described the redesign as an effort to enhance the NMAP and assist states with their program integrity priorities. CMS refers to this new approach as collaborative audits. In these collaborative audits, the MIG and its contractor primarily used MMIS data and leveraged state resources and expertise to identify audit targets. In contrast, MSIS audits used separate review contractors and MSIS data to generate lists of providers with potential overpayments, and the MIG selected the specific providers to be audited. From June 2007 through February 2012, payments to the contractors for test, MSIS, and collaborative audits totaled $102 million. On an annual basis, these contractor payments account for more than 40 percent of all of the MIG's expenditures on Medicaid program integrity activities. Contractor payments rose from $1.3 million in fiscal year 2007 to $33.7 million in fiscal year 2011. (See fig. 3.) The total cost of the NMAP is likely greater than $102 million because that figure does not include expenditures on the salaries of MIG staff who support the operation of the program. The MSIS audits were less effective in identifying potential overpayments than the test and collaborative audits. The main reason for the difference in audit results was the use of MSIS data. According to MIG officials, they chose MSIS data because the data were readily available for all states, are collected and maintained by CMS, and are intended for health care research and evaluation activities. However, the MSIS audits were not well coordinated with states, and they duplicated and diverted resources from states' program integrity activities. Compared with test and collaborative audits, the return on MSIS audits was significantly lower.
As of February 2012, a small fraction of the 1,550 MSIS audits had identified $7.4 million in potential overpayments. In contrast, 26 test audits and 6 collaborative audits together identified $12.5 million in potential overpayments. (See fig. 4.) Appendix II provides details on the characteristics of MSIS audits that successfully identified overpayments. While the newer collaborative audits have not yet identified more in overpayments than MSIS audits, only 6 of the 112 collaborative audits have final audit reports (see app. III), suggesting that the total overpayment amounts identified through collaborative audits will continue to grow. In addition, the MSIS audits identified potential overpayments for much smaller amounts. Half of the MSIS audits were for potential overpayments of $16,000 or less, compared to a median of about $140,000 for test audits and $600,000 for collaborative audits. The use of MSIS data was the principal reason for the poor MSIS audit results, that is, the low amount of potential overpayments identified and the high proportion of unproductive audits. Over two-thirds (69 percent) of the 1,550 MSIS audits assigned to contractors through February 2012 were unproductive, that is, they were discontinued (625), had low or no findings (415), or were put on hold (37). (See fig. 5.) Our findings are consistent with an early assessment of the MIG's audit contractors, which cited MSIS data issues as the top reason that MSIS audits identified a lower amount of potential overpayments. State program integrity officials, the HHS-OIG, and the MIG's own audit contractors told the MIG that MSIS data would result in many false leads because the data do not contain critical audit elements, including provider identifiers; procedure, product, and service descriptions; billing information; and beneficiary and eligibility information. For example, the MIG assigned 81 MSIS audits in one state because providers appeared to be billing more than 24 hours of service in a single day. However, all of these audits were later discontinued because the underlying data were incomplete and thus misleading; the audited providers were actually large practices with multiple personnel, whose total billing in a single day legitimately exceeded 24 hours. One state official told us that when states met with MIG staff during the rollout of the Medicaid Integrity Program, the state officials emphasized that (1) MSIS data could not be used for data mining or auditing because they were 'stagnant,' i.e., MSIS does not capture any adjustments that are subsequently made to a claim, and (2) MMIS data were current and states would be willing to share their MMIS data with CMS. In their annual lessons-learned reports, the audit and review contractors told the MIG that the MSIS data were not timely or accurate, and they recommended that the MIG help them obtain access to state MMIS data. Nevertheless, the MIG continued to assign MSIS-based audits to its contractors; 78 percent of MSIS audits (1,208) were assigned after the August 2009 HHS-OIG report. MIG officials told us that they chose MSIS data because the data were readily available for all states, are collected and maintained by CMS, and are intended for health care research and evaluation activities. However, when considering the use of MSIS data, officials said that they were aware that the MSIS data had limitations for auditing and could produce many false leads.
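To make the "more than 24 hours in a single day" example above concrete, the sketch below shows the general shape of such a screening rule. It is a hypothetical illustration in Python using pandas; the column names, values, and dollar figures are invented for this sketch and are not drawn from MSIS, any state MMIS, or the review contractors' actual algorithms.

```python
import pandas as pd

# Illustrative paid-claims extract; the columns and values are assumptions
# for this sketch, not the layout of MSIS or any state MMIS.
claims = pd.DataFrame({
    "billing_provider": ["A", "A", "A", "B"],
    "service_date":     ["2009-03-02", "2009-03-02", "2009-03-02", "2009-03-02"],
    "hours_billed":     [10, 10, 9, 6],
    "paid_amount":      [800.0, 800.0, 720.0, 480.0],
})

# Screening rule of the kind described above: flag any provider whose
# billed hours exceed 24 in a single day, and rank the flags by dollars
# (roughly what an algorithm findings report lists).
daily = (claims.groupby(["billing_provider", "service_date"], as_index=False)
               .agg(total_hours=("hours_billed", "sum"),
                    total_paid=("paid_amount", "sum")))
flags = daily[daily["total_hours"] > 24].sort_values("total_paid", ascending=False)
print(flags)

# Provider "A" is flagged here; but if "A" is a group practice whose
# clinicians' claims are all rolled up under one billing identifier --
# the situation the 81 discontinued audits ran into -- the flag is a
# false positive.  Data identifying the individual rendering provider
# would avoid it.
```

The final comment is the point the audits ran into: when the data identify only a billing entity rather than the clinician who rendered each service, a legitimate group practice can look like one provider billing an impossible number of hours.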
MIG officials also told us that collecting states' MMIS data would have been burdensome for states and would have resulted in additional work for the review contractors, because the contractors would have needed to do a significant amount of work to standardize the MMIS data to address discrepancies among the states' data sets. However, officials in 13 of the 16 states we contacted volunteered that they were willing to provide the MIG with MMIS data if asked to do so. In addition, the review contractors have had to do some work to standardize the state files within the MSIS maintained by CMS. The MIG did not effectively coordinate MSIS audits with states, and as a result it duplicated state program integrity activities. Officials from several states we interviewed noted that some of the algorithms used by the review contractor were identical to or less sophisticated than the algorithms they used to identify audit leads. An official in one state told us that even after the state informed the contractor that its work would be duplicative, the review contractor ran the algorithm anyway. Officials in another state told us that the MIG was unresponsive to the state's assertion that it already had a unit dedicated to reviewing a specific category of claims, and that the MIG was still pursuing audits of this provider type. State officials also cited general coordination challenges, including difficulty communicating with contractors. MIG officials acknowledged that poor communications resulted in the pursuit of many false leads that had not been adequately vetted by the states. In addition, representatives of a review contractor we interviewed told us that states did not always respond to requests to validate overpayments in the algorithm samples provided and the MIG may not have been aware of the lack of a state response when making audit assignments. State officials we interviewed told us that the review contractors' lack of understanding of state policy also contributed to the identification of false leads, even though (1) the MIG required its contractors to become familiar with each state's Medicaid program, and (2) the MIG reviewed state policies as a quality assurance step prior to assigning leads to its audit contractors. For example, one state official described how the MIG and its review and audit contractors had mistakenly identified overpayments for federally qualified health centers because they assumed that centers should receive reduced payments for an established patient on subsequent visits. In fact, centers are paid on an encounter basis, which uses the same payment rate for the first and follow-up visits. Officials in seven of the states we spoke with described the resources involved in assisting the MIG and its contractors. For example, states told us that they had assigned staff to: (1) review the algorithms used by review contractors to generate potential audit leads; (2) review lists of audit leads created by the MIG; and (3) provide information and training on state-level policies to audit contractors. One state official described how the state had clinical staff rerun algorithms using the state's data system to see if the audit targets chosen by the MIG had merit. When state staff found that the MIG was pursuing a false lead, the state had to provide the MIG and its contractors with detailed explanations of why the suspect claims complied with state policies. 
While the state officials we spoke with did not estimate the cost of their involvement in MSIS audits, officials in some states said that participation in MSIS audits diverted staff from their regular duties. MIG officials told us they were sensitive to state burden and had attempted to minimize it. MIG's redesigned NMAP focuses on collaborative audits, which may enhance state Medicaid program integrity activities, and it also intends to continue using MSIS data in some audits. As part of its NMAP redesign, the MIG has assigned new activities to its review contractors, but it is too early to assess their benefit. CMS has not reported to Congress key details about the changes it is making to the NMAP, including the rationale for the redesign of the program, but it plans to discuss these changes in its upcoming 2012 strategic plan. As part of its redesign, the MIG launched collaborative audits with a small number of states in early 2010 to enhance the MIG's program and assist states with their own program integrity priorities. The MIG did not have a single approach for collaborative audits. For example, one state told us that the MIG's audit contractor suggested that together they discuss conducting a collaborative audit with the MIG while in another state a collaborative audit was initiated by the MIG, with the audit contractor's role limited to assistance during the audit (rather than leading it). Generally, collaborative audits allow states to augment their own program integrity audit capacity by leveraging MIG's and its contractors' resources. For example, officials from six of the eight states we interviewed told us the services targeted for collaborative audits were those that the state did not have sufficient resources to effectively audit on its own. In some cases, the MIG's contractor staff conducted additional audits. In others, contractors were used to assess the medical necessity of claims when the states' programs needed additional clinical expertise to make a determination. Officials from most of the states we interviewed agreed that the investment in collaborative audits was worthwhile but some told us that collaborative audits created some additional work for states. For example, two state programs reported that their staff was involved in training the MIG's contractor staff. In one of these states, state program staff dedicated a full week to train the MIG's audit contractor so that the contractor's work would be in accordance with state policies. Another state program official reported that staff had to review all audits and overpayment recovery work, leading to a "bottleneck" in the state's own program integrity activities. Officials in one state suggested that the collaborative audits could be improved if the MIG formalized a process for communicating and resolving disagreements related to audit reports, and minimized the changing of contractors in order to reduce the burden on states. Most states were in favor of expanding the number of collaborative audits. According to the MIG, the agency plans to expand its use of collaborative audits to as many states as are willing to participate. In fact, officials indicated that they are discussing collaborative audits with an additional 12 states. MIG officials noted that they do not foresee the collaborative audits completely replacing audits based on MSIS data. 
According to MIG officials, NMAP audits using MSIS data might be appropriate in certain situations, including audits of state-owned and operated facilities and states that are not willing to collaborate, as part of the MIG's oversight responsibilities. The MIG recognizes that MSIS-based audits are hampered by deficiencies in the data, and noted that CMS has initiatives under way to address these deficiencies through the Medicaid and CHIP Business Information and Solutions Council (MACBIS). MACBIS is an internal CMS governance body responsible for data planning, ongoing projects, and information product development. According to MIG officials, MACBIS projects include efforts to reduce the time from state submission of MSIS data to the availability of these data; automation of program data; improvements in encounter data reporting; and automation, standardization, and other improvements in MSIS data submissions. One MACBIS project is known as Transformed MSIS (T-MSIS), which aims to add 1,000 additional variables to MSIS for monitoring program integrity and include more regular MMIS updates. MIG officials told us that CMS is currently engaged in a 10-state pilot to develop the data set for T-MSIS, and anticipates that T-MSIS will be operational in 2014. As part of its NMAP redesign, the MIG has assigned new activities to the review contractors. Because these activities are new, it is too early to assess their benefit. Although the review contractors were not involved in early collaborative audits, the MIG expects that they will be involved in future collaborative audits based on these new activities. In redesigning the NMAP, the MIG tasked its review contractors in November 2011 with using MSIS data to compare state expenditures for a specific service to the national average expenditure for that service to identify states with abnormally high expenditures. Once a state (or states) with high expenditures is identified, then discussions are held with the states about their knowledge of these aberrations and recovery activities related to the identified expenditures. According to MIG officials, such cross-state analyses were recently initiated and thus have not yet identified any potential audit targets. The review contractor also indicated that it would continue to explore other analytic approaches to identify causes of aberrant state expenditures. Additionally, as part of its redesign of the program's audits, the MIG instructed its review contractors in November 2011 to reexamine successful algorithms from previously issued final algorithm reports. According to the MIG, the purpose of this effort is to identify the factors that could better predict promising audit targets and thereby improve audit target selection in the future. Although some MSIS audits identified potential overpayments, the value of developing a process using MSIS data to improve audit target selection in the future is questionable.According to the MIG, MSIS audits are continuing but on a more limited scale and with closer collaboration between states and the MIG's contractors. In its 2010 annual report to Congress on the Medicaid Integrity Program, CMS announced that it was redesigning the NMAP in an effort to enhance MIG programs and assist states with their program integrity priorities, but it did not provide key details regarding the changes. For example, the report did not mention that the MSIS audits had a poor return on investment, the number of unproductive audits, or the reasons for the unproductive audits. 
Moreover, since issuing its 2010 annual report, CMS has assigned new tasks to its review contractors such as reexamining old final algorithm reports to improve provider target selection and new cross-state analyses using MSIS data. But CMS has not yet articulated for Congress how these activities complement the redesign or how such activities will be used to identify overpayments. The MIG is preparing a new strategic plan--its Comprehensive Medicaid Integrity Plan covering Fiscal Years 2013 through 2017--which it plans to submit to Congress in the summer of 2012. According to MIG officials, the new strategic plan will generally describe shortcomings in the NMAP's original design and how the redesign will address those shortcomings. However, MIG officials told us that they do not plan to discuss the effectiveness of the use of funds for MSIS audits, or explain how the MIG will monitor and evaluate the redesign. In its fiscal year 2013 HHS budget justification for CMS, the department indicated that in the future CMS would not report separately on the NMAP return on investment. HHS explained that it had become apparent that the ability to identify overpayments is not, and should not be, limited to the activities of the Medicaid integrity contractors. Rather, HHS said it is considering new measures that better reflect the resources invested through the Medicaid Integrity Program. Federal internal control standards provide that effective program plans are to clearly define needs, tie activities to organizational objectives and goals, and include a framework for evaluation and monitoring. Based on these standards, the poor performance of the MSIS audits should have triggered an evaluation of the program, particularly given the DRA requirement for CMS to report annually to Congress on the effectiveness of the use of funds appropriated for the Medicaid Integrity Program. In approximately 5 years of implementation, the MIG has spent at least $102 million on contractors for an audit program that has identified less than $20 million in potential overpayments. Moreover, almost two-thirds of these potential overpayments were identified in a small number of test and collaborative audits that used different data and took a different approach to identifying audit targets than the MSIS audits, which comprised the vast majority of the program's audits. The poor performance of the MSIS audits can largely be traced to the MIG's decision to use MSIS data to generate audit leads, although evidence showed that (1) these data were inappropriate for auditing, and (2) alternative data sources were both available and effective in identifying potential overpayments. Ineffective coordination with states and a limited understanding of state Medicaid policies on the part of the MIG's contractors also contributed to the poor results of the MSIS audits. Although the MIG recognizes that the MSIS audits have performed far below expectations, it has not quantified how expenditures to date have compared with identified recoveries. Currently, the MIG is experimenting with a promising approach in which the states identify appropriate targets, provide the more complete MMIS data, and actively participate in the audits. This collaborative audit approach has identified $4.4 million in potential overpayments and is largely supported by the states we spoke with, even though they may have to invest their own resources in these audits. 
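To put the program's costs and returns side by side, the following rough tally recombines only the totals reported above, as of February 2012; the overpayments-identified-per-contractor-dollar figure is an illustration drawn from those totals, not a metric that GAO or CMS reports:

# Rough comparison of NMAP contractor spending with the potential overpayments identified.
contractor_payments = 102.0          # $ millions paid for test, MSIS, and collaborative audits, June 2007-Feb. 2012
msis_overpayments = 7.4              # $ millions identified by the 1,550 MSIS audits
test_and_collab_overpayments = 12.5  # $ millions identified by the 26 test and 6 collaborative audits

total_identified = msis_overpayments + test_and_collab_overpayments
print(f"total potential overpayments identified: ${total_identified:.1f} million")                        # $19.9 million, i.e., less than $20 million
print(f"share from test/collaborative audits:    {test_and_collab_overpayments / total_identified:.0%}")  # ~63%, almost two-thirds
print(f"identified per contractor dollar spent:  ${total_identified / contractor_payments:.2f}")          # ~$0.20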
However, the MIG has not articulated how its redesign will address flaws in the NMAP, and it also plans to continue using MSIS data, despite its past experience with these data, for cross-state analysis and for states that are not willing to participate in collaborative audits. At this time, the MIG is preparing a new comprehensive plan for Congress that outlines the components of the NMAP redesign. The details provided in such a plan will be critical to evaluating the effectiveness of the redesign and the agency's long-term plan to improve the data necessary to conduct successful audits. Transparent communications and a well-articulated strategy to monitor and continuously improve NMAP are essential components of any plan seeking to demonstrate that the MIG can effectively manage the program. To effectively redirect the NMAP toward more productive outcomes and to improve reporting under the DRA, the CMS Administrator should ensure that (1) the MIG's planned update of its comprehensive plan quantifies the NMAP's expenditures and audit outcomes, addresses any program improvements, and outlines plans for effectively monitoring the NMAP, including how to validate and use any lessons learned or feedback from the states to continuously improve the audits; (2) future annual reports to Congress clearly address the strengths and weaknesses of the audit program and its effectiveness; and (3) the MIG's use of NMAP contractors supports and expands states' own program integrity audits, engages additional states that are willing to participate in collaborative audits, and explicitly considers state burden when conducting audit activities. We provided a draft of this report to HHS for comment. In its written comments, HHS stated that we had not appropriately recognized the progress CMS has made in evaluating and improving the Medicaid Integrity Program, which included the agency's redesign of NMAP. Collaborative audits were the core of that redesign. HHS described CMS's redesign approach as a phased one in which not all elements had been finalized when the agency announced the redesign in its June 2011 annual report to Congress (covering fiscal year 2010). HHS also commented that we did not fully describe the reasons for CMS's use of MSIS data. HHS partially concurred with our first recommendation and fully concurred with the other two recommendations. HHS's comments are reproduced in appendix IV. Although we characterized collaborative audits as a promising new approach, HHS commented that we (1) did not acknowledge that CMS had presented its rationale for the NMAP redesign in the agency's June 2011 annual report to the Congress, and (2) inappropriately criticized CMS for not including other redesign details in its report, which HHS said had not yet been finalized. We continue to believe that a full articulation of the redesign should include transparent reporting of the results of the MSIS audits. However, we agree that the June 2011 report, which was released 18 months after the initiation of collaborative audits, described their advantages--use of better data, augmenting state resources, and providing analytic support for states lacking that capability. Regarding the use of MSIS data, HHS stated that we do not fully describe CMS's reason for its use or acknowledge that CMS sought alternative data sources to supplement or replace MSIS data. 
We disagree because our report provides CMS's reasons for using MSIS data, acknowledges CMS's awareness of the MSIS data limitations, and discusses its Transformed MSIS project to improve the quality of MSIS data. In addition, we pointed out that officials in 13 of the 16 states we contacted volunteered that they were willing to provide CMS with their own more complete and timely MMIS data. We agree with HHS's comment that not all of CMS's plans for the redesign may have been complete at the time the June 2011 annual report to Congress was being finalized and therefore could not have been included in that report. We have revised this report to acknowledge that some of the elements of the redesign may not have been initiated until after the June 2011 report was finalized. HHS agreed with two of three elements related to our first recommendation regarding CMS's planned update of its Comprehensive Medicaid Integrity Plan covering fiscal years 2013 to 2017. HHS agreed that the planned update should (1) address any NMAP improvements proposed by CMS, and (2) outline CMS's plans for effectively monitoring the NMAP. HHS commented that CMS considers transparency of the program's performance to be a top priority. However, HHS did not concur that the update should quantify NMAP's expenditures and audit outcomes; CMS considers such information to be more appropriately presented in the annual reports to Congress, which already include dollar figures on annual expenditures for NMAP and overpayments identified in each fiscal year. CMS's annual reports to Congress have provided a snapshot of results that did not differentiate between the effectiveness of the various audit approaches used. For example, in its annual report covering fiscal year 2010, CMS reported that 947 audits were underway in 45 states and that its contractors had identified cumulative potential overpayments of about $10.7 million. Based on our analysis of CMS's data, MSIS audits had only identified overpayments of $2.4 million as of September 30, 2010. The remaining $8.4 million in overpayments were identified during the test audit phase, in which states identified the audit targets and supplied their own MMIS data. We continue to believe that CMS should more fully report on NMAP expenditures and audit outcomes in its annual reports and provide an overall assessment of NMAP in its next comprehensive plan. HHS concurred with our recommendation that CMS should clearly address NMAP's strengths, weaknesses, and effectiveness in the agency's annual reports to Congress. HHS noted that in CMS's December 7, 2011, congressional testimony the agency had reported its awareness of the limitations of MSIS data and outlined steps to improve contractors' access to better quality Medicaid data. HHS also concurred with our recommendation that CMS's use of NMAP contractors should support and expand states' own audit activities, engage other willing states, and explicitly consider state burden when conducting collaborative audits. HHS reported that since February 2012 CMS had increased the number of collaborative audits by 25--from 112 audits in 11 states to 137 in 15 states. Based on HHS's comments, we made technical changes as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributions to this report are listed in appendix V. The 59 MSIS audits that successfully identified potential overpayments were conducted in 16 states, and most of these audits involved hospitals (30 providers) and pharmacies (17 providers). These provider types also had the highest potential overpayments--over $6 million for hospitals and $600,000 for pharmacies. Arkansas and Florida accounted for over half of the audits that identified potential overpayments, but the most substantial overpayments were in Delaware ($4.6 million) and the District of Columbia ($1.7 million). (See tables 3 and 4.) Carolyn L. Yocom at (202) 512-7114 or [email protected]. In addition to the contact named above, key contributors to this report were: Walter Ochinko, Assistant Director; Sean DeBlieck; Leslie V. Gordon; Drew Long; and Jasleen Modi. National Medicaid Audit Program: CMS Should Improve Reporting and Focus on Audit Collaboration with States. GAO-12-814T. Washington, D.C.: June 14, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Medicaid: Federal Oversight of Payments and Program Integrity Needs Improvement. GAO-12-674T. Washington, D.C.: April 25, 2012. Medicaid Program Integrity: Expanded Federal Role Presents Challenges to and Opportunities for Assisting States. GAO-12-288T. Washington, D.C.: December 7, 2011. Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Status of Fiscal Year 2010 Federal Improper Payments Reporting. GAO-11-443R. Washington, D.C.: March 25, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Medicare: Program Remains at High Risk Because of Continuing Management Challenges. GAO-11-430T. Washington, D.C.: March 2, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Medicare Recovery Audit Contracting: Weaknesses Remain in Addressing Vulnerabilities to Improper Payments, Although Improvements Made to Contractor Oversight. GAO-10-143. Washington, D.C.: March 31, 2010. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-1004T. 
Washington, D.C.: September 30, 2009. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-957. Washington, D.C.: September 9, 2009. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-239T. Washington, D.C.: November 14, 2007. Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-17. Washington, D.C.: November 14, 2007. Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006. Medicaid Integrity: Implementation of New Program Provides Opportunities for Federal Leadership to Combat Fraud, Waste, and Abuse. GAO-06-578T. Washington, D.C.: March 28, 2006.
Medicaid, the joint federal-state health care financing program for certain low-income individuals, has the second-highest estimated improper payments of any federal program. The Deficit Reduction Act of 2005 expanded the federal role in Medicaid program integrity, and the Centers for Medicare & Medicaid Services (CMS), the federal agency that oversees Medicaid, established the MIG, which designed the NMAP. Since the NMAP's inception, the MIG has used three different audit approaches: test, MSIS, and collaborative. This report focuses on (1) the effectiveness of the MIG's implementation of NMAP, and (2) the MIG's efforts to redesign the NMAP. To do this work, GAO analyzed MIG data, reviewed its contractors' reports, and interviewed MIG officials, contractor representatives, and state program integrity officials. Compared to the initial test audits and the more recent collaborative audits, the majority of the Medicaid Integrity Group's (MIG) audits conducted under the National Medicaid Audit Program (NMAP) were less effective because they used Medicaid Statistical Information System (MSIS) data. MSIS is an extract of states' claims data and is missing key elements, such as provider names, that are necessary for auditing. Since fiscal year 2008, 4 percent of the 1,550 MSIS audits identified $7.4 million in potential overpayments, 69 percent did not identify overpayments, and the remaining 27 percent were ongoing. In contrast, 26 test audits and 6 collaborative audits--which used states' more robust Medicaid Management Information System (MMIS) claims data and allowed states to select the audit targets--together identified more than $12 million in potential overpayments. Furthermore, the median amount of the potential overpayment for MSIS audits was relatively small compared to test and collaborative audits. The MIG reported that it is redesigning the NMAP, but has not provided Congress with key details about the changes it is making to the program, including the rationale for the change to collaborative audits, new analytical roles for its contractors, and its plans for addressing problems with the MSIS audits. Early results showed that this collaborative approach may enhance state program integrity activities by allowing states to leverage the MIG's resources to augment their own program integrity capacity. However, the lack of a published plan detailing how the MIG will monitor and evaluate NMAP raises concerns about the MIG's ability to effectively manage the program. Given that NMAP has accounted for more than 40 percent of MIG expenditures, transparent communications and a strategy to monitor and continuously improve NMAP are essential components of any plan seeking to demonstrate the MIG's effective stewardship of the resources provided by Congress. GAO recommends that the CMS Administrator ensure that the MIG's (1) update of its comprehensive plan provide key details about the NMAP, including its expenditures and audit outcomes, program improvements, and plans for effectively monitoring the program; (2) future annual reports to Congress clearly address the strengths and weaknesses of the audit program and its effectiveness; and (3) use of NMAP contractors supports and expands states' own program integrity efforts through collaborative audits. HHS partially concurred with GAO's first recommendation commenting that CMS's annual report to Congress was a more appropriate vehicle for reporting NMAP results than its comprehensive plan. HHS concurred with the other two recommendations.
Although our high-risk designation covers only DOD's program, our reports have also documented clearance-related problems affecting other agencies. For example, our October 2007 report on state and local information fusion centers cited two clearance-related challenges: (1) the length of time needed for state and local officials to receive clearances from the Federal Bureau of Investigation (FBI) and the Department of Homeland Security (DHS) and (2) the reluctance of some federal agencies--particularly DHS and FBI--to accept clearances issued by other agencies (i.e., clearance reciprocity). Similarly, our April 2007 testimony on maritime security and selected aspects of the Security and Accountability for Every Port Act (SAFE Port Act) identified the challenge of obtaining clearances so that port security stakeholders could share information through area committees or interagency operational centers. The SAFE Port Act includes a specific provision requiring the Secretary of Homeland Security to sponsor and expedite individuals participating in interagency operational centers in gaining or maintaining their security clearances. Our reports have offered findings and recommendations regarding current impediments, and they offer key factors to consider in future reforms. For example, as the interagency security clearance process reform team develops a new governmentwide end-to-end clearance system, this reform effort provides an opportune time to consider factors for evaluating intermediate steps and the final system in order to optimize efficiency and effectiveness. The Director of National Intelligence's July 25, 2007, memorandum provided the terms of reference for the security clearance process reform team and noted that a future Phase IV would be used to perform and evaluate demonstrations and to finalize the acquisition strategy. In designing a new personnel security clearance system, the Government Performance and Results Act of 1993 (GPRA) may be a useful resource for the team designing the system and the congressional committees overseeing the design and implementation. GPRA provides a framework for strategic performance planning and reporting intended to improve federal program effectiveness and hold agencies accountable for achieving results. Agencies that effectively implement GPRA's results-oriented framework clearly establish performance goals for which they will be held accountable, measure progress towards those goals, determine strategies and resources to effectively accomplish the goals, use performance information to make the programmatic decisions necessary to improve performance, and formally communicate results in performance reports. Our reports have also identified a number of directly relevant factors, such as those found in our November 2005 testimony that evaluated an earlier governmentwide plan for improving the personnel security clearance process. I will address the need for consideration of four key factors in my testimony: (1) a strong requirements-determination process, (2) quality emphasis in all clearance processes, (3) additional metrics to provide a fuller picture of clearance processes, and (4) long-term funding requirements of security clearance reform. The interagency security clearance process reform team established in July 2007 might want to address whether the numbers and levels of clearances are appropriate since this initial stage in the clearance process can affect workloads and costs in other clearance processes. 
For instance, the team may want to examine existing policies and practices to see if they need to be updated or otherwise modified. We are not suggesting that the numbers and levels of clearances are or are not appropriate--only that any unnecessary requirements in this initial phase use government resources that can be utilized for other purposes such as building additional quality into other clearance processes or decreasing delays in clearance processing. Figure 1 highlights the fact that the clearance process begins with establishing whether an incumbent's position requires a clearance, and if so, at what level. The numbers of requests for initial and renewal clearances and the levels of such clearance requests (phase 2 in fig. 1) are two ways to look at outcomes of requirements setting in the clearance process. In our prior work, DOD personnel, investigations contractors, and industry officials told us that the large number of requests for investigations could be attributed to many factors. For example, they ascribed the large number of requests to the heightened security concerns that resulted from the September 11, 2001, terrorist attacks. They also attributed the large number of investigations to an increase in the operations and deployments of military personnel and to the increasingly sensitive technology that military personnel, government employees, and contractors come in contact with as part of their jobs. While having a large number of cleared personnel can give the military services, agencies, and industry a great deal of flexibility when assigning personnel, the investigative and adjudicative workloads that are required to provide the clearances and flexibility further tax a clearance process that already experiences delays in determining clearance eligibility. A change in the level of clearances being requested also increases the investigative and adjudicative workloads. For example, in our February 2004 report on impediments to eliminating clearance backlogs, we found that a growing percentage of all DOD requests for clearances for industry personnel was at the top secret level: 17 percent of those requests were at the top secret level in 1995 but 27 percent were at the top secret level in 2003. This increase of 10 percentage points in the proportion of investigations at the top secret level is important because top secret clearances must be renewed twice as often as secret clearances (i.e., every 5 years versus every 10 years). In August 2006, OPM estimated that approximately 60 total staff hours are needed for each investigation for an initial top secret clearance and 6 total staff hours are needed for the investigation to support a secret or confidential clearance. The doubling of the frequency along with the increased effort to investigate and adjudicate each top secret reinvestigation adds costs and workload for the government. Cost. For fiscal year 2008, OPM's standard billing rate is $3,711 for an investigation for an initial top secret clearance; $2,509 for an investigation to renew a top secret clearance, and $202 for an investigation for a secret clearance. The cost of getting and maintaining a top secret clearance for 10 years is approximately 30 times greater than the cost of getting and maintaining a secret clearance for the same period. 
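These cost and renewal figures can be restated as a small calculation. The sketch below simply recombines the fiscal year 2008 billing rates and the August 2006 staff-hour estimates quoted above, together with DOD's 2007 estimate, discussed below, that reviewing a top secret investigative report takes about twice as long as reviewing a secret one; it assumes one top secret reinvestigation at year 5, no reinvestigation for a secret clearance within 10 years, and no discounting, and it is an illustration rather than OPM's or GAO's cost model:

# Back-of-the-envelope check of the 10-year clearance cost and workload comparisons.
TOP_SECRET_INITIAL = 3_711   # OPM FY2008 billing rate, initial top secret investigation
TOP_SECRET_RENEWAL = 2_509   # OPM FY2008 billing rate, top secret reinvestigation (due at year 5)
SECRET_INITIAL = 202         # OPM FY2008 billing rate, secret/confidential investigation (good for 10 years)

ten_year_top_secret = TOP_SECRET_INITIAL + TOP_SECRET_RENEWAL
ten_year_secret = SECRET_INITIAL
print(f"10-year top secret cost: ${ten_year_top_secret:,}")   # $6,220
print(f"10-year secret cost:     ${ten_year_secret:,}")       # $202
print(f"cost ratio:              {ten_year_top_secret / ten_year_secret:.0f}x")   # ~31, i.e., roughly 30 times greater

hours_ratio = 60 / 6      # about 10 times as many investigative staff hours per top secret investigation
frequency_ratio = 10 / 5  # top secret renewed every 5 years versus every 10 years for secret
print(f"investigative workload increase: ~{hours_ratio * frequency_ratio:.0f}-fold")   # ~20-fold
print(f"adjudicative workload increase:  ~{2 * frequency_ratio:.0f}-fold")             # ~4-fold (reviews take ~2x as long, done 2x as often)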
For example, an individual getting a top secret clearance for the first time and keeping the clearance for 10 years would cost the government a total of $6,220 in current year dollars ($3,711 for the initial investigation and $2,509 for the reinvestigation after the first 5 years). In contrast, an individual receiving a secret clearance and maintaining it for 10 years would result in a total cost to the government of $202 ($202 for the initial clearance that is good for 10 years). Time/Workload. The workload is also affected by the scope of coverage in the various types of investigations. Much of the information for a secret clearance is gathered through electronic files. The investigation for a top secret clearance, on the other hand, requires the information needed for the secret clearance as well as data gathered through time-consuming tasks such as interviews with the subject of the investigation request, references in the workplace, and neighbors. Since (1) the average investigative report for a top secret clearance takes about 10 times as many investigative staff hours as the average investigative report for a secret clearance and (2) the top secret clearance must be renewed twice as often as the secret, the investigative workload increases about 20-fold. Additionally, the adjudicative workload increases about 4-fold: in 2007, DOD officials estimated that it took about twice as long to review an investigative report for a top secret clearance, and that review would need to be done twice as often as for a secret clearance. Unless the new system developed by the interagency security clearance process reform team includes a sound requirements process, workload and costs may be higher than necessary. Since the late 1990s, GAO has emphasized a need to build more quality and quality monitoring into clearance processes to achieve positive goals such as promoting greater reciprocity and maximizing the likelihood that individuals who are security risks will be scrutinized more closely. In our November 2005 testimony on the earlier governmentwide plan to improve the clearance process, we noted that the plan devoted little attention to monitoring and improving the quality of the personnel security clearance process, and that limited attention and reporting about quality continue. When OMB issued its February 2007 Report of the Security Clearance Oversight Group Consistent with Title III of the Intelligence Reform and Terrorism Prevention Act of 2004, it documented quality with a single metric. Specifically, it stated that OPM has developed additional internal quality control processes to ensure that the quality of completed investigations continues to meet the national investigative standards. OMB added that, overall, less than 1 percent of all completed investigations are returned to OPM from the adjudicating agencies for quality deficiencies. When OMB issued its February 2008 Report of the Security Clearance Oversight Group, it did not discuss the percentage of completed investigations that are returned to OPM or the development or existence of any other metric measuring the level of quality in security clearance processes or products. As part of our September 2006 report, we examined a different aspect of quality--the completeness of documentation in investigative and adjudicative reports. We found that OPM provided incomplete investigative reports to DOD adjudicators, which the adjudicators then used to determine top secret clearance eligibility. 
Almost all (47 of 50) of the sampled investigative reports we reviewed were incomplete based on requirements in the federal investigative standards. In addition, DOD adjudicators granted clearance eligibility without requesting additional information for any of the incomplete investigative reports and did not document that they considered some adjudicative guidelines when adverse information was present in some reports. GAO has long reported that it is problematic to equate the quality of investigations with the percentage of investigations that are returned by requesting agencies due to incomplete case files. For example, in October 1999 and again in our November 2005 evaluation of the governmentwide plan, we stated that the number of investigations returned for rework is not by itself a valid indicator of quality because adjudication officials said they were reluctant to return incomplete investigations in anticipation of further delays. We additionally suggested that regardless of whether this metric continues to be used, the government might want to consider adding other indicators of the quality of investigations, such as the number of counterintelligence leads generated from security clearance investigations and forwarded to relevant units. Further, our September 2006 report recommended that OMB's Deputy Director of Management require OPM and DOD to (1) submit their procedures for eliminating the deficiencies that we identified in their investigative and adjudicative documentation and (2) develop and report metrics on completeness and other measures of quality that will address the effectiveness of the new procedures. We believe that our recommendation still has merit, but the previously cited passage from the February 2007 OMB report does not describe the new procedures or provide statistics for the recommended new quality measures and the 2008 OMB report is silent on quality measures. As we noted in September 2006, the government cannot afford to achieve its timeliness goal by providing investigative and adjudicative reports that are incomplete in key areas required by federal investigative standards and adjudicative guidelines. Incomplete investigations and adjudications undermine the government's efforts to move toward greater clearance reciprocity. An interagency working group, the Security Clearance Oversight Steering Committee, noted that agencies are reluctant to be accountable for poor quality investigations and/or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies need to have confidence in the quality of the clearance process. Without full documentation of investigative actions, information obtained, and adjudicative decisions, agencies could continue to require duplicative investigations and adjudications. Earlier, we stated that reciprocity concerns continue to exist, citing FBI and DHS reluctance to accept clearances issued by other agencies when providing information to personnel in fusion centers. Much of the recent quantitative information provided on clearances has dealt with how much time it takes for the end-to-end processing of clearances (and related measures such as the numbers of various types of investigative and adjudicative reports generated); however, there is less quantitative information on other aspects of the clearance process. 
In our November 2005 testimony, we noted that the earlier government plan to improve the clearance process provided many metrics to monitor the timeliness of clearances governmentwide, but that plan detailed few of the other elements that a comprehensive strategic plan might contain. A similar emphasis on timeliness appears to be emerging for the future governmentwide clearance process. In the Director of National Intelligence's 500 Day Plan for Integration and Collaboration issued on October 10, 2007, the core initiative to modernize the security clearance process had only one type of metric listed under the heading about how success will be gauged. Specifically, the plan calls for measuring whether "performance of IC agency personnel security programs meet or exceed IRTPA guidelines for clearance case processing times." While the February 2007 and 2008 OMB reports to Congress contain statistics and other information in addition to timeliness metrics (e.g., use of information technology and reciprocity-related procedures) and the joint team developing the new clearance process may be considering a wider range of metrics than timeliness only, an underlying factor in the emphasis on timeliness is IRTPA. Among other things, IRTPA established specific timeliness guidelines to be phased in over 5 years. The Act also states that, in the initial period which ends in 2009, each authorized adjudicative agency shall make a determination on at least 80 percent of all applications for personnel security clearance within an average of 120 days after the receipt of the application for a security clearance by an authorized investigative agency. The 120-day average period shall include a period of not longer than 90 days to complete the investigative phase of the clearance review and a period of not longer than 30 days to complete the adjudicative phase of the clearance review. Moreover, IRTPA also includes a requirement for a designated agency (currently OMB) to provide information on among other things the timeliness in annual reports through 2011, as OMB did in February 2008. Prior GAO reports as well as inspector general reports identify a wide variety of methods and metrics that program evaluators have used to examine clearance processes and programs. For example our 1999 report on security clearance investigations used multiple methods to examine numerous issues that included: documentation missing from investigative reports; the training of investigators (courses, course content, and number of trainees); investigators' perceptions about the process; customer perceptions about the investigations; and internal controls to protect against fraud, waste, abuse, and mismanagement. Including these and other types of metrics in regular monitoring of clearance processes could add value in current and future reform efforts as well as supply better information for greater congressional oversight. The joint Security Clearance Process Reform team may also want to consider providing Congress with the long-term funding requirements to implement changes to security clearance processes enabling more informed congressional oversight. In a recent report to Congress, DOD provided funding requirements information that described its immediate needs for its industry personnel security program, but it did not include information about the program's long-term funding needs. 
Specifically, DOD's August 2007 congressionally mandated report on clearances for industry personnel provided less than 2 years of data on funding requirements. In its report, DOD identified its immediate needs by submitting an annualized projected cost of $178.2 million for fiscal year 2007 and a projected funding need of approximately $300 million for fiscal year 2008. However, the report did not include information on (1) the funding requirements for fiscal year 2009 and beyond even though the survey used to develop the funding requirements asked contractors about their clearance needs through 2010 and (2) the tens of millions of dollars that the Defense Security Service Director testified before Congress in May 2007 were necessary to maintain the infrastructure supporting the industry personnel security clearance program. As noted in our February 2008 report, the inclusion of less than 2 future years of budgeting information in the DOD report limits Congress's ability to carry out its oversight and appropriations functions pertaining to industry personnel security clearances. Without more information on DOD's longer-term funding requirements for industry personnel security clearances, Congress lacks the visibility it needs to fully assess appropriations requirements. In addition, the long-term funding requirements to implement changes to security clearance processes are also needed to enable the executive branch to compare and prioritize alternative proposals for reforming the clearance processes. As the joint Security Clearance Process Reform team considers changes to the current clearance processes, it may also want to consider ensuring that Congress is provided with the long-term funding requirements necessary to implement any such reforms. We were encouraged when OMB undertook the development of an earlier governmentwide plan for improving the personnel security clearance process and have documented in our prior reports both DOD and governmentwide progress in addressing clearance-related problems. Similarly, the current joint effort to develop a new governmentwide end-to-end security clearance system represents a positive step to address past impediments and manage security clearance reform efforts. Still, much remains to be done before a new system can be designed and implemented. GAO's experience in evaluating DOD's and governmentwide clearance plans and programs as well as its experience monitoring large-scale, complex acquisition programs could help Congress in its oversight, insight, and foresight regarding security clearance reform efforts. Madam Chairwoman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Jack E. Edwards, Acting Director; James P. Klein, Joanne Landesman, Charles Perdue, Karen D. Thornton, and Stephen K. Woods. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007. 
Defense Business Transformation: A Full-time Chief Management Officer with a Term Appointment Is Needed at DOD to Maintain Continuity of Effort and Achieve Sustainable Success. GAO-08-132T. Washington, D.C.: October 16, 2007. DOD Personnel Clearances: Delays and Inadequate Documentation Found For Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed To Improve The Security Clearance Process. GAO-06-1070. Washington, D.C.: September 2006. Managing Sensitive Information: DOD Can More Effectively Reduce the Risk of Classification Errors. GAO-06-706. Washington, D.C.: June 30, 2006. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. Questions for the Record Related to DOD's Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005. Questions for the Record Related to DOD's Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005. Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. DOD's High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission's Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004. DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2004, Congress passed the Intelligence Reform and Terrorism Prevention Act to reform security clearance processes. Much of GAO's experience in evaluating personnel security clearance processes over the decades has consisted of examining the Department of Defense's (DOD) program, which maintains about 2.5 million clearances on servicemembers, DOD civilian employees, legislative branch employees, and industry personnel working for DOD and 23 other federal agencies. Long-standing delays in processing applications--and other problems in DOD's clearance program--led GAO to designate it a high-risk area in 2005. GAO also has documented clearance-related problems in other agencies. For this hearing, GAO was asked to identify key factors that could be applied in personnel security clearance reform efforts. To identify key factors, GAO drew upon its past reports and institutional knowledge. For those reports, GAO reviewed laws, executive orders, policies, reports, and other documentation related to the security clearance process; examined samples of cases of personnel granted top secret eligibility; compared documentation in those sampled cases against federal standards; and interviewed a range of cognizant government officials. Current and future efforts to reform personnel security clearance processes should consider, among other things, the following four key factors: determining whether clearances are required for positions, incorporating quality control steps throughout the clearance processes, establishing metrics for assessing all aspects of clearance processes, and providing Congress with the long-term funding requirements of security clearance reform. Requesting a clearance for a position in which it will not be needed, or in which a lower- level clearance would be sufficient, will increase both costs and investigative workload unnecessarily. For example, changing the clearance needed for a position from a secret to top secret increases the investigative workload for that position about 20-fold and uses 10 times as many investigative staff hours. Emphasis on quality in clearance processes could promote positive outcomes, including more reciprocity among agencies in accepting each others' clearances. Building quality throughout clearance processes is important, but government agencies have paid little attention to quality, despite GAO's repeated suggestions to place more emphasis on quality. Even though GAO identified the government's primary metric for assessing quality--the percentage of investigative reports returned for insufficiency during the adjudicative phase--as inadequate by itself in 1999, the Office of Management and Budget and the Office of Personnel Management continue to use that metric. Concerns about the quality of investigative and adjudicative work underlie the continued reluctance of agencies to accept clearances issued by other agencies; as a result, government resources are used to conduct duplicative investigations and adjudications. Many efforts to monitor clearance processes emphasize measuring timeliness, but additional metrics could provide a fuller picture of clearance processes. The emphasis on timeliness is due in part to recent legislation that provides specific guidelines regarding the speed with which clearances should be completed and requires annual reporting of that information to Congress. 
GAO has highlighted a variety of metrics in its reports (e.g., completeness of investigative and adjudicative reports, staff's and customers' perceptions of the processes, and the adequacy of internal controls), all of which could add value in monitoring clearance processes and provide better information to allow improved oversight by Congress and the Executive Branch. Another factor to consider in reform efforts is providing Congress with the long-term funding requirements to implement changes to security clearance processes. DOD's August 2007 congressionally mandated report on industry clearances identified its immediate funding needs but did not include information on the funding requirements for fiscal year 2009 and beyond. The inclusion of less than 2 future years of budgeting data in the DOD report limits Congress's ability to carry out its long-term oversight and appropriations functions pertaining to industry personnel security clearances.
Since the Securities Act of 1933 and the Securities Exchange Act of 1934 established the principle of full disclosure--requiring public companies to provide full and accurate information to the investing public--public accounting firms have played a critical role in companies' financial reporting and disclosure. While officers and directors of a public company are responsible for the preparation and content of financial statements that fully and accurately reflect the company's financial condition and the results of its operations, public accounting firms, which function as independent external auditors, are expected to provide an additional safeguard. The external auditor is responsible for auditing companies' financial statements in accordance with generally accepted auditing standards (GAAS) to provide reasonable assurance that a company's financial statements are fairly presented in all material respects in accordance with generally accepted accounting principles (GAAP). Public accounting firms offer a broad range of services to their clients. In addition to traditional audit and attest and tax services, firms also offer consulting services in areas such as information technology. Although all of the Big 4 firms continue to offer certain consulting services, three of the Big 4 have sold or divested portions of their consulting businesses. Following the implementation of Sarbanes-Oxley, SEC issued new independence rules in March 2003, which place additional limitations on management consulting and other nonaudit services that firms could provide to their audit clients. Sarbanes-Oxley also requires auditors to report to and be overseen by a public company's audit committee, which consists of members of the company's board of directors who are required to be independent. The external auditor also interacts closely with the company's senior management, including the chief financial officer. Most of the survey respondents said they were satisfied with their current auditor. Moreover, half of the respondents reported that they have had the same auditor of record for 10 or more years. Respondents gave various reasons for changing auditors, including concerns about their auditor's reputation and fees. They also told us what factors would drive their decision in choosing a new auditor. Almost all respondents said that they used their auditor of record for more than audit and attest functions, including tax-related services and assistance with company debt and equity offerings. Overall, 80 percent (127 out of 158 respondents answering this question) of the respondents said they were "very" or "somewhat" satisfied with their current auditor of record, while 12 percent (19 of 158) said that they were very or somewhat dissatisfied, and 8 percent (12 of 158) said they were neither satisfied nor dissatisfied. Similarly, of the 135 respondents that provided the year they first employed their auditor of record, half of them said they had retained their auditor of record for 10 years or more. The average tenure was 19 years, ranging from less than 1 year to 94 years. When the 37 public companies that switched from Andersen because of Andersen's dissolution were excluded, the average tenure increased to 25 years, and the percentage of public companies that had retained their auditor for 10 years or more increased to 68 percent. Figure 1 shows the length of the relationship these respondents had with their current auditor.
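The tenure arithmetic above can be made concrete with a short Python sketch that recomputes average tenure and the share of companies at 10 or more years after excluding a subgroup, as the report does for the 37 former Andersen clients. The records below are made-up placeholders, not the survey data.

```python
# Minimal sketch: tenure statistics with and without a subgroup, mirroring the
# report's exclusion of former Andersen clients. The data are illustrative only.
from statistics import mean

# (tenure_in_years, switched_because_of_andersens_dissolution)
records = [(1, True), (2, True), (8, False), (15, False), (25, False), (40, False)]

def summarize(rows):
    tenures = [t for t, _ in rows]
    return mean(tenures), sum(t >= 10 for t in tenures) / len(tenures)

all_mean, all_share = summarize(records)
excl_mean, excl_share = summarize([r for r in records if not r[1]])

print(f"All respondents:    mean tenure {all_mean:.1f} yrs, {all_share:.0%} at 10+ yrs")
print(f"Excluding Andersen: mean tenure {excl_mean:.1f} yrs, {excl_share:.0%} at 10+ yrs")
```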
We found that there was an association between the length of the company-auditor relationship and satisfaction. That is, the longer the relationship between a company and its auditor, the more likely that the company was satisfied with its auditor of record. As figure 2 shows, 94 percent (30 of 32) of companies with auditor tenure of more than 30 years were very or somewhat satisfied with their auditor, whereas 70 percent (28 of 40) of companies using their current auditor for 1 year or less said they were very or somewhat satisfied with their auditor. Sixty-one of the respondents reported that they had switched auditors since 1987. Of those 61, 37 were former Andersen clients that switched within the last 2 years as a result of Andersen's dissolution, five were former Andersen clients that switched over 2 years ago for reasons other than Andersen's dissolution, and 19 were other respondents that switched from another Big 4 or non-Big 4 firm since 1987, as shown in table 1. The respondents who were clients of Andersen and had to change auditors within the last 2 years as a result of Andersen's dissolution were somewhat less satisfied with their current auditor than a separate group of 19 respondents that had switched from another Big 4 or non-Big 4 firm since 1987. Of the 37 former Andersen clients, 25 respondents indicated that they were satisfied with their current auditor of record, seven said that they were dissatisfied with their current auditor, and five said they were neither satisfied nor dissatisfied. Of the 19 other respondents that switched from other firms since 1987, proportionally more (16 respondents) said they were satisfied with their current auditor of record, while only one was somewhat dissatisfied and two were neither satisfied nor dissatisfied. While this suggests that clients leaving Andersen because of its dissolution are less satisfied with their current audit arrangements than other respondents that had changed auditors in the past, it is important to note that the 37 respondents who were former Andersen clients also had the shortest tenures with their current auditors, which may in part explain their lower satisfaction. Respondents gave a variety of reasons for switching, including concerns about the reputation of their auditor, the need to retain an auditor that could meet companies' new demands, concerns about the level of fees charged for audit and attest services, and increased demands resulting from a corporate merger or change in company ownership. Four respondents said their relationship with their former auditor was no longer working, and another respondent cited a disagreement over an accounting policy that resulted in the switch. While none of the respondents said their company had a mandatory rotation policy, two respondents said their companies switched auditors to obtain a "fresh perspective" and "as a form of good governance." When we asked the respondents what factors would drive their decision if they had to choose a new auditor, they most often cited "quality of services offered" as a factor of "very great" or "great" importance (99 percent or 157 of 159). The second most highly rated factor was "reputation or name recognition of the auditor" (83 percent or 132 of 159), followed by "industry specialization or expertise" (81 percent or 128 of 159). Ninety-four percent (149 of 159) of respondents obtained other services from their auditors in addition to audit and attest services.
We asked respondents if their auditor provided any of the following three categories of services: tax-related, assistance with company debt and equity offerings, and "other services." Only 10 companies, or 6 percent, reported that their auditor of record provided them with only audit and attest services. Respondents for the remaining 149 companies said they used their auditor of record for one or a combination of other services. Specifically, 87 percent (130 of 149) said their auditor provided tax-related services, such as tax preparation, and 71 percent (106 of 149) said they received assistance with company debt and equity offerings. Thirty-seven percent (55 of 149) said they received other services, such as merger and acquisition due diligence, internal control reviews, or tax planning assistance. Respondents had differing views about the impact of past consolidation among the largest accounting firms on audit fees, but most agreed that it had little or no influence on audit quality or auditor independence. While 93 percent (147 of 158) of respondents said that their audit fees increased over the past decade, they were almost evenly divided about whether past consolidation of the largest accounting firms had a "moderate upward" or "great upward" influence (47 percent or 75 of 158) or little or no influence (46 percent or 72 of 158). See figure 4. More respondents said that audit quality had increased over the past decade than said it had decreased, but the majority of them did not believe that past consolidation of the largest accounting firms influenced these changes. Specifically, 44 percent (69 of 158) of the respondents said that audit quality had increased, while 18 percent (29 of 158) said quality had decreased, and 37 percent (58 of 158) said there had been little or no change. However, 63 percent (100 of 158) of the respondents believed that consolidation of the largest firms had little or no influence on the quality of audit and attest services their companies received (see fig. 5). The respondents provided other reasons for changes in audit quality, including changes in audit partner, new regulations and audit standards, and technical expertise of the audit team. Several respondents cited the importance of the assigned audit partner to overall audit quality. One respondent noted, "The partner in charge is critical." Another respondent said audit quality improved because of "more personal involvement of the audit partner." Other respondents believed that changes in audit quality were due to changes in audit methodologies and the Sarbanes-Oxley Act. According to one respondent, "The change in the depth and quality of the audit process is due to a more rigorous regulatory and litigation environment and not to audit firm consolidation." Another respondent noted, "Following the Sarbanes-Oxley Act and Andersen's downfall, other firms are increasing the level of work they do and the depth of the audit." Finally, we received comments about the skills and experience of the audit team. One respondent wrote, "Answers to accounting questions take too long and quality of staff is poor. Fundamental audit practices are gone." Another respondent similarly commented that the "level of experience seems to have declined, contributing to lower quality, partners supervise more jobs." However, that same respondent also noted that since his company had changed auditors, the "level of experience has improved."
Finally, 59 percent (94 of 158) of the respondents indicated that their auditor had become more independent over the past decade, while 1 percent (2 of 158) said that their auditor had become less independent and 38 percent (60 of 158) said that there had been no change in their auditor's independence. However, 72 percent (114 of 158) of the respondents also said that past consolidations of the largest accounting firms had little or no influence on auditor independence (see fig. 6). The remaining views varied, with 16 percent (26 of 158) of respondents believing that the consolidations had a negative influence on auditor independence and 8 percent (12 of 158) saying that it had a positive influence. Some of the respondents commented that audits had been positively affected by SEC's new independence requirements, while one respondent said that the new rules had not significantly enhanced auditor independence. Respondents raised concerns about the future implications of consolidation, especially about possible limitations on audit firm choice. A significant majority of respondents said that their companies would not use a non-Big 4 accounting firm for audit services, which limited their choices. While most respondents said that they would be able to use another Big 4 firm as their auditor of record if they had to change, they also said that they would prefer more large firms from which to choose. Moreover, they raised concerns that further consolidation among the largest accounting firms would result in too few choices. Yet, despite those concerns, most respondents favored allowing market forces to dictate the level of competition in the market for audit and attest services. Eighty-eight percent (139 of 158) of respondents indicated that they would not consider using a non-Big 4 firm for audit and attest services. As shown in figure 7, nearly all the respondents cited three factors as being of great or very great importance in determining why their companies would not use a non-Big 4 firm: (1) auditor's technical skills and knowledge of the company's industry (91 percent or 126 of 138); (2) the reputation of the accounting firm (91 percent or 126 of 138); and (3) the capacity of the firm (90 percent or 125 of 138). These three factors also corresponded closely to the most frequently cited factors in choosing a new auditor as previously noted in figure 3. One respondent noted, "We have operations in 40 countries and want all our auditors to operate with the same systems and procedures. Only a global firm can deal with this complexity in a cost-effective manner and give us the continuity of support for U.S. generally accepted accounting principles and local statutory requirements." Another respondent noted, "We would want a Big 4 firm because of its global presence and capabilities, reputation, and depth of resources available." Sixty-five percent (89 of 137) of respondents also cited geographic presence and 60 percent (81 of 134) cited the lack of consent from the company's board of directors as reasons of great or very great importance. Respondents also provided the following reasons as to why they would not use a non-Big 4 firm: their shareholders would not want a non-Big 4 firm; to gain investor confidence or stock market acceptance; Big 4 firms have financial resources to stand behind their work; public companies are expected to use them; and the quality of services provided by a Big 4 firm.
While 57 percent (90 of 158) of respondents said that the number of firms their companies could use for audit and attest services was adequate as compared with the 43 percent (68 of 158) who said it was not, 86 percent (117 of 136) told us that ideally there should be more than four large accounting firms as viable choices for large national and multinational public companies. In responding to our question on what they thought the optimal number of firms for large companies should be, 74 percent (100 of 136) said they would prefer from five to eight large accounting firms to provide audit and attest services to large national and multinational public companies and 12 percent (17 of 136) of the respondents preferred more than eight firms. Fourteen percent (19 of 136) of the respondents said four or fewer firms would be optimal. Most comments we received in favor of more firms addressed the need to increase competition, decrease fees, and comply with the new independence rules as required by Sarbanes-Oxley. Respondents noted, "More firms will improve the competition in the industry," "more choices, more competition, lower cost," and "one firm provides tax planning services which may impair independence." Another respondent wrote, "Slightly more options would enhance technical resourcing opportunities external to current auditors." However, we also received many comments cautioning that too great a number of firms might have negative implications. One respondent said, "Any greater number of firms would have difficulty in maintaining scale to properly serve large international companies." According to another respondent, "If the number gets too big, then hard to have level of expertise in certain industries." Some respondents felt that four or five big firms would be sufficient. One respondent wrote, "As a firm believer in the efficiency of the marketplace, I believe that the current number of large firms (4) is probably close to the optimum number, but wouldn't mind seeing another major firm gradually emerge." Another respondent wrote, "Balance must be struck between competition and fragmentation of a fixed talent pool." When asked the minimum number of accounting firms necessary to provide audit and attest services to large national and multinational public companies, 82 percent (120 of 147) of respondents indicated that the market was either at its minimum or already below the minimum number required. Fifty-nine percent (86 of 147) said that four or five large accounting firms would be the necessary minimum. According to one respondent, "Four is the absolute minimum, because if you currently use one firm for external audit purposes and another firm for internal audit purposes, that only leaves two other firms from which to choose if you want to change auditors or use a Big 4 firm for consulting services." Some respondents pointed out that not even all the Big 4 firms have the necessary industry expertise required to conduct their companies' audits. According to one respondent, "From a realistic standpoint, only one other Big 4 firm has a utility practice that would help understand our industry." Another respondent wrote, "We use one of the Big 4. Two of them do not have industry expertise. Only one of the remaining three has industry expertise in the geographic region."
Although Sarbanes-Oxley prohibits a company's external auditor from providing internal audit services and certain other consulting services to the same company, many companies currently use one of the Big 4 as their external auditor and one of the remaining three Big 4 firms for nonaudit services such as tax consulting and internal audits. Therefore, a company with this arrangement that needed to change auditors would have one fewer alternative or would need to terminate its internal audit or consulting relationship. For example, one respondent noted, "Aside from our current auditor, we use another of the Big 4 as a co-source provider of internal audit services, so would not consider them. We are using a third for tax work so it would be hard under Sarbanes-Oxley to switch to them." Despite the fact that 94 percent of respondents said they had three or fewer options from which to choose if they had to change auditors, 62 percent (98 of 159) of respondents said they would not suggest that any actions be taken to increase competition in the provision of audit and attest services for large national and multinational companies. When asked whether steps should be taken to increase the number of available choices, 79 percent (65 of 83) opposed government action to break up the Big 4, while 66 percent (55 of 83) opposed any government action to assist non-Big 4 firms. Seventy-eight percent (64 of 82) of respondents said they would favor letting market forces operate without government intervention. While some respondents expressed their belief that the market would adjust to create a more competitive environment, others expressed uncertainty about whether government actions could increase competition. According to one respondent, "Government action to assist the non-Big 4 firms will not work. The level of expertise and depth of resources required to deal with ever increasing levels of complexity and regulation cannot be government intervention." However, another respondent commented, "Having only four large firms is a concern. The benefits of consolidation should be higher quality, less variation in advice, stronger financial resources of the accounting firm, and more accountability. If these benefits are not achieved, then the government may need to intervene." In addition, several respondents expressed concern about further consolidation. Referring to the dissolution of Andersen, one respondent said, "Our biggest concern is the ease with which a firm can disappear." Another stated, "The failure of Andersen had a devastating impact and ultimately resulted in fewer qualified professionals providing attest services during a time of rapidly increasing complexity in applying GAAP." We are sending copies of this report to the Chairman and Ranking Minority Member of the House Committee on Energy and Commerce. We are also sending copies of this report to the Chairman of SEC, the Chairman of the Public Company Accounting Oversight Board, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. This report was prepared under the direction of Orice M. Williams, Assistant Director. Please contact her or me at (202) 512-8678 if you or your staffs have any questions concerning this work. Key contributors are acknowledged in appendix IV. We surveyed a random sample of 250 of the 960 largest publicly held companies.
We defined this population using the 2003 list of the Fortune 1000 companies produced by Fortune, a division of Time, Inc., after removing 40 private companies from this list. We mailed a paper questionnaire to the chief financial officers, or other executives performing a similar role, requesting their views on the services they received from their auditor of record, the effects of past consolidation on competition among accounting firms, and its potential implications. To develop this questionnaire, we consulted with a number of experts at GAO, the American Institute of Certified Public Accountants, and the Securities and Exchange Commission, and pretested a draft questionnaire with six large public companies from a variety of industries. The survey began on May 6, 2003. We removed one company that had gone out of business and received 159 usable responses as of August 11, 2003, from the final sample of 249 companies, for an overall response rate of 64 percent. The number of responses to an individual question may be fewer than 159, depending on how many respondents answered that question. While the survey results are based on a random sample drawn to be representative of the population of publicly held Fortune 1000 companies and thus could be adjusted statistically to represent the whole population, including those not sampled, we are instead reporting totals and percentages only for those companies actually returning questionnaires. We did this because a significant number of sampled companies did not respond, and the answers respondents gave could differ from those nonrespondents might have given had they participated. This kind of potential error from nonresponse, when coupled with the sampling error that results from studying only a fraction of the population, made it particularly risky to project the results of our survey to not only the nonrespondents, but also to the part of the public company population we did not sample. There are other practical difficulties in conducting any survey that may also contribute to errors in survey results. For example, differences in how a question is interpreted or the sources of information available to respondents can introduce unwanted variability into the survey results. We took steps during data collection and analysis to minimize such errors. In addition to the questionnaire testing and development measures mentioned above, we followed up with nonresponding companies with telephone calls to help them overcome problems they encountered in completing the survey and to encourage them to respond. We also checked and edited the survey data and programs used to produce our survey results. All 159 companies responding to our survey employed a Big 4 firm as their auditor of record. These companies derived an average of 83 percent of their total revenues from operations within the United States and paid, on average, $3.19 million in fees to their auditor of record in the fiscal year prior to the survey. Using Standard Industry Classification (SIC) codes, we found that 149 respondents represented 39 different industry sectors; we could not identify an SIC code for the other 10 respondents. 
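The sampling and reporting approach described above can be illustrated with a short Python sketch: draw a simple random sample from the sampling frame, compute the response rate, and report unweighted percentages only for responding companies. Only the counts (960, 250, 249, 159, 127 of 158) come from the report; the frame itself is a synthetic list of identifiers.

```python
# Minimal sketch of the sampling and unweighted reporting approach described above.
# The frame is synthetic; only the counts come from the report.
import random

frame = [f"company_{i}" for i in range(960)]   # Fortune 1000 frame after removing private firms
sample = random.sample(frame, 250)             # simple random sample without replacement
print(f"Sampled {len(sample)} of {len(frame)} companies")

usable_responses = 159
fielded = 249                                  # one sampled company had gone out of business
print(f"Response rate: {usable_responses / fielded:.0%}")   # about 64 percent

# Unweighted reporting: percentages describe respondents only, not the full population.
satisfied, answering = 127, 158
print(f"Satisfied with current auditor: {satisfied / answering:.0%} of respondents answering")
```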
The top 7 industry sectors represented were electric, gas, and sanitary services (17 companies), depository institutions (10 companies), business services (9 companies), industrial and commercial machinery and computer equipment (9 companies), wholesale trade-non-durable goods (9 companies), chemicals and allied products (8 companies), and electronic and other electrical equipment and components, except computer equipment (6 companies).
The questionnaire asked respondents to complete it specifically for the company named in the cover letter, and not for any subsidiaries or related companies, and to return it in the enclosed envelope within 10 business days of receipt. It explained that GAO, the independent research and investigative arm of Congress, needed the experiences and viewpoints of a representative sample of public companies to provide a thorough, fair, and balanced report to Congress; that each company had been selected randomly from the 2002 list of Fortune 1000 companies; and that it was important for every selected firm to respond to ensure the validity of the research (Telephone: (202) 512-3608; Email: [email protected]). The questionnaire included the following questions:
1. Approximately what percentage of your company's total revenues are derived from operations within and outside of the United States?
2. If your company was founded in the past decade, in what year was it founded?
3. What is the name of your company's current auditor of record, and when did this firm become your auditor of record?
4. What type of services does your auditor of record currently provide to your company? (Only audit and attest services; tax-related services, e.g., tax preparation; assistance with company debt and equity offerings, e.g., comfort letters; other services.)
5. Approximately how much were the total annual fees that your company paid to your auditor of record for audit and attest services during your last fiscal year? (Responses ranged from $13,807 to $62,000,000.)
6. Starting in 1987, when consolidation of the largest accounting firms began, or since your company was founded (if that occurred after 1987), has your company employed more than one auditor of record?
7. What were the names and tenures of the most recent previous auditor(s) of record your company has employed since 1987?
8. Which of the following reasons explain why your company changed auditor of record one or more times since 1987? (A mandatory rotation policy; expansion of the company required an auditor of record that could meet new demands; new regulations forbidding use of the auditor for management consulting and other services; fees for audit and attest services; concern about the reputation of the auditor of record; the auditor of record was going out of business; the auditor of record resigned; the relationship with the auditor of record was no longer working; other.)
9. If your company previously employed Arthur Andersen as your auditor of record and switched to another firm in the past two years, did you switch to the firm to which your previous Arthur Andersen partner moved?
The next section of the questionnaire, "Consolidation in the Accounting Profession," focused on the trend toward consolidation that has occurred in the public accounting profession starting in 1987, when consolidation activity among the largest firms began, primarily the consolidation of the "Big 8" into the "Big 4." It asked respondents to consider how their company's relationship with its auditor of record, and the audit services it provides, had changed over this time frame and, although a number of factors may have influenced these changes, to assess the influence of consolidation in the accounting profession in particular, basing their answers on their experience in the past decade or, if that was not possible, on the time frame that reflected their experience.
10. How have the fees that your company pays for audit and attest services changed over the past decade?
11. If your company changed auditors within the last two years, how have the fees your company pays your current auditor of record changed compared to the fees paid to your previous auditor?
12. In your opinion, how has the consolidation of the largest accounting firms over the past decade influenced the fees that your company pays for audit and attest services?
13. Audit quality is often thought to include the knowledge and experience of audit firm partners and staff, the capability to efficiently respond to a client's needs, and the ability and willingness to appropriately identify and surface material reporting issues in financial reports. Do you believe that the overall quality of audit services your company receives has gotten better or worse over the past decade?
14. If your company changed auditors within the last two years, do you believe that the overall quality of audit services your company receives from your current auditor is better or worse than the overall quality of audit services your company received from its previous auditor?
15. In your opinion, how has the consolidation of the largest accounting firms over the past decade influenced the quality of audit and attest services that your company receives?
16. If you have experienced a change in audit quality, please explain.
17. Auditor independence is often thought to relate to the accounting firm's ability and willingness to appropriately deal with (a) financial reporting issues that may indicate materially misstated financial statements; (b) the appearance of independence in terms of the other services a firm is allowed to and chooses to provide to its clients; and (c) how much influence clients appear to have in the audit decisions. Do you believe that your company's auditor(s) has become more or less independent over the past decade?
18. If your company changed auditors within the last two years, do you believe that your current auditor is more or less independent than your previous auditor?
28. Has the consolidation of the largest accounting firms over the past decade made it harder or easier for your company to satisfactorily select an auditor and maintain a relationship with that auditor?
29. How, if at all, has the consolidation of the largest accounting firms over the past decade affected competition in the provision of audit and attest services?
30. How, if at all, has this change in competition affected each of the following areas?
31. What do you believe is the minimum number of accounting firms necessary to provide audit and attest services to large national and multinational public companies?
32. What do you believe is the optimal number of accounting firms for providing audit and attest services to large national and multinational public companies?
33. Do you suggest that any actions be taken to increase competition in the provision of audit and attest services for large national and multinational public companies?
34. Would you favor or oppose the following actions to increase competition to provide audit and attest services for large national and multinational clients?
35. Do you have any additional comments on any of the issues covered by this survey?
Companies surveyed were invited to add written comments to a number of questions to further explain their answers.
Of the 159 companies that responded to the survey, 149 volunteered written answers to at least one of the eight key open-ended comment questions in our survey: change in audit quality, the number of auditor options, the sufficiency of such options, willingness to use the auditor of a competitor, minimum number of audit firms necessary, optimal number of firms, suggested actions for increasing competition, and any additional comments on the survey. The following tables display selected comments from some respondents to these eight questions. Some of the quotes illustrate typical comments made by several other companies, while others represent a unique viewpoint of only that company. While these specific comments provide valuable insights, the number of comments of a particular type reproduced here is not necessarily proportional to the number of other similar responses, and, therefore, the comments do not represent the variety of opinion that might be found in the population of large public companies as a whole. More respondents said that overall audit quality had gotten better over the past decade than worse (44 percent compared to 18 percent). The reasons behind these ratings are presented in table 2, grouped into summary categories. Almost all respondents--94 percent--indicated that they had three or fewer options from which to choose if they had to change auditors, and 61 percent said exactly three. The explanatory comments we received to that question, shown in table 3, confirm that respondents are almost always referring to the Big 4 firms other than the one they currently employ. As only 8 percent of respondents said they currently use or would consider using a non-Big 4 firm, there were few written explanations for why they thought they had more than three or four options. Those who did explain mentioned the national prominence of the larger second-tier firms and smaller firms with special industry expertise as reasons. Almost half of the respondents (43 percent) said they did not have enough options and desired more. Respondents who said they had enough options said the Big 4 firms were able to meet their needs. However, several of these respondents cautioned that further reductions could be problematic. Those saying the number of firms was not sufficient often took the position that "more competition is always better." Other comments were that differentiation between the firms' services was declining, that special expertise was no longer readily available, and that the largest firms showed monopolistic tendencies in setting fees. See table 4. More than 90 percent of our respondents said that their company would choose the auditor of a competitor. A few of those respondents provided explanations as to why they would or would not, as shown in table 5. A large majority (82 percent) of respondents said that the minimum number of firms necessary to provide audit services to large companies such as theirs was four or more. The largest number of responses was received for four or five firms. See table 6. Most (86 percent) respondents said the optimal number of firms was greater than four, although the majority of those responses remained in the five to eight range. See table 7 for selected comments. On the question of actions to increase competition, those that favored action mentioned assisting non-Big 4 firms by reducing barriers to entry, preventing further consolidation, breaking up the Big 4, and other actions. Many suggested that market forces should be allowed to operate without intervention. See table 8. The final open-ended question invited any additional comments on issues covered in the survey.
A number of respondents mentioned concerns about further consolidation in the accounting profession, cost and quality, and other issues such as the impact of the Sarbanes-Oxley Act and proposals for mandatory audit firm rotation. In addition to those individuals named above, Martha Chow, Marc Molino, Michelle Pannor, David Pittman, Carl Ramirez, Barbara Roesmann, and Derald Seid made key contributions to this report.
The largest accounting firms, known as the "Big 4," currently audit over 78 percent of U.S. public companies and 99 percent of public company annual sales. To address concerns raised by this concentration and as mandated by the Sarbanes-Oxley Act of 2002, on July 30, 2003, GAO issued a report entitled Public Accounting Firms: Mandated Study on Consolidation and Competition, GAO-03-864. As part of that study, GAO surveyed a random sample of 250 public companies from the Fortune 1000 list; preliminary findings were included in the July report. This supplemental report details more comprehensively the 159 responses we received through August 11, 2003, focusing on (1) the relationship of their company with their auditor of record in terms of satisfaction, tenure, and services provided; (2) the effects of consolidation on audit fees, quality, and independence; and (3) the potential implications of consolidation for competition and auditor choice. Most of the 159 respondents said that they were satisfied with their current auditor, and half had used their current auditor for 10 years or more. Generally, the longer a respondent had been with an auditor, the higher the overall level of satisfaction. Consistent with high levels of satisfaction, GAO found that, aside from former clients of Arthur Andersen, few respondents had switched auditors in the past decade. When they did, they switched because of reputation, concerns about audit fees, and corporate mergers or management changes. In looking for a new auditor, the most commonly cited factors the respondents gave were quality of service, industry specialization, and "chemistry" with the audit team. Finally, almost all respondents used their auditor of record for a variety of nonaudit services, including tax-related services and assistance with company debt and equity offerings. Respondents had differing views about whether past consolidation had some influence on audit fees, but most believed that consolidation had little or no influence on audit quality or independence. Respondents commented that other factors--such as new regulations deriving from the Sarbanes-Oxley Act and changing auditing standards--have had a greater impact on audit price, quality, and independence. While half of the respondents said that past consolidation had little or no influence on competition and just over half said they had a sufficient number of auditor choices, 84 percent also indicated a preference for more firms from which to choose, as most would not consider using a non-Big 4 firm. Reasons most frequently cited included (1) the need for auditors with technical skills or industry-specific knowledge, (2) the reputation of the firm, and (3) the capacity of the firm. Finally, some expressed concerns about further consolidation in the industry and the limited number of alternatives were they to change auditors under existing independence rules.
Enemy sea mines were responsible for 14 of the 18 Navy ships destroyed or damaged since 1950, and mine-producing countries have developed and proliferated mines that are even more difficult to detect and neutralize. After the Gulf War, during which two Navy ships were severely damaged by sea mines, the Navy began several actions to improve its mine warfare capabilities. The Navy's current mine countermeasures (MCM) capabilities reside in a special purpose force that consists of 12 coastal mine hunter (MHC) ships, 14 MCM ships, 1 command and support ship, 24 mine-hunting and clearing helicopters, 17 explosive ordnance disposal detachments, a very shallow water detachment, and a marine mammal detachment. According to the Navy, the cost of operating and maintaining this MCM force from fiscal year 1992 through 2003 will be about $1.9 billion. Because the Navy's MCM ships lack the speed and endurance they would need to accompany carrier battle groups and amphibious ready groups on overseas deployments, the Navy has changed its strategy from maintaining only a special purpose force to also developing mine countermeasures capabilities to be placed on board combat ships within the fleet. The Navy has consolidated operational control of all surface and airborne mine warfare forces under the Commander, Mine Warfare Command, and improved the readiness of these forces through exercises and training. The Navy also initiated research and development projects to address the weaknesses in its MCM program, especially the lack of on-board MCM capability throughout the fleet, and created a Program Executive Office for mine warfare, which brought together disparate MCM programs and their associated program management offices. In a prior report, we discussed weaknesses in the Navy's ability to conduct effective sea mine countermeasures. We reported that critical MCM capability needs were unmet and reviewed the Navy's efforts to address these limitations. At that time, the Navy had not established clear priorities among its mine warfare research and development programs to sustain the development and procurement of the most needed systems. Consequently, the Navy experienced delays in delivering new systems to provide necessary capabilities. DOD concurred with our recommendation that a long-range plan be developed to identify gaps and limitations in the Navy's MCM capabilities and establish priorities. DOD said the process was ongoing and consisted of developing an overall concept of MCM operations and an architecture within which needs and shortfalls in capabilities could be evaluated and prioritized. DOD also said that critical programs would be identified and funded within the constraints of its overall budget. Congress previously expressed its concern that the Navy had failed to sufficiently emphasize mine countermeasures in its research and development program and noted the relatively limited funding allocation. As a result, mine warfare programs were designated as special congressional interest items. To support continuing emphasis on developing the desired mine countermeasures, Congress added a certification requirement in the National Defense Authorization Act for fiscal years 1992 and 1993. This required the Secretary of Defense to certify that the Secretary of the Navy, in consultation with the Chief of Naval Operations and the Commandant of the Marine Corps, had submitted an updated MCM master plan and budgeted sufficient resources for executing the updated plan.
It also required the Chairman of the Joint Chiefs of Staff to determine that the budgetary resources needed for MCM activities and the updated master plan are sufficient. This certification requirement will expire with the fiscal year 1999 budget submission unless it is renewed. Although it has developed a strategy for overcoming deficiencies in its MCM capabilities, the Navy has not decided on the composition and size of its future on-board and special purpose MCM force. Navy officials have acknowledged the need to maintain some special purpose MCM force, while the Navy is moving toward an on-board MCM capability. The Navy currently has no on-board MCM capabilities and relies on a force of MCM assets that are specifically dedicated to that mission. The Navy has two assessments in progress to develop the information it needs to decide on the mix of its future on-board and special purpose forces. The objectives of these assessments are to determine (1) the quantities and types of on-board MCM systems the Navy will need to procure to meet fleet requirements in fiscal years 2005-2010; (2) the optimal force mix to meet fleet requirements in the 21st century; and (3) the numbers and types, if any, of special purpose MCM assets that will still be needed in the fiscal year 2010-2015 time frame. Initial results are expected to be available in October 1998, in time to influence the development of the fiscal year 2001 Navy resource program, with a final report in January 1999. Navy officials do not expect this phase of the assessments to provide them all of the information that is needed to tailor the future MCM force structure. They do expect, however, that it will give them a good idea of how to plan procurement, training, and maintenance for the on-board systems expected to be deployed in the fiscal year 2001-2005 time frame. To address the lack of on-board capability, the Navy accelerated the delivery of a Remote Minehunting System and established a contingency shallow-water mine-hunting capability in one Navy Reserve helicopter squadron using laser mine detection systems, and it is including mine-hunting systems in upgrades to existing submarines and in new-construction submarines. Maintaining the special purpose force is costly, and Navy resource managers have been evaluating how to pay for the operations and support costs of this force while pursuing costly development of on-board capabilities. A final force structure decision will likely be driven by the level of resources the Navy intends to dedicate to the MCM mission in the future--a decision that depends on numerous issues outside the MCM arena such as conflicting funding priorities among the various Navy warfare communities (aircraft, surface ships, and submarines). A decision on the future force structure is, however, still needed because that decision will determine the types and quantities of systems to be procured, set priorities among systems, and determine the level of resources required for development, procurement, and sustainment. For example, the Navy is currently debating whether to retire the current mine-hunting helicopters, the MH-53, in favor of maintaining only H-60 series helicopters. This helicopter decision will directly affect the types and quantity of airborne MCM capabilities the Navy will be able to field in the future. Since 1992, the Navy has invested about $1.2 billion in research, development, test, and evaluation (RDT&E) funds to improve its mine warfare capabilities. The Navy plans to spend an additional $1.5 billion for RDT&E over the next 6 years.
It is currently managing 28 separate MCM development programs and several advanced technology and advanced concept technology demonstrations. (See app. I for the status of selected programs.) So far, according to a Navy official, this investment has not produced any systems that are ready to transition to production. A few systems, such as the Airborne Mine Neutralization System, the Shallow-Water Assault Breaching system, Distributed Explosive Technology, and a Closed Loop Degaussing system, are scheduled for a production decision over the next 2 to 3 years. Other systems, such as communications data links for the MH-53 helicopters and the airborne laser mine-detection system (Magic Lantern Deployment Contingency), were not produced because the Navy never funded their procurement. Delays experienced in a number of MCM development programs result from the same kinds of problems that are found in other DOD acquisitions, such as funding instability, changing requirements, cost growth, and unanticipated technical problems. For example, although MCM program funding is small, the Navy has reduced funding for its MCM research and development programs after budget approval. (See app. II for two program examples.) These problems in MCM acquisition programs show that the design, development, and production of needed systems are complex and that technical processes must operate within equally complex budget and political processes. If programs are not well conceived, planned, managed, funded, and supported, problems such as cost growth, schedule delays, and performance shortfalls can easily occur. Two examples of mine warfare programs that have been in the research and development phase for many years without advancing to procurement are the AQS-20, an airborne mine-hunting sonar, and the Airborne Mine Neutralization System. The AQS-20 began in 1978 as an exploratory development model and was scheduled for a limited rate initial production decision in fiscal year 1999. The Navy terminated the program in 1997 in favor of a follow-on sonar, the AQS-X, with added mine identification capability and a requirement to be towed from an H-60 helicopter instead of an MH-53 helicopter. During the intervening 19 years, the program was plagued by cost growth, changing requirements, and a funding shortfall. The development of the Airborne Mine Neutralization System began in 1975, but a production decision is not scheduled until fiscal year 2000. The principal reason for the delay is that the program was canceled and restarted twice because of funding instability. Contributing to difficulties in transitioning programs into production are a number of management and internal control weaknesses noted during the annual Federal Managers' Financial Integrity Act certification. Since 1992, the Program Executive Office has attempted to improve internal controls within five subordinate program offices by developing financial and acquisition management information and reporting systems. At its request, the Naval Audit Service is reviewing the state of internal controls within one of the program offices and expects to issue a report in the fall of 1998. A majority of officials we interviewed said that the annual certification requirement was useful because it served to increase the visibility of MCM requirements within DOD and the Navy. Most said that some form of the certification should continue to be required.
However, as currently prepared, the annual certification does not address the adequacy of overall resources for this mission, nor does it provide for objective measures against which progress can be evaluated. Moreover, the Chairman, Joint Chiefs of Staff's involvement in the certification process occurs too late to have a significant impact. The annual certification does not address the adequacy of overall resources for this mission because the Navy's budget for MCM programs addresses only the adequacy of funding for the budget year, not the out years. Further, nothing in the certification process provides objective measures against which progress can be evaluated. Such measures have been developed within the MCM community. For example, the time required by a tactical commander to clear a certain area of mines with and without various capabilities could be used in making individual program decisions. Likewise, there are mean times between repairs and average supply delay times to gauge reliability and supportability for the MCM and MHC ships. In the past, the DOD staff has not been willing to challenge Navy decisions regarding the content and adequacy of its MCM program. Instead, it focused on analyzing the consistency of the program from year to year. Consequently, DOD has been able to certify annually that the budget contains adequate resources for the program. However, in November 1997, the Secretary of Defense expressed his concern about the Navy's financial commitment to mine warfare programs. As a result, the Navy added about $110 million to MCM programs over the future years defense planning period. The inclusion of the Chairman, Joint Chiefs of Staff, in the certification process was intended to give the regional commanders in chief an opportunity to influence the development of the MCM budget. We believe, however, and DOD and Navy officials agree, that the Chairman, Joint Chiefs of Staff's determination has not added any significant value. Although the Joint Staff has assessed joint MCM requirements and capabilities, its conclusions have not been used as a basis for challenging the Navy's MCM programs or suggesting alternatives. Moreover, since the Joint Staff's review has occurred after, rather than before, the Navy's budget proposals for MCM programs have been formalized, it has had no impact on specific Navy acquisition programs or overall resource decisions. To have an effective program, the Navy needs to decide on the size, composition, and capabilities of its future MCM forces. This decision will assist in prioritizing and disciplining its research, development, and procurement efforts. As with other mission areas, the types and quantities of systems to be procured and their platform integration will most likely be driven by the level of resources the Navy allocates to the MCM mission in the future. What is required is for the Navy leadership and the various warfare communities to agree on the composition and structure (size) of future MCM forces and commit the necessary resources to their development and sustainment. Without such an agreement, budgetary pressures may result in degradation of the special purpose forces before the Navy has demonstrated and fielded effective, on-board capabilities within the fleet. The certification requirement has forced DOD and the Navy to pay increased attention to the MCM mission, and most officials involved support its continuation in some form. 
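The kinds of objective measures mentioned above (time to clear an assigned area, mean time between repairs, average supply delay) are straightforward to compute once the underlying operational data exist. The Python sketch below shows one way such metrics might be derived; the field names, values, and records are hypothetical illustrations, not data drawn from Navy systems.

```python
# Minimal sketch: computing illustrative MCM readiness and supportability metrics.
# All records and values below are hypothetical examples, not Navy data.
from statistics import mean

# Hours of operation between successive repairs for a ship's MCM equipment
operating_hours_between_repairs = [120, 95, 150, 80]

# Days waited for each repair part
supply_delay_days = [4, 12, 7, 9]

# Hours needed to clear the same assigned area with and without a new capability
clearance_hours = {"baseline force": 96, "with new system": 60}

mtbr = mean(operating_hours_between_repairs)          # mean time between repairs
avg_delay = mean(supply_delay_days)                   # average supply delay
reduction = 1 - clearance_hours["with new system"] / clearance_hours["baseline force"]

print(f"Mean time between repairs: {mtbr:.0f} hours")
print(f"Average supply delay:      {avg_delay:.1f} days")
print(f"Clearance-time reduction from new system: {reduction:.0%}")
```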
However, the certification has not provided any assurance that the resources for the MCM mission are "sufficient" because it has only addressed the adequacy of funding for the particular budget year and because the DOD staff and the Chairman of the Joint Chiefs of Staff have not challenged Navy resource allocation or budget decisions. If the Chairman of the Joint Chiefs of Staff's involvement in the certification process is still considered important, it must occur in time to influence Navy decisions on requirements and funding. Overall budgetary pressures, the high operations and maintenance costs associated with the special purpose MCM fleet, and the Navy's expectation of potential increased capabilities from on-board systems still early in development may combine to result in budgetary shifts from current special purpose forces before potential on-board capabilities are realized. We recommend that the Secretary of Defense, in conjunction with the Chairman, Joint Chiefs of Staff, and the Secretary of the Navy, determine the mix of on-board and special purpose forces DOD plans to maintain in the future and commit the funding deemed necessary for the development and sustainment of these desired capabilities. We also recommend that the Secretary of Defense direct the Secretary of the Navy to sustain the special purpose MCM forces until the Navy has demonstrated and fielded effective, on-board capabilities. The certification process has increased DOD's and the Navy's attention to the MCM mission. Since the certification requirement is scheduled to expire this year, Congress may wish to consider extending the annual certification requirement until the Navy has determined the mix of on-board and special purpose forces it will maintain in the future and has fielded effective, on-board MCM capabilities. To strengthen the certification process, Congress may wish to consider amending the requirement to ensure that the participation by the Chairman, Joint Chiefs of Staff, occurs before the Navy's fiscal year budget is submitted to the Office of the Secretary of Defense. In commenting on a draft of this report (see app. III), DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to sustain the special purpose MCM forces until the Navy has demonstrated and fielded effective on-board capabilities. DOD partially concurred with our first recommendation that the Secretary of Defense determine the mix of on-board and special purpose forces DOD plans to maintain in the future and commit the necessary funding. DOD has directed the Navy to ensure that both current and future mine warfare programs are adequately funded. In an April 7, 1998, letter to the Secretary of the Navy, the Secretary of Defense expressed his concern about the Navy's lack of commitment of the necessary resources to mine warfare and noted that currently, requirements exceed resources allocated. He directed the Navy to (1) protect the mine warfare program from any further funding reductions until some on-board capabilities are available, (2) avoid using the funds currently planned for the special purpose forces to fund the development of on-board capabilities, and (3) develop a future years funding plan that matches requirements with resources. DOD, however, cited the Navy as having primary responsibility for MCM forces, whereas our recommendation was directed to the Secretary of Defense. 
We agree that the Navy does have primary responsibility, but the Secretary of Defense has had a special role through the certification process. As we conclude in the report, the certification requirement has had a positive impact. Therefore, we have added a matter for congressional consideration to the report that suggests that the certification requirement be extended. DOD partially concurred with our recommendation that the Secretary of Defense direct that involvement by the Chairman, Joint Chiefs of Staff, occur early enough to affect annual Navy budget submissions. DOD said the Chairman is involved early enough to affect budget decisions. Our recommendation, however, is based on our conclusion that the certification process has not been effective in assuring the adequacy of resources. This conclusion is based, in part, on the late involvement of the Chairman, Joint Chiefs of Staff. For example, we note that the Navy's fiscal year 1999 budget submission went to Congress in late January 1998, yet the Secretary of Defense's certification, which includes the Chairman's determination regarding the sufficiency of the Navy's resources in fiscal year 1999, was submitted in May 1998. Although the Chairman, Joint Chiefs of Staff, has input in the budget process, the certification requirement provides an additional opportunity to have an effect in assuring the sufficiency of resources. Since DOD only partially concurred and to strengthen the certification process, we have deleted our recommendation regarding the Chairman's participation and added a matter for congressional consideration that the annual certification requirement be amended to ensure the participation by the Chairman, Joint Chiefs of Staff, before the Navy's budget is submitted to the Office of the Secretary of Defense. The intent of our matters for consideration is to give additional attention to the sufficiency of budget resources the Navy has devoted to MCM. DOD also provided some updated information in its comments and we have incorporated it into our report as appropriate. To obtain information on the status of Navy plans, programs, and the certification process, we interviewed and obtained documentation from officials of the Office of the Secretary of Defense, the Joint Staff, the Defense Intelligence Agency, the Secretary of the Navy, the Chief of Naval Operations, the Naval Air and Sea Systems Commands, the Office of Naval Intelligence, and the Office of Naval Research in the Washington, D.C., area, and the Navy Operational Test and Evaluation Force and the Surface Warfare Development Group in Norfolk, Virginia. We also interviewed and obtained information from officials engaged in MCM scientific and technical research and development activities at the Naval Undersea Warfare Center in Newport, Rhode Island; the Navy Coastal Systems Station in Panama City, Florida; and the Applied Physics Laboratory of Johns Hopkins University, in Laurel, Maryland. To gain an understanding of existing capabilities and requirements, and an operational perspective, we interviewed and obtained information from the staff and operational units of the Commander in Chief, Atlantic Command and the Commander in Chief, Atlantic Fleet in Norfolk, Virginia; and the Commander, Mine Warfare Command, in Corpus Christi, and Ingleside, Texas. We conducted our review between September 1997 and March 1998 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to the Chairman, Senate Committee on Armed Services; the Chairman, Subcommittee on Defense, Senate Committee on Appropriations; the Chairman, Subcommittee on National Security, House Committee on Appropriations; the Secretaries of Defense, the Army, and the Navy; and the Commandant of the Marine Corps. Copies will also be provided to other interested parties upon request. Please contact me at (202) 512-4841 if you have any questions about this report. The major contributors to this report are listed in appendix IV. Program description: The Remote Minehunting System program develops a new remotely operated mine-hunting system that is capable of detecting and classifying mines. It is intended to provide the surface fleet with an on-board means of finding and avoiding mined waters. The program has a three-fold strategy to develop a new vehicle, upgrade it with state-of-the-art mine-hunting sensors, and provide a supportable, incremental operational contingency system to the fleet during the development process. Platform: Surface combatants. Mine threat: Bottom & moored mines/deep to very shallow water. Program start date: Fiscal year 1993. Date of estimated completion of research & development phase: Fiscal year 2002, milestone III on version 4 (proposed). Current status: Milestone III on version 3 had been scheduled for fiscal year 1999; however, due to cost and schedule problems, the program has been restructured to drop version 3 and continue development of version 4. Funding (fiscal years 1992-97): $44.1 million. Programmed funding (fiscal years 1998-03): $103.7 million. Program description: The Magic Lantern is a helicopter mounted laser/camera system that detects and classifies moored mines. The objective of the Magic Lantern Deployment Contingency system is to field an advanced development model on one detachment of Naval Reserve SH-2G helicopters to provide on-board mine reconnaissance capability for surface and near surface water. In fiscal year 1996, Congress directed a competitive evaluation field test of the Airborne Laser Mine Detection System technologies. These technologies included Magic Lantern, ATD-111, and the Advanced Airborne Hyperspectral Imaging System. This field test took place in late 1997. The Navy expects to send the final report to Congress by the end of April 1998. Platform: SH-2G helicopters. Mine threat: Floating and shallow-water moored mines. Program start date: Fiscal year 1992 (Start of the Airborne Laser Mine Detection System program). Date of estimated completion of research & development phase: Fiscal year 1999. Current status: Installation of contingency systems on H-60 helicopters. Funding (fiscal years 1992-97): $73 million. Programmed funding (fiscal years 1998-03): $29.3 million. Program description: This system is intended to provide an unmanned undersea vehicle mine reconnaissance capability in the form of a single operational prototype, as a stop-gap, interim clandestine offboard system. The system is to be launched and recovered from a SSN-688 class submarine. Platform: SSN-688 class submarines. Mine threat: Bottom and moored mines in deep through very shallow water. Program start date: Fiscal year 1994. Date of estimated completion of research & development phase: Fiscal year 2003. Current status: Initial operational capability is scheduled for fiscal year 1998. The system is scheduled to participate in the Joint Countermine Advanced Concept Technology Demonstration II in June 1998. 
Funding (fiscal years 1994-97): $42.3 million. Programmed funding (fiscal years 1998-03): $29.6 million. Program description: Radiant Clear is a joint Navy-Marine Corps effort to graphically depict the littoral environment and coastal defenses through the application of advances in the processing of data collected by national systems. Platform: Not applicable. Mine threat: Very shallow water to the beach. Program start date: Fiscal year 1996. Date of estimated completion of research & development phase: Open. Current status: Demonstration, May 1998. Funding (fiscal years 1996-97): $2 million. Programmed funding (fiscal years 1998-03): $6 million. Program description: This system is an explosive line charge system that is delivered from a rocket motor and deployed from a manned Landing Craft, Air Cushion at a standoff range of 200 feet. Platform: Manned Landing Craft, Air Cushion. Mine threat: Very shallow water and surf zone, optimized for 3-10 feet water depth. Program start date: Fiscal year 1992. Date of estimated completion of research & development phase: Fiscal year 1999, milestone III. Current status: Fiscal year 1998, developmental and operational testing. Funding (fiscal years 1992-97): $35.3 million. Programmed funding (fiscal years 1998-03): $10.9 million. Program description: The Distributed Explosive Technology program is a distributed explosive net that is delivered by two rocket motors and deployed from a manned Landing Craft, Air Cushion at a standoff range of 200 feet. It is designed to provide a wide swath of mine clearance in the surf zone. Platform: Manned Landing Craft, Air Cushion. Mine threat: Surf zone, optimized for depths less than 3 feet to the beach. Program start date: Fiscal year 1992. Date of estimated completion of research & development phase: Fiscal year 1999, milestone III. Current status: Fiscal year 1998, developmental and operational testing. Funding (fiscal years 1992-97): $47 million. Programmed funding (fiscal years 1998-03): $19.5 million. Program description: The AQS-20 was to be an airborne towed high speed mine-hunting sonar. It was to work in conjunction with the Airborne Mine Neutralization System. The AQS-20 was to provide the capability to search, detect, localize, and classify mines. Platform: MH-53 helicopters. Mine threat: Bottom, close tethered, and volume mines in deep and shallow water. Program start date: 1978. Date of estimated completion of research & development phase: Fiscal year 2001. Current status: Transitioning to AQS-X, a follow-on advanced sonar with the addition of mine identification capability and towed capability from the H-60 helicopter. An advanced sonar fly-off is planned for fiscal year 1999. Funding (fiscal years 1992-97): $73.1 million. Programmed funding (fiscal years 1998-03): $76.3 million. Program description: This system is a magnetic and acoustic system and is to rapidly sweep and clear influence mines by emulating the signatures of amphibious assault craft. It is to be an on-board mine countermeasures asset and capable of night operations. Platform: Remotely controlled surface craft, but other platforms are being explored. Mine threat: Influence mines in shallow and very shallow water. Program start date: Fiscal year 1993. Date of estimated completion of research & development phase: Fiscal year 2000, scheduled transition from Advanced Technology Demonstration status to acquisition program. 
Current status: To be a part of the Joint Countermine Advanced Concept Technology Demonstration II in June 1998 (approximate 6 months slippage from original schedule). Funding (fiscal years 1992-97): $49.8 million. Programmed funding (fiscal years 1998-03): $7 million. Program description: This system is an expendable, remotely operated, explosive mine neutralization device that is towed by a helicopter. It is intended to rapidly destroy mines and operate in day or night. Originally, it was intended to operate in conjunction with the AQS-20 sonar. With the termination of the AQS-20 and transition to AQS-X, the system will operate with the AQS-14A sonar, which will be integrated with a laser line scan system to provide interim mine identification capability. Platform: MH-53 helicopters. Mine threat: Bottom and moored mines in deep or shallow water. Program start date: Fiscal year 1975. Date of estimated completion of research & development phase: Fiscal year 2000, milestone III is scheduled. Current status: Engineering, manufacturing, and development contract award scheduled for second quarter, fiscal year 1998. Funding (fiscal years 1992-97): $12.4 million. Programmed funding (fiscal years 1998-00): $22.6 million. Program description: This system is an advanced technology demonstration program and is intended to employ laser targeting and supercavitating projectiles to neutralize near surface moored contact mines. Its objective is to provide fast reacting organic helicopter capability to safely and rapidly clear mines. Platform: Helicopter. Mine threat: Near surface moored contact mines. Program start date: Fiscal year 1998. Date of estimated completion of research & development phase: Fiscal year 2004. Current status: Fiscal year 1998, demonstration of lethality against key mine types. Programmed funding (fiscal years 1998-04): $65 million. Program description: The Explosive Neutralization Advanced Technology Demonstration, as a group of four subsystems, is intended to demonstrate the capability to neutralize anti-invasion mines in the surf zone and craft landing zone. Two of the subsystems will consist of line charges and surf zone array, which are to be launched from an air cushion vehicle and propelled by new rocket motors for extended range and increased stand-off. These two subsystems will also have a third subsystem, a fire control system, for accurate placement of explosives. The fourth subsystem, the beach zone array, will consist of a glider and an array system. The glider, an unmanned, unpowered air vehicle, will be released by an air deployment vehicle. The glider will approach the beach by means of a global positioning system guidance and control system. To detonate and clear mines, it will deploy the array of nylon webbing and shaped charges over a predesignated target. Platform: Unmanned air vehicle. Mine threat: Anti-invasion mines in the surf and craft landing zones. Program start date: Fiscal year 1993. Date of estimated completion of research & development phase: Fiscal year 2005 for the line charges, surf zone array, and fire control system and fiscal year 2009 for the beach zone array. Current status: Demonstration of fieldable prototype of the beach zone array scheduled for fiscal year 1998. Funding (fiscal years 1993-97): $63.7 million. Programmed funding (fiscal years 1998-03): $87.8 million. 
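The program entries above are easier to compare when their funding figures are gathered in one place. The short sketch below simply tallies the amounts transcribed from those entries (the laser/supercavitating projectile demonstration lists programmed funding only); it covers only the programs described in this appendix, not the roughly $1.2 billion in total RDT&E spending since 1992 cited elsewhere in this report.

    # Funding figures, in millions of dollars, transcribed from the program
    # entries above: (spent through fiscal year 1997, programmed for later years).
    programs = {
        "Remote Minehunting System": (44.1, 103.7),
        "Magic Lantern": (73.0, 29.3),
        "Submarine-launched unmanned undersea vehicle": (42.3, 29.6),
        "Radiant Clear": (2.0, 6.0),
        "Explosive line charge system (LCAC-deployed)": (35.3, 10.9),
        "Distributed Explosive Technology": (47.0, 19.5),
        "AQS-20 sonar": (73.1, 76.3),
        "Influence sweep system": (49.8, 7.0),
        "Airborne Mine Neutralization System": (12.4, 22.6),
        "Laser/supercavitating projectile demonstration": (0.0, 65.0),
        "Explosive Neutralization demonstration": (63.7, 87.8),
    }

    spent = sum(before for before, _ in programs.values())
    programmed = sum(after for _, after in programs.values())
    print(f"Spent through FY 1997: ${spent:.1f} million")
    print(f"Programmed for FY 1998 and beyond: ${programmed:.1f} million")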
Two examples of mine warfare programs that have been in the research and development phase for many years without advancing to procurement are the AQS-20, a mine-hunting sonar, and the Airborne Mine Neutralization System. The following tables illustrate the changes, including the recent series of internal Department of Defense (DOD) increases and decreases, to these programs' funding. The changes depicted in table II.1 resulted in a delay in the AQS-20 schedule. The production decision slipped 1 year, from second quarter fiscal year 1998 to second quarter fiscal year 1999. Table II.1 (dollars in thousands): As presented in the fiscal year 1996 President's budget, $218 (actual), $12,791 (estimated appropriation), $20,123 (estimate); adjustments included reprogramming from the Airborne Laser Mine Detection System, reinitiation of the Airborne Mine Neutralization System, realignment to the Shallow Water Mine Countermeasures program element, and realignment to the Remote Minehunting System; total, as presented in the fiscal year 1997 President's budget, $9,165 (adjusted actual), $12,390 (adjusted appropriation), $13,164 (revised estimate). The changes depicted in table II.2 resulted in delays in the schedules of both the AQS-20 and the Airborne Mine Neutralization System. The production decision for the AQS-20 slipped an additional 6 months, to the fourth quarter of fiscal year 1999. The production decision for the Airborne Mine Neutralization System slipped 1 year, from third quarter fiscal year 1999 to third quarter fiscal year 2000, due to funding constraints. Table II.2, AQS-20 and Airborne Mine Neutralization System Funding Profile as of February 1997 (dollars in thousands): $12,355 (actual), $13,164 (revised estimate), $13,069 (estimate), $5,694 (estimate); $11,974 (adjusted actual), $18,357 (adjusted appropriation), $16,503 (revised estimate), $19,937 (revised estimate). The changes depicted in table II.3 reflect the addition of two new initiatives, the Configuration Theory Tactical Decision Aid and the Shallow Water Influence Minesweep System. Congress increased the fiscal year 1998 budget request by $2 million for the Shallow Water Influence Minesweep System program. Table II.3 (dollars in thousands): As presented in the fiscal year 1998-99 President's budget, $18,357 (actual), $16,503 (estimated appropriation), $19,937 (estimate); adjustments included a Small Business Innovative Research assessment and the Configuration Theory Tactical Decision Aid; total, as presented in the fiscal year 1999 President's budget, $17,969 (adjusted actual), $17,905 (adjusted appropriation), $20,054 (estimate). Major contributor to this report (appendix IV): Anton G. Blieberger, Evaluator-in-Charge.
Pursuant to a congressional request, GAO reviewed the Navy's mine countermeasures efforts, focusing on the: (1) Navy's plans for improving mine countermeasures (MCM) capabilities; (2) status of current research, development, test, and evaluation (RDT&E) programs; and (3) process the Department of Defense (DOD) used to prepare the annual certification required by Public Law 102-190. GAO noted that: (1) the Navy has not decided on the mix of on-board and special purpose forces it wants to maintain in the future or committed the funding needed for developing and sustaining those capabilities; (2) this decision will determine the types and quantities of systems to be developed and their priority; (3) it also affects the schedule and cost of those developments and the design and cost of the platforms on which they will operate; (4) a final force structure decision will likely be determined by the level of resources the Navy decides to dedicate to the MCM mission in the future; (5) a few systems are scheduled for production decisions within the next 2 to 3 years, while other systems were not produced because the Navy never funded their procurement; (6) since 1992, the Navy has spent about $1.2 billion in RDT&E funds to improve its mine warfare capabilities; (7) however, this investment has not produced any systems that are ready to transition to production; (8) delaying factors include funding instability, changing requirements, cost growth, and unanticipated technical problems; (9) the Navy plans to spend an additional $1.5 billion for RDT&E over the next 6 years; (10) most officials interviewed said the annual certification process has served to increase the visibility of MCM requirements within DOD and the Navy, with positive results, and should continue to be required; (11) however, as currently conducted, the annual certification process does not address the adequacy of overall resources for this mission, nor does it contain any measures against which the Navy's progress in enhancing its MCM capabilities can be evaluated; (12) the Chairman, Joint Chiefs of Staff's review for resource sufficiency occurs after the Navy's budget proposals for its MCM program have been formalized; and (13) the review does not affect specific Navy MCM acquisition programs or overall MCM resource decisions.
7,318
479
The use of computer technology in schools has grown dramatically in the past several years. Surveys conducted by one marketing research firm estimated that in 1983 schools had 1 computer for every 125 students; by 1997, the ratio had improved to 1 computer for every 9 students. Meanwhile, many education technology experts believe that current levels of school technology do not give students enough access to realize technology's full potential. For example, many studies suggest that schools should have a ratio of four to five students for every computer, or five students for every multimedia computer. In addition, concern has been expressed that aging school computers may not be able to run newer computer programs, use multimedia technology, or access the Internet. A computer-based education technology program has many components, as figure 1 shows, which range from the computer hardware and software to the maintenance and technical support needed to keep the system running. Although technology programs may define the components differently, they generally cover the same combination of equipment and support elements. Computer-based technology can be used to augment learning in a number of ways. These include drill-and-practice programs to improve basic skills; programs providing students with the tools to write and produce multimedia projects that combine text, sound, graphics, and video; programs providing access to information resources, such as on the Internet; and networks that support collaborative and active learning. Research on school technology has not, however, provided clear and comprehensive conclusions about its impact on student achievement. Although some studies have shown measurable improvements in some areas, less research data exist on the impact of the more complex uses of technology. Our work focused on funding for school technology. We did not evaluate district goals or accomplishments or assess the value of technology in education. Each of the districts we visited used a combination of funding sources to support technology in its schools (see table 1). At the local level, districts allocated funds from their district operating budgets, levied special taxes, or both. Districts also obtained funds from federal and state programs specifically designated to support school technology or from federal and state programs that could be used for this and other purposes. Finally, districts obtained private grants and solicited contributions from businesses. Although some individual schools in the districts we visited raised some funds, obtaining technology funding was more a district-level function than a school-level function, according to our study. Although districts tapped many sources, nearly all of them obtained the majority of their funding from one main source. The source, however, varied by district. For example, in Seattle, a 1991 local capital levy has provided the majority of the district's education technology funding to date. In Gahanna, the district operating budget has provided the majority of technology funding. All five districts chose to allocate funds for technology from their operating budgets. The portions allocated ranged widely from 16 to 77 percent of their total technology funding. Two districts--Seattle and Roswell--also raised significant portions of their technology funding using local bonds or special levies. Manchester and Seattle won highly competitive 5-year Technology Innovation Challenge Grants for $2.8 million and $7 million, respectively.
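As a side note on the student-to-computer ratios cited at the start of this discussion, the short sketch below shows the arithmetic a district might use to estimate how many additional machines are needed to move from the 1997 national average of about 9 students per computer to the 4-to-5-students-per-computer range many studies suggest. The 600-student enrollment is a made-up figure used only for illustration.

    import math

    def additional_computers_needed(students, current_ratio, target_ratio):
        """Computers to add so that students-per-computer falls from
        current_ratio to target_ratio."""
        have = students / current_ratio
        need = students / target_ratio
        return max(0, math.ceil(need - have))

    # A hypothetical 600-student school at the 1997 average of about
    # 9 students per computer, aiming for 5 students per computer.
    print(additional_computers_needed(600, 9, 5))  # prints 54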
Manchester's Challenge Grant provided the major source of funding for its technology program--about 66 percent of the funding. The $1.5 million in grant funding Seattle has received so far accounted for about 4 percent of the district's technology funding. All five districts reported using federal and state program funding that was not specifically designated for technology but could be used for this purpose if it fulfilled program goals. For example, four districts reported using federal title I funds for technology. In Manchester, a schoolwide program at a title I elementary school we visited had funded many of its 27 computers as part of its title I program. Three districts used state program funds, such as textbook or instructional materials funds, to support their technology programs. In Davidson County, for example, the district has directed about $2 million in such funds, including those for exceptional and at-risk children as well as vocational education, to education technology. All districts received assistance, such as grants and monetary and in-kind donations, from businesses, foundations, and individuals. Such funding constituted about 3 percent or less of their technology funding. It is important to note, however, that our selection criteria excluded districts that had benefited from extraordinary assistance, such as those receiving the majority of their funding from a company or individual. Officials we spoke with attributed the limited business contributions in their districts to a variety of reasons, including businesses not fully understanding the extent of the schools' needs and businesses feeling overburdened by the large number of requests from the community for assistance. Some said their district simply had few businesses from which to solicit help. Nonetheless, all five districts noted the importance of business contributions and were cultivating their ties with business. Individual schools also raised supplemental funds through parent-teacher organization activities and other school fund-raisers. Such supplemental funding amounted to generally less than $7,000 annually but ranged as high as $84,000 over 4 years at one school. Staff at two schools reported that teachers and other staff used their personal funds to support technology in amounts ranging from $100 to over $1,000. Officials in the districts we visited identified a variety of barriers to obtaining technology funding. Four types of barriers were common to most districts and considered by some to be especially significant. (See table 2.) Officials in all of the districts we visited reported that district-level funding was difficult to obtain for technology because it was just one of many important needs that competed for limited district resources. For example, a Gahanna official reported that his district's student population had grown, and the district needed to hire more teachers. A Seattle official reported that his district had $275 million in deferred maintenance needs. Some districts had mandates to meet certain needs before making funding available for other expenditures like technology. Manchester officials noted, for example, that required special education spending constituted 26 percent of their 1997 district operating budget, a figure expected to rise to 27.5 percent in fiscal year 1998. Officials from all districts said that resistance to higher taxes affected their ability to increase district operating revenue to help meet their technology goals.
For example, in Davidson County, the local property tax rate is among the lowest in the state, and officials reported that many county residents were attracted to the area because of the tax rates. In addition, two districts--Roswell and Seattle--did not have the ability to increase the local portion of their operating budgets because of state school finance systems that--to improve equity--limited the amount of funds districts could raise locally. Officials in three districts reported that the antitax sentiment also affected their ability to pass special technology levies and bond measures. Although all districts identified an environment of tax resistance in their communities, most said they believed the community generally supported education. Many officials reported that they did not have the time to search for technology funding in addition to performing their other job responsibilities. They said that they needed considerable time to develop funding proposals or apply for grants. For example, one technology director with previous grant-writing experience said she would need an uninterrupted month to submit a good application for a Department of Commerce telecommunications infrastructure grant. As a result, she did not apply for this grant. The technology director in Manchester said that when the district applied for a Technology Innovation Challenge Grant, two district staff had to drop all other duties to complete the application within the 4-week time frame available. Officials also noted that corporations and foundations typically like to give funds to schools where they can make a dramatic difference. Districts have employed general strategies to overcome funding barriers rather than address specific barriers. The strategies have involved two main approaches--efforts to inform decisionmakers about the importance of and need for technology and leadership efforts to secure support for technology initiatives. In their information efforts, district officials have addressed a broad range of audiences about the importance of and need for technology. These audiences have included school board members, city council representatives, service group members, parents, community taxpayers, and state officials. These presentations have included technology demonstrations, parent information nights, lobbying efforts with state officials, and grassroots efforts to encourage voter participation in levy or bond elections. Roswell, for example, set up a model technology school and used it to demonstrate the use of technology in school classrooms. In the districts we visited, both district officials and the business community provided leadership to support school technology. In all districts, district technology directors played a central leadership role in envisioning, funding, and implementing their respective technology programs over multiyear periods and continued to be consulted for expertise and guidance. In some districts, the superintendent also assumed a role in garnering support and funding for the technology program. Beyond the district office, business community members sometimes assumed leadership roles to support technology by entering into partnerships with the districts to help in technology development efforts as well as in obtaining funding. All five districts we visited had developed such partnerships with local businesses.
In Roswell and Seattle, education foundations comprising business community leaders had helped their school districts' efforts to plan and implement technology, providing both leadership and funding for technology. Other districts we visited continued to cultivate their ties with the business community through organizations such as a business advisory council and a community consortium. Nearly all districts reported maintenance, technical support, and training--components often dependent on staff--as more difficult to fund than other components. Officials we interviewed cited several limitations associated with funding sources that affected their use for staff costs. First, some sources simply could not be used to pay for staff. Officials in Roswell and Seattle noted that special levy and bond monies, their main sources of technology funds, could not be used to support staff because the funds were restricted to capital expenditures. Second, some funding sources do not suit the ongoing nature of staff costs. Officials noted, for example, that grants and other sources provided for a limited time or that fluctuate from year to year are not suited to supporting staff. Most districts funded technology staff primarily from district operating budgets. Several officials noted that competing needs and the limited size of district budgets make it difficult to increase technology staff positions. Officials in all five districts reported having fewer staff than needed. Some technology directors and trainers reported performing maintenance or technical support at the expense of their other duties because of a lack of sufficient support staff. One result was lengthy periods--up to 2 weeks in some cases--when computers and other equipment were unavailable. Several officials observed that this can be frustrating to teachers and discourage them from using the equipment. Teacher training was also affected by limited funding for staff costs, according to officials. In one district, for example, an official said that the number of district trainers was insufficient to provide the desired in-depth training to all teachers. Most district officials expressed a desire for more technology training capability, noting that teacher training promoted the most effective use of the equipment. A number of districts had developed mitigating approaches to a lack of technology support staff. These included purchasing extended warranties on new equipment, training students to provide technical support in their schools, and designating teachers to help with technical support and training. Looking ahead, the districts faced two kinds of future technology costs: (1) ongoing costs of maintenance, technical support, training, and telecommunications and (2) periodic costs of upgrading and replacing hardware, software, and infrastructure to sustain programs. Most districts planned to continue funding ongoing maintenance, technical support, training, and telecommunications costs primarily from their operating budgets and to sustain at least current levels of support. Nonetheless, most districts believed that current levels of maintenance and technical support were not adequate and that demand for staff would likely grow. Some officials talked about hiring staff in small increments but were unsure to what extent future district budgets would support this growing need. The periodic costs to upgrade and replace hardware, software, or infrastructure can be substantial, and most districts faced uncertainty in continuing to fund them with current sources. For example, Davidson County and Gahanna funded significant portions of their hardware with state technology funding.
However, officials told us that in the past, the level of state technology funding had been significantly reduced due to the changing priorities of their state legislatures. In Seattle, special levies are the district's primary funding source, but passing these initiatives is unpredictable. Officials in all districts underscored the need for stable funding sources and for technology to be considered a basic education expenditure rather than an added expense. They also suggested ways to accomplish this. Some proposed including a line item in the district operating budget to demonstrate district commitment to technology as well as provide a more stable funding source. One official said that technology is increasingly considered part of basic education and as such should be included in the state's formula funding. Without such funding, he said, districts would be divided into those that could "sell" technology to voters and those that could not. Technology supporters in the districts we studied not only had to garner support at the start for the district's technology, but they also had to continue making that case year after year. To develop support for technology, leaders in these five school districts used a broad informational approach to educate the community, and they formed local partnerships with business. Each district has developed some ties with business. Nonetheless, funding from private sources, including business, constituted no more than about 3 percent of what each district has spent on its technology program. Other districts like these may need to continue depending mainly on special local bonds and levies, state assistance, and federal grants for initially buying and replacing equipment and on their operating budgets for other technology needs. Lack of staff for seeking and applying for funding and the difficulty of funding technology support staff were major concerns of officials in all the districts we studied. Too few staff to maintain equipment and support technology users in the schools could lead to extensive computer downtime, teacher frustration, and, ultimately, to reduced use of a significant technology investment. The technology program in each of the five districts we visited had not yet secured a clearly defined and relatively stable funding source, such as a line item in the operating budget or a part of the state's education funding formula. As a result, district officials for the foreseeable future will continue trying to piece together funding from various sources to maintain their technology programs and keep them viable. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Task Force may have.
GAO discussed how school districts obtain funds for the acquisition of education technology, focusing on: (1) sources of funding school districts have used to develop and fund their technology programs; (2) barriers districts have faced in funding the technology goals they set, and how they attempted to deal with these barriers; (3) components of districts' technology programs that have been the most difficult to fund, and what the consequences have been; and (4) districts' plans to deal with the ongoing costs of the technology they have acquired. GAO noted that: (1) the five districts it studied used a variety of ways to fund their technology programs; (2) four types of barriers seemed to be common to several districts: (a) technology was just one of a number of competing needs and priorities, such as upkeep of school buildings; (b) local community resistance to higher taxes limited districts' ability to raise more revenue; (c) officials said they did not have enough staff for fund-raising efforts and therefore had difficulty obtaining grants and funding from other sources such as business; and (d) some funding sources had restrictive conditions or requirements that made funding difficult to obtain; (3) to overcome these barriers, officials reported that their districts used a variety of methods to educate and inform the school board and the community about the value of technology; (4) these ranged from presentations to parent groups to the establishment of a model program at one school to showcase the value of technology; (5) the parts of the technology program that were hardest to fund, according to those GAO interviewed, were components such as maintenance, training, and technical support, which depend heavily on staff positions; (6) for example, in two locations special levy and bond funding could be used only for capital expenditures--not for staff; (7) in several districts GAO visited, officials said that staffing shortfalls in maintenance and technical support had resulted in large workloads for existing staff and in maintenance backlogs; (8) most said this resulted in reduced computer use because computers were out of service; and (9) as these districts looked to the future to support the ongoing and periodic costs of their technology programs, they typically planned to continue using a variety of funding sources despite uncertainties associated with many of these sources.
3,237
464
According to EPA, perchlorate can interfere with the normal functioning of the thyroid gland by competitively inhibiting the transport of iodide into the thyroid, which can then affect production of thyroid hormones. The fetus depends on an adequate supply of maternal thyroid hormone for its central nervous system development during the first trimester of pregnancy. The National Academy of Sciences reported that inhibition of iodide uptake from low-level perchlorate exposure may increase the risk of neurodevelopmental impairment in fetuses of high-risk mothers-- pregnant women who might have iodine deficiency or hypothyroidism (reduced thyroid functioning). The Academy recognized the differences in sensitivity to perchlorate exposure between the healthy adults used in some studies and the most sensitive population and the fetuses of these high-risk mothers. Consequently, the Academy included a 10-fold uncertainty factor in its recommended reference dose to protect these sensitive populations. The Academy also called for additional research to help determine what effects low-level perchlorate exposure may have on children and pregnant women. EPA has issued drinking water regulations for more than 90 contaminants. The Safe Drinking Water Act, as amended in 1996, requires EPA to make regulatory determinations on at least five unregulated contaminants and decide whether to regulate these contaminants with a national primary drinking water regulation. The act requires that these determinations be made every five years. The unregulated contaminants are typically chosen from a list known as the Contaminant Candidate List (CCL), which the act also requires EPA to publish every five years. EPA published the second CCL on February 24, 2005. On April 11, 2007, EPA announced its preliminary determination not to regulate 11 of the contaminants on this list. The agency also announced that it was not making a regulatory determination for perchlorate because EPA believed that additional information may be needed to more fully characterize perchlorate exposure and determine whether regulating perchlorate in drinking water presents a meaningful opportunity for health risk reduction. Several federal environmental laws provide EPA and states authorized by EPA with broad authorities to respond to actual or threatened releases of substances that may endanger public health or the environment. For example, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, authorizes EPA to investigate the release of any hazardous substance, pollutant, or contaminant. The Resource Conservation and Recovery Act of 1976 (RCRA) gives EPA authority to order a cleanup of hazardous waste when there is an imminent and substantial endangerment to public health or the environment, and one federal court has ruled that perchlorate is a hazardous waste under RCRA. The Clean Water Act's National Pollutant Discharge Elimination System (NPDES) provisions authorize EPA, which may, in turn, authorize states, to regulate the discharge of pollutants into waters of the United States. These pollutants may include contaminants such as perchlorate. 
The Safe Drinking Water Act authorizes EPA to respond to actual or threatened releases of contaminants into public water systems or underground sources of drinking water, regardless of whether the contaminant is regulated or unregulated, where there is an imminent and substantial endangerment to health and the appropriate state and local governments have not taken appropriate actions. Under certain environmental laws such as RCRA, EPA can authorize states to implement the requirements as long as the state programs are at least equivalent to the federal program and provide for adequate enforcement. In addition, some states have their own environmental and water quality laws that provide state and local agencies with the authority to monitor, sample, and require cleanup of various regulated and unregulated hazardous substances that pose an imminent and substantial danger to public health. For example, the California Water Code authorizes Regional Water Control Boards to require sampling of waste discharges and to direct cleanup and abatement, if necessary, of any threat to water, including the release of an unregulated contaminant such as perchlorate. Finally, according to EPA and state officials, at least 9 states have established nonregulatory action levels or perchlorate advisories, ranging from under 1 part per billion to 18 parts per billion, under which responsible parties have been required to sample and clean up perchlorate. For example, according to California officials, the state of California has a public health goal for perchlorate of 6 parts per billion and has used the goal to require cleanup at one site. Because information on the extent of perchlorate contamination was not readily available, we thoroughly reviewed available perchlorate sampling reports and discussed them with federal and state environmental officials. We identified 395 sites in 35 states, the District of Columbia, and 2 commonwealths of the United States where perchlorate has been found in drinking water, groundwater, surface water, sediment, or soil. The perchlorate concentrations ranged from the minimum reporting level of 4 parts per billion to more than 3.7 million parts per billion--a level found in groundwater at one of the sites. Roughly one-half of the contaminated sites were found in Texas (118) and California (106), where both states conducted broad investigations to determine the extent of perchlorate contamination. As shown in figure 1, the highest perchlorate concentrations were found in five states--Arkansas, California, Nevada, Texas, and Utah--where, collectively, 11 sites had concentrations exceeding 500,000 parts per billion. However, most of the 395 sites did not have such high levels of contamination. We found 271 sites where the concentration was less than 24.5 parts per billion, the drinking water concentration equivalent calculated on the basis of EPA's reference dose. According to EPA and state agency officials, the greatest known source of contamination was defense and aerospace activities. As shown in figure 2, our analysis found that, at 110 of the 395 sites, the perchlorate source was related to propellant manufacturing, rocket motor test firing, and explosives testing and disposal at DOD, NASA, and defense-related industries. Officials said the source of the contamination at another 58 sites was agriculture, a variety of other commercial activities such as fireworks and flare manufacturing, and perchlorate manufacturing and handling.
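The 24.5 parts per billion figure used above is a drinking water equivalent level derived from EPA's reference dose. A minimal sketch of the standard conversion follows; it assumes EPA's 2005 reference dose of 0.0007 milligrams of perchlorate per kilogram of body weight per day, a 70-kilogram adult, and 2 liters of drinking water per day. Those input values are conventional EPA defaults rather than figures stated in this testimony.

    # Illustrative conversion of a reference dose to a drinking water
    # equivalent level. Inputs are assumed EPA defaults, not taken from
    # this testimony.
    reference_dose_mg_per_kg_day = 0.0007  # perchlorate reference dose
    body_weight_kg = 70.0                  # default adult body weight
    water_intake_l_per_day = 2.0           # default daily drinking water intake

    dwel_mg_per_l = reference_dose_mg_per_kg_day * body_weight_kg / water_intake_l_per_day
    dwel_ppb = dwel_mg_per_l * 1000.0      # 1 mg/L = 1,000 micrograms/L = 1,000 ppb

    print(f"Drinking water equivalent level: {dwel_ppb:.1f} ppb")  # prints 24.5 ppb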
At the remaining 227 sites, state agency officials said the source of the perchlorate was either undetermined (122 sites) or naturally occurring (105 sites). Further, all 105 sites with naturally occurring perchlorate are located in the Texas high plains region, where perchlorate concentrations range from 4 to 59 parts per billion. Of the sites we identified, 153 were public drinking water systems. The Safe Drinking Water Act's Unregulated Contaminant Monitoring Regulation required sampling of public drinking water systems for a 12-month period between 2001 and 2003. As of January 2005, 153 (about 4 percent) of the 3,722 systems that were sampled reported finding perchlorate to EPA. Located across 26 states and 2 commonwealths, these 153 systems accounted for more than one-third of the sites we identified; the perchlorate concentrations they reported ranged from 4 parts per billion to 420 parts per billion but averaged less than 10 parts per billion. Only 14 of the 153 public drinking water systems had concentration levels above 24.5 parts per billion, the drinking water equivalent calculated on the basis of EPA's revised perchlorate reference dose. California had the most public water systems with perchlorate; 58 of its systems reported finding perchlorate in drinking water. The highest drinking water perchlorate concentration of 420 parts per billion was found in Puerto Rico in 2002. Subsequent sampling in Puerto Rico did not find any perchlorate, and officials said the source of the initial finding was undetermined. These 153 public drinking water systems that found perchlorate serve populated areas, and an EPA official estimated that as many as 10 million people may have been exposed to the chemical. EPA officials told us they do not know the source of most of the contamination found in public drinking water systems, but that contamination in 32 systems in Arizona, California, and Nevada was likely due to previous perchlorate manufacturing at a Kerr McGee Chemical Company site in Henderson, Nevada. Regional EPA and state officials told us they did not plan to clean up perchlorate found at public drinking water sites until EPA establishes a drinking water standard for perchlorate. In some cases, officials did not plan to clean up because subsequent sampling was unable to confirm that perchlorate was present. EPA officials said the agency does not centrally track or monitor perchlorate detections or the status of cleanup activities. As a result, it is difficult to determine the extent of perchlorate contamination in the U.S. EPA maintains a list of sites where cleanup or other response actions are underway, but the list does not include sites not reported to EPA. As a result, EPA officials said they did not always know whether other federal and state agencies found perchlorate because, as is generally the case with unregulated contaminants, there is no requirement for states or other federal agencies to routinely report perchlorate findings to EPA. For example, DOD is not required to report to EPA when perchlorate is found on active installations and facilities. Consequently, EPA region officials in California said they did not know the Navy found perchlorate at the Naval Air Weapons Station at China Lake because the Navy did not report the finding to EPA. Further, states are not required to routinely notify EPA about perchlorate contamination they discover.
For example, EPA region officials in California said the Nevada state agency did not tell them perchlorate was found at Rocketdyne, an aerospace facility in Reno, or that it was being cleaned up. EPA only learned about the perchlorate contamination when the facility's RCRA permit was renewed. In our May 2005 review, we conducted a literature search for studies of perchlorate health risks published from 1998 to 2005 and identified 125 studies on perchlorate and the thyroid. After interviewing DOD and EPA officials about which studies they considered important in assessing perchlorate health risks, we reviewed 90 that were relevant to our work. The findings of 26 of these studies indicated that perchlorate had an adverse effect on thyroid function and human health. In January 2005, the National Academy of Sciences considered many of these same studies and concluded that the studies did not support a clear link between perchlorate exposure and changes in the thyroid function or thyroid cancer in adults. Consequently, the Academy recommended additional research into the effect of perchlorate exposure on children and pregnant women but did not recommend a drinking water standard. DOD, EPA, and industry sponsored the majority of the 90 health studies we reviewed; the remaining studies were conducted by academic researchers and other federal agencies. Of these 90 studies, 49 were experiments that sought to determine the effects of perchlorate on humans, mammals, fish, and/or amphibians by exposing these groups to different doses of perchlorate over varied time periods and comparing the results with other groups that were not exposed. Twelve were field studies that compared humans, mammals, fish, and/or amphibians in areas known to be contaminated with the same groups in areas known to be uncontaminated. Both types of studies have limitations: the experimental studies were generally short in duration, and the field studies were generally limited by the researchers' inability to control whether, how much, or how long the population in the contaminated areas was exposed. For another 29 studies, researchers reviewed several publicly available human and animal studies and used data derived from these studies to determine the process by which perchlorate affects the human thyroid and the highest exposure levels that did not adversely affect humans. The 3 remaining studies used another methodology. Many of the studies we reviewed contained only research findings, rather than conclusions or observations on the health effects of perchlorate. Appendix III from our 2005 report provides data on these studies, including who sponsored them; what methodologies were used; and, where presented, the author's conclusions or findings on the effects of perchlorate. Only 44 of the studies we reviewed had conclusions on whether perchlorate had an adverse effect. However, adverse effects of perchlorate on the adult thyroid are difficult to evaluate because they may happen over longer time periods than can be observed in a typical research study. Moreover, different studies used the same perchlorate dose amount but observed different effects, which were attributed to variables such as the study design type or age of the subjects. Such unresolved questions were one of the bases for the differing conclusions in EPA, DOD, and academic studies on perchlorate dose amounts and effects. The adverse effects of perchlorate on development can be more easily studied and measured within typical study time frames. 
Of the studies we reviewed, 29 evaluated the effect of perchlorate on development, and 18 of these found adverse effects resulting from maternal exposure to perchlorate. According to EPA officials, the most sensitive population for perchlorate exposure is the fetus of a pregnant woman who is also nearly iodine-deficient. However, none of the 90 studies that we reviewed considered this population. Some studies reviewed the effect on the thyroid of pregnant rats, but we did not find any studies that considered perchlorate's effect on the thyroid of nearly iodine-deficient pregnant rats. In January 2005, the National Academy of Sciences issued its report on EPA's draft health assessment and the potential health effects of perchlorate. The Academy reported that although perchlorate affects thyroid functioning, there was not enough evidence to show that perchlorate causes adverse effects at the levels found in most environmental samples. Most of the studies that the Academy reviewed were field studies, the report said, which are limited because they cannot control whether, how much, or how long a population in a contaminated area is exposed. The Academy concluded that the studies did not support a clear link between perchlorate exposure and changes in the thyroid function in newborns and hypothyroidism or thyroid cancer in adults. In its report, the Academy noted that only 1 study examined the relationship between perchlorate exposure and adverse effects on children, and that no studies investigated the relationship between perchlorate exposure and adverse effects on vulnerable groups, such as low-birth-weight infants. The Academy concluded that an exposure level higher than initially recommended by EPA may not adversely affect a healthy adult. The Academy recommended that additional research be conducted on perchlorate exposure and its effect on children and pregnant women but did not recommend that EPA establish a drinking water standard. To address these issues, in October 2006, CDC researchers published the results of the first large study to examine the relationship between low- level perchlorate exposure and thyroid function in women with lower iodine levels. About 36 percent of U.S. women have these lower iodine levels. The study found decreases in a thyroid hormone that helps regulate the body's metabolism and is needed for proper fetal neural development in pregnant women. Mr. Chairman, this concludes my testimony. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this presentation, please contact me, John Stephenson, at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Steven Elstein, Assistant Director, and Terrance Horner, Senior Analyst; Richard Johnson, Alison O'Neill, Kathleen Robertson, and Joe Thompson also made key contributions. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Perchlorate has been used for decades by the Department of Defense, the National Aeronautics and Space Administration, and the defense industry in manufacturing, testing, and firing missiles and rockets. Other uses include fireworks, fertilizers, and explosives. Perchlorate is readily dissolved and transported in water and has been found in groundwater, surface water, and soil across the country. Perchlorate emerged as a contaminant of concern because health studies have shown that it can affect the thyroid gland, which helps regulate the body's metabolism, and may cause developmental impairment in fetuses of pregnant women. In 2005, EPA set a reference dose for perchlorate equivalent to a drinking water concentration of 24.5 parts per billion (ppb)--the exposure level not expected to cause adverse effects in humans. Today's testimony updates GAO's May 2005 report, Perchlorate: A System to Track Sampling and Cleanup Results is Needed, GAO-05-462. It summarizes GAO's (1) compilation of the extent of perchlorate contamination in the U.S. and (2) review of peer-reviewed studies about perchlorate's health risks. GAO's 2005 report recommended that EPA work to track and monitor perchlorate detections and cleanup efforts. In December 2006, EPA reiterated its disagreement with this recommendation. GAO continues to believe such a system would better inform the public and others about perchlorate's presence in their communities. Perchlorate has been found at 395 sites in the U.S.--including 153 public drinking water systems--in concentrations ranging from 4 ppb to more than 3.7 million ppb. More than half the sites are in California and Texas, with the highest concentrations found in Arkansas, California, Texas, Nevada, and Utah. About 28 percent of sites were contaminated by defense and aerospace activities related to propellant manufacturing, rocket motor research and test firing, or explosives disposal. Federal and state agencies are not required to routinely report perchlorate findings to EPA, which does not track or monitor perchlorate detections or cleanup status. EPA recently decided not to regulate perchlorate in drinking water supplies pending further study. GAO reviewed 90 studies of health risks from perchlorate published from 1998 to 2005, and about one-quarter indicated that perchlorate had an adverse effect on human health, and thyroid function in particular. In January 2005, the National Academy of Sciences also reviewed several studies and concluded that they did not support a clear link between perchlorate exposure and changes in thyroid function. The academy did not recommend a drinking water standard but recommended additional research into the effect of perchlorate exposure on children and pregnant women. More recently, a large study by CDC scientists has identified adverse thyroid effects from perchlorate in women with low iodine levels, which are found in about 36 percent of U.S. women.
Detecting illicit trafficking in nuclear material is complicated because one of the materials of greatest concern--highly enriched uranium--has a relatively low level of radioactivity and is, therefore, among the most difficult to detect. In contrast, medical and industrial radioactive sources, which could be used to construct a dirty bomb, are highly radioactive and, therefore, easier to detect. Although their levels of radioactivity differ, uranium and radioactive sources are similar in that they generally emit only gamma radiation, which is relatively easily shielded when encased in high-density material, such as lead. For example, we reported in March 2005 that a cargo container containing a radioactive source passed through radiation detection equipment DOE had installed at a foreign seaport without being detected because the source was surrounded by large amounts of scrap metal in the container. Plutonium, another nuclear material of great concern, emits both gamma and neutron radiation. Although most currently fielded radiation detection equipment has the capability to detect both gamma and neutron radiation, shielding neutron radiation can be more difficult than shielding gamma radiation. Consequently, plutonium can usually be detected by a neutron detector regardless of the amount of shielding from high-density material. According to DOE officials, neutron radiation alarms are caused only by man-made materials, such as plutonium, while gamma radiation alarms are caused by a variety of naturally occurring sources, including commercial goods such as bananas, ceramic tiles, and fertilizer, as well as by dangerous nuclear materials, such as uranium and plutonium. Because of the complexities of detecting and identifying nuclear material, customs officers and border guards who are responsible for operating detection equipment must be trained in using handheld radiation detectors to pinpoint the source of an alarm, identify false alarms, and properly respond to cases of nuclear smuggling. The manner in which radiation detection equipment is deployed, operated, and maintained can also limit its effectiveness. Given the difficulties in detecting certain nuclear materials and the inherent limitations of currently deployed radiation detection equipment, it is important that the equipment be installed, operated, and maintained in a way that optimizes authorities' ability to interdict illicit nuclear materials. Although efforts to combat nuclear smuggling through the installation of radiation detection equipment are important, the United States should not and does not rely upon radiation detection equipment at U.S. or foreign borders as its sole means for preventing nuclear materials or a nuclear warhead from reaching the United States. Recognizing the need for a broad approach to the problem, the U.S. government has multiple initiatives that are designed to complement each other that provide a layered defense against nuclear terrorism. For example, DOE works to secure nuclear material and warheads at their sources through programs that improve the physical security at nuclear facilities in the former Soviet Union and in other countries. In addition, DHS has other initiatives to identify containers at foreign seaports that are considered high risk for containing smuggled goods, such as nuclear and other dangerous materials. 
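The physics behind this point can be made concrete with a back-of-the-envelope calculation. The short Python sketch below is illustrative and not part of the original testimony: it applies the standard exponential attenuation law for a narrow gamma-ray beam, and the lead attenuation coefficient is an assumed, order-of-magnitude value chosen only to show why modest amounts of high-density shielding sharply reduce a gamma signal, which is why shielded highly enriched uranium is difficult to detect.

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of a narrow gamma-ray beam passing through a shield,
    per the standard exponential attenuation law I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative linear attenuation coefficient for lead (on the order of 1 per cm
# at typical gamma energies); this value is an assumption, not from the testimony.
MU_LEAD_PER_CM = 1.0

for thickness_cm in (1, 5, 10):
    fraction = transmitted_fraction(MU_LEAD_PER_CM, thickness_cm)
    print(f"{thickness_cm:>2} cm of lead shielding -> {fraction:.3%} of gammas transmitted")
```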
Supporting all of these programs is intelligence information that can give advance notice of nuclear material smuggling and is a critical component of efforts to prevent dangerous materials from entering the United States. One of the main U.S. efforts providing radiation detection equipment to foreign governments is DOE's Second Line of Defense program, which began installing equipment at key sites in Russia in 1998. According to DOE, through the end of fiscal year 2005, the program had spent about $130 million to complete installations at 83 sites, mostly in Russia. Ultimately, DOE plans to install radiation detection equipment at a total of about 350 sites in 31 countries by 2012 at a total cost of about $570 million. In addition to DOE's efforts, other U.S. agencies also have programs that provide radiation detection equipment and training to foreign governments. Two programs at DOD--the International Counterproliferation Program and Weapons of Mass Destruction Proliferation Prevention Initiative--have provided equipment and related training to eight countries in the former Soviet Union and Eastern Europe at a cost of about $22 million. Similarly, three programs at State--the Nonproliferation and Disarmament Fund, Georgia Border Security and Law Enforcement program, and Export Control and Related Border Security program--have spent about $25 million to provide radiation detection equipment and training to 31 countries. However, these agencies face a number of challenges that could compromise their programs' effectiveness, including (1) corruption of foreign border security officials, (2) technical limitations of equipment at some foreign sites, (3) problems with maintenance of handheld equipment, and (4) the lack of infrastructure and harsh environmental conditions at some border sites. First, according to officials from several recipient countries we visited, corruption is a pervasive problem within the ranks of border security organizations. DOE, DOD, and State officials told us they are concerned that corrupt foreign border security personnel could compromise the effectiveness of U.S.-funded radiation detection equipment by either turning off equipment or ignoring alarms. To mitigate this threat, DOE and DOD plan to deploy communications links between individual border sites and national command centers so that alarm data can be simultaneously evaluated by multiple officials, thus establishing redundant layers of accountability for alarm response. In addition, DOD plans to implement a program in Uzbekistan that uses periodic screening of border security personnel to combat some of the underlying issues that can lead to corruption. Second, some radiation portal monitors that State and other U.S. agencies previously installed have technical limitations: they can detect only gamma radiation, making them less effective at detecting some nuclear material than equipment with both gamma and neutron radiation detection capabilities. Through an interagency agreement, DOE assumed responsibility for ensuring the long-term sustainability and continued operation of radiation portal monitors and X-ray vans equipped with radiation detectors that State and other U.S. agencies provided to 23 countries. Through this agreement, DOE provides spare parts, preventive maintenance, and repairs for the equipment through regularly scheduled maintenance visits. Since 2002, DOE has maintained this equipment but has not upgraded any of it, except at one site in Azerbaijan. 
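The per-agency spending figures cited above can be tallied with a short calculation. The sketch below is a minimal illustration using only the approximate amounts reported in this statement; the sum is consistent with the roughly $178 million combined figure cited elsewhere in the testimony.

```python
# Approximate spending reported in the testimony, in millions of dollars,
# for U.S. programs providing radiation detection equipment abroad.
reported_spending_millions = {
    "DOE Second Line of Defense": 130,
    "DOD (two programs)": 22,
    "State (three programs)": 25,
}

for program, amount in reported_spending_millions.items():
    print(f"{program}: about ${amount} million")

total = sum(reported_spending_millions.values())
print(f"Combined: about ${total} million since fiscal year 1994")
```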
According to DOE officials, new implementing agreements with the appropriate ministries or agencies within the governments of each of the countries where the old equipment is located are needed before DOE can install more sophisticated equipment. Third, since 2002, DOE has been responsible for maintaining certain radiation detection equipment previously deployed by State and other agencies in 23 countries. However, DOE is not responsible for maintaining handheld radiation detection equipment provided by these agencies. As a result, many pieces of handheld equipment, which are vital for border officials to conduct secondary inspections of vehicles or pedestrians, may not function properly. For example, in Georgia, we observed border guards performing secondary inspections with a handheld radiation detector that had not been calibrated (adjusted to conform with measurement standards) since 1997. According to the detector's manufacturer, yearly recalibration is necessary to ensure that the detector functions properly. Finally, many border sites are located in remote areas that often do not have access to reliable supplies of electricity, fiber optic lines, and other infrastructure essential to operate radiation detection equipment and associated communication systems. Additionally, environmental conditions at some sites, such as extreme heat, can affect the performance of equipment. To mitigate these concerns, DOE, DOD, and State have provided generators and other equipment at remote border sites to ensure stable supplies of electricity and, when appropriate, heat shields or other protection to ensure the effectiveness of radiation detection equipment. We also reported that State's ability to carry out its role as lead interagency coordinator of U.S. radiation detection equipment assistance has been limited by deficiencies in its strategic plan for interagency coordination and by its lack of a comprehensive list of all U.S. radiation detection equipment assistance. In response to a recommendation we made in 2002, State led the development of a governmentwide plan to coordinate U.S. radiation detection equipment assistance overseas. This plan broadly defines a set of interagency goals and outlines the roles and responsibilities of participating agencies. However, the plan lacks key components, including overall program cost estimates, projected time frames for program completion, and specific performance measures. Without these elements in the plan, State will be limited in its ability to effectively measure U.S. programs' progress toward achieving the interagency goals. Additionally, in its role as lead interagency coordinator, State has not maintained accurate information on the operational status and location of all radiation detection equipment provided by U.S. programs. While DOE, DOD, and State each maintain lists of radiation detection equipment provided by their programs, they do not regularly share such information, and no comprehensive list of all equipment provided by U.S. programs exists. For example, according to information we received from program managers at DOE, DOD, and State, more than 7,000 pieces of handheld radiation detection equipment had been provided to 36 foreign countries through the end of fiscal year 2005. Because much of this equipment was provided to the same countries by multiple agencies and programs, it is difficult to determine the degree to which duplication of effort has occurred. 
Without a coordinated master list of all U.S.-funded equipment, program managers at DOE, DOD, and State cannot accurately assess if equipment is operational and being used as intended, determine the equipment needs of countries where they plan to provide assistance, or detect whether an agency has unknowingly supplied duplicative equipment. Through December 2005, DHS had installed about 670 radiation portal monitors nationwide-- about 22 percent of the portal monitors DHS plans to deploy--at international mail and express courier facilities, land border crossings, and seaports in the United States. DHS has completed portal monitor deployments at international mail and express courier facilities and the first phase of northern border sites--57 and 217 portal monitors, respectively. In addition, by December 2005, DHS had deployed 143 of 495 portal monitors at seaports and 244 of 360 at southern border sites. As of February 2006, CBP estimated that, with these deployments, it has the ability to screen about 62 percent of all containerized shipments entering the United States (but only 32 percent of all containerized seaborne shipments) and roughly 77 percent of all private vehicles. DHS plans to deploy 3,034 portal monitors by September 2009 at a cost of $1.3 billion. However, the final costs and deployment schedule are highly uncertain because of delays in releasing appropriated funds to contractors, difficulties in negotiating with seaport operators, and uncertainties in the type and cost of radiation detection equipment DHS plans to deploy. Further, to meet this goal, DHS would have to deploy about 52 portal monitors a month for the next 4 years--a rate that far exceeds the 2005 rate of about 22 per month. In particular, several factors have contributed to the delay in the deployment schedule. First, DHS provides the Congress with information on portal monitor acquisitions and deployments before releasing any funds. However, DHS's cumbersome review process has consistently caused delays in providing such information to the Congress. For example, according to the House Appropriations Committee report on DHS's fiscal year 2005 budget, CBP should provide the Congress with an acquisition and deployment plan for the portal monitor program prior to funding its contractors. This plan took many months to finalize, mostly because it required multiple approvals within DHS and the Office of Management and Budget prior to being submitted to the Congress. The lengthy review process delayed the release of funds and, in some cases, disrupted and delayed deployment. Second, difficult negotiations with seaport operators about placement of portal monitors and screening of railcars have delayed deployments at U.S. seaports. Many seaport operators are concerned that radiation detection equipment may inhibit the flow of commerce through their ports. In addition, seaports are much larger than land border crossings, consist of multiple terminals, and may have multiple exits, which may require a greater number of portal monitors. Further, devising an effective way to conduct secondary inspections of rail traffic as it departs seaports without disrupting commerce has delayed deployments. This problem may worsen because the Department of Transportation has forecast that the use of rail transit out of seaports will probably increase in the near future. 
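The deployment pace implied by these figures can be checked directly. The sketch below uses only the numbers reported above; the 45-month window is an assumed reading of the period from December 2005 through September 2009.

```python
planned_total = 3034    # portal monitors DHS plans to have deployed by September 2009
installed = 670         # portal monitors installed through December 2005
months_remaining = 45   # December 2005 to September 2009 (assumed reading of the schedule)

required_per_month = (planned_total - installed) / months_remaining
print(f"Remaining monitors to deploy: {planned_total - installed}")
print(f"Required pace: about {required_per_month:.1f} per month, versus about 22 per month in 2005")
```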
Finally, DHS's $1.3 billion estimate for the project is highly uncertain, in part, because of uncertainties in the type and cost of radiation detection equipment that DHS plans to deploy. The estimate is based on DHS's plans for widespread deployment of advanced technology portal monitors, which are currently being developed. However, the prototypes of this equipment have not yet been shown to be more effective than the portal monitors now in use, and DHS officials say they will not purchase the advanced portal monitors unless they are proven to be clearly superior. Moreover, when advanced technology portal monitors become commercially available, experts estimate that they will cost between about $330,000 and $460,000 each, far more than the currently used portal monitors whose costs range from about $49,000 to $60,000. Even if future test results indicate better detection capabilities, without a detailed comparison of the two technologies' capabilities it would not be clear that the dramatically higher cost for this new equipment would be worth the investment. We also identified potential issues with the procedures CBP inspectors use to perform secondary inspections that, if addressed, could strengthen the nation's defenses against nuclear smuggling. For example, CBP's procedures require only that officers locate, isolate, and identify radiological material. Typically, officers perform an external examination by scanning the sides of cargo containers with handheld radiation detection equipment during secondary inspections. CBP's guidance does not specifically require officers to open containers and inspect their interiors, even when their external examination cannot unambiguously resolve the alarm. However, under some circumstances, opening containers can improve security by increasing the chances that the source of radioactivity that originally set off the alarm will be correctly located and identified. The second potential issue with CBP's procedures involves NRC documentation. Individuals and organizations shipping radiological materials to the United States must generally acquire a NRC license, but according to NRC officials, the license does not have to accompany the shipment. Although inspectors examine such licenses when these shipments arrive at U.S. ports of entry, CBP officers are not required to verify that shippers of radiological material actually obtained required licenses and to authenticate licenses that accompany shipments. We found that CBP inspectors lack access to NRC license data that could be used to authenticate a license at the border. This concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. R. Stockton Butler, Nancy Crothers, Jim Shafer, and Eugene Wisnoski made key contributions to this statement. Combating Nuclear Smuggling: DHS Has Made Progress in Deploying Radiation Detection Equipment at U.S. Ports of Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. 
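To put the cost uncertainty in perspective, the following minimal sketch compares the per-unit price ranges cited above for current and advanced technology portal monitors; it adds no information beyond the figures in the testimony.

```python
current_unit_cost = (49_000, 60_000)     # currently deployed portal monitors, cost per unit
advanced_unit_cost = (330_000, 460_000)  # advanced technology portal monitors, estimated cost per unit

low_multiple = advanced_unit_cost[0] / current_unit_cost[1]   # cheapest advanced vs. priciest current
high_multiple = advanced_unit_cost[1] / current_unit_cost[0]  # priciest advanced vs. cheapest current
print(f"An advanced monitor would cost roughly {low_multiple:.1f} to {high_multiple:.1f} times as much per unit")
```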
Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005. Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002. Nuclear Nonproliferation: U.S. Efforts to Combat Nuclear Smuggling. GAO-02-989T. Washington, D.C.: July 30, 2002. Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO is releasing two reports today on U.S. efforts to combat nuclear smuggling in foreign countries and in the United States. Together with the March 2005 report on the Department of Energy's Megaports Initiative, these reports represent GAO's analysis of the U.S. effort to deploy radiation detection equipment worldwide. In my testimony, I will discuss (1) the progress made and challenges faced by the Departments of Energy (DOE), Defense (DOD), and State in providing radiation detection equipment to foreign countries and (2) the Department of Homeland Security's (DHS) efforts to install radiation detection equipment at U.S. ports of entry and challenges it faces. Regarding the deployment of radiation detection equipment in foreign countries, DOE, DOD, and State have spent about $178 million since fiscal year 1994 to provide equipment and related training to 36 countries. For example, through the end of fiscal year 2005, DOE's Second Line of Defense program had completed installation of equipment at 83 sites, mostly in Russia. However, these agencies face a number of challenges that could compromise their efforts, including corruption of foreign border security officials, technical limitations and inadequate maintenance of some equipment, and the lack of supporting infrastructure at some border sites. To address these challenges, U.S. agencies plan to take a number of steps, including combating corruption by installing multitiered communications systems that establish redundant layers of accountability for alarm response. State coordinates U.S. programs to limit overlap and duplication of effort. However, State's ability to carry out this role has been limited by deficiencies in its interagency strategic plan and its lack of a comprehensive list of all U.S. radiation detection equipment provided to other countries. Domestically, DHS had installed about 670 radiation portal monitors through December 2005 and provided complementary handheld radiation detection equipment at U.S. ports of entry at a cost of about $286 million. DHS plans to install a total of 3,034 radiation portal monitors by the end of fiscal year 2009 at a total cost of $1.3 billion. However, the final costs and deployment schedule are highly uncertain because of delays in releasing appropriated funds to contractors, difficulties in negotiating with seaport operators, and uncertainties in the type and cost of radiation detection equipment DHS plans to deploy. Overall, GAO found that U.S. Customs and Border Protection (CBP) officers have made progress in using radiation detection equipment correctly and adhering to inspection guidelines, but CBP's secondary inspection procedures could be improved. For example, GAO recommended that DHS require its officers to open containers and inspect them for nuclear and radioactive materials when they cannot make a determination from an external inspection and that DHS work with the Nuclear Regulatory Commission (NRC) to institute procedures by which inspectors can validate NRC licenses at U.S. ports of entry.
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard's responsibilities fall into two general categories--those related to homeland security missions, such as port security and vessel escorts, and those related to non-homeland security missions, such as search and rescue and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft and, through its Deepwater Program, is currently modernizing or replacing those assets. At the start of Deepwater in the late 1990s, the Coast Guard chose to use a system of systems acquisition strategy that was intended to replace the assets with a single, integrated package of aircraft, vessels, and communications systems. As the systems integrator, ICGS was responsible for designing, constructing, deploying, supporting, and integrating the assets. The decision to use a systems integrator for the Deepwater Program was driven in part because of the Coast Guard's lack of expertise in managing and executing an acquisition of this magnitude. Under this approach, the Coast Guard provided the contractor with broad, overall performance specifications--such as the ability to interdict illegal immigrants--and ICGS determined the specifications for the Deepwater assets. According to Coast Guard officials, the ICGS proposal was submitted and priced as a package; that is, the Coast Guard bought the entire solution and could not reject any individual component. Deepwater assets are in various stages of the acquisition process. Some, such as the NSC and Maritime Patrol Aircraft, are in production. Others, such as the Fast Response Cutter, are in design, and still others, such as the Offshore Patrol Cutter, are in the early stages of requirements definition. Since the Commandant's April 2007 announcement that the Coast Guard was taking over the lead role in systems integration from ICGS, the Coast Guard has undertaken several initiatives that have increased accountability for Deepwater outcomes within the Coast Guard and to DHS. The Coast Guard's Blueprint for Acquisition Reform sets forth a number of objectives and specific tasks with the intent of improving acquisition processes and results. Its overarching goal is to enhance the Coast Guard's mission execution through improved contracting and acquisition approaches. One key effort in this regard was the July 2007 consolidation of the Coast Guard's acquisition responsibilities--including the Deepwater Program--into a single acquisition directorate. Previously, Deepwater assets were managed independently of other Coast Guard acquisitions within an insulated structure. The Coast Guard has also vested its government project managers with management and oversight responsibilities formerly held by ICGS. The Coast Guard is also now managing Deepwater under an asset-based approach, rather than as an overall system-of-systems as initially envisioned. This approach has resulted in increased government control and visibility. For example, cost and schedule information is now captured at the individual asset level, resulting in the ability to track and report cost breaches for assets. Under the prior structure, a cost breach was to be tracked at the overall Deepwater Program level, and the threshold was so high that a breach would have been triggered only by a catastrophic event. 
To manage Deepwater acquisitions at the asset level, the Coast Guard has begun to follow a disciplined project management process using the framework set forth in its Major Systems Acquisition Manual. This process requires documentation and approval of program activities at key points in a program's life cycle. The process begins with identification of deficiencies in Coast Guard capabilities and then proceeds through a series of structured phases and decision points to identify requirements for performance, develop and select candidate systems that meet those requirements, demonstrate the feasibility of selected systems, and produce a functional capability. Previously, the Coast Guard authorized the Deepwater Program to deviate from the structured acquisition process, stating that the requirements of the process were not appropriate for the Deepwater system-of-systems approach. Instead, Deepwater Program reviews were required on a schedule-driven--as opposed to the current event-driven--basis. Further, leadership at DHS is now formally involved in reviewing and approving key acquisition decisions for Deepwater assets. We reported in June 2008 that DHS approval of Deepwater acquisition decisions as part of its investment review process was not required, as the department had deferred decisions on specific assets to the Coast Guard in 2003. We recommended that the Secretary of DHS direct the Under Secretary for Management to rescind the delegation of Deepwater acquisition decision authority. In September 2008, the Under Secretary took this step, so that Deepwater acquisitions are now subject to the department's investment review process, which calls for executive decision making at key points in an investment's life cycle. We also reported this past fall, however, that DHS had not effectively implemented or adhered to this investment review process; consequently, the department had not provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments. Without the appropriate reviews, DHS loses the opportunity to identify and address cost, schedule, and performance problems and, thereby, minimize program risk. We reported that 14 of the department's investments that lacked appropriate review experienced cost growth, schedule delays, and underperformance--some of which were substantial. Other programs within DHS have also experienced cost growth and schedule delays. For example, we reported in July 2008 that the Coast Guard's Rescue 21 system was projected to experience cost increases of 184 percent and schedule delays of 5 years after rebaselining. DHS issued a new interim management directive on November 7, 2008, that addresses many of our findings and recommendations on the department's major investments. If implemented as intended, the more disciplined acquisition and investment review process outlined in the directive will help ensure that the department's largest acquisitions, including Deepwater, are effectively overseen and managed. While the decision to follow the Major Systems Acquisition Manual process for Deepwater assets is promising, the consequences of not following this acquisition approach in the past--when the contractor managed the overall acquisition--are now apparent for assets already in production, such as the NSC, and are likely to pose continued problems, such as increased costs. 
Because ICGS had determined the overall Deepwater solution, the Coast Guard had not ensured traceability from identification of mission needs to performance specifications for the Deepwater assets. In some cases it is already known that the ICGS solution does not meet Coast Guard needs, for example: The Coast Guard accepted the ICGS-proposed performance specifications for the long-range interceptor, a small boat intended to be launched from larger cutters such as the NSC, with no assurance that the boat it was buying was what was needed to accomplish its missions. Ultimately, after a number of design changes and a cost increase from $744,621 to almost $3 million, the Coast Guard began to define for itself the capabilities it needed and has decided not to buy any more of the ICGS boats. ICGS had initially proposed a fleet of 58 fast response cutters, subsequently termed the Fast Response Cutter-A (FRC-A), which were to be constructed of composite materials (as opposed to steel, for example). However, the Coast Guard suspended design work on the FRC-A in February 2006 to assess and mitigate technical risks. Ultimately, because of high risk and uncertain cost savings, the Coast Guard decided not to pursue the acquisition, a decision based largely on a third-party analysis that found the composite technology was unlikely to meet the Coast Guard's desired 35-year service life. After obligating $35 million to ICGS for the FRC-A, the Coast Guard pursued a competitively awarded fast response cutter based on a modified commercially available patrol boat. That contract was awarded in September 2008. Although the shift to individual acquisitions is intended to provide the Coast Guard with more visibility and control, key aspects still require a system-level approach. These aspects include an integrated C4ISR system--needed to provide critical information to field commanders and facilitate interoperability with the Department of Defense and DHS--and decisions on production quantities of each Deepwater asset the Coast Guard requires to achieve its missions. The Coast Guard is not fully positioned to manage these aspects under its new acquisition approach but is engaged in efforts to do so. C4ISR is a key aspect of the Coast Guard's ability to meet its missions. How the Coast Guard structures C4ISR is fundamental to the success of the Deepwater Program because C4ISR encompasses the connections among surface, aircraft, and shore-based assets and the means by which information is communicated through them. C4ISR is intended to provide operationally relevant information to Coast Guard field commanders to allow the efficient and effective execution of their missions. However, an acquisition strategy for C4ISR is still in development. Officials stated that the Coast Guard is revisiting the C4ISR incremental acquisition approach proposed by ICGS and analyzing that approach's requirements and architecture. In the meantime, the Coast Guard is continuing to acquire C4ISR through ICGS. As the Coast Guard transitions from the ICGS-based system-of-systems acquisition strategy to an asset-based approach, it will need to maintain a strategic outlook to determine how many of the various Deepwater assets to procure to meet Coast Guard needs. When deciding how many of a specific vessel or aircraft to procure, it is important to consider not only the capabilities of that asset, but how it can complement or duplicate the capabilities of the other assets with which it is intended to operate. 
To that end, the Coast Guard is modeling the planned capabilities of Deepwater assets, as well as the capabilities and operations of existing assets, against the requirements for Coast Guard missions. The intent of this modeling is to test each planned asset to ensure that its capabilities fill stated deficiencies in the Coast Guard's force structure and to inform how many of a particular asset are needed. However, the analysis based on the modeling is not expected to be completed until the summer of 2009. In the meantime, Coast Guard continues to plan for asset acquisitions in numbers very similar to those determined by ICGS, such as 8 NSCs. Like many federal agencies that acquire major systems, the Coast Guard faces challenges in recruiting and retaining a sufficient government acquisition workforce. In fact, one of the reasons the Coast Guard originally contracted with ICGS as a systems integrator was the recognition that the Coast Guard lacked the experience and depth in its workforce to manage the acquisition itself. The Coast Guard's 2008 acquisition human capital strategic plan sets forth a number of workforce challenges that pose the greatest threats to acquisition success, including a shortage of civilian acquisition staff , its military personnel rotation policy, and the lack of an acquisition career path for its military personnel. The Coast Guard has taken a number of steps to hire more acquisition professionals, including the increased use of recruitment incentives and relocation bonuses, utilizing direct hire authority, and rehiring government annuitants. The Coast Guard also recognizes the impact of military personnel rotation on its ability to retain people in key positions. Its policy of 3-year rotations of military personnel among units, including to and from the acquisition directorate, limits continuity in key project roles and can have a serious impact on acquisition expertise. While the Coast Guard concedes that it does not have the personnel required to form a dedicated acquisition career field for military personnel, such as that found in the Navy, it is seeking to improve the base of acquisition knowledge throughout the Coast Guard by exposing more officers to acquisition as they follow their regular rotations. In the meantime, the lack of a sufficient government acquisition workforce means that the Coast Guard is relying on contractors to supplement government staff, often in key positions such as cost estimators, contract specialists, and program management support. While support contractors can provide a variety of essential services, when they are performing certain activities that closely support inherently governmental functions their use must be carefully overseen to ensure that they do not perform inherently governmental roles. Conflicts of interest, improper use of personal services contracts, and increased costs are also potential concerns of reliance on contractors. In response to significant problems in achieving its intended outcomes under the Deepwater Program, the Coast Guard leadership has made a major change in course in its management and oversight by re-organizing its acquisition directorate, moving away from the use of a contractor as the systems integrator, and putting in place a structured, more disciplined acquisition approach for Deepwater assets. 
While the initiatives the Coast Guard has underway have begun to have a positive impact, the extent and duration of this impact depend on positive decisions that continue to increase and improve government management and oversight. Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions you or members of the subcommittee may have at this time. For further information about this testimony, please contact John P. Hutton, Director, at 202-512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has a large body of work examining government agencies' approaches to managing their large acquisition projects. GAO has noted that without sufficient knowledge about system requirements, technology, and design maturity, programs are subject to cost overruns, schedule delays, and performance that does not meet expectations. The Deepwater Program, intended to replace or modernize 15 major classes of Coast Guard assets, accounts for almost 60 percent of the Coast Guard's fiscal year 2009 appropriation for acquisition, construction and improvements. GAO has reported over the years on this program, which has experienced serious performance and management problems such as cost breaches, schedule slips, and assets designed and delivered with significant defects. To carry out the Deepwater acquisition, the Coast Guard contracted with Integrated Coast Guard Systems (ICGS) as a systems integrator. In April 2007, the Commandant acknowledged that the Coast Guard had relied too heavily on contractors to do the work of government and announced that the Coast Guard was taking over the lead role in systems integration from ICGS. This testimony reflects our most recent issued work on Deepwater, specifically our June 2008 report, Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain, GAO-08-745 . Over the past two years, the Coast Guard has reoriented its acquisition function to position itself to execute systems integration and program management responsibilities formerly carried out by ICGS. The acquisition directorate has been consolidated to oversee all Coast Guard acquisitions, including the Deepwater Program, and Coast Guard project managers have been vested with management and oversight responsibilities formerly held by ICGS. Another key change has been to manage the procurement of Deepwater assets on a more disciplined, asset-by-asset approach rather than as an overall system of systems, where visibility into requirements and capabilities was limited. For example, cost and schedule information is now captured at the individual asset level, resulting in the ability to track and report breaches for assets. Further, to manage Deepwater acquisitions at the asset level, the Coast Guard has begun to follow a disciplined project management process that requires documentation and approval of program activities at key points in a program's life cycle. These process changes, coupled with strong leadership to help ensure the processes are followed in practice, have helped to improve Deepwater management and oversight. However, the Coast Guard still faces many hurdles going forward and the acquisition outcome remains uncertain. The consequences of not following a disciplined acquisition approach for Deepwater acquisitions and of relying on the contractor to define Coast Guard requirements are clear now that assets, such as the National Security Cutter, have been paid for and delivered without the Coast Guard's having determined whether the assets' planned capabilities would meet mission needs. While the asset-based approach is beneficial, certain cross-cutting aspects of Deepwater--such as command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) and the overall numbers of each asset needed to meet requirements--still require a system-level approach. The Coast Guard is not fully positioned to manage these aspects. 
One of the reasons the Coast Guard originally contracted with ICGS as the systems integrator was the recognition that the Coast Guard lacked the experience and depth in workforce to manage the acquisition itself. The Coast Guard has faced challenges in building an adequate government acquisition workforce and, like many other federal agencies, is relying on support contractors--some in key positions such as cost estimating and contract support. GAO has pointed out the potential concerns of reliance on contractors who closely support inherently governmental functions.
In 1989, the Pacific Area Office, then called the Western Regional Office, identified several deficiencies in the 935 ZIP Code area and proposed relocating the distribution operations for five post offices in the area into a new facility. The key deficiencies identified by postal officials included the following: space deficiencies for mail processing operations in the Mojave MPO, which is responsible for mail processing operations for all of the post offices in the Antelope Valley; space deficiencies in carrier delivery operations in four of the five post offices affected by the proposed project; and space deficiencies in the Lancaster MPO that limited the ability to meet demand for post office boxes and to provide parking for customers, employees, and postal vehicles. Figure 1 shows the locations of the five affected post offices in the cities of Lancaster, Mojave, Palmdale, Tehachapi, and Ridgecrest located in the southern portion of the Antelope Valley. Since the 1980 census, the Antelope Valley area, also known as the 935 ZIP Code area, has more than doubled its population. The growth in mail volume has paralleled the population growth. As shown in table 1, growth in this area was somewhat slower in the 1990s than in the 1980s. However, current projections indicate that population and mail growth will accelerate again over the next decade. Over half of the population growth in the 935 ZIP Code area occurred in two cities, Lancaster and Palmdale. From 1980 to 1990, Lancaster's population grew from about 48,000 to 97,300, and Palmdale's population grew from about 12,300 to 68,900. During this same period, Mojave's population grew from about 2,900 to 3,800. The Southern California Association of Governments has projected that the Lancaster-Palmdale population would again increase by more than 200 percent by 2010. Mail scheduled for final delivery in the Antelope Valley originates from all over the United States and the rest of the world and is transported to the Los Angeles Processing and Distribution Center located near Los Angeles International Airport. There, the mail undergoes a first-level sort by the first three digits of the ZIP Code. The mail is then transported to smaller mail processing facilities, such as the Mojave MPO, where secondary operations are performed on automated equipment to sort the mail to the five-digit ZIP Code level. Generally at this stage, some of the mail would also be automatically sorted to the carrier-route level and sequenced in the order that carriers deliver it. However, in Mojave, the necessary automated equipment is not available for sorting mail down to the carriers' delivery sequence order. Thus, the mail is transported to the postal facilities responsible for mail delivery, such as Lancaster, where the mail carriers manually sort the mail into delivery sequence order. Administrative support and mail processing functions for mail to be delivered in the 935 ZIP Code area, as well as local retail and delivery functions, are housed at the MPO in Mojave. According to available postal documents, the Mojave MPO was functioning at its maximum capacity in 1990. Mail processing and customer service operations competed for space in the crowded facility. Operational efficiency was beginning to suffer due to the continual shifting of equipment to allow adequate space for processing operations. More recently, postal documents noted that some automated sorting equipment intended for Mojave processing operations was being stored in warehouses due to insufficient space. 
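The growth rates implied by the 1980 and 1990 population figures above can be computed directly; the short sketch below is illustrative and uses only the figures reported in this section.

```python
# City populations as reported above (approximate): (1980, 1990)
population_1980_to_1990 = {
    "Lancaster": (48_000, 97_300),
    "Palmdale": (12_300, 68_900),
    "Mojave": (2_900, 3_800),
}

for city, (pop_1980, pop_1990) in population_1980_to_1990.items():
    growth_pct = (pop_1990 - pop_1980) / pop_1980 * 100
    print(f"{city}: {pop_1980:,} -> {pop_1990:,} (about {growth_pct:.0f} percent growth)")
```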
Postal documents from 1990 also reported that the Lancaster MPO had reached its maximum capacity and could not accommodate the future growth anticipated in Lancaster. Carrier operations had spread onto the loading platform, where mail was being placed to await distribution. Both employees and mail were exposed to weather conditions. There was a demand for additional post office boxes at the MPO, but there was no room to expand the box section. According to the Service, employee support facilities were inadequate; and parking facilities for customer, employee, and postal vehicles were also inadequate. Similar conditions reportedly existed in the Palmdale MPO, and a facility replacement was included in the Western Region's Five-Year Facility plan. The MPOs in Ridgecrest and Tehachapi were also reported to be experiencing space deficiencies but not to the extent of the problems in Lancaster, Mojave, and Palmdale. The proposed new Antelope Valley facility would include mail-processing operations and support functions that are currently located at the Mojave MPO, and the secondary mail-processing operations would be relocated from the Palmdale, Ridgecrest, Mojave, and Tehachapi MPOs to the new facility. The Mojave MPO would be retained and would continue to provide retail and delivery services for the area and serve as a transfer point for those areas north and west of Mojave. The existing Lancaster MPO would be retained to serve as a carrier annex for carrier delivery operations. The Palmdale, Tehachapi, and Ridgecrest MPOs would be retained to provide full retail and delivery services for their areas. To evaluate the Service's approval process for this project, we performed the following: obtained and reviewed Service policies and guidance in effect when the project began and the policies and guidance currently in effect for facility planning, site acquisition, and project approval; obtained and analyzed Service documents related to the proposed Antelope Valley project and project approval process; discussed the proposed project and the review process with Service officials in Headquarters, the Pacific Area Office, the Van Nuys District, and the Lancaster and Mojave MPOs; observed operating conditions at the existing Lancaster and Mojave postal facilities and visited the postal-owned site in Lancaster that was purchased in 1991; reviewed cost estimates for the two alternatives under consideration prior to the project being placed on hold in March 1999; these cost estimates were included in draft project approval documents that were submitted for headquarters review in February 1999; and discussed the impact of the proposed project with community officials in Mojave, Kern County, and Lancaster, CA. We did not evaluate whether this project should be approved or funded. The Service has a process and criteria for assessing and ranking capital facility projects for funding. However, we only reviewed this particular project and, therefore, did not have a basis for comparing its merits with those of other capital projects competing for approval and funding. We also did not independently verify the accuracy of the financial data included in the Postal Service's analyses of the cost of various alternatives under consideration. Postal officials acknowledged that these preliminary cost estimates might need corrections and revisions because they had not completed their review of the project approval documents. 
Due to the incomplete status of this project, our assessment generally covered the requirements followed and actions taken by the Service during the period (1) from project initiation in 1989 until the first suspension in 1992 and (2) since its reinstatement in 1995 to August 1999. We conducted our review between December 1998 and August 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General. We received written comments from the Postmaster General, which we have included in appendix I. His comments are discussed near the end of this report. The Service followed most of its key requirements for acquiring a site in Lancaster prior to obtaining approval for the proposed Antelope Valley project, although some requirements were vague. One major exception was that the Headquarters CIC did not review and approve the proposed project justification and alternatives under consideration prior to advance site acquisition, as required by Service policies. The Service's guidance allowed advance site acquisition before all analyses that were required for final project approval were completed if, among other requirements, the Service believed that the preferred site would not be available when project approval was anticipated. Table 2 presents the key requirements in the Service's major facility project approval process and the actions taken by the Service to meet those requirements prior to project suspension in 1992. The key requirements of this project approval process include formal documentation, and the dates provided are based on available documentation. The Postal Service's guidance detailing its investment policies and procedures for major facilities explains that its purpose is to ensure that major facility investments support the strategic objectives of the Postal Service, make the best use of available resources, and establish management accountability for investment decisions. Postal Service policies also specify the delegation of authority for approving capital facility projects based on total project costs. All capital projects exceeding $10 million in total project costs are considered major facility projects and are required to obtain final approval from the Postal Service's Board of Governors after being approved through appropriate area and headquarters officials, including the Headquarters CIC. Some facility projects may be funded from the area's budget. To obtain funding from headquarters capital investment funds, these proposed major capital facility projects must be prioritized along with proposed projects from all other regions/areas and included by headquarters officials in the Postal Service's Five-Year Major Facilities Priority List. This list is to be updated annually and included as part of the Service's annual budget, which is then reviewed and approved by postal management and the Board of Governors. As shown in table 2, the Service generally followed its approval process for advance site acquisition. However, one major requirement that was not completed before the advance site acquisition was the Advance Project Review, which involves the review and approval of the project justification and alternatives by the Headquarters CIC. Postal officials told us that the project had met all of the Service's requirements prior to approval for advance site acquisition. 
However, the Service could not provide a date for when the Headquarters CIC meeting occurred or any documentation of the completion of the Advance Project Review stage. The purpose of the Advance Project Review by the Headquarters CIC, according to postal guidance, is "to be sure that the Headquarters CIC concurs with the scope (especially the justification, alternatives, and strategic compatibility) before the expenditure of substantial planning resources." According to the Service's requirements that were in effect in 1991, advance site acquisition was permitted prior to completion of the project approval process with the approval of the headquarters senior official responsible for facilities. The regional postmaster general requested site acquisition in advance of project approval for the site in Lancaster on June 25, 1991. The request noted that Western Region officials had approved funding from the region's budget for site acquisition in fiscal year 1991. In addition, the request noted that the project was a headquarters-funded project scheduled to be presented to the Headquarters CIC for review in mid 1992, go to the Board of Governors for review and approval in August 1992, and begin construction in fiscal year 1992. The request also noted that control of the site expired on June 30, 1991, and that failure to acquire the site as an advance site acquisition may result in its loss. The total project cost was estimated at just over $31 million, with site purchase in the amount of $6,534,000, and site support costs of $100,000 for a total funding request of $6,634,000 for advance site acquisition. The request also noted that the property-owner had offered the Postal Service an additional saving of $250,000, which would reduce the sales price to $6,284,000, if the site acquisition were approved and closing occurred prior to August 1, 1991. The funding request was approved by the appropriate headquarters official, and the site was purchased for $6,534,000 on October 25, 1991. Service guidance required that alternatives be identified and analyzed before a project could qualify for advance site acquisition but did not clearly state the type or depth of analyses required. At the time of the Lancaster site acquisition, some analyses, such as the space requirements (which determine sizes of buildings and site requirements for operational needs) as well as the cost estimates of project alternatives (which provide information on projected cash flows and return on investment) were still under development. Only the estimated project costs associated with the preferred alternative--construction of a new processing facility in Lancaster--were available prior to site acquisition. Moreover, the available documentation did not explain why this alternative was preferred over the other alternatives considered. According to documentation provided to us, four alternatives were presented at the project planning meeting held in June 1990. 
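The funding figures in the advance site acquisition request reconcile as shown in the minimal sketch below, which uses only the amounts reported above. Because final settlement occurred on October 25, 1991, at the full negotiated price, the $250,000 early-closing discount appears not to have been realized.

```python
site_purchase_price = 6_534_000   # negotiated price for the Lancaster site
site_support_costs = 100_000      # site support costs in the funding request
funding_request = site_purchase_price + site_support_costs
print(f"Total advance site acquisition request: ${funding_request:,}")  # $6,634,000

early_closing_discount = 250_000  # offered if closing occurred before August 1, 1991
print(f"Price had the sale closed before August 1, 1991: ${site_purchase_price - early_closing_discount:,}")
print(f"Price actually paid at the October 25, 1991, settlement: ${site_purchase_price:,}")
```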
The four alternatives, with the key differences underscored, were as follows: (A) a new area mail processing center in Lancaster for relocated mail processing operations, distribution operations, and delivery services for the 93535 ZIP Code area; the existing Lancaster MPO would retain its retail and delivery services; (B) a new general mail facility in Lancaster for relocated mail processing operations, distribution operations, and delivery services for the 93535 ZIP Code area; the existing Lancaster MPO would retain its delivery services and retail services would be relocated in the area; (C) new area mail processing center in the vicinity of Mojave and Lancaster for relocated mail processing operations and distribution operations; the existing Mojave and Lancaster MPOs would retain retail and delivery services for their respective communities and a new facility would be constructed in Lancaster for delivery services; and (D) lease and modify an existing building for use as a Mail Handling Annex for relocated mail processing operations and distribution operations; the existing Mojave MPO would retain its retail and delivery services. "The alternatives were discussed at length. Alternative A, B, and C were discussed. It was agreed upon that these alternatives will solve the major operating needs of the Antelope Valley, but will not address all of our needs for delivery and retail facilities. A reassessment of the proposed concept and the requirements for Lancaster and Palmdale Main Post Offices will be conducted following site selection to ascertain whether the specific site is conducive to delivery or retail activities as a result of its location." "The existing facilities in Lancaster, Palmdale, and Mojave could not be expanded to provide sufficient space to accommodate the current and projected growth in the Antelope Valley. Continuation of mail processing operations at the Mojave MPO will not meet corporate goals for improved delivery times and efficiencies." However, since the proposed project was revised in 1998, expansion of the existing Mojave facility was one of two alternatives under consideration, along with the preferred alternative to construct a new facility on the Service-owned site in Lancaster. Available documentation did not explain why expansion of the existing Mojave facility was not considered viable in 1990 but was considered a viable alternative in 1998. The problem of inadequate documentation of the Service's real estate acquisition decisions is not a new issue. In 1989, we reviewed the Service's real estate acquisition process. At that time, we reviewed a sample of 246 sites purchased during fiscal year 1987 and made recommendations to improve the Service's real estate acquisition program. Our 1989 report found that the Service usually purchased sites that exceeded both its operational needs and advertised size requirements. When alternative sites were available for purchase, the Service generally selected the larger, more costly sites without requiring site selection committees to document why less expensive alternative sites were less desirable. The report raised concerns, based on the Service's requirements for advertising and purchasing practices, that the Service might be spending more than was necessary for land and accumulating an unnecessarily large real estate inventory. 
The report also recognized that sometimes larger, more costly sites may best meet the Service's operational requirements but that justification for such selections should be required when smaller, less costly contending sites were available. In the Service's letter dated August 25, 1989, responding to a draft of that report, the Postmaster General agreed with our recommendation relating to more complete documentation of the selection process. He stated, "The Postal Service is concerned only with the best value and will make sure that the reasoning behind the determination of best value is more carefully documented in the future." However, such improvement was not evident in the documentation related to the proposed Antelope Valley area project, which was prepared soon after our report was issued. We identified inconsistencies in internal postal memorandums related to the required site size and disposition of any excess land. The region's June 25, 1991, memorandum requesting approval for advance site acquisition in Lancaster stated, "No excess land is expected to remain." Another internal memorandum dated October 25, 1991--the date of final settlement for the purchase of the Lancaster site--discussed preparation of the final cost estimates for the proposed Antelope Valley area project and stated, "Please note that the required site is considerably less than the selected site." Further, a February 1992 internal memorandum noted that the Lancaster site was purchased in late 1991 and that the site area exceeded Service requirements by 296,000 square feet (about 6.8 acres). The reason for the purchase of a site that was larger than needed was not explained in any available documents. More recent documents related to the proposed project alternatives also noted that the Service-owned site in Lancaster exceeds project requirements, but the alternatives do not discuss how the excess property would be disposed of. As of the beginning of July 1999, the Service's consideration of the proposed Antelope Valley project had been put on hold, and a decision may not be made for some time. Consequently, the status and funding of the proposed project remain uncertain almost 10 years after it was initiated. Consideration of the project has been delayed due to two suspensions, reductions in capital investment spending, and a recent reclassification of the proposed facility. As a result, processing and delivery deficiencies that were identified as critical for this area in 1989 continue to exist, and the Service has not determined how it plans to address these operational deficiencies. In addition, the Service has incurred additional costs that have resulted from the need to repeat analyses and update documents required for final project approval. With the project currently on hold, further costs may be incurred to again update required analyses. Finally, the delays have prolonged the uncertainty related to business development opportunities for the affected communities of Mojave and Lancaster. Initiated in 1989, with an expectation that the project would be funded in fiscal year 1992, the proposed Antelope Valley project was suspended in 1992, while the Service was undergoing a reorganization and had reduced its funding for capital facility projects. Table 3 shows that between 1991 and 1995, the Service committed $999 million less to its facilities improvement program than it had originally authorized in its 1991 to 1995 Capital Improvement Plan.
Postal Service officials could not explain why the classification of this project, as a processing facility or other type of capital facility, has been changed several times and why it has not yet been submitted for consideration in the headquarters capital facility projects prioritization and funding process. All major mail processing facilities must be funded from the headquarters capital facility budget, while other types of processing and delivery facilities may be funded from regional/area budgets. At the time that the proposed project was suspended in 1992, it was classified as a mail processing facility in the Western Region/Pacific Area Major Facility Priority List. It had also been submitted for headquarters funding consideration in the Five-Year Major Facilities Priority List for fiscal years 1991 to 1995. The project was reinstated and reclassified in 1995 as a Delivery and Distribution Center (DDC), with the expectation that it would be funded out of area funds in fiscal year 1998. The Service suspended the project a second time in March 1999, while it was undergoing review by headquarters officials. Based upon the headquarters review, the project was again reclassified from a DDC to a Processing and Distribution Center. The latest reclassification meant that the project would have to be funded by headquarters rather than the Pacific Area Office, and it would have to compete nationally for funding. This means that the project will have to await placement on the next headquarters Five-Year Major Facilities Priority List, which is scheduled to be completed by August 2000. It is also not clear why the proposed project was reinstated and reclassified in 1995 as a DDC when the major purpose and design of this project had not fundamentally changed. Postal officials in the Pacific Area Office and Van Nuys District said that the recently proposed Antelope Valley project is essentially the same as the project that was being planned when the Service acquired the 25-acre Lancaster site in 1991. The major differences in the two projects are in nonmail processing areas. As previously mentioned, the proposed project had not had an Advance Project Review by the Headquarters CIC prior to the suspension in 1992. Such a review might have prevented the unexplained reclassifications of this project that have contributed to delays in its funding. Ten years after this project began, the operational processing and delivery deficiencies that were identified as critical for this area in 1989 still remain. Because of continued space deficiencies, automated equipment has not been deployed as scheduled, and the projected operating efficiencies and savings have not been realized. The District projected that one of the benefits from automated sorting of the mail to the carriers in delivery walk sequence would be to improve delivery performance by 4.25 percent annually. This additional sorting would decrease the time that the carriers spend in the delivery units preparing the mail for delivery and increase the amount of time the carriers would have to deliver the mail. Another negative effect of the space deficiencies in Mojave was that some of the mail originating in the 935 ZIP Code area (approximately 130,000 pieces per day) was diverted from processing in Mojave to the processing facility in Santa Clarita. According to local postal officials, the effect of this diversion was to delay by 1 day the delivery of some mail that was to be delivered in the 935 ZIP Code area. 
Local area First-Class mail was supposed to be delivered within 1 day to meet overnight delivery standards. Since this project was initiated in 1989, the Service has taken several actions to address mail processing and delivery deficiencies in the Antelope Valley. The Service added 2,417 square feet of interior space to the Palmdale MPO by relocating the post office into a larger leased facility. Some relief was provided to the cramped carrier operations at the Lancaster MPO by relocating 15 of the 89 carrier routes serving Lancaster to the Lancaster Cedar Station. However, as we observed on our visit to the Lancaster facilities, conditions in Lancaster were still very congested. Mail that was waiting to be processed and workroom operations spilled out of the building onto the platform, exposing both employees and the mail to weather conditions. In an effort to provide the Mojave MPO with more mail-processing space, a 2,400-square-foot tent was installed in 1998, at a cost of $30,000, next to the loading platform. The tent provided additional space for processing operations and for holding mail that was waiting to be processed, but it did not allow for deployment of any automated equipment scheduled for use in the 935 mail-processing functions. Also, we observed that the tent would not provide adequate shelter from high winds or other weather-related conditions. In the meantime, some of the automated equipment intended for the area was being stored at district warehouses. Although these efforts have allowed the district to continue to provide processing and delivery service, it is not clear how the Service intends to address the operational processing and delivery deficiencies while decisions related to the proposed facility are pending. Project delays have also contributed to higher costs, incurred to repeat and update some of the analyses and cost data needed for final project approval. Given that the process is not completed, additional costs may be incurred to further update required analyses. The Service has incurred additional costs related to developing a second set of documents required for project approval, including Facility Planning Concept documents, appraisals, space requirements, environmental assessments, and DARs. Generally, the Service uses contractors to develop the environmental and engineering studies. Although the total cost of document preparation has not been quantified, available documentation indicates that the Service has incurred about $254,000 for costs related to previous design efforts for this project. In addition, costs that have not been quantified include staff time and travel costs associated with this project. The Area Office Operations Analyst who was responsible for preparing the DAR told us that it took him approximately a year to develop a DAR and the supporting documents and analysis. This did not include the time of the other individuals who provided him with various information needed to complete the analyses or the time of officials responsible for reviewing and approving the project. The Service has also incurred additional costs for travel associated with project reviews, such as the Planning Parameters Meeting, which involved the travel of at least three headquarters officials. It is difficult at this stage to determine what additional analyses may be needed because the Antelope Valley project has been suspended and, according to Service officials, no further action is being taken on reviewing the project until it is submitted by Pacific area officials for prioritization.
We reviewed the cost estimates for the two alternatives that were included in the draft DAR that had been submitted to headquarters for review in February 1999. We found some deficiencies in the information presented. Postal officials stated that these types of deficiencies would be identified during their review process, which includes reviews by officials in three separate headquarters departments--Facilities, Operations, and Finance. They also said that the cost estimates in the DAR were too preliminary to use as a basis for assessing which of the two alternatives under consideration was more cost-effective. The officials noted that significant changes could be made to the cost estimates as the project documentation completes the review process. In addition, the Service has not realized any return on its investment in the site in Lancaster, which has remained unused since 1991. This idle investment has an interest cost associated with the Service's use of funds to purchase the Lancaster site in October 1991. We estimated that the interest cost associated with the Service's $6.5 million investment totaled about $2.9 million from the time that the site was purchased in October 1991 through June 1999 and that it would likely increase by over $300,000 each year. The uncertainty of this project over such a long period has also created difficulties, particularly related to business development planning, for the affected Lancaster and Mojave communities. Mojave community officials have raised concerns about the effect that relocating the postal operations would have on their community. They expressed specific concerns relating to the potential loss of job opportunities for Mojave and nearby California City residents and the impact that losing the postal processing operations would have on their effort to attract new homes and retail services. Postal documents indicated that while none of the Mojave employees would lose their jobs, approximately 80 employees working the evening and night shifts would be relocated if distribution operations were moved to a new facility in Lancaster. The Service projects that the proposed expanded Mojave facility would create 10 additional jobs at the facility when it opens. The project delay has also affected business development opportunities in Lancaster. After the Service selected the Lancaster site in 1991, the Mayor of Lancaster stated in a letter to the Postal Service that he welcomed the new facility and that it would anchor the new 160-acre Lancaster Business Park Project. Shortly after the Postal Service selected the 25-acre site, a major mailer, Deluxe Check Printing, acquired a 12-acre site adjacent to the postal property. Recently, the Lancaster City Manager noted that not having the Postal Service facility has made marketing the Business Park to potential developers very difficult. In addition, Lancaster officials stated that the city has spent over $20 million to provide improvements to the business park. These improvements were conditions of sale when the Postal Service acquired the site in 1991.
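The interest estimate cited above can be roughly reproduced with a simple compound-interest calculation. The sketch below is illustrative only: the report does not state the interest rate or compounding method GAO used, so the rate shown here is an assumption chosen to land near the reported figures.

```python
# Rough reconstruction of the interest cost on the unused Lancaster site.
# The 4.9 percent rate and annual compounding are assumptions for illustration;
# GAO's actual rate and method are not stated in the report.
principal = 6_500_000          # approximate purchase price, October 1991
rate = 0.049                   # assumed annual interest rate
years = 7 + 8 / 12             # October 1991 through June 1999

accrued = principal * ((1 + rate) ** years - 1)
next_year = (principal + accrued) * rate

print(f"Interest accrued through June 1999: ${accrued:,.0f}")           # roughly $2.9 million
print(f"Additional interest in the following year: ${next_year:,.0f}")  # well over $300,000
```

Under these assumptions, the accrued interest comes to roughly $2.9 million, and each additional year adds well over $300,000, consistent with the figures cited above.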
The Service followed most of its key requirements when it purchased a site in Lancaster in 1991 for the proposed Antelope Valley project before it had obtained overall project approval, although some requirements were vague. One major exception was that the Headquarters CIC did not review and approve the proposed project justification and alternatives under consideration prior to advance site acquisition, as required by Service guidance. The Service's requirements for advance site acquisition were unclear because they did not specify the types or depth of analyses required. The Service's analyses of alternatives were incomplete because estimated costs of the alternatives and space requirements were still under development. Also, it was not clear why an alternative that was recently under consideration, the expansion of the existing Mojave MPO, was not considered a viable alternative before the site in Lancaster was acquired. We could not determine whether review and approval of the proposed project justification and alternatives by the Headquarters CIC would have resulted in changes to the proposed project justification and alternatives or in more in-depth analysis of the alternatives. Such a review may have prevented the unexplained inconsistencies in the classifications of this project that have contributed to delays in its funding. Likewise, it is not known whether the Committee's review would have suggested a course of action other than acquisition of the Lancaster site. Further, the more recent analysis of the alternative to expand the Mojave MPO is too preliminary to assess or draw any conclusions from because the headquarters review of the proposed project has been suspended. However, what is known is that the Service spent about $6.5 million over 8 years ago to purchase a site that has remained unused. This site may or may not be used by the Service in the future, and its investment has a substantial annual interest cost associated with it. While this interest cost continues, the mail service deficiencies identified nearly 10 years ago remain unaddressed, and projected operating efficiencies and savings anticipated from new equipment are unrealized as the equipment remains in storage. Given this situation, it is not clear why the status of this project has been allowed to go unresolved for such a long time. It is also unclear at this time whether funding for this project will be approved and, if so, for what year of the next 5-year capital projects funding cycle. Thus, the Service's site investment in unused land and the existing operational deficiencies are likely to continue for some time, and the Service has not determined how it will address these issues if the project is not approved or funded for several years. To address the long-standing uncertainties related to the proposed Antelope Valley project, we recommend that the Postmaster General take the following actions: (1) resolve the internal inconsistencies in the classification of this project, determine whether the site in Lancaster should be retained, and ensure that the project is considered in the appropriate funding and approval process; and (2) require the Pacific Area Office to determine whether immediate action is needed to address the operational deficiencies identified in the Antelope Valley area and report on planned actions and related time frames for implementation. We received written comments from the Postmaster General on August 20, 1999. These comments are summarized below and included as appendix I. We also incorporated technical comments provided by Service officials into the report where appropriate.
The Postmaster General responded to our conclusion that the Service did not follow all of its procedures in effect at the time that approval was given to purchase a site for a proposed facility in advance of the proposed Antelope Valley project's review and approval. He stated that the Service has revised its procedures for advance site acquisition so that proposed sites are subjected to additional review and approval. As a result, he stated that the advance acquisition of a site for a project such as Antelope Valley now must receive approval from the Headquarters Capital Investment Committee and the Postmaster General. The Postmaster General generally agreed with our recommendations to address the unresolved status of the Antelope Valley project and the operational deficiencies in the Antelope Valley area. In response to our first recommendation to resolve the inconsistent classification of the project, he stated that the Service has determined that the proposed Antelope Valley project is properly classified as a mail processing facility. He also stated that the proposed project would be considered for funding along with other such projects during the next round of project review and prioritization. While clarification of the project's classification is a good first step, until disposition of the entire project is completed, the status of the project, including the use of the Lancaster site, remains unresolved. Regarding our second recommendation to address operational deficiencies in the Antelope Valley area, he stated that officials from the involved Pacific Area offices have met to discuss the most workable alternatives to sustain and improve mail service for Antelope Valley customers. However, due to the complexity of the issues, including the possibility of relocating some operations into leased space on an interim basis, a fully developed distribution and delivery improvement plan may take some time to implement. He agreed to provide us with action plans and time frames as they are finalized. If actions are taken as described by the Postmaster General, we believe they would be responsive to our recommendations. We are sending copies of this report to Representative Howard (Buck) McKeon; Representative John McHugh, Chairman, and Representative Chaka Fattah, Ranking Minority Member, Subcommittee on the Postal Service, House Committee on Government Reform; Mr. William J. Henderson, Postmaster General; and other interested parties. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix II. If you have any questions about this report, please call me at (202) 512-8387. Teresa Anderson, Melvin Horne, Hazel Bailey, Joshua Bartzen, and Jill Sayre made key contributions to this report.
Pursuant to a congressional request, GAO reviewed the project approval process the Postal Service used in proposing to relocate postal operations for the Antelope Valley, California, area from the Main Post Office in Mojave, California, to a new facility in Lancaster, California. GAO noted that: (1) the Service followed most of its key requirements for acquiring a site in Lancaster in 1991 prior to obtaining approval for the proposed Antelope Valley project, although some requirements were vague; (2) one major exception was that review and approval of the proposed project justification and alternatives by the Headquarters Capital Investment Committee did not take place prior to the advance site acquisition in Lancaster, as required by Service policies; (3) Service guidance was unclear because it required that alternatives be identified and analyzed before a project could qualify for advance site acquisition, but it did not clearly state the type or depth of analysis required; (4) at the time of the Lancaster site acquisition, the analysis to support the decision was incomplete; (5) more detailed analyses were still under development; (6) GAO could not determine from available documentation why the alternative to construct a new facility in Lancaster was preferred over other alternatives that had been proposed or why various alternatives were not considered viable; (7) the Lancaster site purchased for $6.5 million in 1991 has remained unused since that time due to the Service's failure to decide how and when it will resolve the long-standing problems that the proposed Antelope Valley project was to address; (8) continuing negative effects have resulted from the incomplete status of the project for almost 10 years; (9) project approval and funding of the project remain uncertain due to delays resulting from two suspensions, limits on capital spending, and changes in project classification; (10) it is unclear how the Service intends to address the space deficiencies that have contributed to operational processing and delivery deficiencies in the Antelope Valley area; (11) because of continued space deficiencies, automated equipment was sitting unused in warehouses, some mail delivery was being delayed, and the projected operating efficiencies and savings have not been realized; (12) the Service has invested $6.5 million in land that has been unused for nearly 8 years; such an investment has a substantial annual interest cost estimated at over $300,000; (13) it has also incurred additional costs to update documents required for project approval and may incur more costs if some of these documents again have to be updated when the project is reviewed for approval; and (14) the Lancaster and Mojave communities have faced uncertainty over business development opportunities as a result of the project delays.
7,215
536
CBP's SBI program is to leverage technology, tactical infrastructure, and people to allow CBP agents to gain control of the nation's borders. Within SBI, SBInet is the program for acquiring, developing, integrating, and deploying an appropriate mix of surveillance technologies and command, control, communications, and intelligence (C3I) technologies. The surveillance technologies are to include a variety of sensor systems aimed at improving CBP's ability to detect, identify, classify, and track items of interest along the borders. Unattended ground sensors are to be used to detect heat and vibrations associated with foot traffic and metal associated with vehicles. Radars mounted on fixed and mobile towers are to detect movement, and cameras on fixed and mobile towers are to be used to identify, classify, and track items of interest detected by the ground sensors and the radars. Aerial assets are also to be used to provide video and infrared imaging to enhance tracking of targets. The C3I technologies are to include software and hardware to produce a Common Operating Picture (COP)--a uniform presentation of activities within specific areas along the border. The sensors, radars, and cameras are to gather information along the border, and the system is to transmit this information to the COP terminals located in command centers and agent vehicles, assembling this information to provide CBP agents with border situational awareness. A system life cycle management approach typically consists of a series of phases, milestone reviews, and related processes to guide the acquisition, development, deployment, and operation and maintenance of a system. The phases, reviews, and processes cover such important life cycle activities as requirements development and management, design, software development, and testing. In general, SBInet surveillance systems are to be acquired through the purchase of commercially available products, while the COP systems involve development of new, customized systems and software. Together, both categories are to form a deployable increment of SBInet capabilities, which the program office refers to as a "block." Each block is to include a release or version of the COP. The border area that receives a given block is referred to as a "project." Among the key processes provided for in the SBInet system life cycle management approach are processes for developing and managing requirements and for managing testing activities. SBInet requirements are to consist of a hierarchy of six types of requirements, with the high-level operational requirements at the top. These high-level requirements are to be decomposed into lower-level, more detailed system, component, design, software, and project requirements. SBInet testing consists of a sequence of tests that are intended first to verify that individual system parts meet specified requirements, and then verify that these combined parts perform as intended as an integrated and operational system. Having a decomposed hierarchy of requirements and an incremental approach to testing are both characteristics of complex information technology (IT) projects. Important aspects of SBInet--the scope, schedule, and development and deployment approach--remain ambiguous and in a continued state of flux, making it unclear and uncertain what technology capabilities will be delivered and when, where, and how they will be delivered. 
For example, the scope and timing of planned SBInet deployments and capabilities have continued to change since the program began, and remain unclear. Further, the approach that is being used to define, develop, acquire, test, and deploy SBInet is similarly unclear and has continued to change. The absence of clarity and stability in these key aspects of SBInet introduces considerable program risks, hampers DHS's ability to measure program progress, and impairs the ability of Congress to oversee the program and hold DHS accountable for program results. The scope and timing of planned SBInet deployments and capabilities have not been clearly established, but rather have continued to change since the program began. Specifically, as of December 2006, the SBInet System Program Office planned to deploy an "initial" set of capabilities along the entire southwest border by late 2008 and a "full" set of operational capabilities along the southern and northern borders (a total of about 6,000 miles) by late 2009. Since then, however, the program office has modified its plans multiple times. As of March 2008, it planned to deploy SBInet capabilities to just three out of nine sectors along the southwest border--Tucson Sector by 2009, Yuma Sector by 2010, and El Paso Sector by 2011. According to program officials, no deployment dates had been established for the remainder of the southwest or northern borders. At the same time, the SBInet System Program Office committed to deploying Block 1 technologies to two locations within the Tucson Sector by the end of 2008, known as Tucson 1 and Ajo 1. However, as of late July 2008, program officials reported that the deployment schedule for these two sites has been modified, and they will not be operational until "sometime" in 2009. The slippages in the dates for the first two Tucson deployments, according to a program official, will, in turn, delay subsequent Tucson deployments, although revised dates for these subsequent deployments have not been set. In addition, the current Block 1 design does not provide key capabilities that are in requirements documents and were anticipated to be part of the Block 1 deployments to Tucson 1 and Ajo 1. For example, the first deployments of Block 1 will not be capable of providing COP information to the agent vehicles. Without clearly establishing program commitments, such as capabilities to be deployed and when and where they are to be deployed, program progress cannot be measured and responsible parties cannot be held accountable. Another key aspect of successfully managing large programs like SBInet is having a schedule that defines the sequence and timing of key activities and events and is realistic, achievable, and minimizes program risks. However, the timing and sequencing of the work, activities, and events that need to occur to meet existing program commitments are also unclear. Specifically, the program office does not yet have an approved integrated master schedule to guide the execution of SBInet. Moreover, our assimilation of available information from multiple program sources indicates that the schedule has continued to change. Program officials attributed these schedule changes to the lack of a satisfactory system-level design, turnover in the contractor's workforce, including three different program managers and three different lead system engineers, and attrition in the SBInet Program Office, including turnover in the SBInet Program Manager position. 
Without stability and certainty in the program's schedule, program cost and schedule risks increase, and meaningful measurement and oversight of program status and progress cannot occur, in turn limiting accountability for results. System quality and performance are in large part governed by the approach and processes followed in developing and acquiring the system. The approach and processes should be fully documented so that they can be understood and properly implemented by those responsible for doing so, thus increasing the chances of delivering promised system capabilities and benefits on time and within budget. The life cycle management approach and processes being used by the SBInet System Program Office to manage the definition, design, development, testing, and deployment of system capabilities have not been fully and clearly documented. Rather, what is defined in various program documents is limited and not fully consistent across these documents. For example, officials have stated that they are using the draft Systems Engineering Plan, dated February 2008, to guide the design, development, and deployment of system capabilities, and the draft Test and Evaluation Master Plan, dated May 2008, to guide the testing process, but both of these documents appear to lack sufficient information to clearly guide system activities. In particular, the Systems Engineering Plan includes a diagram of the engineering process, but the steps of the process and the gate reviews are not defined or described in the text of the document. Further, statements by program officials responsible for system development and testing activities, as well as briefing materials and diagrams that these officials provided, did not add sufficient clarity to describe a well-defined life cycle management approach. Program officials told us that both the government and contractor staff understand the SBInet life cycle management approach and related engineering processes through the combination of the draft Systems Engineering Plan and government-contractor interactions during design meetings. Nevertheless, they acknowledged that the approach and processes are not well documented, citing a lack of sufficient staff to both document the processes and oversee the system's design, development, testing, and deployment. They also told us that they are adding new people to the program office with different acquisition backgrounds, and they are still learning about, evolving, and improving the approach and processes. The lack of definition and stability in the approach and related processes being used to define, design, develop, acquire, test, and deploy SBInet introduces considerable risk that both the program officials and contractor staff will not understand what needs to be done when, and that the system will not meet operational needs and perform as intended. DHS has not effectively defined and managed SBInet requirements. While the program office recently issued guidance that is consistent with recognized leading practices, this guidance was not finalized until February 2008, and thus was not used in performing a number of key requirements-related activities. In the absence of well-defined guidance, the program's efforts to effectively define and manage requirements have been mixed. For example, the program has taken credible steps to include users in the definition of requirements. However, several requirements definition and management limitations exist.
One of the leading practices associated with effective requirements development and management is engaging system users early and continuously. In developing the operational requirements, the System Program Office involved SBInet users in a manner consistent with leading practices. Specifically, it conducted requirements-gathering workshops from October 2006 through April 2007 to ascertain the needs of Border Patrol agents and established work groups in September 2007 to solicit input from both the Office of Air and Marine Operations and the Office of Field Operations. Further, the program office is developing the COP technology in a way that allows end users to be directly involved in software development activities, which permits solutions to be tailored to their needs. Such efforts increase the chances of developing a system that will successfully meet those needs. The creation of a requirements baseline establishes a set of requirements that have been formally reviewed and agreed on, and thus serve as the basis for further development or delivery. According to SBInet program officials, the SBInet Requirements Development and Management Plan, and leading practices, requirements should be baselined before key system design activities begin in order to inform, guide, and constrain the system's design. While many SBInet requirements have been baselined, two types have not yet been baselined. According to the System Program Office, the operational requirements, system requirements, and various system component requirements have been baselined. However, as of July 2008, the program office had not baselined its COP software requirements and its project-level requirements for the Tucson Sector, which includes Tucson 1 and Ajo 1. According to program officials, the COP requirements have not been baselined because certain interface requirements had not yet been completely identified and defined. Despite the absence of baselined COP and project-level requirements, the program office has proceeded with development, integration, and testing activities for the Block 1 capabilities to be delivered to Tucson 1 and Ajo 1. As a result, it faces an increased risk of deploying systems that do not align well with requirements, and thus may require subsequent rework. Another leading practice associated with developing and managing requirements is maintaining bidirectional traceability from high-level operational requirements through detailed low-level requirements to test cases. The SBInet Requirements Development and Management Plan recognizes the importance of traceability, and the SBInet System Program Office established detailed guidance for populating and maintaining a requirements database that links requirement levels and test verification methods. To provide for requirements traceability, the prime contractor established such a requirements management database. However, the reliability of the database is questionable. We attempted to trace requirements in the version of this database that the program office received in March 2008, and were unable to trace large percentages of component requirements to either higher-level or lower-level requirements. For example, an estimated 76 percent (with a 95 percent degree of confidence of being between 64 and 86 percent) of the component requirements that we randomly sampled could not be traced to the system requirements and then to the operational requirements.
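To illustrate the kind of analysis involved, the following is a minimal sketch of a bidirectional trace check over a sampled set of component requirements, together with an exact (Clopper-Pearson) confidence interval for the resulting proportion. The record layout, identifiers, sample size, and interval method are all assumptions made for illustration; the testimony does not describe the contractor's database schema or the sampling and estimation method GAO used.

```python
# Sketch of a bidirectional traceability check and a confidence interval for
# the estimated failure rate. Requirement records, field names, and the sample
# size are hypothetical; they do not reflect the actual SBInet database.
import random
from scipy.stats import beta

# Each component requirement should trace upward to a system requirement (and
# through it to an operational requirement) and forward to a verification method.
component_reqs = {
    "CMP-001": {"system_req": "SYS-010", "verification": "test"},
    "CMP-002": {"system_req": None,      "verification": "demonstration"},
    "CMP-003": {"system_req": "SYS-011", "verification": None},
    # ...a real database would hold thousands of records
}
system_to_operational = {"SYS-010": "OPS-1", "SYS-011": None}

def traces_fully(req):
    """True only if the upward chain and the verification link are both present."""
    sys_req = req["system_req"]
    return (sys_req is not None
            and system_to_operational.get(sys_req) is not None
            and req["verification"] is not None)

def clopper_pearson(count, n, confidence=0.95):
    """Exact two-sided confidence interval for a binomial proportion."""
    alpha = 1 - confidence
    lo = 0.0 if count == 0 else beta.ppf(alpha / 2, count, n - count + 1)
    hi = 1.0 if count == n else beta.ppf(1 - alpha / 2, count + 1, n - count)
    return lo, hi

sample = random.sample(list(component_reqs.values()), k=min(50, len(component_reqs)))
untraceable = sum(1 for r in sample if not traces_fully(r))
lo, hi = clopper_pearson(untraceable, len(sample))
print(f"{untraceable}/{len(sample)} sampled requirements fail to trace "
      f"({untraceable / len(sample):.0%}; 95% CI {lo:.0%} to {hi:.0%})")
```

For reference, a two-sided exact interval of the width GAO reports (for example, 64 to 86 percent around a 76 percent estimate) is roughly what one would expect from a simple random sample of about 50 to 60 records, although the actual sample design and size are not given in the testimony.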
In addition, an estimated 20 percent (with a 95 percent degree of confidence of being between 11 and 33 percent) of the component requirements in our sample failed to trace to a verification method. Without ensuring that requirements are fully traceable, the program office does not have a sufficient basis for knowing that the scope of the contractor's design, development, and testing efforts will produce a system solution that meets operational needs and performs as intended. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. This includes having an overarching test plan or strategy and testing individual system components to ensure that they satisfy requirements prior to integrating them into the overall system. This test management plan should define the schedule of high-level test activities in sufficient detail to allow for more detailed test planning and execution to occur, define metrics to track test progress and report and address results, and define the roles and responsibilities of the various groups responsible for different levels of testing. However, the SBInet program office is not effectively managing its testing activities. Specifically, the SBInet Test and Evaluation Master Plan, which documents the program's test strategy and is being used to manage system testing, has yet to be approved by the SBInet Acting Program Manager, even though testing activities began in June 2008. Moreover, the plan is not complete. In particular, it does not (1) contain an accurate and up-to-date test schedule, (2) identify any metrics for measuring testing progress, and (3) clearly define and completely describe the roles and responsibilities of the various entities that are involved in system testing. Further, the SBInet System Program Office has not performed individual component testing as part of integration testing. As of July 2008, agency officials reported that component-level tests had not been completed and were not scheduled to occur. Instead, officials stated that Block 1 components were evaluated based on what they described as "informal tests" (i.e., contractor observations of cameras and radar suites in operation at a National Guard facility in the Tucson Sector) and stated that the contractors' self-certification that the components meet functional and performance requirements was acceptable. Program officials acknowledged that this approach did not verify whether the individual components in fact met requirements. Without effectively managing testing activities, the chances of SBInet testing being effectively performed are reduced, which in turn increases the risk that the delivered and deployed system will not meet operational needs or perform as intended. In closing, I would like to stress that a fundamental aspect of successfully implementing a large IT program like SBInet is establishing program commitments, including what capabilities will be delivered and when and where they will be delivered. Only through establishing such commitments, and adequately defining the approach and processes to be used in delivering them, can DHS effectively position itself for measuring progress, ensuring accountability for results, and delivering a system solution with its promised capabilities and benefits on time and within budget constraints. For SBInet, this has not occurred to the extent that it needs to for the program to have a meaningful chance of succeeding.
In particular, commitments to the timing and scope of system capabilities remain unclear and continue to change, with the program committing to far fewer capabilities than originally envisioned. Further, how the SBInet system solution is to be delivered has been equally unclear and inadequately defined. Moreover, while the program office has defined key practices for developing and managing requirements, these practices were developed after several important requirements activities were performed. In addition, efforts performed to date to test whether the system meets requirements and functions as intended have been limited. Collectively, these limitations increase the risk that the delivered system solution will not meet user needs and operational requirements and will not perform as intended. In turn, the chances are increased that the system will require expensive and time-consuming rework. In light of these circumstances and risks surrounding SBInet, our soon to be issued report contains eight recommendations to the department aimed at reassessing its approach to and plans for the program--including its associated exposure to cost, schedule, and performance risks--and disclosing these risks and alternative courses of action for addressing them to DHS and congressional decision makers. The recommendations also provide for correcting the weaknesses surrounding the program's unclear and constantly changing commitments and its life cycle management approach and processes, as well as implementing key requirements development and management and testing practices. While implementing these recommendations will not guarantee a successful program, it will minimize the program's exposure to risk and thus the likelihood that it will fall short of expectations. For SBInet, living up to expectations is important because the program is a large, complex, and integral component of DHS's border security and immigration control strategy. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information, please contact Randolph C. Hite at (202) 512-3439 or at [email protected]. Other key contributors to this testimony were Carl Barden, Deborah Davis, Neil Doherty, Lee McCracken, Jamelyn Payan, Karl Seifert, Sushmita Srikanth, Karen Talley, and Merry Woo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security's (DHS) Secure Border Initiative (SBI) is a multiyear, multibillion-dollar program to secure the nation's borders through, among other things, new technology, increased staffing, and new fencing and barriers. The technology component of SBI, which is known as SBInet, involves the acquisition, development, integration, and deployment of surveillance systems and command, control, communications, and intelligence technologies. GAO was asked to testify on its draft report, which assesses DHS's efforts to (1) define the scope, timing, and life cycle management approach for planned SBInet capabilities and (2) manage SBInet requirements and testing activities. In preparing the draft report, GAO reviewed key program documentation, including guidance, plans, and requirements and testing documentation; interviewed program officials; analyzed a random probability sample of system requirements; and observed operations of the initial SBInet project. Important aspects of SBInet remain ambiguous and in a continued state of flux, making it unclear and uncertain what technology capabilities will be delivered and when, where, and how they will be delivered. For example, the scope and timing of planned SBInet deployments and capabilities have continued to be delayed without becoming more specific. Further, the program office does not have an approved integrated master schedule to guide the execution of the program, and the nature and timing of planned activities has continued to change. This schedule-related risk is exacerbated by the continuous change in, and the absence of a clear definition of, the approach that is being used to define, develop, acquire, test, and deploy SBInet. SBInet requirements have not been effectively defined and managed. While the program office recently issued guidance that is consistent with recognized leading practices, this guidance was not finalized until February 2008, and thus was not used in performing a number of important requirements-related activities. In the absence of this guidance, the program's efforts have been mixed. For example, while the program has taken steps to include users in developing high-level requirements, several requirements definition and management limitations exist. These include a lack of proper alignment (i.e., traceability) among the different levels of requirements, as evidenced by GAO's analysis of a random probability sample of requirements, which revealed large percentages that were not traceable backward to higher level requirements, or forward to more detailed system design specifications and verification methods. SBInet testing has also not been effectively managed. While a test management strategy was drafted in May 2008, it has not been finalized and approved, and it does not contain, among other things, a high-level master schedule of SBInet test activities, metrics for measuring testing progress, and a clear definition of testing roles and responsibilities. Further, the program office has not tested the individual system components to be deployed to the initial deployment locations, even though the contractor initiated testing of these components with other system components and subsystems in June 2008. 
In light of these circumstances, our soon to be issued report contains eight recommendations to the department aimed at reassessing its approach to and plans for the program, including its associated exposure to cost, schedule and performance risks, and disclosing these risks and alternative courses of action to DHS and congressional decision makers. The recommendations also provide for correcting the weaknesses surrounding the program's unclear and constantly changing commitments and its life cycle management approach and processes, as well as implementing key requirements development and management and testing practices.
3,809
737
The Clean Air Act gives EPA authority to set national standards to protect human health and the environment from emissions that pollute ambient (outdoor) air. The act assigns primary responsibility for ensuring adequate air quality to the states. The pollutants regulated under the act can be grouped into two categories--"criteria" pollutants and "hazardous air" pollutants. While small in number, criteria pollutants are discharged in relatively large quantities by a variety of sources across broad regions of the country. Because of their widespread dispersion, the act requires EPA to determine national standards for these pollutants. These national standards are commonly referred to as the National Ambient Air Quality Standards (NAAQS). The NAAQS specify acceptable air pollution concentrations that should not be exceeded within a geographic area. States are required to meet these standards to control pollution and to ensure that all Americans have the same basic health and environmental protection. NAAQS are currently in place for six air pollutants: ozone, carbon monoxide, sulfur dioxide, nitrogen dioxide, lead, and particulate matter. The second category, referred to as "hazardous air pollutants" or "air toxics," includes chemicals that cause serious health and environmental hazards. For the most part, these pollutants emanate from specific sources, such as auto paint shops, chemical factories, or incinerators. Prior to its amendment in 1990, the act required EPA to list each hazardous air pollutant that was likely to cause an increase in deaths or in serious illnesses and to establish emission standards applicable to sources of the listed pollutant. By 1990, EPA had listed seven pollutants as hazardous: asbestos, beryllium, mercury, vinyl chloride, arsenic, radionuclides, and benzene. However, the agency was not able to establish emissions standards for other pollutants because EPA, industry, and environmental groups disagreed widely on the safe level of exposure to these substances. The 1990 amendments established new information gathering, storage, and reporting demands on EPA and the states. Required information ranged from data on ground-level pollutants to data on atmospheric pollutants. For example, states with ozone nonattainment areas must require owners or operators of stationary sources of nitrogen oxides or volatile organic compounds to submit to the state annual statements showing actual emissions of these pollutants. Also, the amendments expanded the air toxics category to include a total of 189 hazardous air pollutants that are to be controlled through technology-based emission standards, rather than health-based standards as the previous law had required. To establish technology-based standards, EPA believes that it needs to collect information on emissions of these hazardous air pollutants. In addition, the amendments initiated a national operating permit program that requires new information to be collected from sources that release large amounts of pollutants into the air. Further, the amendments require new information about acid rain, stratospheric ozone-depleting chemicals, and ecological and health problems attributed to air pollutants. Appendix I identifies titles of the act and selected additional data collection requirements imposed by the new law. EPA designed AIRS in stages during the 1980s to be a national repository of air pollution data. EPA believed that having this information would help it and the states monitor, track, and improve air quality.
The system is managed by EPA's Information Transfer and Program Integration Division in the Office of Air Quality Planning and Standards. The Office of Air Quality Planning and Standards, under the Assistant Administrator for Air and Radiation, manages the air quality program. AIRS was enhanced in response to the 1990 amendments, when additional gathering, calculating, monitoring, storing, and reporting demands were placed on the system. AIRS currently consists of four modules or subsystems:
Facility Subsystem: This database, which became operational in 1990, contains emission, compliance, enforcement, and permit data on air pollution point sources that are monitored by EPA, state, and local regulatory agencies.
Air Quality Subsystem: This database, which became operational in 1987, contains data on ambient air quality for criteria, air toxic, and other pollutants, as well as descriptions of each monitoring station.
Area and Mobile Source Subsystem: This is a database for storing emission estimates and tracking regulatory activities for mobile air pollution sources, such as motor vehicles; small stationary pollutant emitters, such as dry cleaners; and natural sources, such as forest fires. The subsystem became operational in 1992 and is scheduled to be phased out by September 1995 due to budget cuts and low utilization.
Geo-Common Subsystem: This database, which became operational in 1987, contains identification data, such as code descriptions used to identify places, pollutants, and processes; populations of cities and/or counties; and numerical values that pertain to air quality standards and emission factors that are used by all the other subsystems.
Information provided by EPA, which we did not independently verify, indicates that the total cost to develop and operate the system from 1984 through 1995 will be at least $52.6 million. Budgeted operating and maintenance costs for fiscal year 1996 are projected to be $2.7 million. Neither of these estimates includes states' personnel costs. The Facility Subsystem accounted for the largest portion of subsystem costs. Appendix II provides a more detailed breakdown of estimated subsystem costs for fiscal years 1984 through 1995. Budgeted subsystem costs were not available for fiscal year 1996. To determine whether EPA's planned state emissions reporting requirements exceeded the agency's actual program needs, we reviewed the Clean Air Act, and we analyzed various information reporting requirements of the 1990 amendments and EPA documents interpreting requirements of the amendments. We also analyzed a draft EPA emissions reporting regulation and compared its reporting requirements with an EPA emissions reporting options paper examining several alternative reporting levels. Further, we evaluated state and state air pollution association comments on the draft regulation. Finally, we reviewed other EPA emission reporting guidance documents and interviewed EPA, state, and local air pollution officials to obtain their comments on the draft regulation. EPA officials interviewed were from the Information Transfer and Program Integration Division and the Emissions, Monitoring, and Analysis Division in the Office of Air Quality Planning and Standards. State representatives interviewed were from Arizona, California, Michigan, New Hampshire, Tennessee, and Wisconsin. Local officials interviewed were from Ventura County, California, and the South Coast Air Quality Management District, Diamond Bar, California.
To determine whether states use AIRS to monitor emissions data, we reviewed early AIRS design and development documents and examined EPA documents evaluating AIRS Facility Subsystem use by all the states. Further, we examined comments and/or analyses provided to EPA by seven states on their use of AIRS. We also evaluated original user requirements and other AIRS documents to determine the original purpose and anticipated users of AIRS. In addition, we interviewed EPA, state, and vendor information system officials on states' use of AIRS and state information systems. Vendor representatives interviewed were from Martin Marietta Technical Services, Inc., and TRC Environmental Corporation. We performed our work at the EPA AIRS program offices in Research Triangle Park and Durham, North Carolina, and at the AIRS 7th Annual Conference in Boston, Massachusetts. Our work was performed from October 1994 through May 1995, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Administrator of the Environmental Protection Agency. In response, on June 29, 1995, we received comments from the Acting Director for the Office of Air Quality Planning and Standards. EPA's draft regulation on states' reporting of air pollution emissions exceeded what was needed by EPA to meet minimum agency air pollution program needs. EPA has suspended its promulgation of the regulation and has recently begun studying alternative reporting options. EPA began work on the now suspended emissions regulation in order to consolidate and standardize several state emissions reporting requirements (i.e., emission statements, periodic emission inventories, and annual statewide point source reporting) and to align these requirements with the mandates in the 1990 amendments. Draft versions of the regulation were circulated in late 1993 and early 1994 to obtain preliminary comments from several states. Three states commented to EPA on the draft regulation and one provided written comments. This state concluded that the level of detail required by the proposed regulation was not necessary. The state also noted that the draft regulation required data on each emission point within a plant, rather than aggregate data for each facility, and on items related to a factory's process and equipment, such as process rate units, annual process throughput, and typical daily seasonal throughput. Further, this state also asserted that annual reporting of hazardous air pollution emissions, as required by the draft regulation, is not required by the amendments. The state said that because of the additional complexity of toxic air pollutant data compared to criteria pollutant data, annual reporting to AIRS would not be feasible. In addition, in a letter to EPA addressing several AIRS issues, seven states also mentioned the draft regulation. These states said that the draft regulation would require them to submit more highly detailed data items into AIRS than called for under the amendments and other EPA mandated programs. Further, these states noted that providing the additional data sought in the draft regulation concerning hazardous air pollutant emissions would require developing more complicated toxic chemical databases, which are very costly to develop. The states noted that additional resources to develop these databases were not available. EPA acknowledged these concerns and has suspended the regulation. 
In December 1994, EPA issued a study that stated that minimum program needs could be met with a fraction of the data that would have been required by the suspended regulation. Our analysis of the study revealed that, in one case, EPA needed to collect only about 20 percent of the volatile organic compounds data requested in the suspended regulation to meet minimum program needs. The study showed that, in this case, an estimated 1,323,540 of these data items would have to be reported by California under the draft regulation, while only 241,574 data items would be reported under the minimum program needs option. According to representatives in EPA's Emissions, Monitoring, and Analysis Division, most other states could reduce the amount of data submitted to EPA by a similar proportion and still meet minimum program needs. (See appendix III for additional state examples.) However, officials in EPA's Office of Air Quality Planning and Standards noted that while the reduced level of data would meet minimum program needs, other important data that the agency believes could contribute to a more effective program would not be collected. Nevertheless, collection of these additional data would place an extra burden on the states. EPA has now begun reevaluating the information it needs from states and is considering various reporting alternatives. The use of the AIRS Facility Subsystem by heavy emission states for tracking air pollution emissions is limited. When AIRS was originally designed, states were expected to be one of its primary users; however, most heavy emission states now use their own systems because these systems are more efficient and easier to use than AIRS. The Facility Subsystem is the official repository for emission inventory, regulatory compliance, and permit data. It contains annual emissions estimates for criteria pollutants and daily emissions estimates. The subsystem was developed by EPA to track, monitor, and assess state progress in achieving and maintaining national ambient air quality standards and is also used to report the status of these efforts to the Congress. It was also developed to allow state and local air pollution control agencies to monitor and track emissions and make midcourse adjustments, as necessary, to achieve air quality standards. EPA requires that states submit data to the subsystem either in an AIRS-compatible format or directly to the subsystem. The states receive these data from thousands of sources around the country. For the 1990 base year inventory, over 52,000 sources reported data through the states to the AIRS Facility Subsystem. Each state is to use these data to help prepare a plan detailing what it will do to improve the air quality in areas that do not meet national standards. While all the states must input emission and other data into the Facility Subsystem, most heavy emission states do not use the subsystem internally to monitor and analyze emissions and compliance data. In many cases, these states already had their own systems to perform these functions. Each state's system is customized to that particular state's program data and reporting needs. Of the 10 states that account for almost half of the combined emissions of the criteria pollutants, only one (Indiana) is a direct user of the emissions portion of the subsystem. Further, of these same 10 states, only 4 (California, Georgia, Indiana, and Pennsylvania) are direct users of the compliance portion of the subsystem. 
By contrast, a greater proportion of the smaller emission source states use the Facility Subsystem to manage and analyze air pollution data. These states do not have their own air pollution information systems. In his comments, the Acting Director for the Office of Air Quality Planning and Standards expressed concern that the primary evidence supporting our assertion that the proposed reporting requirements exceeded EPA minimum program needs is based primarily on the written comments provided by one state. This is incorrect. Our finding is based primarily on our analysis of EPA's December 1994 study, which also concluded that minimum program needs could be met with a fraction of the data that would have been required by the suspended regulation. The Acting Director also commented that the report did not adequately reflect EPA's efforts to respond to the states' concerns. We believe that the report makes clear that EPA took action and suspended the draft regulation based on state concerns. Finally, the Acting Director stated that the draft report did not reflect the success of EPA's regulatory review process and only focused on an interim finding that EPA addressed by suspending the regulation. We believe the report adequately reflects EPA's process and states' concern with the additional burden that would have been imposed on them if the draft regulation had been promulgated. For example, we note in the report that EPA has recently begun studying alternative reporting options. We are sending copies of this report to the Administrator, EPA; interested congressional committees; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. Please call me at (202) 512-6253 if you or your staff have any questions concerning this report. Major contributors are listed in appendix IV.
The following appendix material summarizes, provision by provision, major requirements of the 1990 Clean Air Act Amendments and their effect on AIRS:
- Expands several existing information collection, storage, and reporting requirements currently being met by the Aerometric Information Retrieval System (AIRS). Thousands of additional facilities in ozone nonattainment areas will be defined as "major sources" and will thus be subject to enhanced monitoring, recordkeeping, reporting, and emissions control requirements.
- Expands and revises emission limitations for mobile sources (automobiles and trucks) of air pollutants. New standards are established for motor vehicle engines, fuel content, alternative fueled vehicles, and other mobile sources. AIRS was not affected by these requirements.
- Creates a program to monitor and control the 189 hazardous air pollutants. AIRS is being enhanced to provide a tool for EPA to develop technology-based standards and, when standards have not been developed, for state pollution control agencies to make case-by-case decisions on the best demonstrated control technologies for hazardous air pollutants within an industry.
- Establishes a new federal program to control acid deposition. AIRS was not affected by these requirements. The separate Acid Rain Data System/Emissions Tracking System provides for recording and validating emissions data from sources emitting sulfur dioxide and nitrogen oxides, ingredients of acid rain.
- Establishes a new permit program that, in large part, is to be implemented by the states. AIRS is being enhanced to accommodate additional permit program data elements and to merge emissions and enforcement data.
- Creates a new federal program for the protection of stratospheric ozone. Each person producing, importing, or exporting certain substances that cause or contribute significantly to harmful effects on the ozone layer must report to EPA quarterly the amount of each substance produced. AIRS was not affected by this requirement.
- Enhances federal enforcement authority, including authority for EPA to issue field citations for minor violations. AIRS was enhanced to collect and report new data concerning administrative, field citation, and other actions.
- Includes various miscellaneous provisions, including provisions addressing emissions from sources on the outer continental shelf and visibility issues. AIRS was not affected by these provisions.
- Requires several national or regional research programs. Most of the research programs require air data that can be integrated with data from other media or from other systems. This may require system modification.
Major contributors to this report: Allan Roberts, Assistant Director; Barbara Y. House, Senior Evaluator.
GAO reviewed selected data collection and reporting requirements of the 1990 Clean Air Act Amendments, focusing on whether: (1) the Environmental Protection Agency's (EPA) planned state emissions reporting requirements exceed its program needs; and (2) states use the EPA Aerometric Information Retrieval System (AIRS) to monitor emissions data. GAO found that: (1) EPA draft regulation would have required states to submit emissions data that exceeded its minimum air pollution program needs and to develop complicated pollutant databases that they could not afford; (2) EPA has since suspended the regulation and is considering alternative reporting options; (3) despite EPA intentions, 9 of the 10 heavy emission states use their own independently developed systems to track air pollution emissions; and (4) the state tracking systems are more efficient and easier to use than AIRS.
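As a rough cross-check of the data-reduction example discussed in the report (the two data-item counts are the report's figures for California; the percentage arithmetic below is ours and is purely illustrative):

```python
# Rough cross-check of the California volatile organic compounds example.
# The two data-item counts come from the report; the percentages are simple arithmetic.
draft_regulation_items = 1_323_540   # items California would report under the suspended draft regulation
minimum_needs_items = 241_574        # items needed under the minimum-program-needs option

share = minimum_needs_items / draft_regulation_items
print(f"Minimum-needs share of draft-regulation volume: {share:.1%}")      # ~18.3%, i.e., "about 20 percent"
print(f"Reduction in data items states would report:    {1 - share:.1%}")  # ~81.7%
```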
Foreign nationals who wish to visit the United States, including business travelers and tourists, must generally obtain a nonimmigrant visa (NIV). The majority of travelers visiting the United States from Mexico receive an NIV Border Crossing Card, which is valid for 10 years. In order to obtain a Border Crossing Card, applicants must generally: (1) schedule an appointment for a visa interview at a U.S. consulate, (2) fill out an application and pay applicable fees, (3) have their photos taken and fingerprints collected at a U.S. consulate, (4) have their information checked in the Consular Lookout and Support System--State's name-check database that consulates use to access critical information for visa adjudication, and (5) have an interview with a consular officer, who is responsible for making the adjudication decision. In 1996, Congress passed the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA), which required that every Border Crossing Card issued after April 1, 1998, contain a biometric identifier, such as a fingerprint, and be machine readable. The law also mandated that all Border Crossing Cards issued before April 1, 1998, would expire on October 1, 1999, regardless of when their validity period ended. This deadline was extended by Congress twice, first to September 30, 2001, and then to September 30, 2002. The passage of IIRIRA created a significant surge in Mission Mexico's NIV workload, as Border Crossing Card holders sought to obtain the new visas before the congressionally mandated expiration date. This culminated in a historic high in NIV workload in fiscal year 2001, when the mission processed 2,869,000 NIV applications. We have previously reported on challenges State faced in managing its NIV workload. Among other things, we found that NIV applicants have often had to wait for extended periods of time to receive appointments for interviews. Believing that wait times for NIV interviews were excessive, in February 2007, State announced a worldwide goal of interviewing NIV applicants within 30 days. In the year before the 30-day goal was announced, the average wait time across the consulates in Mexico had been as high as 73 days; by the time of the announcement of the 30-day goal, however, Mission Mexico had already successfully reduced the average wait time to less than 30 days at all but one of its posts. Since February 2007, the mission has successfully kept the average wait time among the consulates at less than 30 days. In response to recommendations in the 9/11 Commission report, the Intelligence Reform and Terrorism Prevention Act of 2004, as amended, required that the Secretary of Homeland Security, in conjunction with the Secretary of State, develop and implement a plan that requires United States citizens to provide a passport, other document, or combination of documents that the Secretary of Homeland Security deems sufficient to show identity and citizenship when entering the United States from certain countries, including Mexico. This will represent a significant change for many U.S. citizens living in Mexico, who have until recently been able to routinely cross between the United States and Mexico with more limited documentation. The Department of Homeland Security (DHS) and State are implementing these requirements through the Western Hemisphere Travel Initiative (WHTI). 
DHS implemented WHTI at all air ports of entry into the United States on January 23, 2007, and plans to implement the requirements at land and sea ports of entry beginning in June 2009, assuming that DHS and State can certify 3 months in advance that certain criteria have been met, as required under the law. Ten years after the first surge in demand for Border Crossing Cards began in fiscal year 1998, State anticipates another surge in NIV demand in Mexico as these cards begin to expire and millions of card holders apply for renewals at U.S. consulates. In addition to this cyclical surge in demand caused by the expiring Border Crossing Cards, State officials anticipate that Mission Mexico will continue to experience steady growth in demand from first-time visa applicants. To assist in preparing for these increases, State has developed forecasts of the expected future NIV workload in Mexico. The NIV projections and forecasting methodology discussed in this report are based upon data State provided to us in February and April 2008. On June 18, State informed us that it has developed revised NIV forecasts for Mission Mexico based upon an alternative methodology. We have not yet had time to analyze these NIV forecasts or incorporate them into this testimony, but we may include a discussion of them in our final report, which is scheduled to be completed at the end of July 2008. State's forecasts, as of April 2008, anticipate that the upcoming surge in NIV demand will follow a pattern similar to the previous Border Crossing Card surge from fiscal years 1998 to 2002, as shown in figure 1. According to the forecasts, the surge will begin in fiscal year 2008, with missionwide NIV demand peaking at a little more than 3 million applications in fiscal year 2011--a 103 percent increase in demand from fiscal year 2007. The forecasts show the surge beginning to abate in fiscal year 2012. In addition to the missionwide forecast, State has developed demand forecasts for individual consulates. As shown in figure 2, State's forecasts anticipate that Mexico City will have the highest levels of demand, with applications growing to over 580,000 in fiscal year 2010. While Mexico City is projected to have the highest overall demand, State anticipates that the steepest increases in demand will occur at border posts. This follows a pattern similar to the previous Border Crossing Card surge, where the border consulates assumed a greater share of the total mission workload during the surge, with this share then diminishing again at the surge's end. Estimating future NIV demand is inherently uncertain, and State acknowledges that several factors could affect the accuracy of its April 2008 NIV demand forecasts. First, the forecasts are based heavily upon Change Navigators' 2005 Consular Affairs Futures Study (CAFS), which generated NIV demand forecasts for various high-volume and high-growth missions around the globe, including Mexico. Thus, the extent to which the underlying CAFS numbers prove to be accurate affects State's revised forecasts. While the CAFS includes a general analysis of how various demographic, economic, and political factors impact NIV demand across countries, it does not explain how it arrived at its specific forecasts for Mexico. Based upon our review of the forecasts, it appears that the CAFS authors relied primarily upon historical workload data from the previous Border Crossing Card surge, but we could not assess how, if at all, other considerations were factored into the forecasts. 
Second, methodological issues associated with State's April 2008 NIV forecasts may affect their accuracy in projecting demand. For example, State relied heavily on actual demand data from fiscal year 2007 to revise the CAFS forecasts, in order to try to better account for growth in demand from first-time visa applicants. In doing so, State assumed demand for fiscal year 2007 was representative of the underlying long-term growth in NIV demand. However, this is not necessarily the case, as State officials acknowledge demand may have been artificially high in fiscal year 2007 as posts worked off backlogs that had accumulated from previous years. State officials also noted that they chose to be conservative and assume all Border Crossing Card holders would renew their cards when they expire. However, this is not likely to happen, as a portion of Border Crossing Card holders have had their cards lost or stolen and already had them replaced, while others have either legally or illegally immigrated to the United States and will not be returning to renew their cards. Consequently, the forecasts could prove to be higher than actual demand depending on the share of Border Crossing Card holders who do not seek a renewal at the expiration of their card. State's approach to forecasting NIV workload--based on historical precedent, underlying growth in demand, and other factors--provides a reasonable basis for addressing the anticipated surge in NIV demand. State has detailed data on the number of Border Crossing Cards issued during the previous surge and when they are expiring, which gives it a strong basis for its projections. Further, even if the NIV forecasts do not prove completely accurate, State officials do not expect significant risks for several reasons. First, State officials believe that the forecasts are conservative, with NIV demand likely to be lower than forecasted. Second, State intends to avoid relying on the exact numbers in the forecasts and is instead using them as a rough guide in developing plans to meet the upcoming surge in NIV workload. Third, State officials believe they have developed these plans with sufficient flexibility to be able to respond as needed if actual workload deviates from the forecasts. Finally, State plans to continually track demand at the consulates as the NIV surge unfolds and will revise these forecasts periodically. In addition to the surge in NIV workload, Mission Mexico will also experience a surge in its passport workload as a result of the implementation of WHTI at air ports of entry in January 2007 and its subsequent, intended implementation at land ports in June 2009. According to State officials, the mission has already seen a significant increase in its passport workload as U.S. citizens living in Mexico have begun to apply for passports in response to the new documentary requirements. Mission Mexico's passport and Consular Report of Birth Abroad (CRBA) workload, which State tracks together because both types of applications are handled by consular officers in posts' American Citizen Services units, grew to 34,496 applications in fiscal year 2007, a 77 percent increase from fiscal year 2006. Despite the expected increases, passport workload will continue to be only a fraction of Mission Mexico's workload, relative to NIV applications. While State expects passport workload in Mexico to continue to increase significantly in the coming years, it is difficult to predict precisely what the magnitude of this increase will be. 
Unlike the NIV surge, the WHTI surge has no clear historical precedent. Additionally, there is a great deal of uncertainty regarding the number of U.S. citizens living in Mexico and the number of these citizens who are potential passport applicants. Therefore, efforts to forecast increases in passport workload due to WHTI are extremely challenging. Nonetheless, State has developed rough estimates of Mission Mexico's passport and CRBA workload with the implementation of WHTI. These estimates are based on the input of experienced consular officers because the lack of data on U.S. citizens living in Mexico made any type of statistical analysis problematic. Based upon State's estimates, Mission Mexico's WHTI workload is projected to peak at 73,000 passport and CRBA applications in fiscal year 2009 with the implementation of WHTI at land ports of entry. State anticipates that passport and CRBA workload will continue at that peak rate in fiscal year 2010 and then begin to decline. In its estimates, State predicts that from fiscal years 2007 to 2009, workload will increase by around 177 percent for Mission Mexico. To this point, State has not revised its WHTI estimates based on workload in fiscal year 2007, or year to date in the current fiscal year, even though the estimates proved low relative to actual workload in fiscal year 2007. State says it has not needed to revise its estimates at this point, because posts have been able to keep up with workload increases without the need for additional resources. In addition, rather than focusing on developing precise workload estimates in order to prepare for the surge, State has instead chosen to pursue strategies designed to provide it with the flexibility to respond to increases in workload as they occur--particularly as fewer resources will be needed to cover increases in passport and CRBA applications than NIV applications, given their small share of Mission Mexico's overall consular workload. To keep pace with the expected NIV renewal surge, State is increasing the total number of hardened interview windows in the consulates' NIV sections by over 50 percent before the demand peaks in 2011. State added windows to the consulate in Hermosillo in fiscal year 2007 and will soon be adding windows to the consulates in Monterrey and Mexico City. In addition, new consulate compounds in Ciudad Juarez and Tijuana will result in additional windows for adjudicating NIV applications. The new facility in Ciudad Juarez is set to open in September 2008, and construction on the new building in Tijuana began this past April. Once completed, these projects will provide Mission Mexico with the window capacity to interview about 1 million additional NIV applicants per year. Table 1 compares the number of interview windows available in fiscal year 2007 to the number that will be available by fiscal year 2011, when NIV demand peaks. Consulate officials at the posts we visited generally expressed confidence that they will have sufficient window capacity to keep pace with the expected NIV demand and avoid excessive wait times for interviews beyond State's standard of 30 days. As shown in figure 3, our analysis of expected window capacity also indicates that Mission Mexico generally appears to have enough window capacity to keep pace with projected demand, based on the April 2008 projections. However, State officials acknowledge that two posts, Nuevo Laredo and Matamoros, will not have adequate window capacity during the NIV surge. 
Consequently, NIV applicants may face longer wait times for an interview appointment at these posts. State officials noted that individuals who would typically apply at one of these two posts will have the option to schedule appointments at the relatively nearby consulate in Monterrey, which is expected to have excess window capacity during the surge in demand. At other posts, the potential shortfall in window capacity, reflected in figure 3, appears to be small enough that it can likely be managed by extending hours that windows are open, if necessary. Although Guadalajara also appears to have a significant shortfall, consular officials there believe the post should be able to absorb the increased workload with the number of windows available as long as they have enough staff to work the windows in shifts to keep them open all day, if necessary. In addition to the increase in hardened windows, Mission Mexico requires a significant increase in adjudicators over the next few years. Based on NIV and passport workload projections provided in April 2008, State estimates it will need 217 adjudicators throughout Mission Mexico in fiscal year 2011, which is the expected peak year of the surge in NIV demand. This number is an increase of 96 adjudicators, or about 80 percent, over the number of adjudicator positions in place in fiscal year 2007. State may revise its staffing plans as it generates updated forecasts. State plans to meet its staffing needs during the expected workload surge primarily by hiring a temporary workforce of consular adjudicators that can be assigned to posts throughout Mission Mexico, depending on each post's workload demands. Figure 4 shows the number of temporary adjudicators and career adjudicators planned for Mission Mexico in fiscal year 2011. State officials noted that relying on a temporary workforce allows Mission Mexico to avoid having excess staff after the workload surge and reduces costs per staff compared to permanent hires. State has budgeted for about 100 temporary adjudicators to be in place during the surge in workload demand, although State officials noted that these budgeted funds could be reprogrammed if fewer adjudicators than expected are needed. State has already posted the job announcement on its Web site and expects to begin placing these additional temporary adjudicators at posts in fiscal year 2009. State officials noted that they will try to fill slots gradually to help posts absorb the additional staff. The temporary hires will be commissioned as consular officers with 1-year, noncareer appointments that can be renewed annually for up to 5 years. They will also receive the same 6-week Basic Consular Course at the Foreign Service Institute in Arlington, Virginia, as permanent Foreign Service officers. These individuals must be U.S. citizens, obtain a security clearance, and be functionally fluent in Spanish. Housing in Mexico for the temporary adjudicators will be arranged by the State Bureau of Consular Affairs in Washington, D.C., through contract services, which will provide greater flexibility to move adjudicators from one post to another, if necessary. As figure 4 indicates, posts in Monterrey, Mexico City, Ciudad Juarez, and Tijuana are expected to be the heaviest users of temporary adjudicators. Consequently, these posts would be at greatest risk of increased NIV backlogs if temporary adjudicator slots cannot be filled as needed or if their productivity is not as high as anticipated. 
However, State officials believe they have an adequate pool of potential candidates from among returning Peace Corps volunteers, graduates of the National Security Education Program, eligible family members, and retired Foreign Service officers. These officials noted that they recently began reaching out to targeted groups of potential applicants and have already received strong interest. Furthermore, officials from the posts we visited were confident that State's plan to provide them with additional consular officers would enable them to keep pace with workload demand. Post officials anticipate the same level of productivity and supervision requirements as they would expect from new career Foreign Service officers. The officials noted that new consular adjudicators typically take about 2 months of working the NIV interview windows to reach the productivity levels of more experienced adjudicators. State began a pilot program in the spring of 2008 at two posts, Monterrey and Nuevo Laredo, to outsource part of the NIV application process, including biometric data collection, to an off-site facility. The pilot is part of an effort by State to establish a new service delivery model for processing visas worldwide in response to long-term growth in demand for visas. State envisions expanding this model throughout Mexico and other high-demand posts worldwide through a formal request for proposal process. State also envisions the possibility of providing off-site data collection facilities serving NIV applicants in cities that do not have consulates. In Monterrey, the pilot made space available in the consulate facility to add much needed NIV interview windows. The pilot is implemented by a contractor that handles functions that do not require the direct involvement of a consular officer, including scanning of applicants' fingerprints and passports, live-capture digital photograph, and visa passback. Consular officers at these two posts focus on their "core mission" of making adjudication decisions after the contractor has electronically transferred the applicants' application and biometric data. The cost of outsourcing these functions is covered through an additional fee of $26 paid by the applicants. Consulate officials at the posts involved in the pilot are responsible for monitoring the performance of the contractor through the use of surveillance cameras, random visits to the off-site facility, and validation reviews of NIV applications to check for incidence of fraud or incorrect information. According to State officials, the contractor does not have the ability to alter any of the data it collects, and a U.S. citizen with a security clearance is on site to manage the facility. Consular officials in Monterrey stressed the importance of monitoring contractor employees to help ensure they do not coach applicants. State officials stated that the department intends to assess the pilot to ensure that the technological challenges of remote biometric data collection and data transfer have been overcome. They will also assess whether the new software involved presents the data to consular officers in a user-friendly format to facilitate the adjudication. In addition, State will monitor adjudication rates at the participating posts. State has neither established specific milestones for completing the pilot nor provided us with any metrics that would be part of an assessment of the potential impact on productivity, fraud, or security. 
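As a back-of-the-envelope illustration of the staffing arithmetic described above (a sketch only: the 217-position and 96-position figures are the report's; the fiscal year 2007 base of 121 is derived from them, and the per-officer throughput used in the sizing check is a hypothetical placeholder, not a State figure):

```python
# Illustrative staffing arithmetic for Mission Mexico's projected NIV surge.
planned_fy2011_adjudicators = 217   # State's April 2008 estimate for the peak year (from the report)
planned_increase = 96               # increase over FY2007 positions (from the report)

fy2007_adjudicators = planned_fy2011_adjudicators - planned_increase          # implied FY2007 base: 121
pct_increase = planned_increase / fy2007_adjudicators
print(f"Increase over FY2007 adjudicator positions: {pct_increase:.0%}")      # ~79%, i.e., "about 80 percent"

# Hypothetical sizing check: officers needed if peak demand is roughly 3 million applications
# and each adjudicator handles, say, 15,000 adjudications a year (our assumption, for illustration only).
peak_applications = 3_000_000
assumed_adjudications_per_officer = 15_000
print(f"Officers implied by assumed throughput: {peak_applications / assumed_adjudications_per_officer:.0f}")
```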
In another step to help posts keep pace with NIV demand, Mission Mexico has also begun to waive interviews of NIV renewal applicants, as allowed under certain circumstances established by federal law and State regulations. State recently provided guidance to posts worldwide on waiving interviews for certain applicants, following the transition to the collection of 10 fingerprints and technology allowing reuse of fingerprints. The policy only applies to applicants seeking to renew their biometric NIVs within 12 months of expiration. Consular officers retain the discretion to require any applicant to appear for an interview, and no applicant may have an interview waived unless they clear all computer-based security screening. According to State guidance, consular officers will also have the discretion to waive interviews of applicants as part of the off-site data collection model being piloted in Monterrey and Nuevo Laredo, when prints collected off site match with the applicant's fingerprints already in the system. According to State officials, this will be possible beginning in 2009, when Border Crossing Cards issued after 1999 containing biometric data start to expire. The Monterrey and Ciudad Juarez posts have already begun to waive interviews of applicants renewing NIVs and found significant productivity gains. As a result, officers there were able to adjudicate cases more rapidly and better utilize window capacity, according to consular officials. These posts also found no significant difference in denial rates for NIV renewal applicants who were interviewed compared to those whose interviews were waived, although post and Bureau of Consular Affairs officials noted it was necessary to continue monitoring the effect of waiving interviews. These officials also highlighted the need to adjust consular training to be consistent with State's current guidance on waiving interviews under certain circumstances. Posts in Mexico will also be increasing resources for adjudicating additional passport applications, which are expected to peak in fiscal year 2009. Although the volume of passport applications is much smaller than that of NIV applications, adjudicating passport applications for American citizens takes precedence over NIV applications. Consular officials at posts we visited noted that because of the uncertainty over future passport demand, they will depend on their flexibility to shift adjudicators from NIV work to passport work, as needed. In addition, consular officials stated they will have the option of using NIV interview windows to adjudicate passport applications--possibly during off hours, if necessary. In addition, posts are seeking ways to become more efficient in how they process the increasing volume of passports. For example, many posts have recently implemented an appointment system to better manage the flow of passport applicants and have also improved their Web sites to help provide better assistance to applicants, many of whom do not speak English and are applying for passports for the first time. State is also upgrading its software used for passport processing in overseas posts to enable posts to scan passport applications, which they expect will reduce staff resources needed for data entry. Some posts are also considering increased use of consular agents in other locations, such as Puerto Vallarta or Cabo San Lucas, to accept passport applications to help relieve some of the workload for consular staff. 
In addition, some posts have suggested exploring possibilities for processing passport renewals by mail, which would also help relieve overcrowding. In anticipation of the expected surge in demand for NIVs and U.S. passports in Mexico over the next several years, State has taken several steps to project workloads and expand the capacity of its consulates to avoid the type of backlogs that have occurred in Mission Mexico in the past. State's efforts to increase the number of hardened interview windows at several of its consulates and hire additional temporary consular officers represent a substantial increase in resources needed to keep pace with the projected surge in NIV and passport workload. As State continues to revise its estimates of future workload, it may need to adjust its plans for increasing these resources to reflect the latest assumptions about future demand for passports and NIVs. The success of the efforts to prepare for the surges in passport and NIV workload is likely to depend on State's ability to fill the roughly 100 slots it has budgeted for temporary adjudicators in time to meet the surge in workload. Several posts in Mexico will rely heavily on these additional staff to keep pace with expected demand for NIVs and avoid excessive wait times for interviews of applicants. However, State officials have expressed confidence that they will be able to fill these positions with qualified candidates. In addition, Mission Mexico may reap productivity gains from a pilot program to outsource part of the NIV application process at off-site facilities and from State's policy to waive interviews for some renewal applicants; however, these efforts are in their early stages and are not yet widely implemented. Consequently, it would be premature to assess the potential effects of these efforts. We discussed this testimony with State officials, who agreed with our findings. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact Jess T. Ford at (202) 512-4128 or [email protected]. Juan Gobel, Assistant Director; Ashley Alley; Joe Carney; Howard Cott; David Dornisch; Michael Hoffman; and Ryan Vaughan made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Mission in Mexico is the Department of State's largest consular operation. In fiscal year 2007, it processed 1.5 million of the 8 million nonimmigrant visas (NIV) State handled worldwide. The U.S. Mission in Mexico also provided services, including passport processing and emergency assistance, to 20,000 American citizens in fiscal year 2007. This already significant consular workload is expected to increase dramatically in the coming years as millions of NIV Border Crossing Cards issued in Mexico between fiscal years 1998 and 2002 expire and need to be renewed. In addition, the implementation of new travel requirements under the Western Hemisphere Travel Initiative (WHTI) will, for the first time, require U.S. citizens to carry passports, or other approved documentation, when traveling between the United States and Mexico. This testimony addresses (1) State's estimates of the workload for consulates in Mexico through 2012 resulting from, in particular, new travel requirements and the reissue of Border Crossing Cards; and (2) the actions State has taken to ensure consulates in Mexico keep pace with projected workload increases through 2012. This testimony is based on work currently in process that involves analyzing State's workload forecasts and forecast methodology, interviewing State officials, and visiting five posts in Mexico. GAO discussed this testimony with State officials, who agreed with GAO's findings. According to State forecasts, as of April 2008, the U.S. Mission in Mexico's (Mission Mexico) NIV demand will peak at slightly over 3 million applications in fiscal year 2011, about twice the number from fiscal year 2007. State acknowledges there are uncertainties regarding the number of Border Crossing Card holders who will renew their cards and the number of first time NIV applicants, which may affect the accuracy of its forecasts. State will be revising the forecasts on a periodic basis as new data become available. In addition to its increase in NIV workload, Mission Mexico will also be facing increases in its passport workload due to the implementation of WHTI. The exact magnitude of the increase in passport workload is more difficult to forecast than for NIVs, because there is not the same historical precedent. There is also a great deal of uncertainty as to how many U.S. citizens actually live in Mexico or the number of these citizens likely to apply for a passport. In anticipation of this surge in demand for NIVs and U.S. passports, State is taking steps to ensure consulates in Mexico keep pace, including adding consular interview windows to several high-demand posts and planning to hire about 100 temporary adjudicating officers. Consular officials GAO met with at several posts in Mexico generally agreed that these efforts to expand resources should be adequate for Mission Mexico to keep pace with expected workload increases, and GAO's analysis indicates the mission will generally have enough interviewing windows during the surge. Several posts will rely on the addition of temporary adjudicators to keep pace with increased NIV demand and would face backlogs if these slots cannot be filled or if the temporary staff are not as productive as expected. However, State is confident that it has an adequate pool of potential applicants. 
Mission Mexico may also gain additional capacity from a pilot program, currently under way at two posts, that outsources a portion of the NIV application process to off-site facilities; however, the pilot was implemented too recently to assess its potential impact on productivity, fraud, or security.
The mission of the Customs Service is to ensure that all goods and persons entering and exiting the United States do so in compliance with all U.S. laws and regulations. It does this by (1) enforcing the laws governing the flow of goods and persons across the borders of the United States and (2) assessing and collecting duties, taxes, and fees on imported merchandise. During fiscal year 1997, Customs collected $22.1 billion in revenue at more than 300 ports of entry and reported that it processed nearly 450 million passengers who entered the United States during the year. To accomplish its mission, Customs is organized into six lines of business--trade compliance, outbound, passenger, finance, human resources, and investigations. Each business area is described below. Trade compliance includes enforcement of laws and regulations associated with the importation of goods into the United States. To do so, Customs (1) works with the trade community to promote understanding of applicable laws and regulations, (2) selectively examines cargo to ensure that only eligible goods enter the country, (3) reviews documentation associated with cargo entries to ensure that they are properly valued and classified, (4) collects billions of dollars annually in duties, taxes, and fees associated with imported cargo, (5) assesses fines and penalties for noncompliance with trade laws and regulations, (6) seizes and accounts for illegal cargo, and (7) manages the collection of these moneys to ensure that all trade-related debts due to Customs are paid and properly accounted for. Outbound includes Customs enforcement of laws and regulations associated with the movement of merchandise and conveyances from the United States. To do so, Customs (1) selectively inspects cargo at U.S. ports to guard against the exportation of illegal goods, such as protected technologies, stolen vehicles, and illegal currency, (2) collects, disseminates, and uses intelligence to identify high-risk cargo and passengers, (3) seizes and accounts for illegal cargo, (4) assesses and collects fines and penalties associated with the exportation of illegal cargo, and (5) physically examines baggage and cargo at airport facilities for explosive and nuclear materials. In addition, the outbound business includes collecting and disseminating trade data within the federal government. Accurate trade data are crucial to establishing reliable trade statistics on which to base trade policy decisions and negotiate trade agreements with other countries. By the year 2000, Customs estimates that exports will be valued at $1.2 trillion, compared to a reported $696 billion in 1994. Passenger includes processing all passengers and crew of arriving and departing (1) air and sea conveyances and (2) land vehicles and pedestrians. In fiscal year 1997, Customs reported it processed nearly 450 million travelers and, by the year 2000, expects almost 500 million passengers to arrive in the United States annually. Many of Customs' passenger activities focus on illegal immigration and drug smuggling and are coordinated with other federal agencies, such as the Immigration and Naturalization Service and the Department of Agriculture's Animal and Plant Health Inspection Service. Activities include targeting high-risk passengers, which requires timely and accurate information, and physically inspecting selected passengers, baggage, and vehicles to determine compliance with laws and regulations. Finance includes asset and revenue management activities. 
Asset management consists of activities to (1) formulate Customs' budget, (2) properly allocate and distribute funds, and (3) acquire, manage, and account for personnel, goods, and services. Revenue management encompasses all Customs activities to identify and establish amounts owed Customs, collect these amounts, and accurately report the status of revenue from all sources. Sources of revenue include duties, fees, taxes, other user fees, and forfeited currency and property. The revenue management activities interrelate closely with the revenue collection activities in the trade compliance, outbound, and passenger business areas. Human resources is responsible for filling positions, providing employee benefits and services, training employees, facilitating workforce effectiveness, and processing personnel actions for Customs' 18,000 employees and managers. Investigations includes activities to detect and eliminate narcotics and money laundering operations. Customs works with other agencies and foreign governments to reduce drug-related activity by interdicting (seizing and destroying) narcotics, investigating organizations involved in drug smuggling, and deterring smuggling efforts through various other methods. Customs also develops and provides information to the trade and carrier communities to assist them in their efforts to prevent smuggling organizations from using cargo containers and commercial conveyances to introduce narcotics into the United States. To carry out its responsibilities, Customs relies on information systems and processes to assist its staff in (1) documenting, inspecting, and accounting for the movement and disposition of imported goods and (2) collecting and accounting for the related revenues. Customs expects its reliance on information systems to increase as a result of its burgeoning workload. For 1995 through 2001, Customs estimates that the annual volume of import trade between the United States and other countries will increase from $761 billion to $1.1 trillion. This will result in Customs processing an estimated increase of 7.5 million commercial entries--from 13.1 million to 20.6 million annually--during the same period. Recent trade agreements, such as the North American Free Trade Agreement (NAFTA), have also increased the number and complexity of trade provisions that Customs must enforce. Customs recognizes that its ability to process the growing volume of imports while improving compliance with trade laws depends heavily on successfully modernizing its trade compliance process and its supporting automated systems. To speed the processing of imports and improve compliance with trade laws, the Congress enacted legislation that eliminated certain legislatively mandated paper requirements and required Customs to establish the National Customs Automation Program (NCAP). The legislation also specified certain functions that NCAP must provide, including giving members of the trade community the capability to electronically file import entries at remote locations and enabling Customs to electronically process "drawback" claims. In response to the legislation, Customs began in 1994 to modernize the information systems that support operations. Customs has several projects underway to develop and acquire new software and evolve (i.e., maintain) existing software to support its six business areas. Customs' fiscal year 1998 budget for information management and technology activities was about $147 million. 
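To put the projected import workload growth in perspective (an illustrative calculation based on the figures cited above; the compound growth rate is our derivation, not a Customs estimate):

```python
# Illustrative growth arithmetic for Customs' projected commercial entries, 1995-2001.
entries_1995 = 13.1e6          # commercial entries processed in 1995 (from the report)
entries_2001 = 20.6e6          # projected commercial entries in 2001 (from the report)
years = 6

added_entries = entries_2001 - entries_1995
annual_growth = (entries_2001 / entries_1995) ** (1 / years) - 1

print(f"Additional entries over the period:  {added_entries / 1e6:.1f} million")  # 7.5 million
print(f"Implied compound annual growth rate: {annual_growth:.1%}")                # ~7.8% per year
```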
Customs' major information technology effort is its Automated Commercial Environment (ACE) system. In 1994, Customs began to develop ACE to replace its existing automated import system, the Automated Commercial System (ACS). ACE is intended to provide an integrated, automated information system for collecting, disseminating, and analyzing import-related data and ensuring the proper collection and allocation of revenues, totaling about $19 billion annually. According to Customs, ACE is planned to automate critical functions that the Congress specified when it established NCAP. Customs reported that it spent $47.8 million on ACE as of the end of fiscal year 1997. In November 1997, Customs estimated it would cost $1.05 billion to develop, operate, and maintain ACE over the 15 years from fiscal year 1994 through fiscal year 2008. Customs plans to deploy ACE to more than 300 ports that handle commercial cargo imports. Customs plans to develop and deploy ACE in multiple phases. According to Customs, the first phase, known as NCAP, is an ACE prototype. Customs currently plans to deploy NCAP in four releases. The first release was deployed for field evaluation at three locations in May 1998, and the fourth is scheduled for 1999. Customs, however, has not adhered to previous NCAP deployment schedules. Specifically, implementation of the NCAP prototype slipped from January 1997 to August 1997 and then again to a series of four releases beginning in October 1997, with the fourth release starting in June 1998. Customs also has several other projects underway to modify or enhance existing systems that support its six business areas. For example, in fiscal year 1998, Customs planned to spend about $3.7 million to enhance its Automated Export System (AES), which supports the outbound business area and is designed to improve Customs' collection and reporting of export statistics and to enforce export regulations. In addition, Customs planned to spend another $4.6 million to maintain its administrative systems supporting its finance and human resource business areas. The Chairman, Subcommittee on Treasury and General Government, Senate Committee on Appropriations, and the Chairman, Subcommittee on Treasury, Postal Service and General Government, House Committee on Appropriations, requested that we review Customs' ability to develop software for its computer systems. Our objectives were to determine (1) the maturity of Customs' software development processes and (2) the effectiveness of Customs' software process improvement program. To determine Customs' software development process maturity, we applied the Software Engineering Institute's (SEI) Software Capability Maturity Model (SW-CMM) and its Software Capability Evaluation (SCE) method. SEI's expertise in software process maturity and its capability maturity models and evaluation methods are widely accepted throughout the software industry. All our specialists were SEI-trained. The SW-CMM ranks organizational maturity according to five levels. (See figure 1.1.) Maturity levels 2 through 5 require the verifiable existence and use of certain software development processes, known as key process areas (KPAs). According to SEI, an organization that has these processes in place is in a much better position to successfully develop software than an organization that does not have these processes in place. We evaluated Customs' software development processes against five of the six level 2 KPAs. 
(Figure 1.1 depicts the five maturity levels. At level 2, the repeatable level, basic project management processes are established to track cost, schedule, and functionality, and the necessary process discipline is in place to repeat earlier successes on projects with similar applications. At level 1, the initial level, the software process is characterized as ad hoc, and occasionally even chaotic; few processes are defined, and success depends on individual effort.) The sixth level 2 KPA, software subcontract management, was not evaluated because Customs did not use subcontractors on any of the projects that we evaluated. (See table 1.1.) As established by the model, each KPA contains five common attributes that indicate whether the implementation and institutionalization of a KPA can be effective, repeatable, and lasting. The five common features are:
- Commitment to perform: The actions that the organization must take to establish the process and ensure that it can endure. Commitment to perform typically involves establishing organizational policies and senior management sponsorship.
- Ability to perform: The preconditions that must exist in the project or organization to implement the software development process competently. Ability to perform typically involves resources, organizational structures, and training.
- Activities performed: The roles and procedures necessary to implement a KPA. Activities performed typically involve establishing plans and procedures, performing the work, tracking it, and taking appropriate management actions.
- Measurement and analysis: Activities performed to measure the process and analyze the measurements. Measurement and analysis typically includes defining the measurements to be taken and the analyses to be conducted to determine the status and effectiveness of the activities performed.
- Verifying implementation: The steps to ensure that the activities are performed in compliance with the process that has been established. Verification typically encompasses reviews by management.
In accordance with SEI's SCE method, for five of the six KPAs in level 2, we evaluated Customs' institutional policies and practices and compared project-specific guidance and practices against the five common attributes. This project-specific comparison can result in one of four possible outcomes: (1) project strength--an effective implementation of the key practice, (2) project weakness--ineffective implementation of a key practice or failure to implement a key practice, (3) project observation--key practice evaluated but evidence inconclusive and cannot be characterized as either strength or weakness, and (4) not rated--key practice not currently relevant to the project and, therefore, not evaluated. We performed the project-specific evaluations on three ongoing Customs software development projects, each of which is described below. As requested by the Subcommittee Chairmen, one of the projects evaluated was ACE, which is the largest and most important system that Customs is developing. The other two projects were selected by Customs on the basis of the following GAO-specified criteria: (1) each project should be managed by a different software team, (2) at least one project should involve a legacy system, (3) at least one project should involve Year 2000 software conversion, and (4) each project should be relatively large and important to accomplishing Customs' mission. The projects we evaluated are: National Customs Automation Program (NCAP 0.1): NCAP 0.1 was the first component of the National Customs Automation Program Prototype (NCAP/P). 
NCAP/P, in turn, is the first phase of the Automated Commercial Environment (ACE). Customs began developing ACE in 1994 to address the new import processing requirements established by the National Customs Automation Program. ACE is also intended to replace the agency's legacy automated import system, the Automated Commercial System (ACS). NCAP 0.1 was installed at three field locations in May 1998. Automated Export System (AES): AES is an export information gathering and processing system, developed through cooperative efforts by Customs, the Bureau of the Census, other federal agencies with export missions, and the export trade community. AES is designed to improve the collection of trade statistics; assist in the creation of a paperless export environment; facilitate the release of exports subject to licensing requirements; and consolidate export data required by several government agencies, easing the data filing burden for exporters while streamlining the federal data collection process. Customs installed AES in all U.S. vessel ports in October 1996, and currently it is operational in all ports, including air, rail, and truck transit ports. Customs and Census officials estimate that they spent approximately $12.9 million to develop and implement AES from fiscal year 1992 to 1997. These costs included, among other things, expenses for contractors, travel, and training. According to Customs' and Census' figures, the two agencies estimate that together they will spend an additional $32.2 million through fiscal year 2002 on AES implementation and maintenance. Administrative Security System: The Administrative Security System assists users in requesting access to administrative systems. Users' requests are electronically submitted to the appropriate official for approval. In addition, other portions of the Administrative Security System allow system administrators to prepare and maintain user profiles, request logs, and electronic approval and disapproval reports. To assess the effectiveness of Customs' software process improvement program, we interviewed the Director, Technical Architecture Group, Office of Information and Technology, to determine: (1) process improvements that are planned and underway, (2) the rationale for each initiative, (3) the relative priority of each, (4) progress made on each initiative, and (5) obstacles, if any, impeding progress. We also reviewed past process improvement plans, meeting minutes, and related documentation. Further, we reviewed SEI's model for software process improvement, known as IDEAL. IDEAL defines five sequential phases of software process improvement that can be used to develop a long-range, integrated plan for initiating and managing a software process improvement program. Customs provided written comments on a draft of this report. These comments are presented and evaluated in chapter 8, and are reprinted in appendix I. We performed our work at Customs' Newington, Virginia, Data Center from February 1998 through November 1998, in accordance with generally accepted government auditing standards. The purpose of requirements management is to establish agreement between the customer and the software developers on the customer's requirements that the software developers will implement. This agreement typically is referred to as the "system requirements allocated to the software." The agreement covers both technical and nontechnical (e.g., delivery dates) requirements. 
The agreement forms the basis for estimating, planning, performing, and tracking the software developer's activities throughout the software life cycle. According to the SW-CMM, a repeatable requirements management process, among other things, includes (1) documenting the system requirements allocated to software, (2) providing adequate resources and funding for managing the allocated requirements, (3) following a written organizational policy for requirements management, (4) having a quality assurance group that reviews the activities and work products for managing allocated requirements and reports the results, (5) using the allocated requirements as the basis for software plans, work products, and activities, and (6) training members of the software engineering group to perform their requirements management activities. All three projects had practice strengths in this KPA. For example, each project documented the system requirements allocated to software and ensured that adequate resources and funding for managing the allocated requirements were provided. One of the projects, NCAP 0.1, had strengths in all but two practices under this KPA; however, each practice weakness is significant. Collectively, the projects had many weaknesses in this KPA, and thus Customs' requirements management processes do not meet "repeatable" maturity level criteria. For example, none of the projects had a written organizational policy governing requirements management, and none had a quality assurance group for reviewing and reporting on the activities and work products associated with managing the allocated requirements. In the absence of these two practices, management is missing two means for ensuring that software requirements are managed in a prescribed manner. Also, two of the projects did not use the allocated software requirements as the basis for software plans, work products, and activities, which increases the risk that the software developed will not fully satisfy requirements. Further, members of two projects' software engineering groups were not trained to perform requirements management activities, thus increasing the chances of mismanagement. Table 2.1 provides a comprehensive list of the three projects' strengths and weaknesses for the requirements management KPA. The specific findings supporting the practice ratings cited in table 2.1 are in tables 2.2 through 2.4. While Customs' projects had several practice strengths in this KPA, the number and significance of their practice weaknesses mean that Customs' ability to manage software requirements is not repeatable. As a result, Customs is at risk of producing systems that fail to provide promised capabilities, and cost more and take longer than necessary. The purpose of software project planning is to establish reasonable plans for performing the software engineering and for managing the software project. 
According to the SW-CMM, a repeatable software project planning process, among other things, includes (1) documenting the software project plan, and preparing plans for software engineering facilities and support tools, (2) identifying the work products needed to establish and maintain control of the software project, (3) following a written organizational policy for planning a software project, (4) having a quality assurance group that reviews the activities and work products for software project planning and reports the results, (5) estimating the software project's efforts and costs, and estimating its critical computer resources according to a documented procedure, (6) making and using measurements to determine the status of planning activities, and (7) training personnel in software project planning and estimating. All of the projects that we evaluated had key practice strengths in this KPA. For example, all had strengths in (1) documenting a software project plan and preparing plans for the software engineering facilities and support tools needed to develop the software and (2) identifying the work products needed to control the software project. NCAP 0.1, in particular, had many additional practice strengths. However, many significant practice weaknesses were found in all three projects. None of the projects followed an organizational software project planning policy, and none had a quality assurance group conducting reviews and/or audits. As a result, the projects performed these practices differently and inconsistently, and controls were unreliable. For example, while the NCAP 0.1 project followed a documented procedure for estimating the size of software work products (or changes to the size of work products), and made and used measurements to determine the status of software planning activities, neither of the other two projects performed these practices and none of the projects had personnel trained in software project planning and estimating. Such project planning weaknesses mean that management has no assurance that it will get the consistent, complete, and reliable information about the projects' expected costs and schedules needed to make expeditious and informed investment decisions. Table 3.1 provides a comprehensive list of the three projects' strengths, weaknesses, and observations for the software project planning KPA. The specific findings supporting the practice ratings cited in table 3.1 are in tables 3.2 through 3.4. Effective planning is the cornerstone of successful software development project management. While Customs showed some strengths in this KPA, its many weaknesses render its software project planning processes unrepeatable. Therefore, Customs has no assurance that the projects are effectively establishing plans, including reliable projections of costs and schedules, and effectively measuring and monitoring progress and taking needed corrective actions expeditiously. The purpose of software project tracking and oversight is to provide adequate visibility into the progress of the software development so that management can act effectively when the software project's performance deviates significantly from the software plans. Software project tracking and oversight involves tracking and reviewing the software accomplishments and results against documented estimates, commitments, and plans, and adjusting these plans based on the actual accomplishments and results. 
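In practice, the tracking this KPA calls for amounts to comparing actual cost and schedule against the documented estimates and flagging significant deviations for management attention. The following Python sketch illustrates that comparison; the milestone names, dollar figures, and 10-percent threshold are hypothetical assumptions chosen for the example, not Customs data or SEI guidance.

```python
from dataclasses import dataclass

@dataclass
class MilestoneStatus:
    name: str
    planned_cost: float   # estimated cost from the software development plan
    actual_cost: float    # cost reported to date
    planned_weeks: float  # planned schedule duration
    actual_weeks: float   # actual duration to date

def flag_deviations(milestones, threshold=0.10):
    """Return milestones whose cost or schedule deviates from plan by more
    than the given threshold, so management can take corrective action."""
    flagged = []
    for m in milestones:
        cost_var = (m.actual_cost - m.planned_cost) / m.planned_cost
        sched_var = (m.actual_weeks - m.planned_weeks) / m.planned_weeks
        if abs(cost_var) > threshold or abs(sched_var) > threshold:
            flagged.append((m.name, round(cost_var, 2), round(sched_var, 2)))
    return flagged

# Example: one milestone is 25 percent over its cost estimate and is flagged.
plan = [
    MilestoneStatus("design complete", 100_000, 125_000, 12, 13),
    MilestoneStatus("code complete", 200_000, 195_000, 20, 20),
]
print(flag_deviations(plan))  # [('design complete', 0.25, 0.08)]
```

The point of the sketch is only that deviations become visible when actuals are recorded against documented estimates; without the planning practices described next, there is nothing reliable to compare against.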
According to the SW-CMM, effective software project tracking and oversight, among other things, includes (1) designating a project software manager to be responsible for the project's software activities and results, (2) having a documented software development plan for tracking software activities and communicating status, (3) following a written organizational policy for managing the project, (4) conducting periodic internal reviews to track technical progress, plans, performance, and issues against the software development plan, (5) tracking the software risks associated with the cost, resource, schedule, and technical aspects of the project, (6) explicitly assigning responsibility for software work products and activities, (7) tracking the sizes of the software work products (or sizes of the changes to the software work products) and taking corrective actions as necessary, and (8) periodically reviewing the activities for software project tracking and oversight with senior management. The projects evaluated exhibited some software project tracking and oversight practice strengths. For example, all three of the projects had a project software manager designated to be responsible for the project's software activities and results, and all had a documented software development plan for tracking software activities and communicating status. Also, NCAP 0.1 had strengths in all but five of this KPA's 24 key practices. However, the three projects collectively had many weaknesses, and these weaknesses, including the five for NCAP 0.1, were significant and thus preclude Customs from meeting SEI's repeatable maturity level criteria. For example, none of the projects followed a written organizational policy for managing the software project. With no established policy, Customs increases the risk that key tracking and oversight activities will not be performed effectively. For example, for two of the three projects, the project managers did not (1) conduct periodic internal reviews to track technical progress, plans, performance, and issues against the software development plan, (2) track software risks associated with cost, resource, schedule, and technical aspects of the project, (3) explicitly assign responsibility to individuals for software work products and activities, (4) track the sizes of the software work products (or sizes of the changes to the software work products) and take corrective actions, or (5) periodically review software project tracking and oversight activities with senior management. Table 4.1 provides a comprehensive list of the three projects' strengths, weaknesses, and observations for the software project tracking and oversight KPA. The specific findings supporting the practice ratings cited in table 4.1 are in tables 4.2 through 4.4. Despite several practice strengths in this KPA, the number and significance of the practice weaknesses that we found mean that Customs' current process for tracking and overseeing its projects is not repeatable, thereby increasing the chances of its software projects being late, costing more than expected, and not performing as intended. The purpose of software quality assurance is to independently review and audit the software products and activities to verify that they comply with the applicable procedures and standards and to provide the software project and higher-level managers with the results of these independent reviews and audits. 
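Software quality assurance, as described above, reduces in its simplest form to checking work products against documented standards and reporting the deviations to project and higher-level managers. The fragment below is a minimal illustration of that idea; the standard names and the sample work product are hypothetical assumptions for this example and are not drawn from Customs' procedures or SEI's materials.

```python
# Illustrative only: a minimal compliance check of work products against a
# project's documented standards.

STANDARDS = {
    "has_peer_review_record": "Work product must have a recorded peer review",
    "follows_naming_convention": "Work product must follow the naming convention",
    "has_requirements_trace": "Work product must trace to approved requirements",
}

def audit(work_product: dict) -> list[str]:
    """Return the list of deviations for one work product."""
    return [desc for key, desc in STANDARDS.items() if not work_product.get(key, False)]

def report(work_products: list[dict]) -> None:
    """Summarize deviations so project and higher-level managers see the results."""
    for wp in work_products:
        deviations = audit(wp)
        status = "compliant" if not deviations else f"{len(deviations)} deviation(s)"
        print(f"{wp['name']}: {status}")
        for d in deviations:
            print(f"  - {d}")

report([
    {"name": "AES design spec", "has_peer_review_record": True,
     "follows_naming_convention": True, "has_requirements_trace": False},
])
```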
According to the SW-CMM, a repeatable software quality assurance process, among other things, includes (1) preparing a software quality assurance plan for the project according to a documented procedure, (2) having a written organizational policy for implementing software quality assurance, (3) conducting audits of designated work processes and products to verify compliance, (4) documenting deviations identified in the software activities and software work products and handling them according to a documented procedure, and (5) having experts independent of the software quality assurance group periodically review the activities and work products of the project's software quality assurance group. All of the projects evaluated had extensive and significant software quality assurance practice weaknesses. For example, two of the projects did not have a software quality assurance plan; and none of the projects (1) had a written organizational policy for implementing software quality assurance, (2) conducted audits of designated work products to verify compliance, (3) documented deviations identified in the software activities and software work products and handled them according to a documented procedure, or (4) had experts independent of the software quality assurance group periodically review the group's work products. In fact, only one of the projects, AES, had any software quality assurance practice strengths, and these strengths were limited to only a few practices. In this case, the project had assigned responsibility for software quality assurance to a single individual and, for example, a software quality assurance plan had been drafted, although not according to a documented procedure. This virtual absence of software quality assurance on Customs' software projects increases greatly the risk of software process and product standards not being met, which in turn increases the risk of software not performing as intended, and costing more and taking longer to develop than necessary. Table 5.1 provides a comprehensive list of the three projects' strengths, weaknesses, and observations for the software quality assurance KPA. The specific findings supporting the practice ratings cited in table 5.1 are in tables 5.2 through 5.4. Customs' software quality assurance process has many weaknesses and is, therefore, undefined and undisciplined. As a result, Customs cannot provide management with independent information about adherence to software process and product standards. To develop and maintain software effectively, Customs must adopt a structured and rigorous approach to software quality assurance. The purpose of software configuration management is to establish and maintain the integrity of the products of the software project throughout the project's software life-cycle. Software configuration management involves establishing product baselines and systematically controlling changes to them. 
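The baseline-and-change-control idea at the heart of configuration management can be sketched simply: items enter a baseline, and later changes are applied only through a recorded approval. The following Python sketch is illustrative only; the item names, the approval rule, and the library structure are assumptions made for the example, not Customs' actual library system or SEI's prescribed design.

```python
# Illustrative sketch of baseline control: items enter the baseline, and every
# later change is recorded and released only through an approval step.

class BaselineLibrary:
    def __init__(self):
        self.items = {}       # configuration item -> current version
        self.change_log = []  # (item, new_version, disposition)

    def establish(self, item: str, version: str) -> None:
        self.items[item] = version

    def change(self, item: str, new_version: str, approved_by: str | None) -> bool:
        """Apply a change only if it carries an approval; record it either way."""
        if approved_by is None:
            self.change_log.append((item, new_version, "REJECTED: no approval"))
            return False
        self.items[item] = new_version
        self.change_log.append((item, new_version, approved_by))
        return True

lib = BaselineLibrary()
lib.establish("NCAP 0.1 build", "1.0")
lib.change("NCAP 0.1 build", "1.1", approved_by="configuration control board")
lib.change("NCAP 0.1 build", "1.2", approved_by=None)  # rejected, baseline stays at 1.1
print(lib.items, len(lib.change_log))
```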
According to the SW-CMM, a repeatable software configuration management process, among other things, includes (1) preparing a software configuration management plan according to a documented procedure, (2) establishing a configuration management library system as a repository for the software baselines, (3) identifying software work products to be placed under configuration management, (4) controlling the release of products from the software baseline library according to a documented procedure, (5) following a written organizational policy for implementing software configuration management, (6) recording the status of configuration items/units according to a documented procedure, (7) making and using measurements to determine the status of the software configuration management activities, and (8) reviewing software configuration management activities with senior management on a periodic basis. Customs' processes for software configuration management show strengths in several activities. For example, all three projects had developed software configuration management plans according to a documented procedure. Also, two of the projects (NCAP 0.1 and AES) established configuration management library systems as repositories for the software baselines, identified software work products to be placed under configuration management, and controlled the release of products from the software baseline library according to a documented procedure. However, the projects had many practice weaknesses that collectively jeopardize Customs' ability to maintain the integrity of the projects' software products. For example, none of the projects had a written organizational policy for implementing software configuration management, and none had documented procedures for recording the status of configuration items (e.g., code, documents). Moreover, none of the projects made or used measurements to determine the status of the software configuration management activities, or reviewed software configuration management activities with senior management on a periodic basis. Table 6.1 provides a comprehensive list of the three projects' strengths and weaknesses for the software configuration management KPA. The specific findings supporting the practice ratings cited in table 6.1 are in tables 6.2 through 6.4. Customs has many configuration management process weaknesses, and thus its capability to establish and maintain the integrity of its wide range of software products is nonrepeatable and ineffective. Without a mature configuration management process, Customs can lose control of the current software product baseline, potentially producing and using inconsistent product versions and creating operational problems. To consistently develop software with specified functionality on time and within budget, Customs must improve its software development processes. According to SEI, an effective process improvement program includes (1) establishing a process improvement management structure, (2) developing a process improvement plan, (3) determining the organization's baseline capability and using this as a basis for targeting process initiatives, and (4) dedicating adequate resources for implementing the plan. Although Customs has attempted in the past to initiate and sustain process improvement activities, these activities were terminated without having improved Customs' processes. Currently, Customs has no software process improvement program. In 1996, SEI published a software process improvement model, called IDEAL. 
This model has five phases: Initiating, Diagnosing, Establishing, Acting, and Leveraging--IDEAL. Each of the phases is summarized below. Initiating phase: During this phase, an organization establishes the management structure of the process improvement program, defines and assigns roles and responsibilities, allocates initial resources, develops a plan to guide the organization through the first three phases of the program, and obtains management approval and funding for the program. Two key organizational components of the program management structure established during this phase are a management steering group and a software engineering process group (SEPG). Responsibility for this phase rests with senior management. Diagnosing phase: During this phase, the SEPG appraises the current level of software process maturity to establish a baseline of the organization's process capability, and identifies any ongoing process improvement initiatives. The SEPG then uses the baseline to identify weaknesses and target process improvement activities. It also compares these targeted activities with any ongoing process improvement activities and reconciles any differences. Responsibility for this phase rests primarily with line managers and practitioners. Establishing phase: During this phase, the SEPG prioritizes the software process improvement activities and develops strategies for pursuing them. It then develops a process improvement action plan that details the activities and strategies and includes measurable goals for the activities and metrics for monitoring progress against the goals. Also during this phase, the resources needed to implement the plan are committed and training is provided for SEPG's technical working groups, who will be responsible for developing and testing new or improved processes. Responsibility for this phase resides primarily with line managers and practitioners. Acting phase: In this phase, the work groups create and evaluate new and improved processes. Evaluation of the processes is based on pilot tests that are formally planned and executed. If the pilots are successful, the work groups develop plans for organization-wide adoption and institutionalization, and once approved, execute them. Responsibility for this phase resides primarily with line managers and practitioners. Leveraging phase: During this phase, results and lessons learned from earlier phases are assessed and applied, as appropriate, to enhance the process improvement program's structure and plans. Responsibility for this phase rests primarily with senior management. In 1996, Customs initiated some limited software process improvement activities. Specifically, it hired a contractor to develop a process improvement plan, which was completed in September 1996. According to the plan, Customs was to reach CMM level 2 process maturity (the repeatable level) by 1998 and CMM level 3 (the defined level) by 2002. Customs began limited implementation of the plan in May of 1997, when it established process improvement teams for two KPAs--software project planning and project tracking and oversight. Generally, the teams were tasked with defining, implementing, and maintaining CMM-based processes for their respective KPAs. Customs did not staff or fund any other KPA improvement activities at this time. In August 1997, Customs discontinued all process improvement activities. Customs officials stated that this decision was based on the need to focus staff and resources on the agency's Year 2000 conversion program. 
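The phase structure just described lends itself to a compact restatement. The sketch below is illustrative only; the phase names and responsibilities paraphrase the SEI model as summarized above, while the data-structure representation is an assumption of this example, not part of SEI's materials or Customs' practices.

```python
# Compact restatement of the IDEAL phases described above, for reference only.

IDEAL_PHASES = [
    ("Initiating", "senior management",
     "establish program structure, roles, initial resources, plan, and approval"),
    ("Diagnosing", "line managers and practitioners",
     "appraise process maturity, baseline capability, and target improvements"),
    ("Establishing", "line managers and practitioners",
     "prioritize activities; develop an action plan with measurable goals and metrics"),
    ("Acting", "line managers and practitioners",
     "create, pilot test, and institutionalize new or improved processes"),
    ("Leveraging", "senior management",
     "apply results and lessons learned to improve the program itself"),
]

for phase, owner, summary in IDEAL_PHASES:
    print(f"{phase:<12} ({owner}): {summary}")
```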
Currently, Customs does not have a software development process improvement program, and it has not taken steps to initiate one. Although it has assigned two people part-time to process improvement, it has not assigned organizational responsibility and authority, established a program management structure, developed a plan of action, or committed the resources needed (trained staff and funding) to execute the plan. Customs does not have an effective software development process improvement program. As a result, it cannot expect to improve its immature software development processes. Customs develops and maintains software for systems that are critical to its ability to fulfill its mission. However, its software development processes are ad hoc and sometimes chaotic, and are not repeatable even on a project-by-project basis. As a result, Customs' success or failure in developing software depends largely on specific individuals, rather than on well-defined and disciplined software management practices. This greatly reduces the probability that its software projects, whether new developments or maintenance of existing software, will consistently perform as intended and be delivered on schedule and within budget. For Customs' software projects to mature beyond this initial level, the agency must implement basic management controls and instill self-discipline in its software projects. Customs acknowledges the importance of software process maturity and the need to improve its software development processes. However, it does not have a program for improving its software development processes and has not begun to establish one. Until it does, Customs has no assurance that its large investment in software development and maintenance will produce systems that perform needed functions, on time, and within budget. We recommend that, after ensuring that its mission-critical systems are Year 2000 compliant but before investing in major software development efforts like ACE, the Commissioner of Customs direct the Chief Information Officer to assign responsibility and authority for software development process improvement; develop and implement a formal plan for software development process improvement that is based on the software capability evaluation results contained in this report and specifies measurable goals and time frames, prioritizes initiatives, estimates resource requirements (trained staff and funding), and defines a process improvement management structure; ensure that every new software development effort in Customs adopts processes that satisfy at least SW-CMM level 2 requirements; and ensure that process improvement activities are initiated for all ongoing essential software maintenance projects. In its written comments on a draft of this report, Customs acknowledged the importance of software process improvement and maturity. Also, it agreed with GAO's overall findings, including that Customs' software development processes have not attained SW-CMM level 2 maturity. To address these weaknesses, Customs stated that it has taken the first step toward implementing our recommendations by assigning responsibility and authority for software process improvement as part of a reorganization of its Office of Information and Technology, which Customs stated will be implemented in early 1999. 
Customs further stated that once the reorganization is implemented, a formal software process improvement program will be established, and that this program will include definition of an action plan, commitment of resources, and specification of goals for achieving CMM levels 2 and 3. According to Customs, these improvement activities are in their early stages. When they are successfully implemented, they should address many of our recommendations. Customs also stated that because its legacy systems are aging and need to be enhanced and replaced, software process improvement must occur in parallel with continued software development investments. History has shown that attempting to modernize without first instituting disciplined software processes has been a characteristic of failed modernization programs. Until it implements disciplined software processes (i.e., at least level 2 process maturity), Customs cannot prudently manage major system investments, such as ACE with an estimated life cycle cost exceeding $1 billion. Customs' comments also included a request to meet with us to discuss system-specific KPA practice strength and weakness determinations. We met prior to requesting comments on a draft of this report and then again on January 12, 1999, to discuss SEI's SW-CMM requirements and the basis for our determinations. We are prepared to continue assisting Customs as it improves its software processes. Appendix I provides the full text of Customs' comments and our responses to additional Customs comments not discussed above.
Pursuant to a congressional request, GAO reviewed the Customs Service's software development maturity and improvement activities, focusing on: (1) the maturity of Customs' software development processes; and (2) whether Customs has an effective software process improvement program. GAO noted that: (1) because of the number and severity of Customs' software development process weaknesses, Customs did not fully satisfy any of the key process areas (KPA) necessary to achieve the repeatable level of process maturity; (2) as a result, its processes for developing software, a complex and expensive component of Customs' systems, are ad hoc, sometimes chaotic, and not repeatable across projects; (3) Customs had some practice strengths in all but one of the five KPAs evaluated (i.e., requirements management, software project planning, software project tracking and oversight, software quality assurance, and software configuration management); however, GAO also found extensive and significant weaknesses in each of these KPAs; (4) some of these weaknesses were systemic, recurring in each of the KPAs; (5) for example, Customs had no written policy for managing or implementing any of the KPAs; (6) none of the projects had: (a) an approved quality assurance plan; (b) documented procedures for determining the project cost, schedule, or effort; or (c) any outside group reviewing or reporting on the project's compliance with defined processes; (7) these weaknesses are some of the reasons for Customs' limited success, for example, in delivering promised Automated Commercial Environment (ACE) capabilities on time; (8) Customs does not have a software development process improvement program, and it has not taken the basic steps to initiate one; (9) these steps, many of which are described in Software Engineering Institute's initiating, diagnosing, establishing, acting, and leveraging model for process improvement, include assigning responsibility and authority for process improvement, establishing a process improvement management structure, defining a plan of action, and committing needed resources; and (10) until Customs establishes an effective process improvement program, its software processes will remain poorly defined and undisciplined, and its software projects are likely to suffer cost, schedule, and performance shortfalls.
Because of such emergencies as natural disasters, hazardous material spills, and riots, all levels of government have had some experience in preparing for different types of disasters and emergencies. Preparing for all potential hazards is commonly referred to as the "all-hazards" approach. While terrorism is a component within an all-hazards approach, terrorist attacks potentially impose a new level of fiscal, economic, and social dislocation within this nation's boundaries. Given the specialized resources that are necessary to address a chemical or biological attack, the range of governmental services that could be affected, and the vital role played by private entities in preparing for and mitigating risks, state and local resources alone will likely be insufficient to meet the terrorist threat. Some of these specific challenges can be seen in the area of bioterrorism. For example, a biological agent released covertly might not be recognized for a week or more because symptoms may only appear several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics or vaccines to large segments of the population to prevent the spread of an infectious disease. The resources necessary to undertake these responses are generally beyond state and local capabilities and would require assistance from and close coordination with the federal government. National preparedness is a complex mission that involves a broad range of functions performed throughout government, including national defense, law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. While only the federal government is empowered to wage war and regulate interstate commerce, state and local governments have historically assumed primary responsibility for managing emergencies through police, fire-fighting, and emergency medical personnel. The federal government's role in responding to major disasters is generally defined in the Stafford Act, which requires a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively before major disaster or emergency assistance from the federal government is warranted. Once a disaster is declared, the federal government--through the Federal Emergency Management Agency (FEMA)--may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. There has been an increasing emphasis over the past decade on preparedness for terrorist events. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. 
Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including those in the Department of Justice, the Department of Energy, FEMA, and the Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. The attacks of September 11, 2001, as well as the subsequent attempts to contaminate Americans with anthrax, dramatically exposed the nation's vulnerabilities to domestic terrorism and prompted numerous legislative proposals to further strengthen our preparedness and response. During the first session of the 107th Congress, several bills were introduced with provisions relating to state and local preparedness. For instance, the Preparedness Against Domestic Terrorism Act of 2001, which you cosponsored, Mr. Chairman, proposes the establishment of a Council on Domestic Preparedness to enhance the capabilities of state and local emergency preparedness and response. The funding for homeland security increased substantially after the attacks. According to documents supporting the president's fiscal year 2003 budget request, about $19.5 billion in federal funding for homeland security was enacted in fiscal year 2002. The Congress added to this amount by passing an emergency supplemental appropriation of $40 billion. According to the budget request documents, about one-quarter of that amount, nearly $9.8 billion, was dedicated to strengthening our defenses at home, resulting in an increase in total federal funding on homeland security of about 50 percent, to $29.3 billion. Table 1 compares fiscal year 2002 funding for homeland security by major categories with the president's proposal for fiscal year 2003. We have tracked and analyzed federal programs to combat terrorism for many years and have repeatedly called for the development of a national strategy for preparedness. We have not been alone in this message; for instance, national commissions, such as the Gilmore Commission, and other national associations, such as the National Emergency Management Association and the National Governors Association, have advocated the establishment of a national preparedness strategy. The attorney general's Five-Year Interagency Counterterrorism and Technology Crime Plan, issued in December 1998, represents one attempt to develop a national strategy on combating terrorism. This plan entailed a substantial interagency effort and could potentially serve as a basis for a national preparedness strategy. However, we found it lacking in two critical elements necessary for an effective strategy: (1) measurable outcomes and (2) identification of state and local government roles in responding to a terrorist attack. In October 2001, the president established the Office of Homeland Security as a focal point with a mission to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. 
While this action represents a potentially significant step, the role and effectiveness of the Office of Homeland Security in setting priorities, interacting with agencies on program development and implementation, and developing and enforcing overall federal policy in terrorism-related activities are still in their formative stages. The emphasis needs to be on a national rather than a purely federal strategy. We have long advocated the involvement of state, local, and private-sector stakeholders in a collaborative effort to arrive at national goals. The success of a national preparedness strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. To develop this essential national strategy, the federal role needs to be considered in relation to other levels of government, the goals and objectives for preparedness, and the most appropriate tools to assist and enable other levels of government and the private sector to achieve these goals. Although the federal government appears monolithic to many, in the area of terrorism prevention and response, it has been anything but. More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 federal entities have a role in bioterrorism alone. One of the areas that the Office of Homeland Security will be reviewing is the coordination among federal agencies and programs. Concerns about coordination and fragmentation in federal preparedness efforts are well founded. Our past work, conducted prior to the creation of the Office of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. There had been no single leader in charge of the many terrorism-related functions conducted by different federal departments and agencies. In fact, several agencies had been assigned leadership and coordination functions, including the Department of Justice, the Federal Bureau of Investigation, FEMA, and the Office of Management and Budget. We previously reported that officials from a number of agencies that combat terrorism believe that the coordination roles of these various agencies are not always clear. The recent Gilmore Commission report expressed similar concerns, concluding that the current coordination structure does not provide the discipline necessary among the federal agencies involved. In the past, the absence of a central focal point resulted in two major problems. The first of these is a lack of a cohesive effort from within the federal government. For example, the Department of Agriculture, the Food and Drug Administration, and the Department of Transportation have been overlooked in bioterrorism-related policy and planning, even though these organizations would play key roles in response to terrorist acts. In this regard, the Department of Agriculture has been given key responsibilities to carry out in the event that terrorists were to target the nation's food supply, but the agency was not consulted in the development of the federal policy assigning it that role. Similarly, the Food and Drug Administration was involved with issues associated with the National Pharmaceutical Stockpile, but it was not involved in the selection of all items procured for the stockpile. 
Further, the Department of Transportation has responsibility for delivering supplies under the Federal Response Plan, but it was not brought into the planning process and consequently did not learn the extent of its responsibilities until its involvement in subsequent exercises. Second, the lack of leadership has resulted in the federal government's development of programs to assist state and local governments that were similar and potentially duplicative. After the terrorist attack on the federal building in Oklahoma City, the federal government created additional programs that were not well coordinated. For example, FEMA, the Department of Justice, the Centers for Disease Control and Prevention, and the Department of Health and Human Services all offer separate assistance to state and local governments in planning for emergencies. Additionally, a number of these agencies also condition receipt of funds on completion of distinct but overlapping plans. Although the many federal assistance programs vary somewhat in their target audiences, the potential redundancy of these federal efforts warrants scrutiny. In this regard, we recommended in September 2001 that the president work with the Congress to consolidate some of the activities of the Department of Justice's Office for State and Local Domestic Preparedness Support under FEMA. State and local response organizations believe that federal programs designed to improve preparedness are not well synchronized or organized. They have repeatedly asked for a one-stop "clearinghouse" for federal assistance. As state and local officials have noted, the multiplicity of programs can lead to confusion at the state and local levels and can expend precious federal resources unnecessarily or make it difficult for them to identify available federal preparedness resources. As the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds and have argued that the application process is burdensome and inconsistent among federal agencies. Although the federal government can assign roles to federal agencies under a national preparedness strategy, it will also need to reach consensus with other levels of government and with the private sector about their respective roles. Clearly defining the appropriate roles of government may be difficult because, depending upon the type of incident and the phase of a given event, the specific roles of local, state and federal governments and of the private sector may not be separate and distinct. A new warning system, the Homeland Security Advisory System, is intended to tailor notification of the appropriate level of vigilance, preparedness and readiness in a series of graduated threat conditions. The Office of Homeland Security announced the new warning system on March 12, 2002. The new warning system includes five levels of alert for assessing the threat of possible terrorist attacks: low, guarded, elevated, high and severe. These levels are also represented by five corresponding colors: green, blue, yellow, orange, and red. When the announcement was made, the nation stood in the yellow condition, in elevated risk. The warning can be upgraded for the entire country or for specific regions and economic sectors, such as the nuclear industry. The system is intended to address a problem with the previous blanket warning system that was used. 
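For reference, the five threat conditions and their colors can be stated as a simple lookup. The Python sketch below restates the levels described above; the escalation helper is an added assumption for illustration and is not part of the announced system.

```python
# Illustrative mapping of the five Homeland Security Advisory System threat
# conditions to their colors, as described above.

THREAT_CONDITIONS = [
    ("low", "green"),
    ("guarded", "blue"),
    ("elevated", "yellow"),
    ("high", "orange"),
    ("severe", "red"),
]

def color_for(condition: str) -> str:
    return dict(THREAT_CONDITIONS)[condition]

def is_escalation(current: str, new: str) -> bool:
    """True if the new condition is more severe than the current one (assumption
    added for the example; not part of the announced system)."""
    order = [c for c, _ in THREAT_CONDITIONS]
    return order.index(new) > order.index(current)

print(color_for("elevated"))              # yellow, the condition at announcement
print(is_escalation("elevated", "high"))  # True
```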
After September 11th, the federal government issued four general warnings about possible terrorist attacks, directing federal and local law enforcement agencies to place themselves on the "highest alert." However, government and law enforcement officials, particularly at the state and local levels, complained that general warnings were too vague and a drain on resources. To obtain views on the new warning system from all levels of government, law enforcement, and the public, the Attorney General, who will be responsible for the system, provided a 45-day comment period from the announcement of the new system on March 12th. This provides an opportunity for state and local governments as well as the private sector to comment on the usefulness of the new warning system and the appropriateness of the five threat conditions with associated suggested protective measures. Numerous discussions have been held about the need to enhance the nation's preparedness, but national preparedness goals and measurable performance indicators have not yet been developed. These are critical components for assessing program results. In addition, the capability of state and local governments to respond to catastrophic terrorist attacks is uncertain. At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures; (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments; (3) ensuring that goals at lower organizational levels align with and support general goals; and (4) identifying the resources that will be required to achieve the goals. A former assistant professor of public policy at the Kennedy School of Government, now the senior director for policy and plans with the Office of Homeland Security, noted in a December 2000 paper that a preparedness program lacking broad but measurable objectives is unsustainable because it deprives policymakers of the information they need to make rational resource allocations and prevents program managers from measuring progress. He recommended that the government develop a new statistical index of preparedness, incorporating a range of different variables, such as quantitative measures for special equipment, training programs, and medicines, as well as professional subjective assessments of the quality of local response capabilities, infrastructure, plans, readiness, and performance in exercises. Therefore, he advocated that the index should go well beyond the current rudimentary milestones of program implementation, such as the amount of training and equipment provided to individual cities. The index should strive to capture indicators of how well a particular city or region could actually respond to a serious terrorist event. 
This type of index, according to this expert, would then allow the government to measure the preparedness of different parts of the country in a consistent and comparable way, providing a reasonable baseline against which to measure progress. In October 2001, FEMA's director recognized that assessments of state and local capabilities have to be viewed in terms of the level of preparedness being sought and what measurement should be used for preparedness. The director noted that the federal government should not provide funding without assessing what the funds will accomplish. Moreover, the president's fiscal year 2003 budget request for $3.5 billion through FEMA for first responders--local police, firefighters, and emergency medical professionals--provides that these funds be accompanied by a process for evaluating the effort to build response capabilities, in order to validate that effort and direct future resources. FEMA has developed an assessment tool that could be used in developing performance and accountability measures for a national strategy. To ensure that states are adequately prepared for a terrorist attack, FEMA was directed by the Senate Committee on Appropriations to assess states' response capabilities. In response, FEMA developed a self-assessment tool--the Capability Assessment for Readiness (CAR)--that focuses on 13 key emergency management functions, including hazard identification and risk assessment, hazard mitigation, and resource management. However, these key emergency management functions do not specifically address public health issues. In its fiscal year 2001 CAR report, FEMA concluded that states were only marginally capable of responding to a terrorist event involving a weapon of mass destruction. Moreover, the president's fiscal year 2003 budget proposal acknowledges that our capabilities for responding to a terrorist attack vary widely across the country. Many areas have little or no capability to respond to a terrorist attack that uses weapons of mass destruction. The budget proposal further adds that even the best prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face. Proposed standards for state and local emergency management programs have been developed by a consortium of emergency managers from all levels of government and are currently being pilot tested through the Emergency Management Accreditation Program at the state and local levels. The accreditation program's purpose is to establish minimum acceptable performance criteria by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. For example, one such standard requires that (1) the program develop the capability to direct, control, and coordinate response and recovery operations, (2) an incident management system be utilized, and (3) organizational roles and responsibilities be identified in the emergency operational plans. Although FEMA has experience in working with others in the development of assessment tools, it has had difficulty in measuring program performance. As the president's fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. The agency performs less well in its oversight role of ensuring the effective use of such assistance. 
Further, the agency has not been effective in linking resources to performance information. FEMA's Office of Inspector General has found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs. Since September 11th, many state and local governments have faced declining revenues and increased security costs. A survey of about 400 cities conducted by the National League of Cities reported that since September 11th, one in three American cities has seen its local economy, municipal revenues, and public confidence decline while public-safety spending has increased. Further, the National Governors Association estimates fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it increasingly difficult for the states to take on expensive, new homeland security initiatives without federal assistance. State and local revenue shortfalls coupled with increasing demands on resources make it more critical that federal programs be designed carefully to match the priorities and needs of all partners--federal, state, local, and private. Our previous work on federal programs suggests that the choice and design of policy tools have important consequences for performance and accountability. Governments have at their disposal a variety of policy instruments, such as grants, regulations, tax incentives, and regional coordination and partnerships, that they can use to motivate or mandate other levels of government and private-sector entities to take actions to address security concerns. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. Key to the national effort will be determining the appropriate level of funding so that policies and tools can be designed and targeted to elicit a prompt, adequate, and sustainable response while also protecting against federal funds being used to substitute for spending that would have occurred anyway. The federal government often uses grants to state and local governments as a means of delivering federal programs. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad national purpose and provide a great deal of discretion to state and local officials. Either type of grant can be designed to (1) target the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as "supplantation," with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. More specifically: Targeting: The formula for the distribution of any new grant could be based on several considerations, including the state or local government's capacity to respond to a disaster. This capacity depends on several factors, the most important of which perhaps is the underlying strength of the state's tax base and whether that base is expanding or is in decline. 
In an August 2001 report on disaster assistance, we recommended that the director of FEMA consider replacing the per-capita measure of state capability with a more sensitive measure, such as the amount of a state's total taxable resources, to assess the capabilities of state and local governments to respond to a disaster. Other key considerations include the level of need and the costs of preparedness. Maintenance of effort: In our earlier work, we found that substitution is to be expected in any grant and that, on average, every additional federal grant dollar results in about 60 cents of supplantation. We found that supplantation is particularly likely for block grants supporting areas with prior state and local involvement. Our recent work on the Temporary Assistance for Needy Families block grant found that a strong maintenance-of-effort provision limits states' ability to supplant. Recipients can be penalized for not meeting a maintenance-of-effort requirement. Balance accountability and flexibility: Experience with block grants shows that such programs are sustainable if they are accompanied by sufficient information and accountability for national outcomes to enable them to compete for funding in the congressional appropriations process. Accountability can be established for measured results and outcomes, permitting greater flexibility in how funds are used while at the same time ensuring some national oversight. Grants previously have been used for enhancing preparedness, and recent proposals direct new funding to local governments. In recent discussions, local officials expressed their view that federal grants would be more effective if local officials were allowed more flexibility in the use of funds. They have suggested that some funding should be allocated directly to local governments. They have expressed a preference for block grants, which would distribute funds directly to local governments for a variety of security-related expenses. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president's fiscal year 2003 budget, have included some of these provisions. This matching grant would be administered by FEMA, with 25 percent being distributed to the states based on population. The remainder would go to states for pass-through to local jurisdictions, also on a population basis, but states would be given the discretion to determine the boundaries of sub-state areas for such a pass-through--that is, a state could pass through the funds to a metropolitan area or to individual local governments within such an area. Although the state and local jurisdictions would have discretion to tailor the assistance to meet local needs, it is anticipated that more than one-third of the funds would be used to improve communications; an additional one-third would be used to equip state and local first responders; and the remainder would be used for training, planning, technical assistance, and administration. Federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, highways, water systems, and public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors (for example, for chemical and nuclear facilities). 
In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Five models of shared regulatory authority are: (1) fixed federal standards that preempt all state regulatory action in the subject area covered; (2) federal minimum standards that preempt less stringent state laws but permit states to establish standards that are more stringent than the federal standards; (3) inclusion of federal regulatory provisions not established through preemption in grants or other forms of assistance that states may choose to accept; (4) cooperative programs in which voluntary national standards are formulated by federal and state officials working together; and (5) widespread state adoption of voluntary standards formulated by quasi-official entities. Any one of these shared regulatory approaches could be used in designing standards for preparedness. The first two of these mechanisms involve federal preemption. The other three represent alternatives to preemption. Each mechanism offers different advantages and limitations that reflect some of the key considerations in the federal-state balance. To the extent that private entities will be called upon to improve security over dangerous materials or to protect vital assets, the federal government can use tax incentives to encourage and enforce their activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. Promoting partnerships between critical actors (including different levels of government and the private sector) helps maximize resources and also supports coordination on a regional level. Partnerships could encompass federal, state, and local governments working together to share information, develop communications technology, and provide mutual aid. The federal government may be able to offer state and local governments assistance in certain areas, such as risk management and intelligence sharing. In turn, state and local governments have much to offer in terms of knowledge of local vulnerabilities and resources, such as local law enforcement personnel, available to respond to threats in their communities. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, and be given the information needed to do so. As the United States Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation's overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S. 1615, H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information sharing needs. 
For instance, this proposed legislation provides that the Attorney General expeditiously grant security clearances to Governors who apply for them and to state and local officials who participate in federal counter-terrorism working groups or regional terrorism task forces. Local officials have emphasized the importance of regional coordination. Regional resources, such as equipment and expertise, are essential because of proximity, which allows for quick deployment, and experience in working within the region. Large-scale or labor-intensive incidents quickly deplete a given locality's supply of trained responders. Some cities have spread training and equipment to neighboring municipal areas so that their mutual aid partners can help. These partnerships afford economies of scale across a region. In events that require a quick response, such as a chemical attack, regional agreements take on greater importance because many local officials do not think that federal and state resources can arrive in sufficient time to help. Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in response to an emergency. Because individual jurisdictions may not have all the resources they need to respond to all types of emergencies, these agreements allow for resources to be deployed quickly within a region. The terms of mutual aid agreements vary for different services and different localities. These agreements may provide for the state to share services, personnel, supplies, and equipment with counties, towns, and municipalities within the state, with neighboring states, or, in the case of states bordering Canada, with jurisdictions in another country. Some of the agreements also provide for cooperative planning, training, and exercises in preparation for emergencies. Some of these agreements involve private companies and local military bases, as well as local government entities. Such agreements were in place for the three sites that were involved on September 11th--New York City, the Pentagon, and a rural area of Pennsylvania--and provide examples of some of the benefits of mutual aid agreements and of coordination within a region. With regard to regional planning and coordination, there may be federal programs that could provide models for funding proposals. In the 1962 Federal-Aid Highway Act, the federal government established a comprehensive cooperative process for transportation planning. This model of regional planning continues today under the Transportation Equity Act for the 21st Century (TEA-21) program, which succeeded the Intermodal Surface Transportation Efficiency Act (ISTEA). This model emphasizes the role of state and local officials in developing a plan to meet regional transportation needs. Metropolitan Planning Organizations (MPOs) coordinate the regional planning process and adopt a plan, which is then approved by the state. Mr. Chairman, in conclusion, as increasing demands are placed on budgets at all levels of government, it will be necessary to make sound choices to maintain fiscal stability. All levels of government and the private sector will have to communicate and cooperate effectively with each other across a broad range of issues to develop a national strategy to better target available resources to address urgent national preparedness needs.
Involving all levels of government and the private sector in developing the key aspects of a national strategy that I have discussed today--a definition and clarification of the appropriate roles and responsibilities, the establishment of goals and performance measures, and the selection of appropriate tools--is essential to the successful formulation of the national preparedness strategy and ultimately to preparing and defending our nation from terrorist attacks. This completes my prepared statement. I would be pleased to respond to any questions you or other members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-6787, Paul Posner at (202) 512-9573, or JayEtta Hecker at (202) 512-2834. Other key contributors to this testimony include Jack Burriesci, Matthew Ebert, Colin J. Fallon, Thomas James, Kristen Sullivan Massey, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA's Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation's Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD's Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Bioterrorism: The Centers for Disease Control and Prevention's Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
Federal, state, and local governments share responsibility for preparing for catastrophic terrorist attacks. Because the national security threat is diffuse and the challenge is highly intergovernmental, national policymakers must formulate strategies with a firm understanding of the interests, capacities, and challenges involved in addressing these issues. Key aspects of this strategy should include a definition and clarification of the appropriate roles and responsibilities of federal, state, and local entities. GAO has found fragmentation and overlap among federal assistance programs. More than 40 federal entities have roles in combating terrorism, and past federal efforts have resulted in a lack of accountability, a lack of cohesive effort, and program duplication. As state and local officials have noted, this situation has led to confusion, making it difficult to identify available federal preparedness resources and effectively partner with the federal government. Goals and performance measures should be established to guide the nation's preparedness efforts. For the nation's preparedness programs, however, such outcomes have yet to be defined. Given the recent and proposed increases in preparedness funding, establishing clear goals and performance measures is critical to ensuring real and meaningful improvements in preparedness and a fiscally responsible effort. The strategy should include a careful choice of the most appropriate tools of government to best achieve national goals. The choice and design of policy tools, such as grants, regulations, and partnerships, can enhance the government's capacity to (1) target areas of highest risk to better ensure that scarce federal resources address the most pressing needs, (2) promote shared responsibility by all parties, and (3) track and assess progress toward achieving national goals.
The traditional Medicare program does not have a comprehensive outpatient prescription drug benefit, but under part B (which covers physician and other outpatient services), it covers roughly 450 pharmaceutical products and biologicals. In 1999, spending for Medicare part B-covered prescription drugs totaled almost $4 billion. A small number of products accounts for the majority of Medicare spending and billing volume for part B drugs. In 1999, 35 drugs accounted for 82 percent of Medicare spending and 95 percent of the claims volume for these products. The 35 products included, among others, injectable drugs to treat cancer, inhalation therapy drugs, and oral immunosuppressive drugs (such as those used to treat organ transplant patients). The physician-billed drugs accounted for the largest share of program spending, while pharmacy supplier-billed drugs constituted the largest share of the billing volume. Three specialties--hematology/oncology, medical oncology, and urology--submitted claims for 80 percent of total physician billings for part B drugs. Two inhalation therapy drugs accounted for 88 percent of the Medicare billing volume for pharmacy-supplied drugs administered in a patient's residence. Medicare's payment for part B-covered drugs is based on the product's AWP, which is a price assigned by the product's manufacturer and may be neither "average" nor "wholesale." Instead, the AWP is often described as a "list price," "sticker price," or "suggested retail price." The term AWP is not defined in law or regulation, so the manufacturer is free to set an AWP at any level, regardless of the actual price paid by purchasers. Manufacturers periodically report AWPs to publishers of drug pricing data, such as the Medical Economics Company, Inc., which publishes the Red Book, and First Data Bank, which compiles the National Drug Data File. In paying claims, Medicare carriers use published AWPs to determine Medicare's payment amount, which is 95 percent of AWP. Thus, given the latitude manufacturers have in setting AWP, these payments may be unrelated to market prices that physicians and suppliers actually pay for the products. The actual price that providers pay for Medicare part B drugs is often not transparent. Physicians and suppliers may belong to group purchasing organizations (GPO) that pool the purchasing of multiple entities to negotiate prices with wholesalers or manufacturers. GPOs may negotiate different prices for different purchasers, such as physicians, suppliers, or hospitals. In addition, providers can purchase part B-covered drugs from general or specialty pharmaceutical wholesalers or can have direct purchase agreements with manufacturers. Certain practices involving these various entities can result in prices paid at the time of sale that do not reflect the final net cost to the purchaser. Manufacturers or wholesalers may offer purchasers rebates based on the volume of products purchased, not in a single sale but over a period of time. Manufacturers may also establish "chargeback" arrangements for end purchasers, which result in wholesalers' prices overstating what those purchasers pay. Under these arrangements, the purchaser negotiates a price with the manufacturer that is lower than the price the wholesaler charges for the product. The wholesaler provides the product to the purchaser for the lower negotiated price, and the manufacturer then pays the wholesaler the difference between the wholesale price and the negotiated price.
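To make these payment mechanics concrete, the short Python sketch below works through a hypothetical example: Medicare's payment is computed as 95 percent of AWP (the only rule taken from the discussion above), while the provider's net acquisition cost reflects a negotiated price under a chargeback arrangement. Every dollar amount in the sketch is invented for illustration and does not represent any actual drug.

    # Hypothetical illustration of an AWP-based Medicare payment versus a
    # provider's net acquisition cost under a chargeback arrangement.
    # Only the 95-percent-of-AWP payment rule comes from the text above;
    # all dollar figures are invented.

    awp = 100.00                    # manufacturer-assigned average wholesale price
    medicare_payment = 0.95 * awp   # Medicare pays 95 percent of AWP

    wholesaler_price = 80.00        # price the wholesaler nominally charges
    negotiated_price = 65.00        # lower price the purchaser negotiated with the manufacturer

    # Under a chargeback, the purchaser pays the negotiated price and the
    # manufacturer reimburses the wholesaler for the difference.
    chargeback = wholesaler_price - negotiated_price
    provider_net_cost = negotiated_price

    print(f"Medicare payment (95% of AWP): ${medicare_payment:.2f}")
    print(f"Provider net acquisition cost: ${provider_net_cost:.2f}")
    print(f"Chargeback paid by manufacturer to wholesaler: ${chargeback:.2f}")
    print(f"Provider margin over acquisition cost: ${medicare_payment - provider_net_cost:.2f}")

In this hypothetical case the provider's margin is $30 per unit, even though the wholesaler's list price alone would suggest a margin of only $15, which is why prices observed at the point of sale can overstate a provider's final net cost.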
For the part B-covered drugs accounting for the bulk of Medicare spending and claims, Medicare payments in 2001 were almost always considerably higher than wholesalers' prices that were widely available to physicians and suppliers. This was true regardless of whether the drugs had competing products or were available from a single manufacturer. Physicians who billed Medicare for relatively small quantities of these drugs also obtained similar prices. Our study shows that there can be wide disparities between a drug's estimated acquisition cost and Medicare's payment for that drug. Physician-billed drugs account for the bulk of Medicare spending on part B drugs. Of those billed by physicians, drugs used to treat cancer accounted for most of Medicare's expenditures. Specifically: Widely available discounts for 17 of the physician-billed drugs we examined averaged between 13 percent and 34 percent less than AWP. For two other physician-billed drugs, dolasetron mesylate and leucovorin calcium, average discounts were considerably larger--65 percent and 86 percent less than AWP. These discounted prices for physician-billed drugs, based on wholesaler and GPO catalogue prices, are notably lower than Medicare's payment, which reflects a discount of only 5 percent below AWP. The discounts indicate that Medicare's payments for these drugs were at least $532 million higher than providers' acquisition costs in 2000. Further, the discounts we report may only be the starting point for additional discounts provided to certain purchasers, as chargebacks, rebates, and other discounts may drive down the final sale price. Concerns have been expressed that small providers either could not or do not obtain such favorable prices. Therefore, we surveyed a sample of physicians who billed Medicare for low volumes of chemotherapy drugs to see if they were able to obtain similar discounts. All of the low-volume purchasers who responded to our survey reported obtaining discounts similar to or better than the widely available prices we had documented. More than one-third of these physicians reported belonging to GPOs and obtained the GPOs' substantial discounts, while others said they had contracts with manufacturers and wholesalers. As with physician-billed drugs, Medicare's payments for pharmacy supplier-billed drugs generally far exceeded the prices available to these suppliers. For the drugs we examined, Medicare's payments were at least $483 million more than what the suppliers paid in 2000. Further, the discounts we report were largest for products that could be obtained from more than one source. Inhalation therapy drugs administered through DME and oral immunosuppressive drugs represent most of the high-expenditure, high-volume drugs billed to Medicare by suppliers. Specifically: Two drugs, albuterol and ipratropium bromide, used with DME for respiratory conditions, account for most of the pharmacy-supplied drugs paid for by Medicare. In 2001, they were available to pharmacy suppliers at prices that averaged, respectively, 85 percent and 78 percent less than AWP. Other high-volume DME-administered drugs had prices averaging 69 percent and 72 percent less than AWP. These findings are consistent with prior studies of the prices of similar drugs. Two of the four high-volume oral immunosuppressives were available from wholesalers with average discounts of 14 percent and 77 percent.
Wholesale price information on the other two was not available, but retail prices from online pharmacies were as much as 13 percent and 8 percent below AWP. Medicare payment policies for administering or delivering a drug vary, depending on who provides the drug to the patient. Physicians are compensated directly for drug administration through the physician fee schedule. Pharmacy suppliers are compensated for dispensing inhalation therapy drugs used with a nebulizer, which make up the majority of their part B drug claims. No explicit payments are made to pharmacy suppliers for dispensing other drugs, but they may receive payments for equipment and supplies associated with DME-administered drugs. Both physicians and pharmacy suppliers contend that the excess in Medicare's payments for part B-covered drugs compensates for related service costs inadequately reimbursed or not explicitly covered at all. In prior work on the Medicare physician fee schedule, we concluded that the agency's basic method of computing practice expense payments to physicians was sound. The implementation of this fee schedule, however, has been controversial. The Congress required that payments be budget neutral relative to prior spending. Medicare's physician payments were, in the aggregate, seemingly adequate, as most physicians were participating in Medicare and accepting the program's fees as payment in full. Because of the budget neutrality requirement, if one specialty's fees increased on average, some others would have to decline. Such redistributions have occurred and some are significant. Oncologists, who represent the majority of physicians billing for drugs, argue that Medicare's payments for administering chemotherapy are inappropriately low and that the excess Medicare drug payments are needed to offset their losses. Yet oncology is one of the specialties to gain under the resource-based physician fee schedule. In our separate study on physicians' practice expenses under Medicare's fee schedule, we will show that payments to oncologists were 8 percent higher than they would have been if the prior charge-based payment method had been maintained; the study will also show that oncologists' payments relative to their estimated practice expenses, which include chemotherapy administration, were close to the average for all specialties. While oncologists do not appear disadvantaged overall under the fee schedule, adjustments HCFA made to the basic method of computing payments reduced fees for some oncologists' services. In those adjustments, HCFA modified the basic method in computing payments for services delivered without direct physician involvement, like much of chemotherapy administration. The modifications were intended to correct for perceived low payments for these services. While they increased payments for some of these services, they lowered them for many others. Moreover, they increased payments on average for services involving physicians. Oncology payments were particularly affected, as services without physician involvement constitute about one-third of oncologists' Medicare-billed services, compared to about 5 percent of all physician-billed services. Because of the modifications to the basic method, oncology practice expense payments for nonphysician chemotherapy administration were on average 15 percent lower, while payments for physician-administered services were 1 percent higher, than if HCFA had used the basic method.
Across all services, the modifications resulted in oncology practice expense payments that were 6 percent lower. Using the basic method for all services would eliminate these reductions and add about $31 million to oncology payments. Our study will recommend that CMS revert to the use of the basic methodology to determine practice expense payments for all services. We will also recommend that CMS address a data adjustment it made that affects oncology payments under the new fee schedule. The agency reduced oncology's reported supply expenses to keep from paying twice for drugs that are reimbursed separately by Medicare. Oncologists acknowledge that the supply expense estimate needed to be reduced, but argue that the reduction was too large. We have recommended that the agency develop the appropriate data to more accurately estimate oncology supply expenses. Substituting a supply expense estimate based on a methodology developed by the American Society of Clinical Oncology would raise practice expense payments an additional $20 million, if done in conjunction with our recommendation to use the basic method to calculate payments for all services. Oncologists have raised concerns about whether the data used to estimate their practice expenses constituted a representative sample of practices surveyed and whether these data reflect current practices in delivering services. How improvements in the data to estimate practice expenses may affect payment levels is uncertain. Payments are based on the differences in expenses of services of one specialty compared to those of others. Some of the data concerns raised by oncologists may apply to other specialties as well, so that additional and more current data may reveal that the relative cost of one service compared to others may have changed only modestly. We are conducting a separate study to determine how CMS can improve and update the information used to estimate specialties' practice expenses. Similar to the physicians who bill for part B drugs, pharmacy suppliers and their representatives contend that the margin on the Medicare drug payment is needed to compensate them for costs not covered by Medicare--that is, the clinical, administrative, and other labor costs associated with delivering the drug. These include costs for billing and collection; facility and employee accreditation; licensing and certifications; and providing printed patient education materials. Medicare pays a dispensing fee of $5.00 for inhalation therapy drugs used with a nebulizer, which are the vast majority of the pharmacy-supplied drugs. This fee was instituted in 1994. It is higher than dispensing fees paid by pharmacy benefit managers, which average around $2.00, and is comparable to many state Medicaid programs, which range from $2.00 to over $6.00. For other pharmacy-supplied drugs, Medicare makes no explicit payment for dispensing the drug. Besides the profits on the DME-related drugs, pharmacy suppliers may receive additional compensation through the payment for DME and related supplies. Our prior work suggests that, for two reasons, Medicare DME and supply payments may exceed market prices. First, because of an imprecise coding system, Medicare carriers cannot determine from the DME claims they process which specific products the program is paying for. Medicare pays one fee for all products classified under a single billing code, regardless of whether their market prices are greatly below or above that fee. 
Second, DME fees are often out of line with current market prices. Until recently, DME fees had generally been adjusted only for inflation because the process required to change the fees was lengthy and cumbersome. As a result, payment levels may not reflect changes in technology and other factors that could significantly change market prices. Private insurers and federal agencies, such as VA, employ different approaches in paying for or purchasing drugs that may provide useful lessons for Medicare. In general, these payers make use of the leverage of their volume and competition to secure better prices. The federal purchasers, furthermore, use that leverage to secure verifiable data on actual market transactions to establish their price schedules. Private payers can negotiate with some suppliers to the exclusion of others and arrive at terms without clear criteria or a transparent process. This practice would not be easily adaptable to Medicare, given the program's size and need to ensure access for providers and beneficiaries. How other federal agencies have exercised their leverage may be more instructive and readily adaptable for Medicare. VA and certain other government purchasers buy drugs based on actual prices paid by private purchasers--specifically, on the prices that drug manufacturers charge their "most-favored" private customers. In exchange for being able to sell their drugs to state Medicaid programs, manufacturers agree to offer VA and other government purchasers drugs at favorable prices, known as Federal Supply Schedule (FSS) prices. So that VA can determine the most-favored customer price, manufacturers provide information on price discounts and rebates offered to domestic customers and the terms and conditions involved, such as length of contract periods and ordering and delivery practices. (Manufacturers must also be willing to supply similar information to CMS to support the data on the average manufacturer's price, known as AMP, and best price they report for computing any rebates required by the Medicaid program.) VA has been successful in using competitive bidding to obtain even more favorable prices for certain drugs. Through these competitive bids, VA has obtained national contracts for selected drugs at prices that are even lower than FSS prices. These contracts seek to concentrate the agency's purchase on one drug within therapeutically equivalent categories for the agency's national formulary. In 2000, VA contract prices averaged 33 percent lower than corresponding FSS prices. Medicare's use of competition has been restricted to several limited-scale demonstration projects authorized by the Balanced Budget Act of 1997. In one of these demonstrations under way in San Antonio, Texas, suppliers bid to provide nebulizer drugs, such as albuterol, to Medicare beneficiaries. While Medicare normally allows any qualified provider to participate in the program, under the demonstration only 11 bidders for nebulizer drugs were selected to participate. In exchange for restricting their choice of providers to the 11 selected, beneficiaries are not liable for any differences between what suppliers charge and what Medicare allows. Preliminary CMS information on the San Antonio competitive bidding demonstration suggests no reported problems with access and a savings of about 26 percent realized for the inhalation drugs. 
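The arithmetic below contrasts these payment approaches for a single hypothetical drug. Only the percentage relationships are taken from the discussion above (Medicare pays 95 percent of AWP, VA national contract prices averaged about 33 percent below FSS prices, and the San Antonio demonstration reported savings of about 26 percent); the dollar amounts themselves, including the assumed FSS price, are invented for illustration.

    # Hypothetical comparison of payment approaches for one part B drug.
    # Percentage relationships come from the discussion above; all dollar
    # figures are invented for illustration only.

    awp = 50.00                                 # manufacturer-assigned AWP
    medicare_payment = 0.95 * awp               # Medicare's AWP-based payment
    fss_price = 30.00                           # assumed Federal Supply Schedule price,
                                                # keyed to the most-favored customer price
    va_contract_price = fss_price * (1 - 0.33)  # VA national contracts averaged ~33% below FSS
    demo_payment = medicare_payment * (1 - 0.26)  # ~26% savings in the San Antonio demonstration,
                                                  # applied here to the AWP-based payment

    for label, price in [("Medicare (95% of AWP)", medicare_payment),
                         ("Federal Supply Schedule", fss_price),
                         ("VA national contract", va_contract_price),
                         ("Competitive bidding demo", demo_payment)]:
        print(f"{label:<26} ${price:6.2f}")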
Our study on Medicare payments for part B drugs shows that Medicare pays providers much more for these drugs than necessary, given what the providers likely paid to purchase these drugs from manufacturers, wholesalers, or other suppliers. Unlike the market-based fees paid by VA and other federal agencies, Medicare's fees are based on AWP, which is a manufacturer-reported price that is not based on actual transactions between seller and purchaser. Physicians contend that the profits they receive from Medicare's payments for part B drugs are needed to compensate for inappropriately low Medicare fees for most drug administration services. Similarly, the case argued by some pharmacy suppliers for Medicare's high drug payments is that not all of their costs of providing the drugs are covered.
The pricing of Medicare's part B-covered prescription drugs--largely drugs that cannot be administered by patients themselves--has been under scrutiny for years. Most of the part B drugs with the highest Medicare payments and billing volume fall into three categories: those that are billed for by physicians and typically provided in a physician office setting, those that are billed for by pharmacy suppliers and administered through a durable medical equipment (DME) item, and those that are also billed by pharmacy suppliers but are patient-administered and covered explicitly by statute. Studies show that Medicare sometimes pays physicians and other providers significantly more than their actual costs for the drugs. In September 2000, the Health Care Financing Administration (HCFA)--now the Centers for Medicare and Medicaid Services--took steps to reduce Medicare's payment for part B-covered drugs by authorizing Medicare carriers, the contractors that pay part B claims, to use prices obtained in the Justice Department investigations of providers' drug acquisition costs. HCFA retracted this authority in November 2000 after providers raised concerns. GAO found that Medicare's method for establishing drug payments is flawed. Medicare pays 95 percent of the average wholesale price (AWP), which, despite its name, may be neither an average nor what wholesalers charge. It is a price that manufacturers derive using their own criteria; there are no requirements or conventions that AWP reflect the price of any actual sale of drugs by a manufacturer. Manufacturers report AWPs to organizations that publish them in drug price compendia, and Medicare carriers that pay claims for part B drugs base providers' payments on the published AWPs. In 2001, widely available prices at which providers could purchase drugs were substantially below AWP, on which Medicare payments are based. For both physician-billed drugs and pharmacy supplier-billed drugs, Medicare payments often far exceeded widely available prices. Physicians and pharmacy suppliers contend that the excess payments for covered drugs are necessary to offset what they claim are inappropriately low or nonexistent Medicare payments for services related to these drugs. For the delivery of pharmacy supplier-billed drugs, Medicare's payment policies are uneven. Pharmacy suppliers billing Medicare receive a dispensing fee for one drug type--inhalation therapy drugs--but there are no similar payments for other DME-administered or oral drugs. Other payers and purchasers, such as health plans and the Department of Veterans Affairs, use different approaches to pay for or purchase drugs that may be instructive for Medicare. In general, they make use of the leverage from their volume and competition to secure better prices.
As we reported in 2009, more than 5 million third parties submitted more than 82 million miscellaneous income information forms (Form 1099-MISC) to the IRS reporting more than $6 trillion in payments for tax year 2006. Third-party payers are businesses, governmental units, and other organizations that make payments to other businesses or individuals. Payers must submit payment information on 1099-MISCs to IRS when they make a variety of payments labeled miscellaneous income. Payees, or those being compensated, are required to report the payments on their income tax returns. The types of payments reportable on a Form 1099-MISC--shown in figure 1--and their reporting thresholds vary widely. Under existing law, information reporting is required for payments by persons engaged in a trade or business to nonemployees for services of $600 or more (called nonemployee compensation), royalty payments of $10 or more, and medical and health care payments made to physicians or other suppliers (including payments by insurers) of $600 or more. However, personal payments, such as a payment by a homeowner to a contractor to paint his or her personal residence, are not required to be reported because these payments are not made in the course of a payer's trade or business. Existing regulations also exempt certain payments to a corporation, payments for merchandise, wages paid to employees, and payments of rent to real estate agents. The expansion of information reporting to payments to corporations and for merchandise will apply to payments made after December 31, 2011. Payers must provide 1099-MISC statements to payees by the end of January. Payers submitting fewer than 250 1099-MISCs may submit paper forms, which are due to IRS by the end of February. Payers submitting paper 1099-MISCs are required to use IRS's official forms or substitute forms with special red ink readable by IRS's scanning equipment. Photocopies and copies of the 1099-MISC form downloaded from the Internet or generated from software packages in black ink do not conform to IRS processing specifications. Payers submitting 250 or more 1099-MISCs are required by IRS to submit the forms electronically. Most 1099-MISCs for tax year 2006 were submitted electronically. However, most payers submitted small numbers of 1099-MISCs, and most payers submitted paper 1099-MISCs. By matching 1099-MISCs received from payers with what payees report on their tax returns, IRS can detect underreporting of income, including failure to file a tax return. Figure 2 shows the automated process IRS uses to detect mismatches between nonemployee compensation and other payments reported on 1099-MISCs and payees' income tax returns. The Nonfiler program handles cases where no income tax return was filed by a 1099-MISC payee. The Automated Underreporter (AUR) program handles cases where a payee filed a tax return but underreported 1099-MISC payments. AUR's case inventory includes payee mismatches over a certain threshold, and IRS has a methodology using historical data to select cases for review. AUR reviewers manually screen the selected cases to determine whether the discrepancy can be resolved without taxpayer contact. For the remaining cases selected, IRS sends notices asking the payee to explain discrepancies or pay any additional taxes assessed. Third-party information reporting is widely acknowledged to increase voluntary tax compliance in part because taxpayers know that IRS is aware of their income.
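As a rough illustration of the matching logic described above, the Python sketch below aggregates hypothetical 1099-MISC amounts by payee and compares them with income reported on returns, routing cases to a nonfiler or underreporter queue. It is not a description of IRS's actual systems; the threshold, identifiers, and dollar amounts are all assumptions made for illustration.

    # Simplified, hypothetical sketch of third-party information matching.
    # This does not depict IRS's actual process; the threshold, identifiers,
    # and amounts are invented for illustration.

    from collections import defaultdict

    MISMATCH_THRESHOLD = 1_000  # hypothetical dollar threshold for flagging a case

    # (payee identifier, amount) pairs drawn from payer-submitted 1099-MISCs.
    forms_1099_misc = [
        ("payee-A", 5_000),
        ("payee-A", 2_500),
        ("payee-B", 12_000),
        ("payee-C", 800),
    ]

    # Income reported by payees on their own returns; a missing payee filed no return.
    reported_income = {
        "payee-A": 7_500,   # matches the 1099-MISC total
        "payee-B": 4_000,   # underreported relative to the 1099-MISCs
    }

    totals = defaultdict(int)
    for payee, amount in forms_1099_misc:
        totals[payee] += amount

    for payee, third_party_total in totals.items():
        if payee not in reported_income:
            print(f"{payee}: no return filed -> nonfiler case")
        elif third_party_total - reported_income[payee] > MISMATCH_THRESHOLD:
            gap = third_party_total - reported_income[payee]
            print(f"{payee}: possible underreporting of ${gap:,} -> underreporter case")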
As shown in figure 3, voluntary reporting compliance is substantially higher for income subject to withholding or information reporting than for other income. For example, for wages and salaries, which are subject to withholding and substantial information reporting, taxpayers have consistently misreported an estimated 1 percent of their income. For income subject to little or no information reporting, the estimated misreporting rate for tax year 2001 was about 54 percent. IRS has long recognized that if payments made to businesses are not reported on 1099-MISCs, it is less likely that they will be reported on payee tax returns. In a 2007 report we highlighted the connection between a lack of information reporting and the contribution of sole proprietors, a significant portion of the small business community, to the tax gap. IRS estimated the gross tax gap--the difference between what taxpayers should have paid on a timely basis and what they actually paid--to be $345 billion for tax year 2001, the most recent estimate made. IRS also estimated that it will collect $55 billion, leaving a net tax gap of $290 billion. IRS estimated that a large portion of the gross tax gap, $197 billion, was caused by the underreporting of income on individual tax returns. Of this, IRS estimated that $68 billion was caused by sole proprietors underreporting their net business income. The $68 billion does not include other sole proprietor contributions to the tax gap, including not paying because of failing to file a tax return, underpaying the tax due on income that was correctly reported, and underpaying employment taxes. Nor does it include tax noncompliance by other types of businesses such as partnerships and S corporations. In the report, we noted that a key reason for this noncompliance was that sole proprietors were not subject to tax withholding, and only a portion of their net business income was reported to IRS by third parties. Tax noncompliance by some small businesses is unfair to businesses and other taxpayers that pay their taxes--tax rates must be higher to collect the same amount of revenue. The 1099-MISCs are a powerful tool through which IRS can encourage voluntary compliance by payees and detect underreported income of payees that do not voluntarily comply. Increasing the number of 1099-MISCs IRS receives from payers would in turn increase the information available for use in IRS's automated matching programs to detect tax underreporting, including failure to file a tax return. For tax year 2004 (the last full year available for our 2009 report), the AUR program assessed $972 million in additional taxes for payee underreporting detected using 1099-MISC information. To help IRS improve its use of 1099-MISC information, we recommended in 2009 that IRS collect data to help refine its matching process and select the most productive cases for review. In response to our recommendation, IRS reviewed a sample of AUR cases and plans to modify its tax year 2010 matching criteria for 1099-MISC information. Information reporting has allowed IRS to use its computerized matching programs as an alternative to audits to address some issues. The matching programs generally require less contact with taxpayers and thus are less intrusive and involve less taxpayer time. In addition, information reporting may reduce taxpayers' costs of preparing their tax returns.
In a 2006 report we described how additional information reporting on the cost basis of securities transactions could reduce taxpayers' need to track the basis of securities they sold. The extent to which 1099-MISC reporting reduces taxpayer recordkeeping costs is not known, but to the extent it reduces the need to track receipts by year from each payer it could have some effect on those costs. IRS does not know the magnitude of 1099-MISC payer noncompliance or the characteristics of payers that fail to comply with the reporting requirements. Without an estimate of payer noncompliance, IRS has no way of determining to what extent 1099-MISC payer noncompliance creates a window of opportunity for payees to underreport their business income and go undetected by IRS. Research would be key for IRS in developing a cost-effective strategy to identify payers that never submit 1099-MISCs. In 2009, we recommended that IRS study the extent of 1099-MISC payer noncompliance and its contribution to the tax gap, as well as the nature and characteristics of those payers who do not comply. In response to our recommendations, IRS plans to study payer noncompliance through its National Research Program studies with results estimated to be available in December 2015. Existing information reporting requirements impose costs on the third-party businesses required to file Form 1099-MISC. The expanded reporting requirements will impose new costs. To comply with information reporting requirements, third parties incur costs internally or pay external parties. In-house costs may involve additional recordkeeping costs beyond normal recordkeeping costs related to running a business, as well as the costs of preparing and filing the information returns themselves. If the third parties go outside their organizations for help, they would incur out-of-pocket costs to buy software or pay for others to prepare and file their returns. Data on the magnitude of these information reporting costs are not readily available because taxpayers generally do not keep records of the time and money spent complying with the tax system. A major difficulty in measuring tax compliance costs, including the costs of filing information returns, is disentangling accounting and recordkeeping costs due to taxes from the costs that would have been incurred in the absence of the federal tax system. Data on compliance costs are typically collected by contacting a sample of taxpayers, through surveys or interviews, and asking them for their best recollection of the total time and money they spent on particular compliance activities. The quality of the resulting data depends on the ability of taxpayers to accurately recall the amount of time and money they spent. In the nine case studies we conducted in 2007, filers of information returns told us that existing information return costs, both in-house and for external payments, were relatively low. While these nine case studies cannot be generalized to the entire population, they do provide examples of costs and insights from the perspective of organizations of different sizes and from different industries and of organizations filing their own information returns and those filing on behalf of others. In-house compliance costs include the costs of getting taxpayer identification numbers (TIN), buying software, tracking reportable payments, filing returns with IRS, and mailing copies to taxpayers.
One organization with employees numbering in the low thousands estimated that its costs of preparing and filing a couple hundred Forms 1099, which include recordkeeping and distinguishing goods from services, were a minimal addition to its normal business costs. One small business employing under five people told us of possibly spending 3 to 5 hours per year filing Form 1099 information returns manually, using an accounting package to gather the information. An organization with more than 10,000 employees estimated spending less than 0.005 percent of its yearly staff time on preparing and filing Forms 1099, including recordkeeping. Unit prices for services provided to payers by selected software vendors, service bureaus, and return preparers decreased as the number of forms handled increased. Two external parties selling services reported prices for preparing and filing Forms 1099 with IRS of about $10 per form for 5 forms to about $2 per form for 100 forms, with one of them charging about $0.80 per form for 100,000 forms. These prices do not include the payers' recordkeeping costs. This relationship of price to volume for the entities we studied is consistent with what the studies we have seen show about the role of fixed costs and economies of scale in complying with the tax code; we are not aware of similar studies focused on information returns. Although our case study organizations indicated that 1099 recordkeeping and reporting costs are relatively low, costs may not be as low as they could be. According to IRS, advisory group members, and others we interviewed for our 2009 report, payers are confronted with a variety of impediments to preparing and submitting 1099-MISC forms. Some payers that do not submit their 1099-MISCs as required may be unaware of their 1099-MISC reporting responsibilities. Other payers may be confused about whether payments are reportable because of different dollar reporting thresholds and the general exemption for payments to corporations under current law. Some payers misreport or neglect to report payee TINs and could be subject to penalty and required to do backup withholding on 1099-MISC payments to payees with bad TINs. For the large number of payers each submitting a few 1099-MISCs, IRS does not offer a fillable form on its Web site and requires payers to submit scannable red ink forms, but some payers submit black and white 1099-MISCs anyway. Although businesses will face additional costs for each additional Form 1099, some options for modifying the 1099-MISC reporting requirements could help mitigate the burden and promote payer reporting compliance. Table 1 highlights options we previously reported. We noted those options that were proposed by IRS, IRS advisory groups, and the National Taxpayer Advocate. Our list of 1099-MISC impediments and options is not exhaustive, nor is the list of pros and cons associated with the options. Improved IRS guidance and education are relatively low-cost options, but most taxpayers use either tax preparers or tax software to prepare their tax returns and may not read IRS instructions and guidance. While taxpayer service options may improve compliance for those that are inadvertently noncompliant, they are not likely to affect those that are intentionally noncompliant. Some options to change 1099-MISC reporting requirements require congressional action, and other options would be costly for IRS to implement.
Where the option involves particular issues, such as cost or taxpayer burden, we note them in our table. As we reported in 2009, multiple approaches could help IRS to mitigate the reporting costs and promote payer compliance with 1099-MISC reporting requirements. For example, the evidence shows that the benefits outweigh the costs for information reporting for payments to corporations. For other options, it is not clear whether the benefits outweigh the associated costs, and additional research by IRS could help to evaluate the feasibility of more costly options, such as allowing black and white paper 1099-MISCs. Action to move forward on options to target outreach to specific payer groups or clarify guidance to reduce common reporting mistakes would hinge on IRS first conducting research to understand the magnitude of and reasons for payer noncompliance. In 2009, we recommended two actions that IRS could take to help payers understand their 1099-MISC reporting responsibilities: Provide payers with a chart to identify reportable payments. IRS disagreed with our recommendation and stated that the Form 1099-MISC instructions already list which payments are reportable and explain the rules for specific payment types. We believe that a chart would provide taxpayers with a quick guide for navigating the Form 1099-MISC instructions, already eight pages long under the current reporting requirements. Evaluate adding a new checkbox on business tax returns for payers to attest whether they submitted their 1099-MISCs as required. IRS also disagreed with this recommendation and stated that a similar question was removed from the corporate tax return after the Paperwork Reduction Act of 1980 was enacted. We believe results from the evaluation we recommend would be useful in weighing the benefits and burdens associated with a checkbox option. To reduce the submission burden facing many payers submitting small numbers of 1099-MISCs, we also recommended that IRS evaluate the cost- effectiveness of eliminating or relaxing the red ink requirement to allow payers to submit computer-generated black and white 1099-MISCs. In April 2009, IRS conducted a test to determine the labor to process a sample of 4,027 red-ink 1099-MISCs versus the same documents photocopied. IRS told us that, using the same scanning equipment and employees, the red-ink sample took 2 hours and 9 minutes to process versus 28 hours and 44 minutes to process and manually key the photocopy sample. Based on the test results, IRS decided to maintain the red ink requirement to minimize labor costs. We have not reviewed the results of the IRS test. Our prior work did not assess requiring 1099-MISC reporting on payments for goods. Some of our findings and recommendations may be relevant, but we do not know the extent of relevance. Madam Chair, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. For questions about this statement, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Amy Bowser, Bertha Dong, Lawrence Korb, MaryLynn Sergent, and Cheri Truett. Tax Gap: IRS Could Do More to Promote Compliance by Third Parties with Miscellaneous Income Reporting Requirements. GAO-09-238. Washington, D.C.: January 28, 2009. 
Tax Gap: Actions That Could Improve Rental Real Estate Reporting Compliance. GAO-08-956. Washington, D.C.: August 28, 2008. Highlights of the Joint Forum on Tax Compliance: Options for Improvement and Their Budgetary Potential. GAO-08-703SP. Washington, D.C.: June 2008. Tax Administration: Costs and Uses of Third-Party Information Returns. GAO-08-266. Washington, D.C.: November 20, 2007. Business Tax Reform: Simplification and Increased Uniformity of Taxation Would Yield Benefits. GAO-06-1113T. Washington, D.C.: September 20, 2006. Capital Gains Tax Gap: Requiring Brokers to Report Securities Cost Basis Would Improve Compliance if Related Challenges Are Addressed. GAO-06-603. Washington, D.C.: June 13, 2006. Tax Policy: Summary of Estimates of the Costs of the Federal Tax System. GAO-05-878. Washington, D.C.: August 26, 2005. Tax Administration: IRS Should Continue to Expand Reporting on Its Enforcement Efforts. GAO-03-378. Washington, D.C.: January 31, 2003. Tax Administration: Benefits of a Corporate Document Matching Program Exceed the Costs. GAO/GGD-91-118. Washington, D.C.: September 27, 1991.
Third parties, often businesses, reported more than $6 trillion in miscellaneous income payments to the Internal Revenue Service (IRS) in tax year 2006 on Form 1099-MISC. Payees are to report this income on their tax returns. It has long been known that if these payments are not reported on 1099-MISCs, it is less likely that they will be reported on payee tax returns. In 2010, the reporting requirements were expanded to cover payments for goods and payments to corporations, both previously exempt, beginning in 2012. This testimony summarizes recent GAO reports and provides information on (1) benefits of the current requirements in terms of improved compliance by taxpayers and reduced taxpayer recordkeeping, (2) costs to the third-party businesses of the current 1099-MISC reporting requirement, and (3) options for mitigating the reporting burden for third-party businesses. GAO has not assessed the expansion of 1099-MISC reporting to payments for goods. Information reporting is a powerful tool for encouraging voluntary compliance by payees and helping IRS detect underreported income. Also, information reporting may sometimes reduce taxpayers' costs of preparing their tax returns, although by how much is not known. IRS estimated that $68 billion of the annual $345 billion gross tax gap for 2001, the most current available estimate, was caused by sole proprietors underreporting their net business income. A key reason for this noncompliance was that sole proprietors were not subject to tax withholding and only a portion of their net business income was reported to IRS by third parties. The benefits from information reporting are affected by payers' compliance with reporting requirements and IRS's ability to use the information in its process that matches third-party data with tax returns. However, IRS does not have estimates of the number or characteristics of payers that fail to submit 1099-MISCs as required. To improve its use of 1099-MISC information, IRS has collected data to help identify ways to refine its matching process and select the most productive cases for review, as GAO recommended in 2009. Current 1099-MISC requirements impose costs on the third parties required to file them. The magnitude of these costs is not easily estimated because payers generally do not track these costs separately from other accounting costs. In nongeneralizable case studies conducted in 2007 with four payers and five vendors that file information returns on behalf of their clients, GAO was told that existing information return costs were relatively low. One small business employing under five people told GAO of possibly spending 3 to 5 hours per year filing Form 1099 information returns manually, using an accounting package to gather the information. Two vendors reported prices for preparing and filing Forms 1099 of about $10 per form for 5 forms to about $2 per form for 100 forms, with one charging about $0.80 per form for 100,000 forms. However, these prices did not include clients' recordkeeping costs. Payers face a variety of impediments in preparing and submitting 1099-MISC forms, including complex rules and an inconvenient submission process. For example, payers must determine whether payees are incorporated, must get the payees' taxpayer identification number, and must use special forms if filing on paper. A variety of options exist for mitigating the costs of filing Form 1099-MISC. Most have pros and cons.
IRS has already exempted payments, including those paid by credit card, which will be reported to IRS by other means. Other options include improving IRS guidance and education; adding a check-the-box question to business tax forms that would force return preparers to ask their clients whether they have complied with 1099-MISC reporting requirements; waiving late submission penalties for first-time payers; raising the payment reporting threshold; initially limiting the types of payments covered; having IRS develop an online filing capability; and allowing paper filers to submit computer-generated black and white 1099-MISCs rather than IRS's printed forms. GAO is not making new recommendations in this testimony. In 2009, GAO suggested that Congress consider requiring payers to report service payments to corporations. GAO did not study reporting of payments for goods. Other prior GAO recommendations included ways for IRS to improve its use of 1099-MISC information received. IRS agreed with six of eight recommendations and is taking action to address them.
NMB is headed by a three-member board, with each member appointed by the President and confirmed by the Senate for a term of 3 years. Day-to-day administration of the agency is provided by NMB's General Counsel within the Office of Legal Affairs and the Chief of Staff (see fig. 1). NMB does not have an office of inspector general to provide independent audit and investigative oversight. According to NMB, its overall mission is to provide for the independence of air and rail carriers and employees in matters of self-organization, avoid interruption to commerce conducted through the operation of those carriers, and administer statutory adjustment boards as well as develop complementary strategies to resolve disputes. To fulfill its mission, NMB has three program areas: Representation: Unions are selected for the purposes of collective bargaining through secret-ballot elections conducted by NMB. If there is a question concerning representation of a specific craft or class, NMB is charged with resolving the representation dispute through its Office of Legal Affairs, and has the sole jurisdiction to decide these disputes. Mediation and Alternative Dispute Resolution: The Railway Labor Act (RLA) provides mediation to help resolve disputes that can occur between management and labor during collective bargaining negotiations. When rail or air carriers and unions cannot reach agreement on the terms of a new or revised collective bargaining agreement--such as working conditions or rates of pay--either party can apply for NMB's mediation services to resolve their differences, or NMB may impose mediation if it finds that resolving the dispute is in the public's interest. NMB also offers grievance mediation to parties as an alternative way to resolve disputes filed for grievance arbitration. Arbitration: The RLA also offers grievance arbitration to help resolve disagreements between carriers and unions over how to interpret and apply provisions of existing collective bargaining agreements. For example, employees may file grievances if they believe they were wrongfully fired or disciplined in violation of the agreement. If the carrier and the employee cannot resolve the grievance, the RLA permits either of these parties to refer the dispute to arbitration before an adjustment board. The adjustment board consists of a carrier representative, a union representative, and a neutral arbitrator provided by NMB. In this capacity, the arbitrator is called upon to break a tie. NMB does not directly provide arbitration services, but rather maintains a list of registered arbitrators from which the parties can select someone to review and decide their case. In the airline industry, the parties pay the costs of arbitration. In the railroad industry, however, consistent with the requirements of the RLA, NMB pays the fee and travel expenses of the arbitrator. NMB has made some progress in implementing each of the seven recommendations we made in December 2013. In our December 2013 report, we found that NMB lacked a formal strategic planning process and officials confirmed that they did not have a systematic mechanism for involving congressional and other stakeholders in this process. We concluded that without a robust process, NMB lacked assurance that its limited resources were effectively targeted toward its highest priorities. In this review, we found that NMB has implemented a strategic planning process but has not formalized it through written policies and procedures.
In fiscal year 2014, NMB developed and published a strategic plan covering fiscal years 2014 through 2019, which we determined was largely consistent with OMB guidance on implementing the GPRA Modernization Act of 2010 (GPRAMA). NMB officials told us they used a strategic planning process soliciting input from staff in NMB program areas as well as from external stakeholders and Congress. Five of the seven external stakeholder groups that we interviewed said they commented on a draft of the strategic plan or discussed aspects of it during regular meetings with the agency, and all reported being satisfied with their overall communication with NMB. However, the agency has not developed a written policy or set of procedures outlining its strategic planning process. Federal internal control standards call for agencies to document the policies and procedures necessary to achieve their objectives, including strategic planning. Specifically, agencies should (1) establish policies and procedures to ensure that management directives are carried out and (2) appropriately document transactions and other significant events, and ensure that those records are properly managed, maintained, and available for examination. Further, through these policies and procedures, agency management can define responsibilities, assign key roles, and delegate authority. Meeting these requirements may be particularly important for NMB. NMB officials said there was little need for preparing written documentation of the strategic planning process, such as a standard operating procedure, because the process was simple and would be easy to replicate in the future. Officials also said that because the agency is small, with 51 full-time positions, its staff frequently communicate informally, limiting the need for written procedures. However, three of NMB's five senior managers, including the Chief of Staff and General Counsel, are eligible to retire, as are many other employees, increasing the risk of the agency losing institutional knowledge should they do so. Moreover, because the agency is small, some staff members have multiple responsibilities, increasing the magnitude of knowledge loss when an individual staff member leaves the agency. In our December 2013 report, we found that NMB was not meeting OMB guidance to implement GPRAMA requirements for annual performance planning and reporting. Specifically, the agency's performance goals were not objective, quantifiable, and measurable--as required by GPRAMA--and did not have targets and a time period over which to measure performance--as recommended by OMB guidance implementing GPRAMA. Without meeting this federal guidance, we concluded that the agency was not positioned to track and publicly report progress or results in its program areas. In this review, we found that NMB has developed new performance goals. However, of the 19 goals in its fiscal year 2016 performance plan, one specified a target and another specified a timeframe, but none followed all elements of OMB guidance for implementing GPRAMA. Several NMB officials told us that it is difficult for the agency to design performance goals because some outcomes are out of its control, such as how long it takes parties to reach agreement through mediation. Many federal agencies, however, set measurable performance goals for outcomes that include external factors outside of their direct control. 
Our prior work has shown that there are a number of strategies that federal agencies can use to reduce the influence of external factors on agencies' measures. OMB officials told us NMB could seek assistance from them to refine its performance goals or could partner with another agency that has a strong performance management department.

GAO 2013 Recommendation: NMB should develop and implement a formal mechanism to ensure the prompt resolution of findings and recommendations by independent auditors, including clearly assigning responsibility for this follow-up to agency management.

In our December 2013 report, we found that NMB was following most key practices for financial accountability and control, but had an outstanding significant deficiency from its fiscal year 2012 financial statement audit and did not have a mechanism for ensuring prompt resolution of audit findings. As a result, we concluded that some recommendations made by auditors to improve the agency's internal controls or operations may not have been addressed. In this review, we found that while NMB in fiscal year 2015 resolved the significant deficiency identified in its 2012 financial statement audit, it does not have a formal mechanism to promptly resolve all audit findings consistent with federal internal control standards. Specifically, while NMB drafted a financial audit standard operating procedure in 2014, it does not cover the agency's response to findings from non-financial audits. The agency also has not addressed two recommendations made in previous auditors' management letters that accompanied NMB's financial audit reports. One of those recommendations resulted from the fiscal year 2014 audit, and the other relates to a discrepancy that has been unresolved since 1995. In its fiscal year 2015 response, NMB indicated that the only management official with knowledge of the long-term discrepancy had retired and that the agency would work to resolve the discrepancy through its financial management system. In our December 2013 report, we found that NMB had not fully implemented key practices for information security and privacy. Without implementation of these key practices, we concluded that NMB had increased risks that the confidentiality, integrity, and availability of its information would be compromised and had limited assurance that the personal information it collected and used was adequately protected. In this review, we found that NMB has fully transitioned its network infrastructure and records management system into a cloud computing environment as a result of federal initiatives aimed at improving, among other things, the federal government's operational efficiencies and overall IT security posture. NMB also fully transitioned its financial systems to third-party service providers. Specifically, NMB relies on other agencies' systems, such as those of the Department of the Interior for payroll, personnel, and human resources services, and the Department of the Treasury's Bureau of the Fiscal Service for a full range of accounting services, including hosting its financial management system. In addition, NMB has begun to take steps to improve its information security program. Specifically, NMB developed a policy for managing agency information, documents, and records. The agency has also drafted procedures for its new enterprise network that include provisions for access and identity control, configuration management, planning, contingency monitoring, and audits. 
Further, it has developed a procedure for handling cyber incidents. Finally, it has an agreement in place with the Bureau of the Fiscal Service to, among other things, conduct a security assessment of its enterprise network. However, NMB has not fully implemented most key information security and privacy practices. (For additional details, see appendix II.) For example, the agency does not have policies and procedures in place for its information security program, including those for the oversight of third-party providers--entities that provide or manage information systems that support NMB operations. In addition, NMB has not conducted the required assessments of its third-party providers to ensure their systems are in compliance with the Federal Information Security Modernization Act (FISMA) of 2014. FISMA requires federal agencies to develop, document, and implement an agency-wide information security program to protect the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Moreover, NMB has not assessed whether its third-party providers' systems were in compliance with the Privacy Act of 1974 and E-Government Act of 2002, which describe, among other things, agency responsibilities with regard to protecting personally identifiable information. NMB officials said the agency is taking steps to address its remaining information security and privacy issues. For example, because the agency's new enterprise network is now a cloud-based system, NMB plans to use the Federal Risk and Authorization Management Program (FedRAMP), to the extent possible, to guide the development of its agency-wide policies and procedures, including how it will oversee its third-party providers and ensure they are in compliance with FISMA. In addition, NMB reached out to OMB in September 2015, and NMB officials said they have tried reaching out to the Department of Homeland Security (DHS) to ensure NMB is doing what is required to meet annual FISMA reporting requirements. NMB officials said they have been receiving information on FISMA from OMB, but not from DHS. Further, NMB officials said they are working on drafting information security program and privacy policies and procedures. They said that finalizing the information security policies and procedures will assist the agency in completing all of its required reviews in the future. In our December 2013 report, we found that NMB's human capital program was not guided by a strategic workforce plan. Without workforce planning, a key internal control, we concluded that agency management could not ensure that skill needs would be continually assessed and that the agency would be able to obtain and maintain a workforce with the skills necessary to achieve organizational goals. Without a plan, the agency could not monitor and evaluate the results of its workforce planning efforts, including whether those efforts contributed to the agency accomplishing its strategic goals. In this review, we found that in October 2014 NMB completed a strategic workforce plan that at least partially addressed four of the five practices from our December 2013 recommendation (see table 2). 
While members of one of NMB's advisory groups said they provided input on some of the agency's workforce decisions, our prior work suggests that formally including stakeholders in the workforce planning process can help the agency develop ways to streamline processes and improve human capital strategies. In addition, NMB's performance goals, including those related to human capital, do not meet federal guidance. Without performance goals that meet guidance or the inclusion of other monitoring and evaluation efforts in its workforce and succession plan, the agency is not positioned to measure the outcomes of its human capital strategies or evaluate whether these strategies helped it accomplish its goals. In our December 2013 report, we found that NMB was struggling to efficiently manage grievance cases in the rail industry and lacked data on the types of grievances filed that could help it manage the process more efficiently. As a result, we concluded that if NMB did not address this demand on its limited resources, it could face a growing backlog of arbitration cases. In this review, we found that NMB is collecting data on the type of grievances filed for arbitration in some, but not all, cases. An NMB official said part of the reason the agency is not collecting complete data on grievance types is that it does not have access to all cases. NMB reviews all grievances filed for arbitration by either a railroad or union and then forwards them to one of three types of adjustment boards--the National Railroad Adjustment Board (NRAB), a Public Law Board, or a Special Board of Adjustment. NMB is able to track information on the type of grievances filed for arbitration by the Public Law Boards and Special Boards of Adjustment because NMB requires parties to code their grievance type when they file their request for these boards. Parties filing grievances with NRAB, however, are not required by NMB to code their grievance type, an NMB official said, because NRAB is an independent organization that sets its own procedures and NMB cannot require that grievance codes be included in requests for arbitration filed with NRAB. However, NMB may be able to obtain that information because NRAB officials told us they track information on grievance type and are willing to share this information with NMB. Even with data on the types of grievances filed in all cases, it is not clear to what extent NMB would analyze those data to address the arbitration backlog. One program official said that NMB does not have a systematic way to identify cases that may be good candidates for some type of alternative to arbitration, such as grievance mediation, because staff cannot easily access case information in a way that makes it readily available for analysis. Analysis of these cases continues to be largely a manual process that takes significant staff resources, he said. According to an NMB information technologist, in the past, the agency has primarily relied on staff to sort and analyze these data. However, the agency's new arbitration case management system, which it upgraded in November 2015, should be able to produce standard electronic data reports by spring 2016 that would facilitate this analysis. Until those reports are available, it appears NMB will have limited ability to analyze data that might help it reduce its arbitration backlog, which, according to NMB, continues to grow. 
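To illustrate the kind of analysis that such standard electronic data reports could support, the following sketch is a hypothetical example only and does not describe NMB's actual case management system: it assumes a simple export of arbitration cases with a grievance-type code and a status field, and it counts pending cases by type to flag categories that might merit a closer look as candidates for grievance mediation. The file name, column names, and threshold are illustrative assumptions.

# Hypothetical sketch: summarize pending grievance arbitration cases by type.
# Assumes an exported CSV with columns "case_id", "grievance_type", and "status";
# these column names and the mediation-candidate threshold are illustrative assumptions.
import csv
from collections import Counter

def summarize_grievances(path, threshold=25):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("status", "").strip().lower() == "pending":
                gtype = row.get("grievance_type", "").strip() or "uncoded"
                counts[gtype] += 1
    # Flag grievance types with large pending backlogs as possible mediation candidates.
    candidates = {gtype: n for gtype, n in counts.items() if n >= threshold}
    return counts, candidates

if __name__ == "__main__":
    totals, candidates = summarize_grievances("arbitration_cases.csv")
    for gtype, n in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        flag = " (possible mediation candidate)" if gtype in candidates else ""
        print(f"{gtype}: {n} pending case(s){flag}")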
NMB is following key procurement practices in 2 of 3 areas that our prior work on assessing the acquisition function at federal agencies identified as promoting agencies' efficient, effective, and accountable procurement functions: organizational alignment and leadership, and knowledge and information management. NMB, however, has not developed policies and processes--the third area our prior work identified--that reflect its new procurement interagency agreement. After the retirement of its only contracting officer in January 2014, NMB entered into an interagency agreement with the Department of the Treasury's Bureau of the Fiscal Service (Fiscal Service) for provision of certain procurement functions that NMB had previously handled in-house. In this new environment, NMB is continuing to align its procurement function with its mission and ensure adequate resources to meet its procurement needs, a key practice to facilitate efficient and effective management of acquisition activities. Its Office of Administration is at an organizational level comparable to other key mission offices, such as the Office of Mediation and Alternative Dispute Resolution and the Office of Arbitration. An NMB official told us that the agency also involves internal stakeholders in acquisition decisions, including determining procurement needs, reviewing existing contracts before automatically renewing them, and justifying purchase requests. In addition, the agency entered into the interagency agreement with the Fiscal Service in response to changes in its workforce (i.e., the retirement of its contracting officer). Further, to ensure NMB has a procurement workforce adequate to support the organization's needs, two staff are being trained as contracting officers because, the NMB official said, not having a contracting officer is a risk to the agency. In addition, six NMB staff were trained and certified as contracting officer representatives, who assist the contracting officer in administering contract actions under the interagency agreement and evaluating performance. NMB is also following a second key practice by establishing knowledge and information tools to help it make well-informed procurement decisions. NMB now has access to electronic data on purchase requests from the procurement and financial management systems administered by Fiscal Service. Fiscal Service also provides NMB with monthly billing reconciliations and weekly updates on the status of its contracts. An NMB official said that the information received from Fiscal Service has helped the agency analyze and adjust its spending and, as a result, NMB has eliminated contracts for items it no longer needs, such as storage space, copiers, and periodical subscriptions. However, NMB is not following an element of a third key practice--to have policies and processes in place consistent with internal control standards and best practice. NMB has not developed written internal policies and processes that reflect its new procurement environment under the interagency agreement. The NMB procurement official said the agency does not have current policies and processes because some of the previous policies and procedures were lost in the transition that occurred when the previous contracting officer retired. As a result, the agency has had to recreate them, the official said. 
The agency's fiscal year 2014 to 2019 strategic plan (as amended in fiscal year 2015) called for procurement processes to be updated by the end of fiscal year 2014 in light of the new procurement environment. Developing and implementing written procurement policies and processes that reflect NMB's current procurement environment could help ensure its staff use consistent processes under this new environment. Since we made our recommendations in December 2013, NMB has taken several positive steps in response, such as developing strategic and workforce plans and closing a long-standing deficiency in a financial statement audit. However, additional actions are needed to fully respond to those recommendations. For example, NMB's performance goals do not yet meet all federal guidance. As a result, the agency is not positioned to track and publicly report progress or results in its program areas. In the areas of strategic planning and information security and privacy, officials were not able to provide the written policies and procedures that guide their actions, consistent with standards for internal control. Without fully implementing these recommendations, NMB cannot ensure that its limited resources are effectively targeted toward its highest priorities. Moreover, it may be missing opportunities to improve performance and mitigate risks in its program and management areas. In addition, NMB has not developed written policies and procedures that reflect its new procurement environment under its interagency agreement. Without written policies and processes--as called for by internal control standards and best practice--NMB cannot ensure the use of consistent procurement processes. We continue to believe, as suggested in our December 2013 report, that Congress should consider authorizing an appropriate federal agency's Office of Inspector General to provide independent audit and investigative oversight of NMB. We recommend that the Chairman of the National Mediation Board develop and implement written policies and processes to reflect the agency's current procurement environment. We provided a draft of this report to the National Mediation Board (NMB) for comment. The agency provided written comments, which are reproduced in their entirety in appendix III. We also shared a draft with the Office of Management and Budget (OMB) and Office of Personnel Management (OPM). Neither agency provided comments. NMB commented that many of the areas we had concerns about are not under its direct control considering that NMB has entered into interagency agreements for certain services with the Department of the Interior and the Department of the Treasury's Bureau of the Fiscal Service (Fiscal Service). We continue to believe, however, that NMB must retain ultimate control and responsibility for all its programs and data, regardless of which agencies are to perform the services. For example, with regard to our findings related to information security, NMB commented that it will develop standard operating procedures for reviewing audits conducted by its third-party providers, but that it does not have the resources to conduct its own audits of those contracted agencies. Under FISMA, however, NMB is responsible for developing, documenting, and implementing a security program to protect its information systems and data, including those managed by another agency, contractor, or other source. 
We believe that developing procedures for reviewing audits conducted by third-party providers will be a positive step toward ensuring that NMB is conducting this required oversight. NMB agreed with our recommendation to develop and implement policies and processes to reflect the agency's current procurement environment and indicated it is taking steps to do so. NMB also commented that because it does not manage all of its own procurements, its policies and processes would be largely subordinate to those of the Fiscal Service. However, because NMB's interagency agreement with the Fiscal Service for performance of certain procurement functions does not absolve NMB of its responsibility to develop policies and processes as called for by internal control standards and best practice, NMB must develop its own set of complementary policies and processes to ensure the agency meets its needs through efficient, effective, and accountable procurement functions. In terms of its response to audits, NMB commented that all program audit findings have been addressed and that there are no outstanding issues related to any audits. While we recognized in the report that NMB resolved a long-standing, significant deficiency in its fiscal year 2015 financial statement audit, we disagree that all audit issues have been resolved. NMB needs to develop policies and procedures to address findings from all audits, not solely those reported in the financial statement audit report. NMB also commented that some of our concerns appeared to be related to the agency's failure to sufficiently document its processes and that this was not necessarily an indication of noncompliance with any particular requirement. We agree that NMB has made strides in certain areas, such as developing strategic and workforce plans, but its actions are not in accordance with federal guidance or federal internal control standards in some areas. For example, in the areas of strategic planning, information security and privacy, and procurement, officials were not able to provide the written policies and procedures that guide their actions. We continue to believe it is important for NMB to create this documentation to ensure future consistency and success in its management and program areas. Finally, NMB commented it had concerns that we continued to assert the agency did not adequately consult with its stakeholders even though we noted several times in the report that stakeholders told us they had input and were satisfied with their overall communication with the agency. Stakeholder groups did tell us they had good communication with NMB, particularly with regard to the agency's strategic planning. However, in the development of its workforce plan, NMB officials said that they did not specifically solicit stakeholder input, and the agency's workforce and succession plan does not address collecting and incorporating feedback from stakeholders. As our prior work suggests, it will be important for NMB to formally include stakeholders in this process in the future because involving stakeholders can help the agency develop ways to streamline processes and improve human capital strategies. We are sending copies of this report to the Chairman of NMB and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The FAA Modernization and Reform Act of 2012 included a provision for us to evaluate and audit the programs, operations, and activities of the National Mediation Board (NMB) every 2 years. Our first report was issued in December 2013. This is the second review of NMB, and this report examines the extent to which NMB (1) has implemented each of our December 2013 recommendations and (2) has incorporated key procurement practices. To address our research objectives, we reviewed key NMB documents and compared those documents with relevant federal laws, regulations, guidance, and related leading practices identified in our previous work (see table 3). We interviewed NMB officials and current board members. In addition, we interviewed key stakeholders who were interviewed for the December 2013 report, among others. Specifically, we interviewed representatives from key rail and air management and labor groups, including Airlines for America, National Railway Labor Conference, AFL-CIO Transportation Trades Department and affiliated rail and air unions, and the International Brotherhood of Teamsters. Further, we interviewed representatives from the National Association of Railroad Referees, an association representing railroad arbitrators; the Dunlop II Committee, an informal NMB advisory group; and the National Railroad Adjustment Board, which hears rail grievance arbitration cases. The results of these interviews are not generalizable to all NMB stakeholders. Finally, we interviewed officials at the Office of Management and Budget and the Office of Personnel Management to determine how these agencies provide oversight and guidance to NMB. In addition, we reviewed NMB procurement data. Specifically, we reviewed data provided by the Department of the Treasury's Bureau of the Fiscal Service on NMB's fiscal year 2014 and 2015 contract actions. We assessed the reliability of data from the Bureau of the Fiscal Service and NMB's fiscal year 2013 through 2015 financial statement audit reports by interviewing knowledgeable officials and reviewing relevant documents. We determined that these data were sufficiently reliable for our purposes. We conducted this performance audit from April 2015 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Example of NMB's status (key information security and privacy practices):
- NMB has not conducted risk assessments of its new enterprise network and financial management systems.
- Partially following: NMB developed a policy for managing agency information, documents, and records in May 2013. In addition, it drafted procedures for its new enterprise network that include provisions for access and identity control, configuration management, planning, contingency monitoring, and audits. However, NMB has not developed agency-wide policies and procedures that govern its information security program, including policies and procedures for the oversight of third-party providers.
- Partially following: NMB has drafted a security plan for its new enterprise network dated May 2014. However, NMB has not developed a security plan for its new financial management systems.
- NMB stated its staff was provided security awareness training during 2015. However, NMB did not provide evidence to support that all employees and contractors had received the training.
- Partially following: NMB conducted an initial review of its new enterprise network in May 2014. NMB stated its new financial management systems were reviewed in September 2014. However, NMB was unable to provide evidence to support the review.
- Partially following: NMB has not established and documented a remedial action process for its information security control weaknesses. NMB has not formally documented and tracked its preliminary plan of actions for its new enterprise network and has not included required attributes, such as milestones and scheduled completion dates. In addition, NMB has not yet begun formally documenting and tracking any information security control weaknesses that have been identified through other reviews (e.g., GAO).
- Partially following: NMB has developed a procedure for handling cyber incidents. However, the procedure does not indicate a date or evidence of review and approval. In addition, the procedure does not include required actions, such as mitigating risks associated with incidents before substantial damage is done, and notifying and consulting with the federal information security incident center.
- NMB has not established and maintained up-to-date continuity of operations plans and procedures for its information systems. Specifically, its continuity of operations plan has not been updated since June 2011 and does not reflect the current information technology environment.
- NMB has designated its Assistant Chief of Staff as its senior agency official for privacy.
- NMB does not have policies and procedures for privacy protections.
- NMB has not conducted a privacy impact assessment for its financial management systems, which contain personally identifiable information.
- NMB did not issue a system of records notice for its financial management systems.

Cindy Brown Barnes, (202) 512-7215, [email protected]. In addition to the contact named above, Clarita Mrena (Assistant Director), Amy Anderson (analyst in charge), Benjamin L. Sponholtz, Shirley Abel, Marie Ahearn, James Rebbe, Shaunyce Wallace, and Candice Wright made significant contributions to this report. In addition, key support was provided by James Bennett, Rachael Chamberlin, Susan Chin, David Chrisinger, Larry Crosland, Karin Fangman, Maria Gaona, Gretta Goodwin, Christopher Jones, Julia Kennon, Jason Kirwan, Kathy Leslie, Benjamin Licht, Steven Lozano, Monica Perez-Nelson, and Walter Vance.
NMB was established under the Railway Labor Act to facilitate labor relations for railroads and airlines by mediating and arbitrating labor disputes and overseeing union elections. The FAA Modernization and Reform Act of 2012 included a provision for GAO to evaluate NMB programs and activities every 2 years. GAO's first report under this provision, issued in December 2013, included seven recommendations for NMB based on assessments of policies and processes in several management and program areas. This second report examines the extent to which NMB has (1) implemented recommendations made by GAO in December 2013, and (2) incorporated key procurement practices. GAO reviewed relevant federal laws and regulations; NMB documents, such as its strategic and workforce plans; and contracting data for fiscal years 2014 and 2015. GAO also interviewed NMB officials. The National Mediation Board (NMB) has made some progress in addressing the seven recommendations GAO made in December 2013; however, additional actions are needed to fully implement those recommendations and strengthen operations (see table). Without full implementation, NMB lacks reasonable assurance that its limited resources are effectively targeted and may be missing opportunities to improve performance and mitigate risks in program and management areas. (Table source: GAO analysis of NMB documents and interviews with officials. GAO-16-240.) NMB is following some key procurement practices that GAO has identified in prior work. However, NMB has not developed and implemented written policies and processes--consistent with internal control standards and best practice--that reflect its new interagency agreement with the Department of the Treasury for the performance of certain procurement functions. Without this documentation, NMB cannot ensure the use of consistent processes in its new procurement environment. GAO recommends that NMB develop and implement written policies and processes to reflect its current procurement environment. NMB agreed with the recommendation and indicated it would take steps to implement it.
6,465
384